Re: ZFS Pool, what happen when disk failure
Edward Ned Harvey <solaris2 <at> nedharvey.com>
2010-04-24 12:45:45 GMT
> From: zfs-discuss-bounces <at> opensolaris.org [mailto:zfs-
> bounces <at> opensolaris.org] On Behalf Of Haudy Kazemi
> Your remaining space can be configured as slices. These slices can be
> added directly to a second pool without any redundancy. If any drive
> fails, that whole non-redundant pool will be lost.
For clarification: In the above description, you're creating a stripe.
zpool create secondpool deviceA deviceB deviceC
(350G + 850G + 1350G)
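A hypothetical sketch of that command (deviceA/B/C are placeholders for your actual slice names): the pool's usable capacity is roughly the sum of the three slices, 350G + 850G + 1350G = ~2.5T, but with no redundancy, losing any one device loses the whole pool.

```shell
# Placeholder device names; substitute your actual slices.
zpool create secondpool deviceA deviceB deviceC

# Confirm the pool's total size (~2.5T) and health.
zpool list secondpool
zpool status secondpool
```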
By the strictest definition, that's not technically a stripe, but only
because the ZFS implementation supersedes a simple stripe and makes it
obsolete. So this is what we'll commonly call a stripe in ZFS.
When we say "stripe" in ZFS, we really mean: configure the disk controller
(if you have a raid controller card) not to do any hardware raid. The
controller then reports to the OS that it has just a bunch of disks (jbod),
and the OS has the option of doing software raid (striping, mirroring,
raidz, etc). In most ZFS cases, the OS doing software raid is smarter than
the hardware doing hardware raid, because the OS has intimate knowledge of
both the filesystem and the blocks on disk, while the hardware only knows
about blocks on disk. The OS is therefore able to perform optimizations
that would otherwise be impossible in hardware.
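As a sketch of the software-raid options mentioned above (tank and the cNtNdN device names are hypothetical placeholders), these are the redundant alternatives to the plain stripe:

```shell
# Two-way mirror: usable space = smallest device; survives one disk failure.
zpool create tank mirror c0t0d0 c0t1d0

# Single-parity raidz across three disks: usable space of roughly two
# disks; also survives one disk failure.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
```

Either layout lets ZFS self-heal checksum errors from the redundant copy, which is exactly the kind of filesystem-aware optimization a hardware raid card cannot do.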
But I digress. In the OS, if you simply add devices to a pool in the manner
described above (zpool create ...), then you're implementing software raid,
and it's no longer what you would normally call JBOD. In reality, this
configuration shares some of the characteristics of a concatenation set and
a stripe set, but again, ZFS implementation makes both of those obsolete, so