aneip | 24 Apr 07:40 2010

ZFS Pool, what happens when a disk fails

I'm really new to ZFS and also to RAID.

I have 3 hard disks: 500GB, 1TB, and 1.5TB.

On each HD I want to create a 150GB partition plus a partition with the
remaining space.

I want to create a raidz from the 3x150GB partitions. This is for my
documents + photos.

From the remaining space I want to create my video library. This one
doesn't need any redundancy, since I can simply back up my DVDs again.

The question is: if I create a striped pool from the remaining space
(350 + 850 + 1350 GB), what happens if one of the HDs fails? Do I lose
some files, or do I lose the whole pool?
Haudy Kazemi | 24 Apr 08:36 2010

Re: ZFS Pool, what happens when a disk fails

aneip wrote:
> I'm really new to ZFS and also to RAID.
>
> I have 3 hard disks: 500GB, 1TB, and 1.5TB.
>
> On each HD I want to create a 150GB partition plus a partition with the
> remaining space.
>
> I want to create a raidz from the 3x150GB partitions. This is for my
> documents + photos.
>
You should be able to create 150 GB slices on each drive, and then 
create a RAIDZ1 out of those 3 slices.
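For example (pool and slice names here are hypothetical - create the
150 GB slices with format(1M) first, then substitute your own device
names):

    zpool create docpool raidz c0t0d0s3 c1t0d0s3 c2t0d0s3

That gives roughly 300 GB of usable space and survives the loss of any
one drive.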

> From the remaining space I want to create my video library. This one
> doesn't need any redundancy, since I can simply back up my DVDs again.
>
> The question is: if I create a striped pool from the remaining space
> (350 + 850 + 1350 GB), what happens if one of the HDs fails? Do I lose
> some files, or do I lose the whole pool?
>
Your remaining space can be configured as slices.  These slices can be 
added directly to a second pool without any redundancy.  If any drive 
fails, that whole non-redundant pool will be lost.  Data recovery 
attempts will likely find that any recoverable video is like Swiss 
cheese, full of gaps, because files are spread across striped devices 
as they're written, to increase read and write performance.  In a JBOD 
(concatenation) arrangement some files might still be complete, but I 
don't believe ZFS supports JBOD-style non-redundant pools.  For most 
people that is not a big deal: part of the point of ZFS is data 
integrity and performance, neither of which JBOD offers (a JBOD is 
still ruined by single device failures; it is just easier to carve 
files out of a JBOD than out of a broken RAID).

Edward Ned Harvey | 24 Apr 14:45 2010

Re: ZFS Pool, what happens when a disk fails

> From: zfs-discuss-bounces <at> opensolaris.org [mailto:zfs-discuss-
> bounces <at> opensolaris.org] On Behalf Of Haudy Kazemi
> 
> Your remaining space can be configured as slices.  These slices can be
> added directly to a second pool without any redundancy.  If any drive
> fails, that whole non-redundant pool will be lost.  

For clarification:  In the above description, you're creating a stripe.
zpool create secondpool deviceA deviceB deviceC
(350G + 850G + 1350G)

According to the strictest definition, that's not technically a stripe.  But
only because the ZFS implementation supersedes a simple stripe, making it
obsolete.  So this is what we'll commonly call a stripe in ZFS.

When we say "stripe" in ZFS, we really mean:  Configure the disk controller
(if you have a RAID controller card) not to do any hardware RAID.  The disk
controller will report to the OS that it has just a bunch of disks (JBOD).
And then the OS will have the option of doing software RAID (striping,
mirroring, raidz, etc).  The OS doing software RAID, in most ZFS cases, is
smarter than the hardware doing hardware RAID, because the OS has intimate
knowledge of the filesystem and the blocks on disk, while the hardware only
has knowledge of blocks on disk.  Therefore, the OS is able to perform
optimizations that would otherwise be impossible in hardware.

But I digress.  In the OS, if you simply add devices to a pool in the manner
described above (zpool create ...) then you're implementing software RAID,
and it's no longer what you would normally call JBOD.  In reality, this
configuration shares some of the characteristics of a concatenation set and
a stripe set, but again, ZFS implementation makes both of those obsolete, so
(Continue reading)

aneip | 24 Apr 19:27 2010

Re: ZFS Pool, what happens when a disk fails

Thanks for all the answers; I'm still reading slowly and trying to
understand. Pardon my English, as it is my second language.

I believe I owe some more explanation.

The system is actually FreeNAS, which is installed on a separate disk.

The 3 disks - 500GB, 1TB and 1.5TB - are for data only.

The first pool will be raidz1 with 150 GB from each disk.  This one should
be clear, I believe.

The remaining space from each disk is for my video files. The main
motivation for creating a second pool is that I can use this space easily.
What I mean by easily is:
1) The space will be summed together, something like JBOD, so I don't have
to monitor the free space on each disk when I copy files.
2) When the space is almost full, I can add a new disk and extend the size
rather than mounting another partition in a different folder (something
like the command below, I guess).
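(From what I have read, I guess extending would be something like the
following - the pool and device names are just made up by me:

    zpool add videopool da3

so the new disk's space gets added to the existing pool.)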

What I'm trying to avoid is losing all of the data in the pool, even the
data on healthy drives, if one disk fails. I'm not sure whether I can
simply pull out one drive and lose only the files located on the faulty
drive, while the files on the other drives remain available.

I understand about the Swiss-cheese thing. Can I avoid it, meaning each
file is written to one drive and not split across the drives? I don't need
the speed, only easily manageable free space.
Bob Friesenhahn | 24 Apr 19:53 2010

Re: ZFS Pool, what happens when a disk fails

On Sat, 24 Apr 2010, aneip wrote:
>
> What I'm trying to avoid is losing all of the data in the pool, even 
> the data on healthy drives, if one disk fails. I'm not sure whether I 
> can simply pull out one drive and lose only the files located on the 
> faulty drive, while the files on the other drives remain available.

You are not going to get this using ZFS.  ZFS spreads file data 
across all of the drives in the pool.  You need vdev redundancy 
in order to be able to lose a drive without losing the 
whole pool.
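If you later decide the video pool needs protection after all, the 
redundancy has to be built into each vdev.  A rough sketch, with 
hypothetical device names:

    zpool create videopool mirror disk1 disk2
    zpool add videopool mirror disk3 disk4

Each mirror vdev can then lose one of its disks without taking down 
the pool.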

Bob
--
Bob Friesenhahn
bfriesen <at> simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
Edward Ned Harvey | 24 Apr 14:51 2010

Re: ZFS Pool, what happens when a disk fails

> From: zfs-discuss-bounces <at> opensolaris.org [mailto:zfs-discuss-
> bounces <at> opensolaris.org] On Behalf Of aneip
> 
> I'm really new to ZFS and also to RAID.
> 
> I have 3 hard disks: 500GB, 1TB, and 1.5TB.
> 
> On each HD I want to create a 150GB partition plus a partition with
> the remaining space.
> 
> I want to create a raidz from the 3x150GB partitions. This is for my
> documents + photos.
> 
> From the remaining space I want to create my video library. This one
> doesn't need any redundancy, since I can simply back up my DVDs again.
> 
> The question is: if I create a striped pool from the remaining space
> (350 + 850 + 1350 GB), what happens if one of the HDs fails? Do I lose
> some files, or do I lose the whole pool?

Something for you to be aware of:

With your raidz set, as you know, you're protected against data loss if a
single disk fails.  And as you also know, with the stripe set, your whole
pool is lost if a single disk fails.

But what you might not know:  If any pool fails, the system will crash.  You
will need to power cycle.  The system won't boot up again; you'll have to
enter failsafe mode.  Failsafe mode is a very unfriendly working environment;
by default you can't backspace, or repeat commands with the up arrow key,
(Continue reading)

Robert Milkowski | 24 Apr 16:17 2010

Re: ZFS Pool, what happens when a disk fails

On 24/04/2010 13:51, Edward Ned Harvey wrote:
> But what you might not know:  If any pool fails, the system will crash.
This actually depends on the failmode property setting on your pools.
The default is wait, but it can also be set to continue or panic - see 
the zpool(1M) man page for more details.
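For example (pool name hypothetical):

    zpool get failmode tank
    zpool set failmode=continue tank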

> You will need to power cycle.  The system won't boot up again; you'll
> have to enter failsafe mode.

The system should boot up properly even if some pools are not accessible 
(except rpool, of course).
If that is not the case then there is a bug - last time I checked it 
worked perfectly fine.

-- 
Robert Milkowski
http://milek.blogspot.com
Edward Ned Harvey | 25 Apr 14:08 2010

Re: ZFS Pool, what happens when a disk fails

> From: zfs-discuss-bounces <at> opensolaris.org [mailto:zfs-discuss-
> bounces <at> opensolaris.org] On Behalf Of Robert Milkowski
> 
> On 24/04/2010 13:51, Edward Ned Harvey wrote:
> > But what you might not know:  If any pool fails, the system will crash.
> This actually depends on the failmode property setting on your pools.
> The default is wait, but it can also be set to continue or panic - see
> the zpool(1M) man page for more details.
> 
> > You will need to power cycle.  The system won't boot up again; you'll
> > have to enter failsafe mode.
> 
> The system should boot up properly even if some pools are not accessible
> (except rpool, of course).
> If that is not the case then there is a bug - last time I checked it
> worked perfectly fine.

This may be different in the latest OpenSolaris, but in the latest Solaris,
this is what I know:

If a pool fails and forces an ungraceful shutdown, then during the next
bootup the pool is treated as "currently in use by another system."  The OS
doesn't come up all the way; you have to power cycle again and go into
failsafe mode.  Then you can "zpool import" - I think requiring the -f or
-F - and reboot again normally.
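Something like this, from failsafe mode (pool name hypothetical):

    zpool import -f tank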

Robert Milkowski | 25 Apr 15:45 2010

Re: ZFS Pool, what happens when a disk fails

On 25/04/2010 13:08, Edward Ned Harvey wrote:
>> The system should boot up properly even if some pools are not
>> accessible (except rpool, of course).
>> If that is not the case then there is a bug - last time I checked it
>> worked perfectly fine.
>
> This may be different in the latest OpenSolaris, but in the latest
> Solaris, this is what I know:
>
> If a pool fails and forces an ungraceful shutdown, then during the next
> bootup the pool is treated as "currently in use by another system."  The
> OS doesn't come up all the way; you have to power cycle again and go
> into failsafe mode.  Then you can "zpool import" - I think requiring the
> -f or -F - and reboot again normally.
>
I just did a test on Solaris 10/09, and the system came up properly, 
entirely on its own, with a failed pool.
zpool status showed the pool as unavailable (as I had removed an 
underlying device), which is fine.
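A quick way to check for that state is:

    zpool status -x

which only reports pools that have problems.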

-- 
Robert Milkowski
http://milek.blogspot.com
Ian Collins | 25 Apr 23:09 2010

Re: ZFS Pool, what happens when a disk fails

On 04/26/10 12:08 AM, Edward Ned Harvey wrote:

[why do you snip attributions?]

> On 04/26/10 01:45 AM, Robert Milkowski wrote:
>> The system should boot up properly even if some pools are not
>> accessible (except rpool, of course).
>> If that is not the case then there is a bug - last time I checked it
>> worked perfectly fine.
>
> This may be different in the latest OpenSolaris, but in the latest
> Solaris, this is what I know:
>
> If a pool fails and forces an ungraceful shutdown, then during the next
> bootup the pool is treated as "currently in use by another system."  The
> OS doesn't come up all the way; you have to power cycle again and go
> into failsafe mode.  Then you can "zpool import" - I think requiring the
> -f or -F - and reboot again normally.
>
I think you are describing what happens if the root pool has problems.  
Other pools are just shown as unavailable.

The system will come up, but failure to mount any filesystems in the 
absent pool will cause the filesystem/local service to go into the 
maintenance state.
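You can see which services are affected with svcs -xv, and once the pool
problem is resolved, clear the service, e.g.:

    svcs -xv
    svcadm clear svc:/system/filesystem/local:default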

-- 
Ian.

Edward Ned Harvey | 26 Apr 14:08 2010

Re: ZFS Pool, what happens when a disk fails

> From: Ian Collins [mailto:ian <at> ianshome.com]
> Sent: Sunday, April 25, 2010 5:09 PM
> To: Edward Ned Harvey
> Cc: 'Robert Milkowski'; zfs-discuss <at> opensolaris.org
> Subject: Re: [zfs-discuss] ZFS Pool, what happens when a disk fails
> 
> On 04/26/10 12:08 AM, Edward Ned Harvey wrote:
> 
> [why do you snip attributions?]

Nobody snipped attributions, and even if they had, get over it.  Attribution
isn't always needed anyway.

> > On 04/26/10 01:45 AM, Robert Milkowski wrote:
> >> The system should boot up properly even if some pools are not
> >> accessible (except rpool, of course).
> >> If that is not the case then there is a bug - last time I checked it
> >> worked perfectly fine.
> >
> > This may be different in the latest OpenSolaris, but in the latest
> > Solaris, this is what I know:
> >
> > If a pool fails and forces an ungraceful shutdown, then during the next
> > bootup the pool is treated as "currently in use by another system."  The
> > OS doesn't come up all the way; you have to power cycle again and go
> > into
(Continue reading)

