Paul Kraus | 18 May 14:47 2011

Solaris vs FreeBSD question

    Over the past few months I have seen mention of FreeBSD a couple
of times in regard to ZFS. My question is how stable (reliable) is ZFS
on this platform?

    This is for a home server, and the reason I am asking is that about
a year ago I bought some hardware based on its inclusion on the
Solaris 10 HCL, as follows:

SuperMicro 7045A-WTB (I would have preferred the server version, but
it wasn't on the HCL)
Two quad core 2.0 GHz Xeon CPUs
8 GB RAM (I am NOT planning on using DeDupe)
2 x Seagate ES-2 250 GB SATA drives for the OS
4 x Seagate ES-2 1 TB SATA drives for data
Nvidia Geforce 8400 (cheapest video card I could get locally)

    I could not get the current production Solaris or OpenSolaris to
load. The miniroot would GPF while loading the kernel. I could not get
the problem resolved and needed to get the server up and running as my
old server was dying (dual 550 MHz P3 with 1 GB RAM) and I needed to
get my data (about 600 GB) off of it before I lost anything. That old
server was running Solaris 10 and the data was in a zpool with
mirrored vdevs of different sized drives. I had lost one drive in each
vdev and zfs saved my data. So I loaded OpenSuSE and moved the data to
a mirrored pair of 1 TB drives.

    I still want to move my data to ZFS, and push has come to shove,
as I am about to overflow the 1 TB mirror and I really, really hate
the Linux options for multiple disk device management (I'm spoiled by
SVM and ZFS). So now I really need to get that hardware loaded with an
OS that supports ZFS. I have tried every variation of Solaris that I
can get my hands on including Solaris 11 Express and Nexenta 3 and
they all GPF loading the kernel to run the installer. My last hope is
that I have a very plain vanilla (ancient S540) video card to swap in
for the Nvidia on the very long shot chance that is the problem. But I
need a backup plan if that does not work.

    I have tested the hardware with FreeBSD 8 and it boots to the
installer. So my question is whether the FreeBSD ZFS port is up to
production use. Is there anyone here using FreeBSD in production with
good results (this list tends to hear only about serious problems and
not success stories)?

P.S. If anyone here has a suggestion as to how to get Solaris to load
I would love to hear it. I even tried disabling multi-cores (which
makes the CPUs look like dual core instead of quad) with no change. I
have not been able to get serial console redirect to work so I do not
have a good log of the failures.

Tim Cook | 18 May 15:01 2011

Re: Solaris vs FreeBSD question



On Wed, May 18, 2011 at 7:47 AM, Paul Kraus <paul <at> kraus-haus.org> wrote:
   Over the past few months I have seen mention of FreeBSD a couple
of times in regard to ZFS. My question is how stable (reliable) is ZFS
on this platform?

   This is for a home server, and the reason I am asking is that about
a year ago I bought some hardware based on its inclusion on the
Solaris 10 HCL, as follows:

SuperMicro 7045A-WTB (I would have preferred the server version, but
it wasn't on the HCL)
Two quad core 2.0 GHz Xeon CPUs
8 GB RAM (I am NOT planning on using DeDupe)
2 x Seagate ES-2 250 GB SATA drives for the OS
4 x Seagate ES-2 1 TB SATA drives for data
Nvidia Geforce 8400 (cheapest video card I could get locally)

   I could not get the current production Solaris or OpenSolaris to
load. The miniroot would GPF while loading the kernel. I could not get
the problem resolved and needed to get the server up and running as my
old server was dying (dual 550 MHz P3 with 1 GB RAM) and I needed to
get my data (about 600 GB) off of it before I lost anything. That old
server was running Solaris 10 and the data was in a zpool with
mirrored vdevs of different sized drives. I had lost one drive in each
vdev and zfs saved my data. So I loaded OpenSuSE and moved the data to
a mirrored pair of 1 TB drives.

   I still want to move my data to ZFS, and push has come to shove,
as I am about to overflow the 1 TB mirror and I really, really hate
the Linux options for multiple disk device management (I'm spoiled by
SVM and ZFS). So now I really need to get that hardware loaded with an
OS that supports ZFS. I have tried every variation of Solaris that I
can get my hands on including Solaris 11 Express and Nexenta 3 and
they all GPF loading the kernel to run the installer. My last hope is
that I have a very plain vanilla (ancient S540) video card to swap in
for the Nvidia on the very long shot chance that is the problem. But I
need a backup plan if that does not work.

   I have tested the hardware with FreeBSD 8 and it boots to the
installer. So my question is whether the FreeBSD ZFS port is up to
production use. Is there anyone here using FreeBSD in production with
good results (this list tends to hear only about serious problems and
not success stories)?

P.S. If anyone here has a suggestion as to how to get Solaris to load
I would love to hear it. I even tried disabling multi-cores (which
makes the CPUs look like dual core instead of quad) with no change. I
have not been able to get serial console redirect to work so I do not
have a good log of the failures.

--
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players



I've heard nothing but good things about it.  FreeNAS uses it: http://freenas.org/ and iXsystems sells a commercial product based on the FreeNAS/FreeBSD code.  I don't think they have a full-blown in-kernel CIFS implementation (just Samba), but other than that, I don't think you'll have too many issues.  I actually considered moving over to it, but I made the unfortunate mistake of upgrading to Solaris 11 Express, which means my zpool version is now too new to run anything else (AFAIK).
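For anyone weighing the same move: it is worth checking how far a pool has been upgraded before planning an OS switch. A quick sketch, with "tank" standing in for your pool name:

  # Report the pool's current on-disk format version
  zpool get version tank

  # List the versions this ZFS implementation understands
  zpool upgrade -v

FreeBSD 8.2 understands up to pool v15, so a pool upgraded under Solaris 11 Express reports a much newer version and will refuse to import there.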

--Tim 

_______________________________________________
zfs-discuss mailing list
zfs-discuss <at> opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Bob Friesenhahn | 18 May 16:29 2011

Re: Solaris vs FreeBSD question

On Wed, 18 May 2011, Paul Kraus wrote:

>    Over the past few months I have seen mention of FreeBSD a couple
> of times in regard to ZFS. My question is how stable (reliable) is ZFS
> on this platform?

This would be an excellent question to ask on the related FreeBSD 
mailing list (freebsd-fs <at> freebsd.org).

>    I have tested the hardware with FreeBSD 8 and it boots to the
> installer. So my question is whether the FreeBSD ZFS port is up to
> production use. Is there anyone here using FreeBSD in production with
> good results (this list tends to hear only about serious problems and
> not success stories)?

I have been on the freebsd-fs mailing list for quite some time now, and 
there do seem to be quite a few happy FreeBSD ZFS users. 
There are also some users who experience issues.  FreeBSD ZFS may 
require more tuning (depending on your hardware) than Solaris ZFS for 
best performance.

If you are very careful, you can create a ZFS pool that can be used 
by both FreeBSD and Solaris.
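As a sketch of what "very careful" means: pin the pool to the highest 
version both systems understand (v15 at the moment), and never let 
either side upgrade it. Pool and disk names below are placeholders:

   # Create a pool fixed at on-disk version 15 so both OSes can import it
   zpool create -o version=15 tank mirror da0 da1

   # Always export before moving the disks to the other OS
   zpool export tank
   zpool import tank

Running 'zpool upgrade' on the Solaris side would push the pool past 
v15 and end the portability.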

Bob
--

-- 
Bob Friesenhahn
bfriesen <at> simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
a.smith | 18 May 16:39 2011

Re: Solaris vs FreeBSD question

Hi,

   I am using FreeBSD 8.2 in production with ZFS. I have had one  
issue with it in the past, but I would recommend it and I consider  
it production ready. That said, if you can wait for FreeBSD 8.3 or 9.0  
to come out (a few months away) you will get a better system, as these  
will include ZFS v28 (FreeBSD-RELEASE is currently v15).
On the other hand, things can always go wrong; of course, RAID is not  
backup, even with snapshots ;)
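Since RAID-is-not-backup always bears repeating, here is a minimal
send/receive sketch (pool, dataset, and host names are placeholders):

   # Take a dated snapshot
   zfs snapshot tank/data@2011-05-18

   # Full copy of that snapshot to a pool on another machine
   zfs send tank/data@2011-05-18 | ssh backuphost zfs receive backup/data

   # Later, send only the changes between two snapshots
   zfs send -i tank/data@2011-05-18 tank/data@2011-05-19 | \
       ssh backuphost zfs receive backup/data

Snapshots on their own share the pool's fate; the copy on the second
machine is what makes it a backup.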

cheers Andy.
Freddie Cash | 18 May 17:54 2011

Re: Solaris vs FreeBSD question

On Wed, May 18, 2011 at 5:47 AM, Paul Kraus <paul <at> kraus-haus.org> wrote:
>    Over the past few months I have seen mention of FreeBSD a couple
> of times in regard to ZFS. My question is how stable (reliable) is ZFS
> on this platform?

ZFSv15, as shipped with FreeBSD 8.2, is rock stable in our uses.  We
have two servers running without any issues.  These are our backup
servers, doing rsync backups every night for ~130 remote Linux and
FreeBSD systems.  These are 5U rackmount boxes with:
  - Chenbro 5U storage chassis with 3-way redundant PSUs
  - Tyan h2000M motherboard
  - 2x AMD Opteron 2000-series CPUs (dual-core)
  - 8 GB ECC DDR2-SDRAM
  - 2x 8 GB CompactFlash (mirrored for OS install)
  - 2x 3Ware RAID controllers (12-port multi-lane)
  - 24x SATA harddrives (various sizes, configured in 3x 8-drive
raidz2 vdevs, roughly as sketched after this list)
  - FreeBSD 8.2 on both servers
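For reference, a 3x 8-drive raidz2 pool like that is one command; the
da* device names are placeholders for whatever your controllers expose:

  # Three 8-drive raidz2 vdevs striped together into one pool
  zpool create backup \
      raidz2 da0  da1  da2  da3  da4  da5  da6  da7 \
      raidz2 da8  da9  da10 da11 da12 da13 da14 da15 \
      raidz2 da16 da17 da18 da19 da20 da21 da22 da23

Each vdev survives any two drive failures, and the pool grows by whole
vdevs if you add more shelves.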

ZFSv28, as shipped in FreeBSD -CURRENT (the development version that
will eventually become 9.0), is a little rough around the edges, but
is getting better over time.  There are also patches floating around
that allow you to use ZFSv28 with 8-STABLE (the development version
that will eventually become 8.3).  These are a little rougher around
the edges.

We have only been testing ZFS in storage servers for backups, but we
have plans to start testing it in NFS servers, with an eye toward
creating NAS/SAN setups for virtual machines.

I also run it on my home media server, which is nowhere near "server
quality", without issues:
  - generic Intel motherboard
  - 2.8 GHz P4 CPU
  - 3 SATA1 harddrives connected to motherboard, in a raidz1 vdev
  - 2 IDE harddrives connected to a Promise PCI controller, in a mirror vdev
  - 2 GB non-ECC SDRAM
  - 2 GB USB stick for the OS install
  - FreeBSD 8.2

--

-- 
Freddie Cash
fjwcash <at> gmail.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss <at> opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Brandon High | 18 May 19:49 2011

Re: Solaris vs FreeBSD question

On Wed, May 18, 2011 at 5:47 AM, Paul Kraus <paul <at> kraus-haus.org> wrote:
> P.S. If anyone here has a suggestion as to how to get Solaris to load
> I would love to hear it. I even tried disabling multi-cores (which
> makes the CPUs look like dual core instead of quad) with no change. I
> have not been able to get serial console redirect to work so I do not
> have a good log of the failures.

Have you checked your system in the HCL device tool at
http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp ? It should be
able to tell you which device is causing the problem. If I remember
correctly, you can feed it the output of 'lspci -vv -n'.
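If it helps, capturing that listing from any Linux live CD looks like
this (lspci is in the pciutils package; the output file name is
arbitrary):

  # Dump numeric PCI vendor/device IDs for the HCL device tool
  lspci -vv -n > devices.txt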

You may have to disable some on-board devices to get through the
installer, but I couldn't begin to guess which.

-B

--

-- 
Brandon High : bhigh <at> freaks.com
Paul Kraus | 19 May 04:14 2011

Re: Solaris vs FreeBSD question

On Wed, May 18, 2011 at 1:49 PM, Brandon High <bhigh <at> freaks.com> wrote:

> Have you checked your system in the HCL device tool at
> http://www.sun.com/bigadmin/hcl/hcts/device_detect.jsp ? It should be
> able to tell you which device is causing the problem. If I remember
> correctly, you can feed it the output of 'lspci -vv -n'.

I remember running this back when I first tackled this and did not
find any problems. I ran it again and got the attached. When I have
more time I'll try pulling the LSI SCSI card and see if that helps. In
case the image does not make it through to the list, I also posted it
at http://www.ilk.org/~ppk/snapshot1.png

> You may have to disable some on-board devices to get through the
> installer, but I couldn't begin to guess which.

The only on-board device the tool complains about is the audio device,
and if there is no driver for that, I don't care. I'll have to see if I
can disable it in the BIOS.

--

-- 
{--------1---------2---------3---------4---------5---------6---------7---------}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
_______________________________________________
zfs-discuss mailing list
zfs-discuss <at> opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Garrett D'Amore | 18 May 15:09 2011

Re: Solaris vs FreeBSD question

We might have a better chance of diagnosing your problem if we had a copy of your panic message buffer.  Have
you considered OpenIndiana and illumos as an option, or even NexentaStor if you are just looking for a
storage appliance (though my guess is that you need more general-purpose compute capabilities)?

  -- Garrett D'Amore

On May 18, 2011, at 2:48 PM, "Paul Kraus" <paul <at> kraus-haus.org> wrote:

> 
>    Over the past few months I have seen mention of FreeBSD a couple
> of times in regard to ZFS. My question is how stable (reliable) is ZFS
> on this platform?
> 
>    This is for a home server, and the reason I am asking is that about
> a year ago I bought some hardware based on its inclusion on the
> Solaris 10 HCL, as follows:
> 
> SuperMicro 7045A-WTB (I would have preferred the server version, but
> it wasn't on the HCL)
> Two quad core 2.0 GHz Xeon CPUs
> 8 GB RAM (I am NOT planning on using DeDupe)
> 2 x Seagate ES-2 250 GB SATA drives for the OS
> 4 x Seagate ES-2 1 TB SATA drives for data
> Nvidia Geforce 8400 (cheapest video card I could get locally)
> 
>    I could not get the current production Solaris or OpenSolaris to
> load. The miniroot would GPF while loading the kernel. I could not get
> the problem resolved and needed to get the server up and running as my
> old server was dying (dual 550 MHz P3 with 1 GB RAM) and I needed to
> get my data (about 600 GB) off of it before I lost anything. That old
> server was running Solaris 10 and the data was in a zpool with
> mirrored vdevs of different sized drives. I had lost one drive in each
> vdev and zfs saved my data. So I loaded OpenSuSE and moved the data to
> a mirrored pair of 1 TB drives.
> 
>    I still want to move my data to ZFS, and push has come to shove,
> as I am about to overflow the 1 TB mirror and I really, really hate
> the Linux options for multiple disk device management (I'm spoiled by
> SVM and ZFS). So now I really need to get that hardware loaded with an
> OS that supports ZFS. I have tried every variation of Solaris that I
> can get my hands on including Solaris 11 Express and Nexenta 3 and
> they all GPF loading the kernel to run the installer. My last hope is
> that I have a very plain vanilla (ancient S540) video card to swap in
> for the Nvidia on the very long shot chance that is the problem. But I
> need a backup plan if that does not work.
> 
>    I have tested the hardware with FreeBSD 8 and it boots to the
> installer. So my question is whether the FreeBSD ZFS port is up to
> production use. Is there anyone here using FreeBSD in production with
> good results (this list tends to hear only about serious problems and
> not success stories)?
> 
> P.S. If anyone here has a suggestion as to how to get Solaris to load
> I would love to hear it. I even tried disabling multi-cores (which
> makes the CPUs look like dual core instead of quad) with no change. I
> have not been able to get serial console redirect to work so I do not
> have a good log of the failures.
> 
> -- 
> {--------1---------2---------3---------4---------5---------6---------7---------}
> Paul Kraus
> -> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
> -> Sound Coordinator, Schenectady Light Opera Company (
> http://www.sloctheater.org/ )
> -> Technical Advisor, RPI Players
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss <at> opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Chris Forgeron | 20 May 01:17 2011

Re: Solaris vs FreeBSD question


>-----Original Message-----
>From: zfs-discuss-bounces <at> opensolaris.org [mailto:zfs-discuss-bounces <at> opensolaris.org] On
Behalf Of Paul Kraus
>
>    Over the past few months I have seen mention of FreeBSD a couple of times in regard to ZFS. My question is how
> stable (reliable) is ZFS on this platform?

I find it more stable than OpenSolaris build 148 or Solaris 11 Express, and that's on a 96 GB Dell T710 with dual
Xeon 5760's and currently 28 TB spread over 46 disks.  I have two other fairly big FreeBSD SANs running as
well, so I'm past the "home server" category for sure.  I'm running around 50 VMs off FreeBSD 9-CURRENT SANs
via NFS on a daily basis, with very minimal problems.

I've blogged about this to some degree: http://christopher-technicalmusings.blogspot.com/2011/01/testing-freebsd-zfs-v28.html

I'd also like to add:

 - You need to be on the -CURRENT build of FreeBSD 9 to get ZFS v28, and that's the development branch. I've
been running it for almost 6 months, but it can be tricky until you find the right build for your needs.
 - It's slower, but I think we'll get that figured out soon. If you run the stable branch (say 8.2) you are on
ZFS v15, but you can turn off the ZIL if you want speed at the expense of crash safety (see the sketch after
this list).
 - There's no auto-replacement for failed drives. People use scripts to detect and act on this today (there is
no Fault Management daemon).
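A rough sketch of both of those points, assuming 8.2's v15-era tunables
and the standard ZFS CLI (the mail address is a placeholder):

 # /boot/loader.conf: v15-era knob that trades crash safety for speed;
 # recent synchronous writes can be lost if the box panics
 vfs.zfs.zil_disable="1"

 #!/bin/sh
 # Minimal cron-run stand-in for a fault management daemon:
 # mail the status output whenever any pool is not healthy
 if ! zpool status -x | grep -q "all pools are healthy"; then
     zpool status | mail -s "ZFS fault on $(hostname)" admin@example.com
 fi

Under v28 the ZIL knob goes away in favour of the per-dataset 'sync'
property; the status check works the same everywhere.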

I ended up switching back to FreeBSD after using Solaris for some time because I was getting tired of weird
pool corruptions and the like.  Of course, this doesn't make FreeBSD better than Solaris, but with my
hardware, and in my situation, FreeBSD is far more stable for me than the Solaris builds. I'm sure the
opposite case can be argued as well.

FreeBSD is a bit friendlier to get into, and supports a wider range of "consumer" hardware than Solaris
does.  Since it's just a home server, go get yourself the 8.2 release, run a v15 ZFS pool, and you'll be happy.
Just make sure you run the amd64 build, and try to give it at least 4 GB of RAM.
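On the tuning point raised earlier in the thread: 8.x-era FreeBSD ZFS on
a 4 GB box usually wants a couple of lines in /boot/loader.conf. The
values below are illustrative starting points, not gospel:

  # /boot/loader.conf (example values for a 4 GB amd64 machine)
  vm.kmem_size="3G"        # raise the kernel memory ceiling ZFS draws from
  vfs.zfs.arc_max="2G"     # cap the ARC so userland keeps some RAM

The right numbers depend on workload; the FreeBSD wiki's ZFS tuning
page is the better reference for current values.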
Frank Van Damme | 20 May 11:24 2011
Picon

Re: Solaris vs FreeBSD question

On 20-05-11 01:17, Chris Forgeron wrote:
> I ended up switching back to FreeBSD after using Solaris for some time because I was getting tired of weird
> pool corruptions and the like.

Did you ever manage to recover the data you blogged about on Sunday,
February 6, 2011?

--

-- 
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
Chris Forgeron | 20 May 14:37 2011

Re: Solaris vs FreeBSD question


----Original Message-----
From: Frank Van Damme
Sent: Friday, May 20, 2011 6:25 AM

> On 20-05-11 01:17, Chris Forgeron wrote:
>> I ended up switching back to FreeBSD after using Solaris for some time because I was getting tired of weird
>> pool corruptions and the like.
>
>Did you ever manage to recover the data you blogged about on Sunday, February 6, 2011?

Oh yes, I didn't follow up on that. I'll have to do that now... here's the recap.

Yes, I did get most of it back, thanks to a lot of effort from George Wilson (great guy, and I'm very indebted to
him).  However, any data that was in play at the time of the fault was irreversibly damaged and couldn't be
restored. Any data that wasn't active at the time of the crash was perfectly fine; it just needed to be
copied out of the pool into a new pool. George had to mount my pool for me, as it was beyond
non-ZFS-programmer skills to mount. Unfortunately, Solaris would dump after about 24 hours, requiring a
second mounting by George. It was also slower than cold molasses to copy anything in its faulted state. If
I was getting 1 MB/sec, I was lucky. You can imagine the issue that creates when you're trying to evacuate a
few TB of data through a pipe that slow.

After it dumped again, I didn't bother George for a third remounting (or I tried very half-heartedly; the
guy had already put a lot of time into this, and we all have our day jobs), and abandoned the data that was
still stranded on the faulted pool. I had copied my most-wanted data first, so what I abandoned was a personal
collection of movies that I could always re-rip.

I was still experimenting with ZFS at the time, so I wasn't using snapshots for backup, just conventional
image backups of the VM's that were running.  Snapshots would have had a good chance of protecting my data
from the fault that I ran into. 

I was originally blaming my Areca 1880 card, as I was working with Areca tech support on a more stable driver
for Solaris, and was on the 3rd revision of a driver with them. In the end, though, it wasn't the Areca, as I
was very familiar with its tricks: the Areca would hang (about once every day or two), but it wouldn't take
out the pool.  After removing the Areca and going with just LSI 2008-based controllers, I had one final fault
about 3 weeks later that corrupted another pool (luckily it was just a backup pool). At that point, the
swearing in the server room reached a peak, I booted back into FreeBSD, and I haven't looked back.
Originally, when I used the Areca controller with FreeBSD, I didn't have any problems for about 2 months.

I've had only small FreeBSD issues since then, nothing else has changed on my hardware. So the only claim I
can make is that in my environment, on my hardware, I've had better stability with FreeBSD. 

One of the speed slowdowns in my FreeBSD comparison tests was the O_SYNC method that ESX uses to
mount an NFS store. I edited the FreeBSD NFS source to always do an async write, regardless of the O_SYNC from
the client, and that perked FreeBSD up a lot for speed, making it fairly close to what I was getting on
Solaris.  FreeBSD is now using a 4.1 NFS server by default as of the last month, and I'm just starting
stability tests with a new FreeBSD 9 build to see if I can run newer code. I'll do speed tests again,
and will probably make the same hack to the 4.1 NFS code to force async writes.  I'll post to my blog and the
FreeBSD lists when that occurs, as it's out of scope for this list.
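A cautionary aside on that hack: forcing async throws away the
guarantee ESX is asking for with O_SYNC, so a crash can silently lose
writes a guest believes are on disk. The stock ZFS alternative keeps
sync semantics but absorbs them on a dedicated log device (the device
name below is a placeholder):

  # Add a fast SSD as a separate ZFS intent log (SLOG)
  zpool add tank log ada4

With a decent SLOG, sync NFS writes usually recover much of the speed
without patching the NFS server.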

I do like Solaris. After some initial discomfort with the different way things are done, I do see the
overall design and idea, and I now have a wish list of features I'd like to see ported to FreeBSD. I think I'll
set up a Solaris-based box again for testing.  We'll see what time allows.
