Scientific Linux Forum.org




> SL + Linux ZFS, Linux ZFS on SL6.2
crash0veride
 Posted: May 30 2012, 08:10 PM
For fun here is what happens when you replace Solaris on a Sun/Oracle ZFS Storage 7000 system (in this case a 7320) reloaded with SL + ZFS for Linux ;)
Linkage: SL w/ Linux ZFS
tux99
 Posted: May 30 2012, 10:51 PM
QUOTE (crash0veride @ May 30 2012, 10:10 PM)
Solaris fans forgive me in advance...

For fun here is what happens when you replace Slowlaris on a Sun/Oracle ZFS Storage 7000 system (in this case a 7320) reloaded with SL + ZFS for Linux ;)
Linkage: SL6.2 w/ Linux ZFS


Cool, but where is the same screenshot from Solaris for comparison?
I mean I have no idea if that's slow or fast on that particular hardware.

I assume you are using this version of ZFS: http://zfsonlinux.org/

What's your opinion on this ZFS for Linux? Is it already reliable enough for mission-critical data?

Is all of General Electric now using SL or just your department?

--------------------
My personal SL6 repository, specialized in audio/video software: http://pkgrepo.linuxtech.net/el6/
(can be used together with EPEL and ELRepo repositories) - repository mirror: http://linuxsoft.cern.ch/linuxtech/el6/
crash0veride
 Posted: May 31 2012, 02:28 AM
The particular HW used there was a Sun/Oracle Storage 7000 setup, in this case a 7320 (a 7320 is an x4170 M2 with dual Westmere Xeons and 72GB RAM) with one J4410 JBOD hanging off an LSI 9200-8e. I added two SSDs for L2ARC and two SSDs for the ZIL.

Since the HBA was a dual-port LSI SAS-2 controller, that translates to 8 lanes (2 physical SAS2 x4 connections), or about 2.4 GB/s per connection.
The HBA was in a PCIe Gen 2 (x8 mechanical/electrical) slot, so bus bandwidth = 4 GB/s.

Thus, since the JBOD was multipathed through both HBA ports, in theory we saturate the controller and the bus provided we hit the spinning disks at 4 GB/s. The SSDs were hung off the dual-port hydra SAS/SATA backplane, so again 2.4 GB/s x2 of bi-directional bandwidth there to the SSDs and the SSD system disk.

I have not profiled or dug too deep yet, but I'll bet a fair amount of that traffic is hitting the two SSDs striped and used as the L2ARC cache, and the two SSDs striped and used as the ZIL. The system itself has 72GB of RAM, so more than likely a fair amount is cached there as well.
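
For anyone wanting to mock up a similar layout, here is a rough sketch of the pool creation. The pool name and device paths below are placeholders rather than the exact vdevs on the 7320, so substitute your own /dev/disk/by-id entries; note the cache and log devices are added without 'mirror', which gives the striped L2ARC and striped ZIL described above.
CODE

# Rough sketch only -- pool name and device paths are placeholders.
# Data vdev(s) first, then two striped cache (L2ARC) SSDs and two
# striped log (ZIL) SSDs, matching the layout described above.
zpool create tank raidz2 /dev/disk/by-id/data0 /dev/disk/by-id/data1 \
    /dev/disk/by-id/data2 /dev/disk/by-id/data3 /dev/disk/by-id/data4 \
    /dev/disk/by-id/data5
zpool add tank cache /dev/disk/by-id/ssd-cache0 /dev/disk/by-id/ssd-cache1
zpool add tank log /dev/disk/by-id/ssd-log0 /dev/disk/by-id/ssd-log1
zpool status tank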

The same HW setup with Solaris 11 was ~20% slower across the board.

Nonetheless, it is pretty damn impressive that Linux+ZFS is able to outperform Solaris native ZFS by that wide a margin on the same HW setup.

Upon closer examination, something of note that gives ZoL a boost is how ZIL devices are handled in a striped configuration:
--> the FASTWRITE algorithm for synchronous writes

Thus ZoL better handles striped log devices and avoids the situation seen with the BSD and native Solaris implementations.
Quoting a finding from Nexenta:
QUOTE

Sync writing to a separate ZIL is done at a queue depth of 1 (ZFS sends out sync write requests one at a time to the log drive; it will wait for the write to finish before it sends out another one).

This means that even though you have more than one SLOG, ZFS only writes to one at a time, so you won't see any higher throughput from striped SLOGs.

It is very fair to say that even if your chosen log device has reportedly extremely high IOPS, you will never notice it with how ZFS writes to it (send down, cache flush, and only upon completion send down more -- as opposed to a write-cache-utilizing sequential write workload) if it does not ALSO have a very, very low average write latency (on the low end of microseconds, here).
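
If you want to see how your own log device behaves under exactly that access pattern, here is a quick sketch of an fio run that mimics it: queue depth 1, small synchronous writes. The target file path, size, and run time are placeholders.
CODE

# Rough sketch: single-threaded O_SYNC 4k writes at queue depth 1.
# Average write latency here is what matters for a SLOG, not raw IOPS.
fio --name=slog-latency --filename=/tank/fio-sync-test --size=1g \
    --rw=write --bs=4k --ioengine=psync --sync=1 --iodepth=1 \
    --runtime=30 --time_based --group_reporting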


Not all of GE is using SL (IMHO they should be :P). There is scattered use of SL throughout GE as a whole; I know of several departments using it for various purposes within the various GE businesses. Due to our recent work with SL we often get pinged for info or help. I am proud to say that the GE Healthcare Linux operating system used in our medical products is our spin of SL: HELiOS, the (H)ealthcare (E)nterprise (Li)nux (O)perating (S)ystem. Examples would be MR scanners, ultrasound devices, mobile/fixed X-ray, nuclear medicine devices, etc.
tux99
 Posted: May 31 2012, 03:12 AM
Very interesting, many thanks for the detailed reply.

I need to find some time to test ZFSonLinux myself too!

crash0veride
 Posted: May 31 2012, 05:00 AM
For fun, here are the log results from a complete run of fio:
*some output suppressed*

----Sequential Reads----
READ: io=47017MB, aggrb=4701.3MB/s, minb=4814.6MB/s, maxb=4814.6MB/s, mint=10001msec, maxt=10001msec

----Sequential Writes----
WRITE: io=30605MB, aggrb=3060.2MB/s, minb=3133.7MB/s, maxb=3133.7MB/s, mint=10001msec, maxt=10001msec

----Random Reads----
READ: io=43589MB, aggrb=4358.5MB/s, minb=4463.7MB/s, maxb=4463.7MB/s, mint=10001msec, maxt=10001msec

----Random Writes----
WRITE: io=30083MB, aggrb=3007.2MB/s, minb=3080.2MB/s, maxb=3080.2MB/s, mint=10001msec, maxt=10001msec

----Sequential Mixed Reads and Writes----
READ: io=17788MB, aggrb=1778.7MB/s, minb=1821.4MB/s, maxb=1821.4MB/s, mint=10001msec, maxt=10001msec
WRITE: io=17036MB, aggrb=1703.5MB/s, minb=1744.4MB/s, maxb=1744.4MB/s, mint=10001msec, maxt=10001msec

----Random Mixed Reads and Writes----
READ: io=18250MB, aggrb=1824.9MB/s, minb=1868.7MB/s, maxb=1868.7MB/s, mint=10001msec, maxt=10001msec
WRITE: io=17600MB, aggrb=1759.9MB/s, minb=1802.7MB/s, maxb=1802.7MB/s, mint=10001msec, maxt=10001msec
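
For reference, here is a sketch of the kind of invocation that would produce the sequential-read summary line above. The directory, file size, job count, and iodepth are guesses rather than the exact options used; the 10-second time-based run matches the mint/maxt=10001msec in the output.
CODE

# Rough sketch only -- options are illustrative, not the exact ones used.
fio --name=seq-read --directory=/tank --size=4g --numjobs=8 \
    --rw=read --bs=1m --ioengine=libaio --iodepth=32 \
    --runtime=10 --time_based --group_reporting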
toracat
 Posted: May 31 2012, 07:51 AM
Indeed. Very nice and informative post.

--------------------
ELRepo: repository specializing in hardware support for EL
Arch300
 Posted: Jun 3 2012, 07:26 AM
Nice, here I was thinking that ZFS under Linux wouldn't perform too well because it has to run in FUSE due to licensing issues. I guess that's a thing of the past then?
crash0veride
 Posted: Jun 3 2012, 08:38 PM
This is a full kernel module implementation and does not run within userspace.
See: ZFS on Linux
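
A quick way to confirm on your own box that you are running the native kernel module rather than the old FUSE port (just a sketch, assuming the zfsonlinux packages are already installed):
CODE

# The native port loads the zfs and spl kernel modules; the old FUSE
# port instead runs a zfs-fuse userspace daemon.
lsmod | grep -E '^(zfs|spl)'
modinfo zfs | head -n 3
ps -ef | grep '[z]fs-fuse'    # should print nothing on a native ZoL box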
helikaon
 Posted: Jun 26 2012, 08:50 AM
Hi,
this is a very informative update for me as well. Thank you for sharing!


gregg10
 Posted: Jul 24 2012, 09:40 PM
What kernel version are you using?
crash0veride
 Posted: Apr 3 2013, 02:52 AM
Fast forwarding to today and...

The first official production-ready release of ZoL was added to the SL addons repository today.

SL 6x addons

SL 6x addons repoview

See my original announcement

Enjoy, and community feedback is welcome!
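
For anyone who wants to give it a spin, here is a rough sketch of pulling it in on SL 6.x. The repo id and package names are assumptions on my part, so check the repoview link above for the exact package set:
CODE

# Rough sketch -- repo id and package names may differ; see the repoview.
yum --enablerepo=sl-addons search zfs
yum --enablerepo=sl-addons install zfs
modprobe zfs
zpool status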