Highly Reliable Systems endeavors to support as many systems as possible within our resources. Linux is considered a crucial platform by Highly Reliable Systems, and we are working to grow our support for it.

Many Linux distributions may work with Silicon Image controllers on a limited basis without any additional drivers or installation. However, they probably won’t support multiple drives on one eSATA channel (port multiplier technology), which High-Rely devices make heavy use of. The drivers listed here add this functionality, allowing you access to all the High-Rely drives in your backup device (if it has more than one bay). Without these drivers, the default driver will typically see only one bay of a multi-bay HR device.


SUSE

At this time, support for SUSE is limited to a driver available from Silicon Image for eSATA-based systems. We can offer no other information at this time. Novell’s products are based on SUSE Linux, but if you’re a Novell user, you probably already know this.

Click here for the link to our PCI and PCI-X controllers.

Click here for the link to our PCI Express controller.

Fedora and Red Hat

At this time, support for Fedora and Red Hat is limited to a driver available from Silicon Image for eSATA-based systems.

Click here for the link to our PCI and PCI-X controllers.

Click here for the link to our PCI Express controller.

Ubuntu 7.10 Gutsy Gibbon

High-Rely has licensed source code from Silicon Image and has compiled and tested a driver to work with Ubuntu 7.10. Support is still very limited, but we offer more material for this version than for other versions at present.

This support is for the standard Ubuntu Desktop installation. It is assumed the same driver will work for the Server edition and other editions, but this has not been tested, and installation of the driver may differ as well. Presently, the driver has been compiled as a kernel loadable object. To install the driver for Ubuntu 7.10 Desktop:

  • For our PCI Express controller, download the driver (click here).
  • Rename the file to si3124.ko.gz and gunzip it, placing the resulting si3124.ko in the /lib/modules/<kernel version>/kernel/drivers/ata directory.
  • As root, run /sbin/depmod -a to update the modules.alias file with the new module.
  • In /etc/init.d, create a shell script called SIDriverChange.sh with the following contents:

    #! /bin/sh
    rmmod sata_sil24
    rmmod si3124
    modprobe si3124
    exit 0

  • Run: chmod +x SIDriverChange.sh
  • Make a symbolic link in /etc/rc2.d called S99HighRely (the capital S ensures the rc system runs it at startup) with the following command:

    ln -s ../init.d/SIDriverChange.sh S99HighRely

This completes the driver installation. When the system is rebooted, the script should replace the default driver, and the new driver should allow you to see the additional drives in your HR device (if it has them). fdisk -l or lshw should show the available drives.
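The steps above can be sketched as a single script. This is only a sketch under assumptions: si3124.ko and SIDriverChange.sh are in the current directory, the rc link is named S99HighRely (capital S so the rc system runs it), and the DRY_RUN switch is an addition here for previewing the commands; run the real installation as root.

```shell
#!/bin/sh
# Sketch of the si3124 driver installation steps above (Ubuntu 7.10).
# Assumes si3124.ko and SIDriverChange.sh are in the current directory.
# Set DRY_RUN=1 to print the commands instead of executing them.

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$*"
  else
    "$@"
  fi
}

install_si3124() {
  kver=$(uname -r)
  # copy the module into the kernel's ata driver directory
  run cp si3124.ko "/lib/modules/$kver/kernel/drivers/ata/"
  # update modules.alias so modprobe can find the new module
  run /sbin/depmod -a
  # install the driver-swap script and its rc2.d startup link
  run cp SIDriverChange.sh /etc/init.d/
  run chmod +x /etc/init.d/SIDriverChange.sh
  run ln -s ../init.d/SIDriverChange.sh /etc/rc2.d/S99HighRely
}
```

Preview with `DRY_RUN=1 install_si3124`, then run `install_si3124` as root.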


From David C 4/30/2010:

Good news! I successfully got the sata_sil24 driver in Debian “Lenny” 5.0 (kernel 2.6.26) to notice multiple drives in the same enclosure. Here’s what you should tell people:

The sata_sil24 driver does properly handle port multipliers; it does not, however, properly handle eSATA hot-swapping. Consequently, in order to notice new drives as they come online, you have to unload and reload the sata_sil24 module after each drive activation/deactivation. The command sequence is (performed with rootly privileges; su, sudo -s, or logging in as ‘root’ will all work, depending on how your system is configured):

rmmod sata_sil24

modprobe sata_sil24
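The two commands can be wrapped in a small helper. This is a sketch: the sync call is an addition here to flush pending writes before the driver drops, and preview=1 just prints the commands; the real reload requires root.

```shell
# Sketch wrapping the reload sequence above. sync is added here to flush
# pending writes before the driver goes away. preview=1 prints the commands
# instead of running them (the real reload requires root).
reload_sata_sil24() {
  if [ "${preview:-0}" = "1" ]; then
    echo "sync && rmmod sata_sil24 && modprobe sata_sil24"
  else
    sync
    rmmod sata_sil24
    modprobe sata_sil24
  fi
}
```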

Subsequent exchange with Tom Hoops:

TH: What driver version was that, David?

DC: (He is referring here to the driver for the PCI-based Silicon Image SI3124 controller chip.) That’s the built-in open-source one that comes with all 2.6.26+ kernels. From what I gathered, Silicon Image put a lot of effort and knowledge into the open-sourced one.

TH: Awesome news David – thanks. However, I’m afraid of one thing. If you have to pop the driver every time you pull a drive, doesn’t that take all the drives down?

DC: Briefly, yes. That brings us to the next step:

hdparm -W0 /dev/sd*

That turns off the write cache on all of the drives. Of course, if you have SATA drives that aren’t High-Rely drives, you’ll need to do each one manually (e.g. hdparm -W0 /dev/sda, hdparm -W0 /dev/sdb, and so on; checking in /var/log/syslog where the system is putting the High-Rely drives will be necessary).
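The per-drive sequence he describes can be sketched as a loop. This is a sketch: `disable_write_cache` is a hypothetical helper name, and print_only=1 previews the commands so you can prune devices that aren’t High-Rely bays before running it as root.

```shell
# Hypothetical helper for the per-drive hdparm commands above.
# print_only=1 previews the commands; otherwise they run (root required).
disable_write_cache() {
  for dev in "$@"; do
    if [ "${print_only:-0}" = "1" ]; then
      echo "hdparm -W0 $dev"
    else
      hdparm -W0 "$dev"
    fi
  done
}
```

Usage: `print_only=1 disable_write_cache /dev/sda /dev/sdb` to preview, then drop print_only to apply.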

TH: Well, I suppose that is acceptable if you’re going to use the system like a robotic tape system for backup (Mon: drive 1, Tue: drive 2, Wed: drive 3, …). But won’t it cause I/O errors if drives are active and doing things (if you were, say, using them for storage) when you boink the driver?

DC: I would imagine so, yes. Then again, I’m just using it for nightly backups and only swapping one drive a week; consequently, a weekly cron job on, say, Monday afternoon should keep that to a minimum on my end. That doesn’t mean I recommend this solution for everyone. It’s still better than nothing, though.
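The weekly cron job he mentions might look something like this. This is a hypothetical sketch: the /etc/cron.d path and file name are assumptions, and the Monday-afternoon schedule follows his example.

```
# /etc/cron.d/hr-driver-reload -- hypothetical sketch: reload sata_sil24
# every Monday at 16:00 so the freshly swapped drive is detected.
0 16 * * 1  root  /sbin/rmmod sata_sil24 && /sbin/modprobe sata_sil24
```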

From what I’ve read, it would appear that eSATA-connected hard drive enclosures are just generally problematic with Linux; heck, the only reason I tried dropping and reloading the driver in the first place was because I wanted to find a way to detect the drives that didn’t involve rebooting the server. For all I know, later kernels might handle this better; then again, maybe not.

At this point, I’m just providing this info because I think it’s fun. Plus, hey, why not?

It turns out that, in Linux, it is possible to accomplish an “HRDM”-style approach using volume IDs. The catch is that you need to reformat each High-Rely drive as ext2 or ext3 (or, at the very least, doing so is rather helpful). There are plenty of walkthroughs that can take care of that; personally, I found this one to be fairly useful: http://www.ehow.com/how_1000631_hard-drive-linux.html

Once you have each High-Rely drive formatted, you then just need to set up mount points based on the disk labels. To do that, create some folders that you’ll mount the drives in. Ubuntu-based systems will usually create a /media folder that they drop anything and everything into, so, following in that short-lived tradition, you can do something like…

mkdir /media/Mon /media/Tue /media/Wed /media/Thu /media/Fri

Or whatever. Then you just need to grab their IDs using tune2fs thusly:

root@petro1:/media/backup# tune2fs -l /dev/sda1
tune2fs 1.41.3 (12-Oct-2008)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          d7d97588-a8e8-459c-a23f-ec4141e20483
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              91578368
Block count:              366284000
Reserved block count:     18314200
Free blocks:              360485712
Free inodes:              91578357
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      936
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       Fri Apr 30 14:01:28 2010
Last mount time:          n/a
Last write time:          Fri Apr 30 14:08:28 2010
Mount count:              0
Maximum mount count:      22
Last checked:             Fri Apr 30 14:08:28 2010
Check interval:           15552000 (6 months)
Next check after:         Wed Oct 27 14:08:28 2010
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      3265b526-b439-425e-914a-5735de0b2c8d
Journal backup:           inode blocks


What you’ll care about is the UUID. For each drive, just create a corresponding entry in your /etc/fstab thusly:

UUID=d7d97588-a8e8-459c-a23f-ec4141e20483 /media/Mon ext3 user_xattr,acl,data=journal,sync 0 3

What this particular entry does is map the UUID of my /dev/sda1 partition to /media/Mon. It tells mount that it’s an ext3 file system (it is) and turns on some flags so I can share it via Samba if I’m so inclined (user_xattr and acl provide NTFS-style ACLs), while data=journal and sync help guarantee that the file system writes to the drive as soon as possible. The 0 tells dump to skip backing it up, and the 3 tells fsck to check the volume for integrity from time to time and, just as importantly, in what order relative to the rest of the volumes on the system.

Then just rinse, lather, and repeat. Once you do that, it doesn’t matter what order the drives load up (/dev/sdX), they’ll automatically go to the correct mount points.
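The rinse-and-repeat step can be sketched with a pair of helpers. This is a sketch: `uuid_from_tune2fs` and `fstab_line` are hypothetical names, the mount options copy the entry above, and you should review the output before appending it to /etc/fstab.

```shell
# Hypothetical helpers for the rinse-and-repeat step above.

# Pull the UUID out of `tune2fs -l` output read from stdin.
uuid_from_tune2fs() {
  awk -F': *' '/^Filesystem UUID/ {print $2}'
}

# Print an /etc/fstab line in the format shown above
# for a given UUID ($1) and mount point ($2).
fstab_line() {
  printf 'UUID=%s %s ext3 user_xattr,acl,data=journal,sync 0 3\n' "$1" "$2"
}
```

Usage: `uuid=$(tune2fs -l /dev/sda1 | uuid_from_tune2fs); fstab_line "$uuid" /media/Mon`, then append the printed line to /etc/fstab once you’ve checked it.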


Now that Ubuntu 10.04 LTS is out, I might give that a whirl and see if that works any better; I was trying the beta for a bit but one of the versions caused my server to fail to boot.

TH: Thanks David!