Highly Reliable Systems: Removable Disk Backup & Recovery


What Does Enterprise Class Backup and Data Storage Really Look Like?

By Darren McBride

Recently, Joseph Walker, a blogger at SMBNation, asked what enterprise-class backup and storage really look like.  My response was to discuss how small business servers keep their hard drives inside the server chassis, whereas enterprises usually use one or more external SANs.  Joseph published the entire response almost verbatim here.  My portion of the text is reprinted below as a good summary of what's happening in storage:

Due to the increasing use of virtualized servers in the enterprise, storage has migrated over the last 10 years from inside the server to external SANs.  While small businesses still largely build their servers with mirrored boot drives and RAID5 or RAID6 SAS arrays inside the server chassis, enterprise customers like the flexibility of sharing storage among multiple physical and virtual servers.  By centralizing storage, the enterprise gains several benefits.

Virtualization features like VMware's vMotion allow enterprise customers to move running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity.  In addition, centralized storage minimizes wasted drive space compared to keeping storage in the individual physical servers.  In an environment with 100 servers and storage physically installed inside each one, every server has to be configured with enough empty space to allow for growth.  At an overhead of two to three times the current data size per server, it's not hard to see how an enterprise wastes a tremendous amount of the purchased hard drive space by putting it in the server.  By contrast, with a centralized SAN and shared storage, disks can be virtualized just like machines are: space can be allocated to each server based on need without wasting it.
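To put rough numbers on that, here's a minimal back-of-the-envelope sketch.  The server count, data size, and overhead factors are illustrative assumptions, not figures from any particular deployment:

    # Per-server vs. pooled provisioning, using illustrative numbers:
    # 100 servers with 500 GB of data each, 2.5x growth headroom when
    # each server carries its own disks, 1.25x headroom for a shared pool.
    servers = 100
    data_per_server_gb = 500
    per_server_overhead = 2.5   # capacity bought per server, as a multiple of its data
    pool_overhead = 1.25        # capacity bought for one shared SAN pool

    actual_data = servers * data_per_server_gb
    local_purchased = actual_data * per_server_overhead
    pooled_purchased = actual_data * pool_overhead

    print(f"Actual data:           {actual_data / 1000:,.1f} TB")
    print(f"Purchased (in-server): {local_purchased / 1000:,.1f} TB")
    print(f"Purchased (SAN pool):  {pooled_purchased / 1000:,.1f} TB")
    print(f"Capacity saved:        {(local_purchased - pooled_purchased) / 1000:,.1f} TB")

Even with these conservative assumptions, the pooled approach buys 62.5 TB less disk to hold the same 50 TB of data.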

Several issues arise when storage is moved from inside the server to a SAN.  The first is performance.  Anyone who has ever replaced a 7200 RPM drive with a 10,000 RPM drive or an SSD in a server knows that I/O speed largely dictates the end user experience when running multi-user database applications.  Users report a night-and-day difference after such an upgrade when doing I/O-intensive work like running large reports.  Can a SAN keep up with locally attached SAS storage?  In many cases it can't.  SANs do have the advantage of being more highly engineered and having more spindles (more hard drives), which can make up for some of the performance gap.  Faster file systems and interfaces like Fibre Channel have also been a traditional part of the performance answer.
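The "more spindles" effect is easy to sketch.  The per-drive IOPS figures below are common rules of thumb for small random I/O, not vendor specifications, and the math ignores RAID write penalties and controller caching:

    # Ideal aggregate random-read IOPS from spindle count alone.
    # Per-drive figures are rough rules of thumb, not vendor specs.
    per_drive_iops = {
        "7200 RPM SATA": 80,
        "10K RPM SAS": 140,
        "15K RPM SAS": 180,
    }

    def array_iops(drive_type: str, spindles: int) -> int:
        """Aggregate IOPS, ignoring RAID write penalties and caching."""
        return per_drive_iops[drive_type] * spindles

    print("Local 4 x 15K SAS :", array_iops("15K RPM SAS", 4), "IOPS")   # 720
    print("SAN  24 x 10K SAS :", array_iops("10K RPM SAS", 24), "IOPS")  # 3360

A SAN shelf with two dozen slower drives can out-run a handful of fast local drives on aggregate throughput, though the network latency a single server sees on the way to that storage is another matter.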

Enterprise SANs must ensure redundancy and reliability.  SANs are typically the domain of specialty manufacturers like EMC, Hitachi, HP, and now Dell.  Many of these vendors use redundant power supplies, RAID arrays, and redundant "controllers" (think of a controller as the motherboard inside the SAN).  Much thought goes into making sure the SAN is highly available.  In large enterprises, it's not unusual to see multiple SANs spread over several offices.  Software in the SAN then allows them to replicate or "snapshot" to one another for backup and redundancy.
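The payoff of duplicating components is straightforward to quantify.  In this minimal sketch, the 99% per-component availability is an illustrative assumption, and failures are treated as independent:

    # Availability of one component vs. a redundant pair, assuming
    # independent failures. The 99% figure is an illustrative assumption.
    single = 0.99                       # e.g., one power supply or controller
    redundant = 1 - (1 - single) ** 2   # at least one of two survives

    print(f"Single component: {single:.4%} available")     # 99.0000%
    print(f"Redundant pair:   {redundant:.4%} available")  # 99.9900%

Doubling the component turns roughly 3.7 days of expected downtime per year into under an hour, which is why redundant supplies and controllers are standard in this class of hardware.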

The line between Network Attached Storage (NAS) and Storage Area Network (SAN) devices has blurred over the last two to three years.  NAS has traditionally shared storage at the file level, whereas SANs have shared theirs at the block level.  Protocols like iSCSI are showing up in many NAS boxes, allowing them to be used more like traditional SANs, usually over Gigabit Ethernet, which may not perform as well as Fibre Channel or other traditional SAN hardware interfaces.

One key differentiator between lower-end iSCSI implementations and true SANs is the ability for more than one server to share the same hard drive space or volume.  This functionality is important for failing a virtual machine over to another physical machine while retaining connectivity to the shared storage.  At Highly Reliable Systems, our iSCSI implementations require the user to dedicate an entire physical drive rather than sub-dividing it and allocating it to different servers.  This restriction exists only because we're focused on drive removability and creating transportable backup media: a drive that is meant to be pulled and taken offsite shouldn't hold space shared between servers.

Darren McBride

About Darren McBride

CEO, Highly Reliable Systems, Inc.

