Highly Reliable Systems


Monthly Archives: February 2012


Using Removable Drives for Softwareless Backup

February 29th, 2012

Highly Reliable Systems’ 2 bay High-Rely, RAIDFrame, MPac, and FirstRAID removable drives give users unique backup possibilities. This paper discusses softwareless and automatically redundant backup strategies.

The 2 bay High-Rely, RAIDFrame, and FirstRAID all have Automatic Mirror Technology (AMT). AMT devices don’t require special RAID drivers or controllers on the host, because the board is integrated into the external unit. This custom RAID 1 board allows them to mimic a standalone USB3 or SATA hard drive. Unlike typical mirror systems, which are intended only to protect data against occasional drive failure, the 2 Bay AMTs also provide daily backup functionality: remirroring happens with each drive swap. The FirstRAID and 2 Bay RAIDFrame have the same functionality but add the redundancy of RAID5 arrays.

In a recent case study at a leading video producer, the client connected a FirstRAID AMT to their Windows 2003 server and shared the main RAID5 array to their users. All of the company’s daily work is done directly on the external RAID volume. They could have installed high-RPM drives and a RAID controller inside the server, but the cost and available space made that approach unworkable. Internal storage does provide fast write performance, but the client didn’t need multi-user simultaneous access to databases, and there are fewer than 15 users. The FirstRAID is no performance slouch and provided more storage than would fit in the rack-mount server. Plus, it doubles as both primary storage and backup in one unit. With AMT, once the data is synced, information changed or added during the workday is replicated to the removable drive. Essentially the system does the equivalent of continuous data protection (CDP) without the need for any software or configuration.

The video production company also purchased 5 extra High-Rely classic removable drives. Each evening they rotate in a new one and transport the old one off site. The remirroring process begins immediately at a data rate of approximately 300 to 400 Gigabytes/Hr. When the system has created a copy of the data on the removable drive (usually by the next day), an LED signals the user that the copies are in sync. No backup software is required in this type of installation. Because the 2 drive AMT units stay connected and online, with no notification or operational changes to the host when a drive is swapped, the host can treat the unit as it would an internal drive. This example is what we call “softwareless” backup. The 2 bay High-Rely classic AMT offers the same capability at an even lower price point using 2 individual SATA drives. Optional monitoring software can also be used to send email alerts on critical events (mirror complete, etc.), remotely check the activity/error log, note syncing progress, or change configuration.
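The remirror-time arithmetic is simple enough to sketch. A rough estimate, assuming the quoted 300 to 400 GB/hr rate and hypothetical drive sizes; actual time depends on how much data must be copied:

```python
# Back-of-the-envelope remirror times at the quoted 300-400 GB/hr
# rate. Drive sizes are hypothetical examples.
def remirror_hours(data_gb, rate_gb_per_hr):
    return data_gb / rate_gb_per_hr

for size_gb in (1000, 2000, 3000):
    fast = remirror_hours(size_gb, 400)
    slow = remirror_hours(size_gb, 300)
    print(f"{size_gb} GB: {fast:.1f} to {slow:.1f} hours")
```

Even a full 3TB drive finishes overnight at these rates, which is why the LED is usually green by the next morning.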

Care should be taken to train the operator not to swap drives before mirroring is finished, to avoid data loss or partial mirrors. While only a small portion of data is potentially at risk if the drive is removed while the system is actively writing, it should be avoided. It is best to swap the drives at a time when there should not be any activity, or after closing applications that would be writing.

Another case study is a web server running Ubuntu 10.1. Every night, the machine runs a script which simply mounts a 2 Bay High-Rely Classic AMT as a single drive, compresses the contents of the system, and stores them on what looks to it like a single external drive. In the morning, one of the Tandem’s (2 bay AMT) media is swapped out for off-site backup. While this case does use “software” for backup in the form of the script it runs, it would have been just as easy to configure the web server to store and work on its data in real time on the 2 bay device. The same process of swapping drives out in the morning would work.
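A minimal sketch of such a nightly job, with hypothetical device and path names (the actual customer script is not reproduced here). Because the AMT unit appears to the host as one ordinary drive, the script only has to mount it, write a date-stamped compressed archive, and unmount:

```python
# Sketch of a nightly compress-to-removable-drive job. DEVICE,
# MOUNT_POINT, and SOURCE are assumptions for illustration.
import datetime
import os
import subprocess
import tarfile

DEVICE = "/dev/sdb1"            # the 2 Bay AMT as the host sees it (assumed)
MOUNT_POINT = "/mnt/highrely"   # hypothetical mount point
SOURCE = "/var/www"             # hypothetical web content to protect

def archive_tree(source_dir, dest_dir):
    """Write a date-stamped .tar.gz of source_dir into dest_dir."""
    stamp = datetime.date.today().isoformat()
    archive = os.path.join(dest_dir, f"backup-{stamp}.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source_dir, arcname=os.path.basename(source_dir))
    return archive

def nightly_backup():
    subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)
    try:
        return archive_tree(SOURCE, MOUNT_POINT)
    finally:
        subprocess.run(["umount", MOUNT_POINT], check=True)
```

Run from cron each night, this gives the same result as the case described: a fresh archive on whichever drive is currently in the bay, with AMT silently mirroring it to the second drive.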

Recovering from a failure is as simple as replacing the master with a previous backup. While you can manually switch mirroring off on the AMT systems, take care when inserting an older archive drive: accidentally starting a remirror onto it would overwrite the archive. If older, archived drives are needed for the restore, another High-Rely device, such as the inexpensive High-Rely SATA/USB3 one bay, can be attached to simplify things.

Backup software certainly has its place, and AMT may not render it obsolete. But after analyzing your needs and usage, you may determine that our hardware mirroring lets you go without the software.

Posted in Blog

Removable Drives or Cloud?

February 29th, 2012

Removable drives or cloud for backup? We’d really recommend both! One of the promises of cloud-based applications is 99.xxxx% uptime. With the possible exception of salesforce.com and outsourced e-mail, no cloud application is more popular than backup. But the appeal of having mission-critical data automatically protected by sending it to a big data center needs to be weighed against the advantages of having it on-site. Every week, it seems, another high-profile cloud provider has an outage, some of them resulting in data loss. A recent outage (Feb 2012) was Microsoft Azure. Throughout 2011 we saw outages from big players such as Google, VMWare Foundry, Yahoo Mail, Microsoft, and many others. Going back to 2009, 800 lb backup gorilla Carbonite suffered such serious data loss that it sued its hardware providers.

Probably the biggest concern with pure cloud backup applications is the length of time it takes to recover when data sets are large (say over 100GB). Recovering this much data over even fast links can take days, if not weeks. That’s why most small and mid-size businesses supplement their cloud backup with a local appliance like our NetSwap (Linux-based NAS) or WBA (Windows-based NAS). Something like 90% of restores are done from local storage. In our view, only after storing the data locally to a removable drive product (like our 2 bay mini-tower) should you consider spending the extra money to off-site the data to the cloud. The cloud is great for small amounts of data, and it should be used for mission-critical data that must be moved automatically to a secure location. But let’s lay out a list of the downsides of cloud backup so that it can be evaluated fully:

  • Restore Speed.  As noted above, local restores can happen in hours, or even, if using virtualization technology, minutes.  Cloud services can’t keep up.
  • Reliability. Cloud providers aren’t perfect. Outages do occur. Equipment problems, floods, power failures, terrorist attacks, etc. can affect even the most well-prepared.
  • Legal. In early 2012, provider Megaupload was raided and shut down by authorities for hosting illegal content. What if your business data was there? Could the government access your data in the cloud? Some worry the U.S. Patriot Act could provide the means. Microsoft has admitted as much.
  • Finance. Cloud providers go out of business.  Many are venture funded and are running at a loss.  For example Carbonite, one of the most heavily advertised services, is still losing money (as of early 2012) after several years of operation.
  • Cost. The cost of storage (and additional bandwidth) can become prohibitive for data sets over 100GB.
  • Billing Disputes. Your data may be held hostage to your provider if a snafu in billing occurs.
  • Privacy. You don’t always know where your data exists in the world.  Once it’s in the cloud, are you sure copies aren’t made?
  • Security. You must trust that the encryption system used by your provider and software is secure.  Is there a brute force hacking approach that could expose your data to competitors in the future? Recently, multiple instances of hacks and data breaches have exposed the passwords of users of well-known websites and companies. These attacks also shed light on what a lousy job most of us do in using strong, unique passwords.
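The restore-speed point above can be quantified. A sketch of the math, using illustrative 2012-era link speeds (the specific numbers are assumptions, not measurements):

```python
# Time to pull a data set back down from the cloud vs. reading it
# from a local removable drive. Link speeds are illustrative.
def restore_hours(data_gb, link_mbps):
    gb_per_hr = link_mbps / 8 * 3600 / 1000   # Mbps -> GB per hour
    return data_gb / gb_per_hr

DATA_GB = 100  # the data-set size mentioned above
print(f"T1 (1.5 Mbps):   {restore_hours(DATA_GB, 1.5) / 24:.1f} days")
print(f"Cable (20 Mbps): {restore_hours(DATA_GB, 20):.1f} hours")
print(f"Local drive at ~300 GB/hr: {DATA_GB / 300:.1f} hours")
```

Even on a decent cable connection, 100GB is half a day of downloading; the same restore from a local removable drive is a fraction of that.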

We hope this post has provided some food for thought in comparing cloud and removable drive backup strategies.  We honestly think a combination approach is the best of all worlds.

Posted in Blog

Storing Raw Removable Disks: The 4 Drive Archival Case

February 29th, 2012

Is storing raw removable disks part of your archive or backup strategy?
High-Rely 4-Drive Archival Cases provide protection for raw removable disks.

The popularity of backing up to “raw” removable hard drives has led to the release of these stackable, 4 drive archival cases, which allow users to protect drives used in toaster-type trayless removable drive systems. Although Highly Reliable Systems does not sell raw drive products, we recognize the marketplace need of those who wish to archive hard drives over a period of time. The pink anti-static foam is cut to accommodate 4 standard-size 3.5″ drives. Drives of differing heights, up to full thickness, work because the foam holds the drives in place.

  • Capacity – Holds four (4) standard 3.5” hard drives. Any thickness of hard drive can be accommodated, held in place by a snug foam fit on all sides. (Hard drives not included)
  • Stackable and Self-Centering – Chevron indentations on top of each case match similar protrusions on the bottoms for neat, easy stacking.
  • Translucent – Allows drives and labels to be easily viewed through the plastic.
  • Flexible – Vertical or horizontal use. Stack on any bookshelf for easy access. Labels can be added to handles, sides, top, or bottom for easy identification.
  • Anti-Static – Each case includes pink anti-static foam that provides both physical and electrical protection for your drives and has finger cut-outs for easy drive removal.
  • Rugged – Splash- and dust-resistant case is made from polypropylene. It is resistant to many chemical solvents, bases, and acids. The case is also impact resistant and PVC free.
  • Convenient – Integrated carry handle & latches keep drives safe and easy to transport.
  • Ordering Info – 13.375”L x 12.625”W x 1.25”D. Weight 16.9 ounces with foam. Part# 15169. Made in USA. Singles – Model# HRCC4DARCHIVAL, $19.95 each (ask about case pricing). Case of 6 – Part# 6HRCC4DARCHIVAL. Case of 20 – Part# 20HRC4DARCHIVAL.


Posted in Blog

Make Sure Your Removable Drive is Securely Erased

February 28th, 2012

A recent report points out significant privacy issues with electronic storage devices, which could include removable drives, tablets, cameras, and other devices. The study, conducted by Laplink and O&O Software of Germany, tested 160 randomly purchased used hard drives and other storage devices. Using un-erase software, researchers were able to recover data from 85% of the devices, which had not been securely erased. They recovered 53,000 pictures and 4,500 documents.

Most users believe that deleting files from their PC or phone means they are gone forever. But the files may be retained in the recycle bin, or simply be “marked” as deleted by the operating system, which means off-the-shelf software can be used to recover them. Even defective hard drives may be repaired, or their data accessed, using special techniques.

If you need us to securely erase your drives (whether in or out of warranty), we will do so for a nominal fee. For the full text of the report, see this link.

Posted in Blog

Backup to SSD. Pros and Cons

February 26th, 2012

Lately we’ve been hearing a lot about Solid State Disk (SSD), AKA Flash or NAND chips. The flash technology that makes solid state disks possible is increasing both the speed and density of conventional storage. Every day a new article or press release comes out in which a vendor pushes the speed envelope. This is done either by using SSD as cache for regular hard drives (sometimes by incorporating SSD on the hard drive itself), or by creating a larger SSD array. So is SSD practical for backup? In my opinion: only for those who need vibration-free, high-performance storage. An example might be a military plane where video needs to be streamed to onboard storage. We sell both SSD and HDD for our backup removable disk trays. I don’t have a bias, although the vast majority of what we sell is hard drive based. If you want to use SSD, our sales reps are happy to do a price quote for you.

MPac metal trays will hold 2 solid state drives or rotating drives

SSD is arguably faster than rotating media. But recent articles point out the trade-offs: solid state drives are good for increasing random I/O as measured by IOPS, but as manufacturers push the size envelope, reliability drops off.

Posted in Blog

Proper Drive Swapping can increase hard drive life!

February 24th, 2012

Hard drives are most vulnerable to physical damage when they are powered on and spinning.  To help avoid experiencing a premature hard drive failure in your backup media, please use the following procedures when you’re swapping your media:

If using High Rely Classic media:  Turn key off, wait 15 seconds, then remove media.

If using RAIDFrame RAIDPac media:  Press release lever all the way in.  Wait 15 seconds, then press release lever again to remove the RAIDPac media.

If using MPac media:  Pull the release handle.  Wait 10 seconds, then remove the MPac media.

Following these procedures before actually removing the media will help ensure that the hard drives in the media have stopped spinning and that the heads are parked. Always handle your media with care.

Posted in Blog

Why is my Removable Disk Backup Slow?

February 23rd, 2012

Removable Disk Backup Slow?  Here are common reasons for slow removable disk backup in a bullet list.  Scroll down or click to page jump to detailed explanations and suggestions.

  1. You Forgot to turn off Your Anti-virus software during backup.
  2. Your drive(s) are getting full, starting to use the inner cylinders.
  3. You are using small hard drives.
  4. Your RAID array has a failed member.
  5. Someone or something is performing another backup or heavy disk I/O during your backup.
  6. The drive you’re backing up uses a slow RAID controller.
  7. Your drive(s) are fragmented.
  8. You are using slow backup or copy software.
  9. You are backing up active DFS connections, Active Directory, Exchange etc with slow agents.
  10. You are backing up over a slow LAN connection.
  11. You think you’re using USB3 but it’s really USB2 or slower.
  12. You’ve installed multiple backup programs or programs with file I/O shims.
  13. You have lots of small files and folders.
  14. You are getting soft errors on your drive(s) because they’re failing.
  15. You are doing Full backups every night.
  16. You have low RAM.
  17. Another process or virtual machine is taking CPU power.
  18. Compression or encryption is slowing you down.
  19. Vibration.
  20. You have a virus.
  21. You have Indexing turned on.
  22. Close Microsoft Perfmon.
  23. Turn off Remote Differential Compression for LAN Backup.
  24. Use a faster controller (or bus).
  1. You Forgot to turn off Your Anti-virus software during backup. 

    Often people are running real-time anti-virus or spyware scanners. These dramatically slow down performance because each read is run through the anti-virus I/O sub-system. Writes to the removable disk may also go through an AV scan. Try temporarily disabling any anti-virus or anti-spyware software and testing the speed that way. You may need to turn off “real-time” scanning on the server and just do nightly drive scans (make sure to schedule them at a time well away from the backup window). You may want to permanently disable anti-virus scanning of the removable disk.

  2. Your drive(s) are getting full, starting to use the inner cylinders.  

    Did you know that the outer tracks or cylinders on a hard drive fill up first? Think of a playground merry-go-round. If you stand at the edge while it rotates, things seem to be moving very quickly. If you move in toward the axis of spin (the spindle), the linear velocity slows down. Similarly, data written to the outside tracks writes up to 50% faster than on inner tracks, because bits pass under the heads at a higher linear velocity. This means that as your drive fills, it will slow down.
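The merry-go-round analogy can be put in numbers. A sketch using assumed, typical 3.5″ platter radii (not taken from any spec sheet): at a fixed RPM, linear velocity under the head scales directly with radius.

```python
from math import pi

# Linear velocity under the head at a given track radius.
# Radii below are assumed, typical 3.5" platter values.
def linear_velocity_mm_per_s(rpm, radius_mm):
    return 2 * pi * radius_mm * rpm / 60

outer = linear_velocity_mm_per_s(7200, 46)   # outermost track
inner = linear_velocity_mm_per_s(7200, 22)   # innermost track
print(f"outer/inner velocity ratio: {outer / inner:.2f}")
```

With these radii the outer track moves roughly twice as fast as the inner one, which is consistent with the "up to 50% slower when full" figure above.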

  3. You are using small hard drives. 

    Big drives with high density pack more bits into a smaller area. So given a smaller 250GB drive spinning at 7200 RPM and a bigger 3TB drive spinning at the same speed, data written to the big drive (especially to outer tracks) will lay down faster, because more bits pass under the head in a given rotation of the disk. Some gamers even “short stroke” big, cheap 2TB and 3TB hard drives by partitioning them down to only a few hundred gigs. Done correctly, this ensures the data goes to the very fast, very dense outer tracks. Of course, you lose a bunch of the disk space that you paid for. But this extreme speedup technique demonstrates the concept of using dense drives for more speed.

  4. Your RAID array has a failed member.  

    RAID 5 arrays, which are usually 3 or more disks with redundant data striped across them, will become horribly slow if one disk fails. Since the machine stays up, users are sometimes unaware that a drive has failed, except by noticing a slowdown. Note that the slowdown could be in either the server’s main array or, if you’re using one of our RAID 5 products, the destination backup drives. We recommend monitoring RAID health and having an email sent out if a member goes offline. Use HW RAID Manager where applicable to notify you of events via email.

  5. Someone or something is performing another backup or heavy disk I/O during your backup.  

    Hard drives don’t do well with multiple jobs causing the disk to “thrash” (seek back and forth all over the drive). If two backup jobs overlap, or if someone is running a report that requires heavy disk usage, your backup will slow to a crawl. Check Windows Task Manager to verify no other processes are generating significant CPU or file I/O during a backup.

  6. The drive you’re backing up uses a slow RAID controller.

    Price pressure has forced server vendors to create small business servers with anemic RAID controllers built in. Resellers who are aware of the difference will often upgrade the RAID controller for one with more processing power and RAM. In theory it should be faster to read from multiple drives, but we’ve seen several examples where RAID performance is slower than a standalone drive. Upgrades to RAID controllers can sometimes be done by adding RAM or processor power. Run our programs “Fakeback” and TRMark to determine if your source drives are running slowly.

  7. Your drive(s) are fragmented.  

    Most of us have heard about defragmenting our hard drives. When using removable disk backup, you have TWO potential fragmentation problems: The source drive and the destination drive.  Both of these can potentially fragment over time. If the Removable disk has been heavily used, and/or already contains existing data, you may achieve some advantage by defragging it as well as your source drive, but it may not be necessary if your backups delete or overwrite the drives (full backups). Microsoft includes defrag software with your server OS but you may want to invest in a program that does a better job, including putting everything to the outer tracks in contiguous order.  Be aware that running defrag on a hard drive that is doing incremental backups may “break” the incremental scheme causing the software to have to do a full backup after every defrag.   Try defragging (preferably with a 3rd party defragger) the source drive and read http://www.ntfs.com/ntfs_optimization.htm for more tips. For example, you might want to use larger cluster sizes on your High-Rely removable disks since backup tends to write data in large blocks.  You may also want to look at a product called Ultimate Defrag by Disktrix.  Do not defrag SSDs, as this can shorten their life.

  8. You are using slow backup or copy software.

    We like to run real-world speed tests with block-level imaging products like StorageCraft’s ShadowProtect. File backup is usually slower than block-level imaging. Any option to verify the backup during performance testing doubles the backup time and should NOT be counted in the benchmark, although you may need to consider it for the entire backup window. It IS a good idea to perform some sort of verification via CRC checking, data comparison, or test restore. Try our benchmarking tools Fakeback and TRMark, or something like CrystalDiskMark, to get a sense of what speeds you should be seeing. You should also be aware of whether the backup program is using buffered or unbuffered I/O. Unbuffered I/O (or a raw file copy) is preferred when attempting to copy a large file from one location to another. To test, try using Xcopy /J or eseutil to copy file(s) using unbuffered I/O (ESEutil is a program that comes with Exchange) as described here. As an experiment, you might also try Microsoft’s free RichCopy, which supports multiple threads. We are not sure if it supports unbuffered I/O.
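The large-block part of the unbuffered-copy advice can be sketched in a few lines. True unbuffered I/O (what Xcopy /J or O_DIRECT does) is platform-specific and not shown here; this sketch only demonstrates moving data in big chunks instead of many small reads, which is the part that matters most for large backup files. The 8 MB block size is a tunable assumption.

```python
# Chunked copy: read and write in large blocks rather than many
# small operations. block_size is an assumed tuning value.
def copy_large(src, dst, block_size=8 * 1024 * 1024):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(block_size)
            if not chunk:
                break
            fout.write(chunk)
```

Benchmarking this against a naive small-buffer loop on a multi-gigabyte file is an easy way to see the effect on your own hardware.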

  9. You are backing up active DFS connections, Active Directory, Exchange etc with slow agents. 

    Some backup “agents” such as Exchange, open file, or SQL agents, may backup much more slowly than native file backup.

  10. You are backing up over a slow LAN connection

    Direct Attached Storage (DAS) is generally faster than NAS (Network Attached Storage). We sell both, and each has it’s place. For maximum speed use DAS over eSATA or USB3. Backup speeds taking data off remote servers over the 100MB Ethernet network will be slower than Gigabit Ethernet. Standard 1500 byte frames are slower than jumbo frames (9000 plus bytes).  Some speed increases might be achieved using multiple ethernet ports.  This is known as Link aggregation, bonding, or NIC teaming.  In order for this to work your ethernet switch needs to support IEEE 802.1ax Link Aggregation Control Protocol (LACP) or you’d need to do dual  hardwired connections from the NAS to the server (dedicated ethernet links for backup).  Most experts say doubling your ethernet ports doesn’t double your speed, and that performance increases are modest. LACP requires the Ethernet NIC drivers to support it, although it is rumored that in Windows 8 server link bonding will be done at a higher level in the operating system, allowing more dissimilar ethernet cards to be bonded.

  11. You think you’re using USB3 but it’s really USB2 or slower.

    If the host subsystem is inadvertently using slower USB (1.0, 1.2, or 2.0) versus USB 3.0, it will make a huge difference in performance. Expect 25-35 MB/s (roughly 100 Gigabytes per hour) on USB 2.0. Expect 200-400 Gigabytes per hour on eSATA or USB 3.0 if the other problems in this list don’t slow you down. Use TRMark or BurnIn Test Pro to get quick numbers.
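These figures are easy to sanity-check by converting sustained MB/s into Gigabytes per hour:

```python
# Convert a sustained transfer rate in MB/s to GB per hour.
def gb_per_hour(mb_per_s):
    return mb_per_s * 3600 / 1000

print(gb_per_hour(30))    # ~30 MB/s USB 2.0 real-world -> 108 GB/hr
print(gb_per_hour(100))   # ~100 MB/s eSATA/USB3 single drive -> 360 GB/hr
```

So a backup crawling along at around 100 GB/hr on a "USB3" port is a strong hint the link actually negotiated USB 2.0.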

  12. You’ve installed multiple backup programs or programs with file I/O shims.

    The same comment we made in #1 about anti-virus scanners is true for *any* piece of software that hooks into the I/O. Any program that installs itself into the operating system read/write stack (I/O shims) can dramatically affect read and write performance. This can include other backup programs. Be very careful about installing 2 or more backup programs on a server, because often both are watching all file I/O to determine what changed, to help with incremental nightly backups. This can lead to conflicts. Multiple backup programs can break Windows open-file (VSS snapshot) systems. For example, Acronis and StorageCraft have problems when installed on the same system.

  13. You have lots of small files and folders.

    File count plays heavily into performance. Larger files and fewer deep directories will be much faster to back up than lots of smaller files in a complicated directory structure. It’s hard to do much about this, but if there are lots of small files, that could be the problem. Imaging software that reads at the block level may be better for lots of small files (Acronis True Image, Symantec System Recovery, StorageCraft ShadowProtect, Appasure, DoubleTake, newer Windows native backup).

  14. You are getting soft errors on your drive(s) because they’re failing.

    Errors on the source or destination hard drives will slow performance. Modern hard drives will attempt to correct “soft errors” by retrying. Drives that are slowly failing may perform poorly prior to a complete failure.

  15. You are doing Full backups every night.

    Incremental backups move less data and can be more efficient. Using software that will do a “synthetic backup,” or roll the incremental backups into a single full, can improve comfort levels with an incremental backup scheme. You can also exclude things like page files, hibernation files, temp directories, the Recycle Bin, and other things non-critical to your backup.
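The difference in data moved is dramatic. A sketch with a hypothetical 500 GB server and a 10 GB/night change rate (both numbers are illustrations, not measurements):

```python
# Data moved per week: nightly fulls vs. one full plus incrementals.
# The 500 GB full size and 10 GB nightly change rate are assumed.
def weekly_moved_gb(full_gb, nightly_change_gb, nightly_fulls):
    if nightly_fulls:
        return 7 * full_gb
    return full_gb + 6 * nightly_change_gb  # one full + six incrementals

print(weekly_moved_gb(500, 10, True))    # full every night
print(weekly_moved_gb(500, 10, False))   # weekly full + incrementals
```

Under these assumptions the incremental scheme moves 560 GB a week instead of 3500 GB, which is why backup windows shrink so much.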

  16. You have low RAM

    Systems with very low amounts of available RAM will use the pagefile excessively, dramatically slowing performance.

  17. Another process or virtual machine is taking CPU power.

    If the CPU is heavily loaded doing any other type of task, that will obviously affect backup performance. For example, if a server were being heavily used for database access, running spyware, or doing computations during the time of the backup, then fewer CPU cycles would be available to the backup process. Do a CTRL-ALT-DEL and, on the Processes tab, sort processes by “CPU” to see if there is a process taking an inordinate amount of CPU during the backup. Try not to install programs or services that stay in memory on a server. Software bloat kills performance.

  18. Compression or encryption is slowing you down. 

    If compression or encryption is turned on, either on the source NTFS disk, on the destination disk, or in real time in the backup software, it can slow down backups. Oddly, compression sometimes speeds things up: with a fast processor that compresses quickly before the data is transferred, less data has to be moved to the destination drive. Don’t be surprised if you see either faster or slower backup speed with compression turned on or off, as results will depend on the environment (hardware vs. software compression, and the amount of CPU horsepower you have) and the nature of the files (how compressible they are). Some already-compressed files actually expand slightly if compressed again, resulting in more space taken on the backup media and slower speeds.
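That last point is easy to demonstrate with a quick sketch: text-like data shrinks dramatically, while random data (a stand-in for already-compressed files) gains nothing and picks up a little container overhead instead.

```python
import os
import zlib

text_like = b"backup " * 20000     # highly compressible
random_like = os.urandom(140000)   # effectively incompressible

# Compressible data collapses to a tiny fraction of its size;
# random data comes out at least as big as it went in.
print(len(zlib.compress(text_like)), "vs", len(text_like))
print(len(zlib.compress(random_like)), "vs", len(random_like))
```

This is why compressing a folder of JPEGs or ZIP archives during backup mostly burns CPU for nothing.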

  19. Vibration

    If your disks are vibrating, you may be having hundreds or thousands of disk errors that disappear upon a re-read.  This will manifest as extremely slow performance.  The vibration could be other drives in close proximity, fans, or other mechanical devices.  We’ve seen demonstrations where a benchmark is running on the hard drive, someone slaps the side of the rack, and performance drops for several seconds.

  20. You Have a Virus.

    It almost goes without saying that many spyware, rootkit, or virus infections will cause performance problems. Spyware can kill performance!

  21. You have Indexing turned on.

     Indexing Service (originally called Index server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows 7, it has been replaced by Windows Search. If your server is indexing either source or destination drives, turn it off.  Indexing Service is still included with Windows Server 2008 but is not installed or running by default.

  22. Close Perfmon.  

    Some users have reported that running Perfmon has caused slow disk I/O.  

  23. Turn off Remote Differential Compression for LAN Backup.  

    On Vista or later versions of Windows, some users have reported slower file transfers when RDC is turned on. Although Microsoft recommends against turning it off, you might experiment if you’re having speed problems. To turn it off, go to Control Panel / Programs and Features / Turn Windows features on or off and uncheck “Remote Differential Compression”.

  24. Use a Faster Controller (or Bus)

    Whether you’re using USB3 or eSATA, there is a speed difference between bus types (PCI-X, PCI, PCI-e), widths, and brands. For example, you may see speed differences between Renesas-based USB3 chipsets and TI chipsets, and a controller with a 1x connector may be slower than a 4x (4-lane PCI-e) one. Additionally, the version of the bus makes a difference: PCI-e 1.0, for example, is slower than PCI-e 2.0. The motherboard bus connectors must support the full speed of the controller to take advantage of later-generation speed increases. Only when they are well matched and the driver is optimized will the highest speeds be attained.
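The per-lane numbers behind that comparison, as a quick reference (approximate usable rates for gen 1 and 2 after 8b/10b encoding overhead):

```python
# Approximate usable PCI Express bandwidth per lane, in MB/s.
PCIE_MB_PER_S_PER_LANE = {"1.0": 250, "2.0": 500}

def pcie_bandwidth_mb_per_s(generation, lanes):
    return PCIE_MB_PER_S_PER_LANE[generation] * lanes

print(pcie_bandwidth_mb_per_s("1.0", 1))  # 250 MB/s: marginal for USB3
print(pcie_bandwidth_mb_per_s("2.0", 4))  # 2000 MB/s: plenty of headroom
```

A USB3 card in a single PCI-e 1.0 lane tops out around 250 MB/s of bus bandwidth, so the port can never deliver full USB3 throughput no matter how fast the drive is.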

This list is not comprehensive but covers most of what we know about backup performance issues.  Here is a similar document from Symantec if you want to get another written perspective on common speed issues.


Posted in Blog

The Reliability of using Removable Drives and Mirroring.

February 22nd, 2012

Customer Question: We plan to use the 2 Bay Premier as a target for a continuous backup (either Appasure or ShadowProtect) as described in your recent blog post on mirroring removable drives. Basically, the backup job would run continuously (every 15 minutes or maybe every hour, creating incremental updates). You suggested swapping the bottom drive each day, and that the automatic mirroring (AMT) would start a new mirror each night. My tech is concerned about the strain of breaking the mirror and recreating the mirror each day. I thought he had a good point. What kinds of issues does that pose for the integrity of the unit and the drives?

Automatic Mirroring Technology

Answer: The 2-bay is specifically designed to accommodate the “broken mirror” concept for softwareless backup. What we mean by softwareless is that the backup software and host machine are unaware that an additional copy of the data is being made. Suppose we had 3 total swap drives (4 hard drives total) and left one drive in at all times as the “primary”. There are several issues we could discuss here:

1) Will the connectors on the back of the removable tray accommodate hundreds or even thousands of plug/unplug cycles? The answer to that concern is yes. We’ve been asked why we don’t expose the bare SATA drive and use that as the rear “docking plug” to save costs. Those SATA connectors are spec’d at only 50 insertions by the standards committee. If you look at the type of connector we use, you’ll see it is a pin-type connector with high insertion ratings. While somewhat non-traditional, it is this connector that provides reliable daily connection.

2) The primary drive (the one left in place) gets high read activity every day.  We assume you will swap media every day causing a full remirror.  This requires reading every data block on the drive so that it can be written (mirrored) to the secondary drive.  It could be argued that this extra activity creates wear and tear on the hard drive during the daily full backup.  Will the primary drive fail more rapidly for this reason?  Well, we haven’t seen a failure correlation like this.  Our head engineer suggests that if this is a concern there is no inherent reason why you couldn’t rotate the swaps – rotate the right hand (or bottom) drive one day and after the mirror is sync’d rotate the left hand (or top).  The Automatic Mirroring Technology (AMT) doesn’t care and you could balance total read activity this way.  If it makes you feel better, by all means do it. But you will be “fixing” a problem that we’ve never seen happen.

3) The secondary drives (the ones swapped each day) will have power removed and applied each day. This power-cycle load is spread out over the 3 swap drives (in this example). The question is: are hard drives like light bulbs? Do they often fail when power is applied? Well, we’ve never seen a drive go “poof” when it was turned on, at least not one that we attribute to a power inrush. The raw drive has its own hot-plug ability (hot plug was added to the SATA II spec) and our trays do have protection circuitry. We also mitigate this issue as best we can by requiring the key to be turned before the High-Rely classic media is removed. This additional step provides even more protection.

4) Are there any anomalies (bugs) in the mirroring circuit that could cause corruption after many swaps?  We aren’t aware of any.  It’s been in use this way since we first introduced it back in 2008, and we have not seen issues with the re-mirroring process.  It does bring up an interesting point, though.  We think it would be a good best practice to periodically run CHKDSK /F on your backup media (as well as your source drive).  This could be invoked as a scheduled job or done manually.  Scheduling CHKDSK is a bit scary in that it could actually create data loss or other problems, so if it were scheduled it would be important to review the logs to see whether problems were found and fixed (Event Viewer, Windows Logs, Application).  We HAVE seen successful backups (images created fine by ShadowProtect and other programs) in which the source drive was later found to have corruption.  That corruption was merrily imaged onto the xxxx.spf file located on the High-Rely Classic removable disk media.  When the image was successfully restored, the host machine still wouldn’t boot, because the original source boot partition was corrupted – and had been for over 30 days, so all the removable drives were equally useless.  In other words, for those 30 days the server was up and running, but it was sick, and had anyone tried to reboot it, it wouldn’t have come up.
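As a sketch of that best practice, the check could be wrapped in a small script and run as a scheduled job. The drive letter is a placeholder for your backup volume, and the script is an illustrative assumption, not a High-Rely utility; confirm CHKDSK’s behavior on your Windows version before automating repairs:

```python
# Sketch: assemble and (on Windows only) run CHKDSK against backup media.
# "E:" below is a hypothetical drive letter for the removable volume.
import subprocess
import sys

def build_chkdsk_command(drive: str, fix: bool = False) -> list:
    """Assemble the CHKDSK command line.  /F requests repairs and may
    need exclusive access to the volume, so it is off by default."""
    cmd = ["chkdsk", drive]
    if fix:
        cmd.append("/F")
    return cmd

if __name__ == "__main__" and sys.platform == "win32":
    # Run read-only first; review the console output and the Application
    # log in Event Viewer before rerunning with fix=True.
    result = subprocess.run(build_chkdsk_command("E:"),
                            capture_output=True, text=True)
    print(result.stdout)
```

Running read-only first and reviewing the logs keeps a human in the loop, which matches the caution above about letting a scheduled /F repair run unattended.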

Clearly, it is reasonable to check for corruption on any drive periodically, whether or not AMT technology is in use.  I hope this helps.  We think Automatic Mirror Technology is an awesome way to duplicate your backup!

Posted in Blog

What Type of Hard Drive Provides the Most Reliable and Fastest Backup?

February 17th, 2012 by

Due to higher density and reliability, SATA drives are taking on more types of workloads, including those required for reliable, fast backup.  Some people remember the large hard drive study Google published in 2007, essentially showing no difference in failure rates between SATA and more expensive SAS drives. Robin Harris did a good summary with links to the original article here.  As for speed, RPM is one key metric for hard drives, as is access time (the average time to position the heads above a given track and reach the needed sector). Many storage reviewers provide benchmarks of random or transactional (IOPS) performance of spinning drives, which is dominated by access time.  Access time is, in turn, dominated by seek time, which is mainly the time it takes the actuator to move the read heads.  However, in backup applications using imaging (which includes products from companies like StorageCraft, Symantec, VMware, Microsoft, Computer Associates, Paragon, Double-Take, AppAssure, vRanger, etc.), data is read in relatively large blocks and written sequentially, full tracks at a time.  If the drive is empty, data is written to the outer tracks first; the head is then stepped in by one track to write the next track.  So access time is not a huge factor in backup.  Backup performance is dominated by the RPM of the drive and the bits per cylinder (very dense drives like 2TB, 3TB, and 4TB models have more bits per cylinder).  Furthermore, as the drive fills up, speed can decrease by as much as 50% as the heads move from the outermost to the innermost cylinders. Since customers usually test backup speeds when drives are empty, results may be skewed, and backups may take longer than calculated as the drives fill.  As long as the interface speed is fast enough to keep up (and 3 Gb/s eSATA and 4.8 Gb/s USB 3.0 interfaces arguably are), the interface speed has no measurable effect on sustained performance.
The fastest drives today sustain a bit under 200 MB/s (theoretically about 720 gigabytes of backup per hour), which is less than the bandwidth of a single 3 Gb/s SATA port.
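To make that arithmetic concrete, here is a small sketch converting sustained drive throughput into a backup-rate estimate, including the inner-cylinder slowdown described above (the 50% figure is this article’s rough estimate, not a measured constant):

```python
# Convert a drive's sustained throughput (MB/s) into gigabytes per hour,
# and estimate the rate near the inner cylinders where throughput drops.

def gb_per_hour(mb_per_sec: float) -> float:
    """Sustained MB/s -> GB/hr, using decimal units (1 GB = 1000 MB)."""
    return mb_per_sec * 3600 / 1000

outer = gb_per_hour(200)        # ~720 GB/hr on empty outer tracks
inner = gb_per_hour(200 * 0.5)  # ~360 GB/hr if speed halves when full
```

This is why a backup window measured on an empty drive can roughly double as the volume fills.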

What about using “multiple spindles” or RAID arrays to increase performance?  When measuring IOPS, many benchmarks show that multiple drives help – in fact, more drives generally equates to more speed.  Our testing shows that with modern hardware RAID, our RAIDFrame and FirstRAID products can back up slightly faster than standalone drives.  But the difference isn’t night and day because, again, the speed advantage multiple drives show on small random writes largely disappears for large sequential writes, where operations are less dependent on seek times.  Still, using our RAID 5 backup products is a good idea for redundancy of your important data, and you may gain 10 to 20% in backup and restore speeds with RAID. As for reliability, we believe slower-spinning SATA drives run cooler and last longer than 10,000 and 15,000 RPM drives – even so-called “enterprise” SAS drives.  Large-scale studies of drive failures by Google and others have shown that good old SATA drives have about the same failure rate as expensive enterprise drives.  And because density and sequential writes make up for RPM, they can be about as fast for backup applications too.


Symantec Sues Veeam and Acronis over Drive Imaging

February 17th, 2012 by

Drive imaging has become a way of life for those who want fast backup to our removable drive products.  We like the idea of allowing our customers to choose their favorite software.  Those choices may become more limited or costly: in February 2012, Symantec launched two lawsuits that could affect the industry.

Symantec thinks it owns the techniques for successfully doing drive imaging.  These include restoring an image to other hardware (foreign hardware retargeting), disaster recovery using virtual machines, storing disk images, backup catalogs, and using a certain type of GUI screen to perform a restore.  Two separate lawsuits were filed: one against Veeam and an almost identical one against Acronis.  The suits clearly have implications for other vendors who do similar things, including StorageCraft, VMware, Microsoft, Computer Associates, Paragon, Double-Take, AppAssure, vRanger, and many others.  This is an unfortunate turn of events that could cast a pall on the backup industry.  It will be interesting to see whether the courts find that the patents have merit.  If nothing else, Symantec may force license fees on other vendors, increasing costs to the end user.  With backup licenses already running in the $600–$1,200 range per server, backup software can cost more than the Windows OS license!  Given that the OS is over 50 million lines of code, many of us hoped backup software vendors would be bringing prices down.  This lawsuit may continue to prop up those prices for a while.

A short story with links to the .pdf complaints can be found here: http://www.storagenewsletter.com/news/business/symantec-patent-lawsuits-acronis-veeam

Symantec's Lawsuit Count 1: Backup Virtual Machines

Symantec Count 2: Computer Restoration Methods

Symantec Count 3: Method of Providing Replication

Symantec Count 4: Selective File and Folder Snapshot Creation
