Highly Reliable Systems: Removable Disk Backup & Recovery

Monthly Archives: March 2015

High-Rely DeltaSync File-Level Backup Technology Introduced

March 26th, 2015

NetSwap and RAIDFrame Firmware Update Adds Important New Speed-Saving Backup Synchronization Feature

COLUMBUS, OH – 26 MARCH 2015 – Highly Reliable Systems, the innovative American-made server backup storage experts, has announced at the ASCII Group Success Summit the immediate availability of NetSwap firmware release 2.13. This update includes the newly developed High-Rely DeltaSync Technology, which enhances synchronization functionality for NetSwap and RAIDFrame server backup appliances. DeltaSync is a file-level tracking feature that queues changes in memory, delivering significantly improved synchronization times between High-Rely backup devices and their targets.

“A key feature of our products is our Automatic Mirroring Technology (AMT), which allows our backup systems to be continuously connected while creating automatic backups onto removable media. Our new DeltaSync feature was designed in direct response to the needs of our customers using today’s larger hard drives,” explained Derry Bryson, Senior Software Engineer at Highly Reliable Systems. “High-capacity hard drives translate into longer rebuild times using block-by-block mirroring technology that copies the entire disk – even if only a small portion of the data changed. DeltaSync copies only the data that has changed, leading to rebuild times that are a fraction of what they were using mirroring.”

High-Rely DeltaSync Settings

DeltaSync offers a straightforward user interface, with easy-to-understand features and simple configuration options. Synchronization can occur in real time or on a set schedule. If synchronization is interrupted, DeltaSync rescans and restarts the process without concern for data loss.
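High-Rely has not published DeltaSync’s internals, but the file-level idea – copy only what changed instead of mirroring every block – can be sketched in a few lines of Python. The size/mtime comparison below is an illustrative assumption, not the actual implementation:

```python
import os
import shutil

def delta_sync(src: str, dst: str) -> int:
    """Copy only files whose size or modification time differ between
    src and dst. Returns the number of files copied."""
    copied = 0
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            # Skip files that already match on size and modification time.
            if os.path.exists(d):
                d_stat = os.stat(d)
                if (s_stat.st_size == d_stat.st_size
                        and int(s_stat.st_mtime) == int(d_stat.st_mtime)):
                    continue
            shutil.copy2(s, d)  # copy2 preserves mtime for the next pass
            copied += 1
    return copied
```

A block-level mirror would re-copy the entire disk regardless of how little changed; a pass like this touches only the changed files, which is why rebuild times shrink to a fraction on large, mostly static drives.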

Tom Hoops, Chief Technology Officer at Highly Reliable Systems, had this to say about the new technology: “Our DeltaSync feature greatly shortens the time to re-sync a degraded RAID 1 (mirror) volume when media has been swapped. The addition of this feature allows our Automatic Mirroring Technology, used for years, to provide a hardware-only backup method that keeps pace with today’s growing hard drive sizes.”

Also available in this free firmware update is the High-Rely NetSwap Locator software tool, which detects High-Rely backup appliances visible on the network and enables faster access for configuration tasks. Complete details of this update are documented in the change log:

Release 2.13 (February 26, 2015)

  • Added DeltaSync feature as an alternative to using Mirror Disks – a smart sync that
    copies only changed data
  • Default IP Mode is now DHCP+STATIC, a dual IP mode combining DHCP with a default static IP
    for backwards compatibility
  • Added “Action URLs and Programs” to facilitate safe removal of disks without logging in
    to the web UI
  • Added network detection feature to allow NetSwaps to be automatically detected by the
    High-Rely NetSwap Locator software (available on website)
  • Updated Realtek network driver to version r8168-8.039.00 to fix possible network disconnections
    with some chips/motherboards
  • Added Tehuti Networks 10GE tn40xx driver
  • Modified NAS share mode to minimize network disconnects when swapping disks
  • Updated Samba to fix code execution vulnerability CVE-2015-0240

Software Download

NetSwap and RAIDFrame firmware release 2.13 is available at: http://www.high-rely.com/downloads/high-rely-usb-installer-version-2-1-3/
Historic firmware change log notes are detailed at http://www.high-rely.com/faq/netswap-raidframe-series-firmware-change-log/

About Highly Reliable Systems, Inc.

Highly Reliable Systems is a talented group of engineers, technicians, and backup storage experts based in Reno, Nevada, USA, that has produced computer backup systems since 2003. High-Rely manufactures durable American-made server backup devices utilizing highly-removable drives with an auto backup system. Our purpose-built network attached storage devices feature Cloud replication with Reverse Cloud management, Scheduled Mirroring with air-gap security, and HIPAA-compliant AES-256 data encryption. High-Rely NetSwap and RAIDFrame computer backup systems are cost effective, and work where Cloud server backup is not practical.

Learn more about Highly Reliable Systems at http://www.high-rely.com

Media Contact:

Olin Coles
Technical Marketing Director
775-329-5139 *101


Posted in News

6 Reasons to Trust Your Backups to an Expert

March 25th, 2015

By Jay Waggoner, Director of Cloud Services Business Development, VeriStor Systems

Backups. They are arguably the least glamorous task for any IT team. And yet, they are also among the most critical. In fact, according to Gartner, 43% of companies immediately go out of business after a “major loss” of computer records. Even worse, only 6% of companies survive longer than two years after a significant data loss.

Despite these ominous statistics, many organizations still use a hodgepodge of backup solutions. But there is an easy answer: Backup as a Service. BaaS, managed by data protection experts, eliminates the burden of traditional backup solutions. It also offers the predictability of an operational expense, rather than the costly up-front capital expenses traditionally expected with a backup or disaster recovery solution. Typically available in varying degrees to support a range of recovery time objectives (RTOs), BaaS solutions can resolve many of the challenges faced by these data dinosaurs.

Here are six reasons you should offload your backup process to an expert.

  1. Optimized resources. With BaaS, there is no hardware to purchase, no software to license and no ongoing maintenance costs to consider.
  2. Secure and compliant. With security features such as built-in 256-bit encryption and retention in secure SAS 70/SSAE 16-certified datacenters, data is never at risk. Plus, hosted virtual infrastructure services support a full range of regulatory compliance requirements including SEC 17a-4, PCI DSS, HIPAA/HITECH, Sarbanes-Oxley, and Gramm-Leach-Bliley, so the impact of a new regulation or policy is never an issue.
  3. Offsite. To achieve disaster recovery preparedness, BaaS solutions are, by nature, offsite resources that safeguard data in the event of any disaster.
  4. Cost effective and predictable. BaaS solutions are usage-based services with a set monthly or quarterly billing that is both cost effective and predictable. This shifts the cost of data protection from its history as a capital expense to an operational expense that can deliver the financial benefit of no surprises.
  5. Flexible and scalable. BaaS data protection also delivers unparalleled flexibility. These services support an unlimited range of operating systems, applications and platforms, and don’t require specific hardware or software at the source.
  6. Fully Managed. Finally, it’s transparent. Many BaaS providers will offer fully managed options so that backup and disaster recovery is as simple as a “set it and forget it” process.

Protect company data with the peace of mind that comes from expert management, monitoring and support so that you can survive any data loss without a second thought.

For more information, contact Jay Waggoner at jwaggoner@veristor.com or via phone at 678.990.1593.

Jay Waggoner is the Director of Cloud Services Business Development at VeriStor Systems, an advanced IT solutions provider specializing in virtual infrastructure and enterprise private, public and hybrid cloud services and solutions.

Posted in Blog

14 Reasons To Do Reverse Cloud Backup

March 23rd, 2015

Software as a Service (SaaS) applications have become “cloud backup applications,” and are increasingly popular. According to an Aberdeen Group survey of enterprise customers, around half of firms are using the cloud for Customer Relationship Management (CRM) software and email, with Exchange making up 19%. The high adoption rate reflects software manufacturers’ focus on recurring monthly revenue models versus the older sales model, where software was purchased outright with yearly support or maintenance fees. By hosting their applications, vendors such as Microsoft CRM and Salesforce.com create higher profit margins and tighter linkage to the end user. This model will eventually diminish the role of trusted consultants and IT resellers, which account for approximately 30% of the traditional cost of IT.


Reverse Cloud Backup captures corporate data stored in the cloud.
Besides cutting out the middle man, cloud apps put cloud providers firmly in control of customer data. Yet despite substantial investments in redundancy by cloud vendors, the desire by customers to have a copy of their own data remains high. Vendors such as Amazon, Microsoft, and Google invest millions in redundant connections, data centers, and backups to convince clients that the utility computing model provides an all-in-one solution. Yet customers remain skittish about trusting data solely to their cloud vendor. Let’s look at some of the reasons that clients want local copies, despite “best in class” data center investments and data protection promises:


Users delete or overwrite each other’s data.

  • User error. Employees accidentally delete or modify cloud data all the time; user error is the number one cause of data loss. These mistakes create a vulnerability that cannot be addressed without regular versioning. Local Windows servers easily provide this protection using the Volume Shadow Copy Service (VSS), but cloud apps do not offer this functionality.
  • User 1 overwrites user 2’s data. Because cloud apps are “stateless,” they are more vulnerable to the “last write wins” problem than local databases with record locking. Internet applications do not have the granularity or control of multi-user access that is available with local servers. This can lead to users overwriting each other, and the problem gets worse as more users access the same data.
  • Cloud vendor can go down, or worse, go belly up.
    While not a significant issue with the bigger players (Amazon, Google, Microsoft), there are many examples of cloud providers running into financial problems. This can happen without warning and can leave customers’ data at risk.
  • Cloud vendors can have inadequate backup, or replication. Vertical market applications may be hosted at a single data center. The backup policies of these smaller players should be looked at closely by the customer.
  • Cloud vendors have no granular restore. “Granular restore” refers to the ability to restore a single record, or a piece of data, without having to roll back an entire database to a day or two earlier. Consider a company with 200 email users, where the CEO accidentally deletes one email. Recovering that single email requires restoring the entire email database to the night prior. The cost to the enterprise, and to all of the other users losing their most recent emails, is unacceptable.
  • Cloud vendor tech support is unavailable. This makes calling for help and restoring more trouble than it is worth. Have you tried to Google a phone number for Google? Big cloud companies create a business model that eliminates answering stupid questions.  Unfortunately, that also eliminates your important question when the cloud application is not working as expected.
  • Cloud vendors charge to do restores ($10,000 for restoring data from Salesforce.com, as reported by backupcentral.com). A vendor’s own data protection is intended to let it recover from a data center failure; it does not consider an employee mistake the emergency it may be for you. Asking for a restore can incur significant costs.
  • Cloud vendors have inadequate versioning, or limited ability to save ‘archive’ data long term. Many clients need 7-27 year retention.
    HIPAA, Sarbanes-Oxley, Gramm-Leach-Bliley, and other laws may dictate that you retain data for many years. While your cloud vendor may be happy to retain this data for a monthly fee per GB, it may make sense to move old data to cold storage to eliminate high monthly data fees.
  • Cloud vendors get raided by the Feds (servers taken), or served by a court order demanding data. On January 19, 2012, following a U.S. indictment accusing MegaUpload of harboring millions of copyrighted files, servers containing 150 million users’ data were seized. What if the actions of one user on a particular cloud vendor impacted the up-time of millions of other users? There are stories of MegaUpload customers trying desperately to get back data that was contained on servers seized by the government.
  • Hackers. If even one employee’s credentials get hacked, it could allow access to all cloud data. Say you have 30 employees, and one of them uses the password “password” on a vital service that enforces an eight-character minimum but does not check for dictionary words. Your company data is now owned by a 15-year-old hacker in Russia who used the simplest dictionary grinder code available. Congratulations.
  • Customer forgets to pay the monthly bill, the credit card number is changed, or there is paperwork confusion, resulting in the cloud vendor shutting the account down. The story is a common one: a credit card charge looks unfamiliar, no one remembers making it, and a single call to the credit card company voids the card number and issues a new one. An unfortunate side effect is that the monthly charge from an obscure cloud vendor suddenly goes unpaid. Warning emails get diverted to spam filters, and suddenly a vital service is shut down. Oops.
  • Forgotten password, or inexplicably locked out of the account after multiple retry attempts. We have all had it happen: one night we have a bit too much on our minds and forget the password to an application we use every day. After multiple failed log-in attempts, the entire account is locked out. Perhaps we are the administrator, so the entire cloud infrastructure becomes unavailable until we call tech support. The feeling of not having a backup at this moment is awful.
  • Cloud vendor refuses to provide a way for customers to extract their data and go elsewhere in the event of poor service. Let’s face it: one of the great things about hosting an application is the complete control it gives over the customer. In fact, it gives so much control that customer support can slip a bit. Without a local copy of the data, the client is at the mercy of the SaaS vendor.
  • Internet connectivity problems on either side. (These should be temporary, but…) Whether the internet connection is down on the cloud side or your side, the result is the same: a mission-critical application is suddenly unavailable. The situation is uncomfortable, but even more so when you realize you have no option but to wait. Knowing the data is available locally provides incredible peace of mind.


A whole host of problems can occur when trusting cloud vendors with valuable corporate data. While cloud computing is an inevitable part of corporate life, and will likely gain traction in years to come, IT professionals should put reverse cloud backup systems in place to provide protection against common problems. While proprietary data formats remain a significant obstacle to meaningful reverse cloud backup, the ability to retain local control is clearly very important.
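The remedy argued for above can be sketched generically. In the snippet below, `fetch_records` is a hypothetical stand-in for a real vendor’s export API (commercial tools handle the per-provider details); the core of a reverse cloud backup is simply a scheduled pull written to dated local files:

```python
import json
import os
import time

def reverse_cloud_backup(fetch_records, backup_dir: str) -> str:
    """Pull records from a cloud provider and store a timestamped local copy.

    `fetch_records` is any callable returning the provider's exported records
    (a hypothetical stand-in for a real vendor API call). Keeping each pull
    as a separate dated file provides the versioning most SaaS apps lack.
    """
    os.makedirs(backup_dir, exist_ok=True)
    snapshot = fetch_records()
    path = os.path.join(backup_dir,
                        time.strftime("backup-%Y%m%d-%H%M%S.json"))
    with open(path, "w") as f:
        json.dump(snapshot, f)
    return path
```

Run on a schedule, this addresses most of the list above: user error and overwrites (old versions survive locally), vendor outages and lockouts (a local copy is always readable), and restore fees (you restore from your own disk).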

Our Solution:

At Highly Reliable Systems, we address these obstacles to cloud backup with our new RNAS series. The RNAS is a rapid recovery platform that utilizes our High-Sync software. It can perform reverse cloud backups from twenty-seven different cloud providers, consolidating important information in one location. Click here to view our High-Sync functionality.


Posted in Blog, Spotlight

7 Reasons Cloud Computing Backup Makes Sense

March 20th, 2015

Like many of you, I’ve viewed the cloud computing backup hype with rolling eyes. Tonight I watched a presentation by an Amazon S3 executive that, while 45 minutes long, I felt was worth sharing. It convinced me that cloud computing backup is coming soon. It may be something you want to watch in quiet time off hours, but it’s pretty jaw-dropping:

The second half started to get into some database material I didn’t fully understand. As IT pros, I think we all tend to be skeptical about “the cloud,” but the graph below (from his presentation) should convince you something is going on here. Listening to what they’re doing, you start to grasp how many advantages they have:

  1. They don’t pay the Cisco premium for networking – they have designed their own switches and routers
  2. They don’t pay the HP premium for servers – they have designed their own servers (he points out that their servers are faster, and that they don’t pay the roughly 30% channel markup to purchase them – an interesting number)
  3. They don’t pay the EMC premium for storage – they have designed their own storage
  4. They bypass network software stack overhead by creating virtual NICs and stacks. (new)
  5. They don’t pay VMware or Microsoft for operating systems and failover clusters – they design their own failover systems
  6. They don’t pay for bandwidth, they bypass the AT&Ts and others by buying their own private fiber links
  7. They design their own power stations (last slides)

Summary: Moving forward, backing up both to and from the cloud will be an important part of data retention strategies. The NetSwap Plus NAS gives you the ability to do so with no monthly fees. Cloud computing backup will become critical to ensure that data in the cloud has a redundant local copy. We will be introducing more ways to do this.

Posted in Blog

Using RAID-5 Means the Sky is Falling

March 16th, 2015

Why disk URE rate does not guarantee rebuild failure.

Editorial article by Olin Coles for Highly Reliable Systems

Today’s appointment brought me out to a small but reliable business, where I’m finishing the hard drive upgrades for their cold storage backup system. It was an early morning drive into the city, with enough ice on the roads to contribute towards the more than 30,000 fatal accidents that occur each year [1]. The backup appliance I’m servicing has received 6TB desktop hard disks to replace an old set with a fraction of the capacity, so rebuilding the array has taken considerable time.

Their primary storage spans eight disks in a RAID-10 set, which gets archived to the server backup appliance for long-term retention. That backup appliance has a unique cartridge system that safely holds three disks in a redundant array. Later this evening when the project is finally finished, I’ll count myself lucky for surviving the treacherous roadways and lethal cold, but I won’t give a second thought to the risks I took by using RAID-5 on their cold storage devices.

You might not agree, but there are people out there who believe we should not be driving because the statistics indicate it’s clearly a dangerous activity. Nearly every driver in America will be involved in an auto accident at some point in their life [3], some of which will cause serious injury or death. Those of us not involved in an accident this year – more than 96% of all licensed drivers – will drive to our destinations unharmed. Sure, a statistical risk exists, but it’s not an absolute guarantee I’ll be killed on the drive home. The sky is not falling.

Every year, no matter where you live, it gets cold in the winter. This natural occurrence drops temperatures, which can lead to hypothermia and, for a very small portion of people, death. Winter temperatures in the Sierra Nevada can chill you to the bone, and over 1,500 people succumb to hypothermia nationwide annually [2]. When I walked from the parking lot to the client’s office this morning it was extremely cold, but just because there is a statistical risk of hypothermia does not mean I’m surely going to freeze to death. The sky is still not falling.

For some strange reason people seem to think that everything changes when we talk about hard disk drives, and that a statistical possibility becomes absolute certainty. Manufacturers conduct abbreviated testing on hard disk components [4], sampling a set number of drives to determine a relative mean time between failures (MTBF) or maximum unrecoverable read error (URE) rate. Nevertheless, there are people using fear tactics who claim redundant arrays of large capacity disks, such as the 6TB hard disks I used in those RAID-5 sets, are risky business. Some even go so far as to say RAID-5 will stop working in a particular year, reminiscent of pre-apocalypse Y2K.

In reality, most hard disks seldom see operating temperatures below the chill of a server room or beyond the warmth of rack space, and most disks will not commit a URE that crashes a RAID-5 rebuild. While it is agreed that better parity schemes exist, the exception is not the rule. My customer could have retained cold storage data on individual removable drives, with no redundancy at all. In fact, most organizations already use a single removable disk or cloud container for their nightly backup routine. My customer chose a special backup appliance that fits three disks into a single cartridge, further protecting archived data and proving RAID-5 still has business applications.

But if the opinion of an Internet personality vocal on storage technology is to be revered as gospel truth [5], then we must forego these large capacity disks because they’re all purported to carry an “almost certain” unrecoverable read error rate – something to the tune of one error per 10^14 bits. A guaranteed URE, you ask? Well, it’s not as certain as freezing to death or being killed in an accident, or even both of these statistics combined, but according to the often-cited but seldom verified test methodology, your hard drive will fail to read a sector once every 12TB of data. Such a failure could happen as a RAID-5 array is being rebuilt, striking a sector with a guaranteed URE on the parity disk at exactly 100,000,000,000,000 bits – unless it doesn’t.

Some writers build their reputation by making audacious claims that create controversy, done solely to propel traffic onto the website they write for. Common sense and real-world experience be damned; let the lack of evidence claiming otherwise and the use of complex math help prove their confusing point! After all, it’s not like anybody knows exactly how any particular manufacturer came up with a 10^14 error rate, which arbitrarily changes from time to time, or where people can find these clearly documented test procedures. You’re not supposed to question the numbers – you’re just supposed to believe what the manufacturer tells you, and know that regardless of capacity per disk or number of drives involved, after reading 12TB you will experience an unrecoverable read error. Oh, and that RAID-5 also stopped working in 2009 – except that it didn’t.

We all survived Y2K unscathed, and not surprisingly the end of RAID-5 did not actually happen as predicted. That same author later wrote a follow-up article [6], and instead of admitting defeat he doubled down and claimed RAID-5 was as doomed as ever because URE rates remained the same in the largest capacity drives. Never mind that there are countless real-world scenarios where RAID-5 continues to be used with great success well into 2015; that’s not important. People forget that the 10^14 bit URE rate is not an absolute; it’s a predictive failure specification, measured for a single disk based on an unknown test sample size. It’s also a marketing ploy, since nearly all consumer desktop hard drives typically receive the same failure rate while enterprise drives magically receive a 10^15 bit URE rate – an entire order of magnitude greater reliability, all without quantified explanation.
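The spec-sheet arithmetic is easy to check. Even granting the doomsayers their own naive model – independent bit errors at exactly the rated spec – a large rebuild is a statistical risk, not the guaranteed failure claimed:

```python
def rebuild_survival(ure_per_bit: float, terabytes_read: float) -> float:
    """Probability that a rebuild reading `terabytes_read` TB completes with
    no unrecoverable read error, assuming independent bit errors at exactly
    the spec-sheet rate (a worst-case bound, not a measured mean)."""
    bits_read = terabytes_read * 1e12 * 8  # TB -> bits
    return (1.0 - ure_per_bit) ** bits_read

# Desktop-class 10^-14 spec over a 12TB rebuild: roughly 38% of rebuilds
# still complete cleanly even under this pessimistic model.
desktop = rebuild_survival(1e-14, 12.0)

# Enterprise-class 10^-15 spec over the same 12TB: roughly 91% survive.
enterprise = rebuild_survival(1e-15, 12.0)
```

And since the rated URE is a bound derived from limited sampling rather than a measured mean, real drives routinely do better – which matches the countless successful large-array rebuilds administrators report.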

It’s possible that people who claim the sky will fall have failed to envision a future with solid state storage, or they’ve misinterpreted a suggested error rate as a predictable mechanical function. Both are likely, yet facts being facts, they still weren’t able to prevent an entire subculture from embracing the notion that RAID-5 does not work, or that all desktop hard disks will have read failure precisely at that 10^14 bit. All that is necessary to disprove this is the successful rebuilding of a RAID-5 set with 12TB or better capacity, as many primary and backup storage administrators have done countless times.

As we approach an era where Solid State Drive products reach multi-terabyte capacity with built-in error checking and data management technologies, the argument for unrecoverable errors and subsequent RAID rebuild failures becomes even less valid. It’s foolish to claim a proven technology will one day fail in the far-off future [7], when that future involves dramatic improvements with every product cycle that nobody can predict. If the sky really is falling, next time they’ll just have to shout louder and use proven math.

  1. NHTSA, 2012: http://www-nrd.nhtsa.dot.gov/Pubs/811856.pdf
  2. CDC, 2010: http://www.cdc.gov/mmwr/preview/mmwrhtml/mm6151a6.htm
  3. Karen Aho, 2011: http://www.carinsurance.com/Articles/How-many-accidents.aspx
  4. Adrian Kingsley-Hughes, 2007: http://www.zdnet.com/article/making-sense-of-mean-time-to-failure-mttf
  5. Robin Harris, 2007: http://www.zdnet.com/article/why-raid-5-stops-working-in-2009
  6. Robin Harris, 2013: http://www.zdnet.com/article/has-raid5-stopped-working
  7. Robin Harris, 2010: http://storagemojo.com/2010/02/27/does-raid-6-stops-working-in-2019
Posted in Blog