The true nature of recovery: 5 ways to mitigate downtime, data loss

December 15, 2012 in Medical Technology

If an army marches on its stomach, does a healthcare provider march on its data? That may not sound as catchy as its military counterpart, but it rings equally true. From integrated medical devices to billing to EHRs, data and computing horsepower are mainstays of the healthcare industry. If a hospital’s servers fail or its data is lost, the provider can be hamstrung until service is restored – if it can be restored.

That being the case, there are a number of things a healthcare organization should do both to mitigate downtime and to recover from data loss. Ralph Wynn of Melville, N.Y.-based data protection firm FalconStor talks about some ways hospitals can cope with crashed servers, downtime and data loss.

Know your systems. The modern hospital is awash in systems. “Patient records, charting systems, imaging systems,” says Wynn. All need to be analyzed and viewed as parts of a whole. Some parts will naturally be more crucial than others, and a successful recovery plan will acknowledge that. “The first thing you have to understand is what the rank and order [of systems] is, what is most critical to business,” and then have a plan for how to bring those pieces online, says Wynn. Taking a hard look at each component of the IT infrastructure and knowing what matters most is one of the first solid steps to take. “Without coming up with some type of business analysis plan, you’re not even prepared to know what would happen” should downtime occur, says Wynn.
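The "rank and order" Wynn describes can be captured in something as simple as a criticality-sorted inventory. The sketch below is a hypothetical illustration, not FalconStor's method: the system names and criticality scores are invented assumptions to show the idea of turning a business analysis into an explicit recovery order.

```python
# Hypothetical business-impact inventory: system names and criticality
# scores (1 = most critical) are illustrative, not from the article.
systems = [
    {"name": "billing",               "criticality": 4},
    {"name": "patient records (EHR)", "criticality": 1},
    {"name": "charting",              "criticality": 3},
    {"name": "imaging (PACS)",        "criticality": 2},
]

def recovery_order(systems):
    """Return system names sorted so the most critical come back first."""
    return [s["name"] for s in sorted(systems, key=lambda s: s["criticality"])]

print(recovery_order(systems))
# → ['patient records (EHR)', 'imaging (PACS)', 'charting', 'billing']
```

Even a spreadsheet serves the same purpose; the point is that the ranking exists on paper before an outage, not that it lives in code.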

Have a recovery timeline. How many applications does a system use? Which ones need to be back online soonest? Wynn says that after a thorough audit of systems is done, the next step is to “look at a timetable as to how readily available an application needs to be.” Knowing which systems are important is not enough, he says: “You need to sit down and write out the steps for getting data moved over.” A framework dictating what is going to be recovered and when – and how long the overall process will take – removes much of the guesswork from a recovery. An upshot is that an organization will know how much to pony up in advance, since a timeline addresses many of the cost factors. “Procuring the secondary site, bandwidth for moving data over daily, staffing,” Wynn lists as examples. “Costs become inordinate.”
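A written timetable like the one Wynn describes amounts to a runbook: each application paired with a recovery-time target. The sketch below is a hypothetical example; the application names and hour figures are assumptions for illustration, not numbers from the article.

```python
from datetime import timedelta

# Hypothetical recovery runbook: each entry is (application,
# recovery-time target). All names and targets are invented examples.
runbook = [
    ("EHR",     timedelta(hours=1)),
    ("imaging", timedelta(hours=4)),
    ("billing", timedelta(hours=24)),
]

def total_recovery_window(runbook):
    """Worst-case end-to-end window if applications are restored serially."""
    return sum((rto for _, rto in runbook), timedelta())

def due_within(runbook, window):
    """Applications that must be back online within the given window."""
    return [app for app, rto in runbook if rto <= window]

print(total_recovery_window(runbook))           # → 1 day, 5:00:00
print(due_within(runbook, timedelta(hours=4)))  # → ['EHR', 'imaging']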

Avoid points of failure. Even in a large healthcare organization, it’s not uncommon for there to be bottlenecks in a system’s recovery process. If the actual IT system is robust, make sure the people behind it are too, says Wynn. In many organizations, there is often only one person who knows the passwords to various routers, holds administrative privileges in certain systems or even knows where specific backup tapes are located, he adds. As much as possible, Wynn says, it is important to avoid a scenario where an organization’s uptime is dependent upon “one person who holds all the keys to the kingdom.” Even in a smaller care center with a single IT person, that person’s absence can severely hinder a recovery effort. Wynn outlines the kind of scenario this can lead to, where a system can’t recover because the only person with access is unreachable: “Oh, that’s Dan and he’s not here. We’re trying to call but all the lines are overloaded … we can’t even reach this person.”
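The "keys to the kingdom" problem can be caught before an outage with a simple access audit: list who can administer each system and flag anything with a single keyholder. The sketch below is a hypothetical illustration; the system and staff names are invented and not drawn from the article.

```python
# Hypothetical access audit: map each system to the people who hold
# its credentials, then flag single points of failure. All names are
# invented examples.
access = {
    "core router":    ["dan"],
    "backup library": ["dan"],
    "EHR admin":      ["dan", "priya"],
}

def single_points_of_failure(access):
    """Systems whose recovery depends on exactly one keyholder."""
    return sorted(name for name, holders in access.items()
                  if len(holders) == 1)

print(single_points_of_failure(access))  # → ['backup library', 'core router']
```

Anything the audit flags is a candidate for a second trained administrator or documented, securely stored credentials, so recovery never waits on reaching one unreachable person.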
