I thought it was funny when I saw a Tweet come through from Alex Payne (aka @al3x) this afternoon. Alex is something along the lines of the Big Daddy Architect at Twitter. The tweet stated that power was out at Twitter HQ and that they had failed over to abacuses.
That’s not really funny, actually.
In my time as a contractor for some random alphabet soup government agency, we regularly went through "hotsite" drills where a core team would disappear to Chicago, New Jersey, or some other offsite location in a different geographic region to run disaster recovery drills.
After 9/11, companies like JP Morgan that had decentralized their operations were able to recover from the World Trade Center attacks much more quickly than those that had not. Maybe those that had not were small businesses.
Which reminds me of the day the email died at the Wall Street Journal…
We’ve been through a fair bit ourselves at b5media. It was bad when, very early on and before funding, our service provider allowed a power surge to fry our servers. It was a “death to our enemies” moment when another power-related failure occurred two weeks later. Our question: why the heck is there even a hint of power failure in a data center?
Sadly, that question was never answered before we moved to LogicWorks after taking funding.
But this is not the point.
As a small business, what are you doing to mitigate catastrophic loss? Are you relying on simple backups? Are you shipping data offsite in case you need to do a data recovery? What happens if your data center is in NYC and another terrorist attack takes out your systems?
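If "simple backups" means a copy sitting on the same machine, a first step toward something better is a dated archive that gets shipped somewhere geographically separate. Here's a minimal sketch of the idea; the paths are hypothetical, and in practice the offsite destination would be a sync target or mount in another region rather than a local directory:

```python
import tarfile
import datetime
import pathlib

def offsite_backup(src_dir, offsite_dir):
    """Create a dated .tar.gz of src_dir and place it in offsite_dir.

    offsite_dir stands in for an offsite target (e.g. a mounted remote
    filesystem or a directory synced to another region); it's just a
    local path here for illustration.
    """
    stamp = datetime.date.today().isoformat()
    src = pathlib.Path(src_dir)
    dest = pathlib.Path(offsite_dir)
    dest.mkdir(parents=True, exist_ok=True)

    # Dated archives give you point-in-time recovery, not just "latest".
    archive = dest / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return archive
```

The point isn't the tooling (rsync, tape, or a managed service all work) but the habit: the copy has to end up somewhere a single building fire, flood, or power event can't reach.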
What do you do? Is it in your plans?
If all else fails, there are always abacuses.