Google is sorry to report it's lost some cloud customers' data. Lightning struck four times near its St. Ghislain, Belgium data center, causing some Google Compute Engine (GCE) storage to go bye-bye -- the battery backup couldn't keep things alive long enough to commit writes to disk.
Cue the usual cast of characters who'd never trust cloud computing. Ever.
The Persistent Disk service in GCE's europe-west1-b zone is what's affected, but Google claims the amount of data lost is minuscule.
In IT Blogwatch, bloggers test their backups.
Your humble blogwatcher curated these bloggy bits for your entertainment.
Aunty speaks peace unto nation (yes, even to Belgium):
Google says data has been wiped...at one of its data centres in Belgium. ... Some people have permanently lost access to their files.
While four successive strikes might sound highly unlikely, lightning does not need to repeatedly strike a building in exactly the same spot to cause additional damage.
The...GCE service allows Google's clients to store data and run virtual computers in the cloud. It's not known [what] data was lost. ... Although the chances of data being wiped by lightning strikes are incredibly low, users do have the option of being able to back things up locally. MORE
Yevgeniy Sverdlik clarifies where the strikes happened:
Lightning...struck the local utility grid and not the actual Google data center. ... In the initial version of the incident report...Google said lightning had struck electrical systems of one of its three data centers in St. Ghislain, a small town about 50 miles southwest of Brussels. [The] incident report has been updated accordingly.
Google engineers estimated that about five percent of persistent disks in the zone saw at least one...failure. [And that some lost] data permanently: 0.000001 percent. ... Google reminded users that it has...multiple isolated zones within each region precisely so that users...can fail over from one zone to another. MORE
Mike Wheatley uses a colorful metaphor -- "Lightning strike wipes out data":
Google has admitted that some of its users have lost data...between August 13 and August 17.
Despite not having any control over the weather, Google did assume full responsibility for the outage and data loss. ... Google’s infrastructure team is in the process of replacing its storage systems with newer hardware that’s less susceptible. ... [But] users can set up resilient infrastructure that’s capable of failing over...in the case of any problems like [this]. MORE
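Google's advice here boils down to: snapshot your persistent disks and keep a failover path into another zone. A minimal sketch of that with the `gcloud` CLI (disk and snapshot names are hypothetical, and this assumes an already-authenticated project):

```shell
# Snapshot a persistent disk in the affected zone. Snapshots are stored
# redundantly across the region, so they survive a single-zone failure.
# "my-disk" and "my-disk-snap" are placeholder names.
gcloud compute disks snapshot my-disk \
    --zone=europe-west1-b \
    --snapshot-names=my-disk-snap

# Restore the snapshot as a new disk in a *different* zone of the same
# region, ready to attach to a failover instance there.
gcloud compute disks create my-disk-restored \
    --source-snapshot=my-disk-snap \
    --zone=europe-west1-d
```

None of which helps, of course, if the writes never made it to the disk in the first place -- hence the "back things up locally" advice above.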
Loud bang after flash? Dave Neal before Zod: [You're fired -Ed.]
There were a number of contributory factors to the data loss, and Google is looking to prevent future problems.
Google has taken responsibility for the incident - we didn't know it had a relationship with the weather - but perhaps this relates to the 'Don't be evil' thing? MORE
Meanwhile, Simon Sharwood offers a second saw:
0.000001%...isn't a bad result, even if plenty of customers were inconvenienced.
[But] should lightning strike twice, you should remember that a datacentre in the hand can't beat two in the bush. MORE
You have been reading IT Blogwatch by Richi Jennings, who curates the best bloggy bits, finest forums, and weirdest websites… so you don't have to. Catch the key commentary from around the Web every morning. Hatemail may be directed to @RiCHi or email@example.com. Opinions expressed may not represent those of Computerworld. Ask your doctor before reading. Your mileage may vary. E&OE.