Server Hiccup?

Server crashed 5/28.
 
Everything was migrated with no loss of data. It took roughly 10 hours from crash to uptime.
 
On 5/30, some idiot in the datacenter activated the old machine, which still had the old data and was somehow still online. This caused DNS to point to the old machine instead of the new one. So during this "switch" you would have seen a post gap back to 5/28, and you were still able to make posts... but they were going to the old machine.
 
Someone caught this and stopped it, and DNS switched back to the new machine. So now there is a post gap covering the time of the switch. What was "lost" is actually sitting on the old machine, but in this situation it's best not to mess with the database: we are up and running, and anything missing can be reposted. Even reactivating the old machine could have caused the same issue again, so its data was removed to prevent it from happening a second time.
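For anyone curious how you'd spot this kind of stale-DNS problem yourself, here's a minimal sketch that checks which IP a hostname currently resolves to and compares it against the machine you expect. The hostname and IP addresses below are hypothetical placeholders, not the forum's actual values.

```python
import socket

# Hypothetical values -- the real hostname and server IPs aren't given in this thread.
SITE_HOSTNAME = "forum.example.com"
NEW_SERVER_IP = "203.0.113.10"   # the new (correct) machine
OLD_SERVER_IP = "203.0.113.20"   # the old machine that came back online

def resolve(hostname: str) -> str:
    """Return the IP address the hostname currently resolves to."""
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    ip = resolve(SITE_HOSTNAME)
    if ip == NEW_SERVER_IP:
        print(f"{SITE_HOSTNAME} -> {ip}: pointing at the new machine (OK)")
    elif ip == OLD_SERVER_IP:
        print(f"{SITE_HOSTNAME} -> {ip}: pointing at the OLD machine -- posts would land on stale data!")
    else:
        print(f"{SITE_HOSTNAME} -> {ip}: unexpected address, check the DNS records")
```

Something like this, run from a few different networks, would have shown the "switch" the moment DNS started handing out the old machine's address.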
 
Thanks for your patience.
 
Devv said:
Sure seems like we lost about 17 hours of data :mope:
 
5-31-14 around 5 pm Central
 
SQL mishap?
 
Devv said:
Well, it was working just fine, and then there was about 30 seconds of downtime. My guess is the database server had an issue. I clicked and got "page cannot be displayed." Reloaded the page and it went back to last night.
 
Yuppers, that's exactly what happened. No restore; just some idiot restarting an old machine that was still online.
 
Hope that all makes sense!
 
Sadly, I've worked with IT people who do this exact sort of thing, then proceed to wonder what happened, all while refusing to admit guilt and/or fix it...
 