I hope you don't think that somehow justifies the performance of the servers. Because... it doesn't.
I won't talk about Amazon/Google/whatever, because obviously the infrastructure is not the same. But I'll give you an example closer to this case.
A couple of weeks ago I had to supervise a series of tests for a medical board. They handled about 800 students, simultaneously, using a web app on a local server (meaning, over the LAN). They had the works: beefy servers (dual quad-core processors at around 3 GHz, 8 GB of RAM, etc.), gigabit wired LAN, the laptops used to take the tests were pretty decent... and the test broke down as soon as the students started.
After a few hours of debugging their configuration, app, etc., the solution turned out to be simple: a couple of misconfigured parameters in SQL Server, and a pathetic table that was meant to hold every single student's answers... without a single index. Yeah. Obviously after that, the tests went as smoothly as possible. With their hardware, they could've supported the entire student body (they did this in 10 different cities, around 800 students in each) even remotely, and the servers wouldn't have even blinked.
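To make the indexing point concrete, here's a minimal sketch of what that kind of table looks like with and without an index. It uses SQLite instead of SQL Server, and the table and column names are hypothetical, not the medical board's actual schema; the query-plan difference is the same idea either way.

```python
import sqlite3

# Hypothetical student-answers table, illustrating the missing-index problem.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE answers (student_id INTEGER, question_id INTEGER, answer TEXT)"
)
# 800 students x 100 questions = 80,000 rows, like one city's worth of tests.
cur.executemany(
    "INSERT INTO answers VALUES (?, ?, ?)",
    [(s, q, "A") for s in range(800) for q in range(100)],
)
conn.commit()

# Without an index, every lookup is a full table scan over all 80,000 rows.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM answers WHERE student_id = ?", (123,)
).fetchall()
print(plan_before)  # plan shows "SCAN answers"

# One index turns the scan into a B-tree seek.
cur.execute("CREATE INDEX idx_answers_student ON answers (student_id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM answers WHERE student_id = ?", (123,)
).fetchall()
print(plan_after)  # plan shows "SEARCH answers USING INDEX"
```

Multiply that full scan by 800 students hammering the server at once and you get exactly the kind of meltdown described above, on hardware that's otherwise loafing.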
My point is, the numbers shouldn't matter. 600 simultaneous users is nothing. Nothing.
Hell, 6000 simultaneous users is nothing as well. You have to get into the tens or hundreds of thousands before a properly configured server even notices.
FWIW, yeah, the server is running pretty well today. But it has been an issue, and it's worth talking about.