A lot has happened since my first article on the Stack Overflow Architecture. Contrary to the theme of that last article, which lavished attention on Stack Overflow's dedication to a scale-up strategy, Stack Overflow has both grown up and out in the last few years.
Stack Overflow has grown up by more than doubling in size to over 16 million users and nearly sextupling its page views to 95 million page views a month.
Stack Overflow has grown out by expanding into the Stack Exchange Network, which includes Stack Overflow, Server Fault, and Super User for a grand total of 43 different sites. That's a lot of fruitful multiplying going on.
What hasn't changed is Stack Overflow's openness about what they are doing. And that's what prompted this update. A recent series of posts talks a lot about how they've been handling their growth: Stack Exchange’s Architecture in Bullet Points, Stack Overflow’s New York Data Center, Designing For Scalability of Management and Fault Tolerance, Stack Overflow Search — Now 81% Less, Stack Overflow Network Configuration, Does StackOverflow use caching and if so, how?, Which tools and technologies build the Stack Exchange Network?.
Some of the more obvious differences across time are:
- Just More. More users, more page views, more datacenters, more sites, more developers, more operating systems, more databases, more machines. Just a lot more of more.
- Linux. Stack Overflow was known for their Windows stack, now they are using a lot more Linux machines for HAProxy, Redis, Bacula, Nagios, logs, and routers. All support functions seem to be handled by Linux, which has required the development of parallel release processes.
- Fault Tolerance. Stack Overflow is now being served by two different switches on two different internet connections, they've added redundant machines, and some functions have moved to a second datacenter.
- NoSQL. Redis is now used as a caching layer for the entire network. There wasn't a separate caching tier before, so this is a big change, as is using a NoSQL database on Linux.
Unfortunately, I couldn't find any coverage on some of the open questions I had last time, like how they were going to deal with multi-tenancy across so many different properties, but there's still plenty to learn from. Here's a roll-up from a few different sources:
The Stats
- 95 Million Page Views a Month
- 800 HTTP requests a second
- 180 DNS requests a second
- 55 Megabits per second
- 16 Million Users - Traffic to Stack Overflow grew 131% in 2010, to 16.6 million global monthly uniques.
Data Centers
- 1 Rack with Peak Internet in OR (Hosts our chat and Data Explorer)
- 2 Racks with Peer 1 in NY (Hosts the rest of the Stack Exchange Network)
Hardware
- 10 Dell R610 IIS web servers (3 dedicated to Stack Overflow):
- 1x Intel Xeon Processor E5640 @ 2.66 GHz Quad Core with 8 threads
- 16 GB RAM
- Windows Server 2008 R2
- 2 Dell R710 database servers:
- 2x Intel Xeon Processor X5680 @ 3.33 GHz
- 64 GB RAM
- 8 spindles
- SQL Server 2008 R2
- 2 Dell R610 HAProxy servers:
- 1x Intel Xeon Processor E5640 @ 2.66 GHz
- 4 GB RAM
- Ubuntu Server
- 2 Dell R610 Redis servers:
- 2x Intel Xeon Processor E5640 @ 2.66 GHz
- 16 GB RAM
- CentOS
- 1 Dell R610 Linux backup server running Bacula:
- 1x Intel Xeon Processor E5640 @ 2.66 GHz
- 32 GB RAM
- 1 Dell R610 Linux management server for Nagios and logs:
- 1x Intel Xeon Processor E5640 @ 2.66 GHz
- 32 GB RAM
- 2 Dell R610 VMWare ESXi domain controllers:
- 1x Intel Xeon Processor E5640 @ 2.66 GHz
- 16 GB RAM
- 2 Linux routers
- 5 Dell Power Connect switches
Dev Tools
- C#: Language
- Visual Studio 2010 Team Suite: IDE
- Microsoft ASP.NET (version 4.0): Framework
- ASP.NET MVC 3: Web Framework
- Razor: View Engine
- jQuery 1.4.2: Browser Framework
- LINQ to SQL, some raw SQL: Data Access Layer
- Mercurial and Kiln: Source Control
- Beyond Compare 3: Compare Tool
Software And Technologies Used
- Stack Overflow uses a WISC stack via BizSpark
- Windows Server 2008 R2 x64: Operating System
- SQL Server 2008 R2 running on Windows Server 2008 Enterprise Edition x64: Database
- Ubuntu Server
- CentOS
- IIS 7.0: Web Server
- HAProxy: for load balancing
- Redis: used as the distributed caching layer.
- CruiseControl.NET: for builds and automated deployment
- Lucene.NET: for search
- Bacula: for backups
- Nagios: (with n2rrd and drraw plugins) for monitoring
- Splunk: for logs
- SQL Monitor: from Red Gate - for SQL Server monitoring
- Bind: for DNS
- Rovio: a little robot (a real robot) allowing remote developers to visit the office “virtually.”
- Pingdom: an external monitor and alert service.
External Bits
Code that is not included as part of the development tools:
- reCAPTCHA
- DotNetOpenId
- WMD - Now developed as open source. See github network graph
- Prettify
- Google Analytics
- CruiseControl.NET
- HAProxy
- Cacti
- MarkdownSharp
- Flot
- Nginx
- Kiln
- CDN: none. All static content is served off sstatic.net, a fast, cookieless domain intended for static content delivered to the Stack Exchange family of websites.
Developers And System Administrators
- 14 Developers
- 2 System Administrators
Content
- License: Creative Commons Attribution-Share Alike 2.5 Generic
- Standards: OpenSearch, Atom
- Host: PEAK Internet
More Architecture And Lessons Learned
- HAProxy is used instead of Windows NLB because HAProxy is cheap, easy, free, and works great as a 512MB VM “device” on the network via Hyper-V. It also sits in front of the boxes, so it’s completely transparent to them, and it's easier to troubleshoot as a separate networking layer than something intermixed with all your Windows configuration.
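As an illustration of how simple such a setup can be, here is a minimal sketch of an HAProxy configuration for round-robin HTTP load balancing with health checks. This is not Stack Overflow's actual config; the server names and addresses are made up for illustration.

```
global
    maxconn 4096

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http-in
    bind *:80
    default_backend web-servers

backend web-servers
    balance roundrobin
    # Hypothetical web servers; "check" enables periodic health checks
    server web01 10.0.0.11:80 check
    server web02 10.0.0.12:80 check
    server web03 10.0.0.13:80 check
```

Because the proxy terminates and forwards plain HTTP, the IIS boxes behind it need no special configuration, which is the transparency the team is describing.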
- A CDN is not used because even “cheap” CDNs like Amazon's are very expensive relative to the bandwidth bundled into their existing host’s plan. The least they would pay is about $1K/month, based on Amazon’s CDN rates and their bandwidth usage.
- Backup is to disk for fast retrieval and to tape for historical archiving.
- Full Text Search in SQL Server is very badly integrated, buggy, and deeply incompetent, so they went with Lucene.
- Mostly interested in peak HTTP request figures as this is what they need to make sure they can handle.
- All properties now run on the same Stack Exchange platform. That means Stack Overflow, Super User, Server Fault, Meta, WebApps, and Meta Web Apps are all running on the same software.
- There are separate StackExchange sites because people have different sets of expertise that shouldn't cross over to different topic sites. You can be the greatest chef in the world, but that doesn't qualify you for fixing a server.
- They aggressively cache everything.
- All pages accessed by (and subsequently served to) anonymous users are cached via Output Caching.
- Each site has 3 distinct caches: local, site, global.
- local cache: can only be accessed from 1 server/site pair
- To limit network latency they use a local "L1" cache, basically HttpRuntime.Cache, of recently set/read values on a server. This would reduce the cache lookup overhead to 0 bytes on the network.
- Contains things like user sessions, and pending view count updates.
- This resides purely in memory, no network or DB access.
- site cache: can be accessed by any instance (on any server) of a single site
- Most cached values go here, things like hot question id lists and user acceptance rates are good examples
- This resides in Redis (in a distinct DB, purely for easier debugging)
- Redis is so fast that the slowest part of a cache lookup is the time spent reading and writing bytes to the network.
- Values are compressed before sending them to Redis. They have plenty of CPU and most of their data are strings so they get a great compression ratio.
- The CPU usage on their Redis machines is 0%.
- global cache: which is shared amongst all sites and servers
- Inboxes, API usage quotas, and a few other truly global things live here
- This resides in Redis (in DB 0, likewise for easier debugging)
- Most items in the cache expire after a timeout period (a few minutes usually) and are never explicitly removed. When a specific cache invalidation is required they use Redis messaging to publish removal notices to the "L1" caches.
- Joel Spolsky is not a Microsoft loyalist, he doesn't make the technical decisions for Stack Overflow, and he considers Microsoft licensing a rounding error. Consider yourself corrected, Hacker News commenter.
- For their IO system they selected a RAID 10 array of Intel X25 solid state drives. The RAID array eased any concerns about reliability, and the SSDs performed really well in comparison to FusionIO at a much cheaper price.
- The full boat cost for their Microsoft licenses would be approximately $242K. Since Stack Overflow is using BizSpark they are paying nowhere near the full sticker price, but that's the max they could pay.
- Intel NICs are replacing Broadcom NICs in their primary production servers. This solved problems they were having with connectivity loss, packet loss, and corrupted ARP tables.
Related Articles
- Hacker News Thread on this Post / Reddit Thread
- Stack Exchange’s Architecture in Bullet Points / HackerNews Thread
- Stack Overflow’s New York Data Center - hardware of the various machines?
- Designing For Scalability of Management and Fault Tolerance
- Stack Overflow Blog
- Stack Overflow Search — Now 81% Less Crappy - Lucene is now running on an underused cluster.
- State of the Stack 2010 (a message from your CEO)
- Stack Overflow Network Configuration
- Does StackOverflow use caching and if so, how?
- Meta StackOverflow
- How does StackOverflow handle cache invalidation?
- Which tools and technologies build the Stack Exchange Network?
- How does Stack Overflow handle spam?
- Our Storage Decision
- How are “Hot” Questions Selected?
- How are “related” questions selected? - the title, the question body, and the tags.
- Stack Overflow and DVCS - Stack Overflow selects Mercurial for source code control.
- Server Fault Chat Room
- C# Redis Client
- Broadcom, Die Mutha
Reader Comments (13)
Did they explain why they use Redis instead of Memcached for caching? I've heard of quite a few people using Redis for cache, just wondered what does Redis do that Memcached doesn't?
If I remember correctly Redis is not a distributed database, right? With memcached if I add new nodes the client will automatically redistribute the cache to take advantage of the additional capacity. Redis doesn't do that. So why Redis?
Really? People still do this? I know some organizations invested a tremendous amount in automated, robotic tape backup, but seriously, a site founded in 2008 is backing up to tape?
why would anybody use windows/asp over linux/anything else?
It really surprises me people still do such things..
why would anybody use windows/asp over linux/anything else?
It really surprises me people still do such things..
Because .NET is one of the best development frameworks out there. And linux for networks is cheap, so the combination makes sense.
@john
One of the advantages of using something like Redis or membase instead of memcached is that the cache can be persisted to disk, which can avoid the cache-storm issue if the cache goes offline and is then brought back up.
I guess what we don't know is what configuration the Redis boxes are in e.g. are they sharding, doing master/slave replication etc.
Andy
@Joe the logic is easy enough if you know your shit: Joel was on the MS Excel team, which wrote VBA and OLE automation.
@Joe - That's one of the least intelligent comments I've seen on this site.
James: backing up to tape means offline/archival backup. This is often worth the expense and hassle, especially for a large important dataset. After the issues a week or three ago, I can tell you that the Gmail guys are *very* glad they backed up to tape. If all your replicas are online, there's always the possibility that a single bug or slip of the fingers can wipe them simultaneously.
Technically, the IIS 7.0: Web Server is incorrect, under Windows Server 2008R2, it's actually IIS 7.5: Web Server.
@Sosh - Please take it easy and don't elevate yourself in support of Microsoft products. There is no technical reason to run MS stuff among the best and latest of open-source companies and their communities. In fact, to really drive this point, the StackOverflow team should be using more *paid/licensed* MS products everywhere to drive their point home. There is also the perspective of using the best combination of tools for the job, so points there. The answer is really simple: the StackOverflow team knows MS products, Visual Studio, C#, and .NET, therefore it was cheapest and fastest (for this team) to deliver the StackExchange family of sites.
Do they have any stated performance goals? How do they monitor site performance under load? These would seem to be important questions to ask of any site that gets profiled at HighScalability.com...
Yes, most people with serious data still use tape. Also, they are windows because the founder is an old Microsoft guy!
You can avoid software license AND network hardware costs by just using a better app server:
Server: Requests per second
- G-WAN Web server: 142,000
- Lighttpd Web server: 60,000
- Nginx Web server: 57,000
- Varnish Cache server: 28,000