A Flock Of Tasty Sources On How To Start Learning High Scalability
This is a guest repost by Leandro Moreira.
When we become interested in scalability, we usually look for links, explanations, books, and references. This mini article collects the references I think might help you on this journey.
DISCLAIMER:
You don’t need N physical machines to build or test a cluster or a highly scalable system; these days you can use Vagrant and bring up N virtual machines easily.
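For example, a minimal Vagrantfile along these lines brings up three small VMs on a private network; the box name and IP range are just illustrative choices:

```ruby
# Bring up three identical VMs to play the role of a small web cluster.
Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "web#{i}" do |node|
      node.vm.box = "ubuntu/trusty64"            # any recent box works
      node.vm.hostname = "web#{i}"
      node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
    end
  end
end
```

Run `vagrant up` and you have web1, web2 and web3 to experiment with.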
THE REFERENCES:
Now that you know you can empower yourself with virtual servers, I challenge you not only to read these links but also to put them into practice.
- First of all, motivate yourself by watching this tutorial that combines nodejs + nginx with static caching, load balancing and testing, all in 7 minutes (a small nginx sketch for this kind of setup follows this list).
- Add these words and their meanings to your vocabulary: scalability, failover, single point of failure (SPOF), sharding, replication and load balancing; even if you don’t fully understand them yet.
- For a general overview of scalable systems and the reasons behind them, I strongly recommend reading Scalable Web Architecture and Distributed Systems; it is a great introduction.
- After you get the general idea, you can move on to understanding how to use a load balancer and what decisions and problems you will face. Then you can try to run haproxy and make sure it is not a single point of failure either (a haproxy sketch follows this list).
- Challenge yourself to serve 3 million requests per second; for this task you’ll need to generate 3 million requests, fine-tune your web server, and finally scale it and test it (a few kernel-tuning commands follow this list).
- Your application is now scalable, so the next step is to scale your databases. They are a very important part of your application; here I recommend reading at least how MongoDB scales with sharding and replication, and how Cassandra achieves almost linear scalability and makes it easy to add nodes to the cluster (a MongoDB sharding sketch follows this list).
- Now that your application and database are scalable and fault tolerant, it’s good to spare your servers unnecessary work and make responses to the user faster. Learn about caching: the best request is the one that never reaches the “real server” (a reverse-proxy caching sketch follows this list).
- If we deploy the whole infrastructure within a single data center, we have another SPOF: since all servers share the same location, a natural disaster or even a simple power outage can take everything down. The good news is that Cassandra supports multiple data centers out of the box, and you can see how Google faces this issue. If your user is in Brazil, don’t make their requests travel farther than they need to, and remember that even in the best case we still have latency (a multi-datacenter CQL sketch follows this list).
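As a companion to the nodejs + nginx tutorial above, here is a minimal sketch of nginx balancing requests across three Node.js instances while serving static files itself; the IP addresses, port 3000 and paths are assumptions, not something taken from the tutorial:

```nginx
# Round-robin load balancing over three application servers.
upstream node_app {
    server 192.168.50.11:3000;
    server 192.168.50.12:3000;
    server 192.168.50.13:3000;
}

server {
    listen 80;

    # Static assets are served straight from disk and cached by clients.
    location /static/ {
        root /var/www;
        expires 1d;
    }

    # Everything else goes to the upstream group.
    location / {
        proxy_pass http://node_app;
    }
}
```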
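The haproxy article goes much deeper, but a first haproxy.cfg can be as small as this sketch; the server names, addresses and the /health endpoint are assumptions:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    option httpchk GET /health       # mark a server down if the check fails
    server web1 192.168.50.11:3000 check
    server web2 192.168.50.12:3000 check
```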
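For the 3-million-requests exercise, “fine-tune your web server” usually starts at the operating system. These are a few commonly tweaked Linux knobs; the exact values are illustrative and depend on your hardware and kernel:

```bash
# Widen the ephemeral port range available for outgoing connections.
sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Allow a longer queue of connections waiting to be accept()ed.
sysctl -w net.core.somaxconn=65535
# Raise the open-file (socket) limit for the current shell.
ulimit -n 65535
```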
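On the MongoDB side, once a sharded cluster is running, sharding a collection boils down to a couple of mongo shell commands; the database, collection and shard key below are only an example:

```javascript
// Enable sharding for a database and shard one collection by a hashed key.
sh.enableSharding("mydb")
sh.shardCollection("mydb.users", { userId: "hashed" })
sh.status()   // inspect how chunks are spread across the shards
```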
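One way to make sure many requests never reach the “real server” is a cache in the reverse proxy. A rough nginx sketch, where the cache path, zone name and expiry times are assumptions:

```nginx
# Responses are stored by nginx and replayed without touching the upstream.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m max_size=1g;

server {
    listen 80;

    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;                        # keep OK responses for 10 minutes
        add_header X-Cache-Status $upstream_cache_status; # HIT/MISS, handy while testing
        proxy_pass http://node_app;
    }
}
```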
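In Cassandra, spreading a keyspace across data centers is a matter of choosing the replication strategy; the keyspace name, data center names and replica counts below are placeholders:

```sql
-- Keep 3 replicas in dc1 and 2 in dc2, so losing one data center is survivable.
CREATE KEYSPACE myapp
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'dc1': 3,
    'dc2': 2
  };
```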
Good questions to test your knowledge:
- Why scale? How do people usually do it?
- How do you deal with user sessions kept in RAM when there are N servers? How does the load balancer know which server is up? How does it decide which server to send a request to?
- Isn’t the load balancer another SPOF? How can we provide failover for it? (a keepalived sketch follows these questions)
- Isn’t my OS limited to 64K ports? Is Linux capable of handling this load out of the box?
- How does MongoDB solve failover and high scalability? How about Cassandra? How does Cassandra redistribute data when a new node joins the cluster?
- What is a cache lock? Which caching policies should I use?
- How can a single domain have multiple IP addresses (e.g. $ host www.google.com)? What is BGP? How can we use DNS or BGP to serve users geographically?
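For the load-balancer failover question, one common answer is a floating virtual IP managed by keepalived (VRRP) on two haproxy machines. A minimal sketch for the MASTER node, with the interface name, router id and virtual IP chosen arbitrarily:

```
# On the MASTER machine; the BACKUP uses "state BACKUP" and a lower priority.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        192.168.50.100    # the address clients actually connect to
    }
}
```

If the MASTER dies, the BACKUP claims the virtual IP and traffic keeps flowing, so the load balancer itself is no longer a SPOF.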
Bonus round: sometimes simple things are enough to achieve your goals, even something like an A/B test.
Please let me know about any mistakes; I’ll be happy to fix them.