The original article is hosted outside the Great Firewall: http://jlouisramblings.blogspot.com/2011/05/nodejs-vs-erlang-is-wrong-battle.html
Every once in a while, we see the same discussion popping up again: "Should I concentrate on Node.js or Erlang for my needs?" Let me be clear: I think it is the wrong discussion. The fact is, if you have chosen a language that supports a large number of concurrent tasks at the same time, you are probably going to be ready for the future.
Divination is hard, but I am not sure we are going to see a fast multi-core revolution, with thousands of processes running in parallel. Per Moore's law, if transistor counts had increased as we would have liked, we would have had 2 cores in 2006 (Intel's Core 2 Duo), 4 physical cores in 2008 (Intel's Core i7) and 8 physical cores in 2010. So where is my 8-core laptop today? Even Sandy Bridge is not up there. I have a hunch a couple of things are happening: the market has shifted toward low-power devices, which matter more to the consumer, notably through the ARM incursion; people can't utilize an 8-core machine anyway, since their programs are not written to take advantage of several cores; and finally, it is cheaper to buy infrastructure on demand at places like Heroku, Google, Amazon, Microsoft, Rackspace and so on.
But if parallelism is not coming, distribution and concurrency are! Modern programs have to live on several devices at the same time. You check your mail on your mobile phone, on your tablet, on your laptop and on your workstation, if you still use one. Likewise, a modern program will probably live on multiple devices at once, and to play that game, you need efficient distribution. Enter new languages with support for it.
And this is why the battle must stop. As in a game of Chess, the point is for the languages to be different. Haskell, Google Go, Erlang and Node.js each come with a given set of features for tackling the multi-core problem. But as in Chess, the pieces are different and built for different purposes. And here we are only looking at current implementations. In theory, the ML family (Standard ML, OCaml and F#) is really well suited for the concurrency and distribution revolution as well, even though the implementations lack robustness.
The enemy of the state is a large set of old ways of programming. It is not so much the language chosen as it is the ideology. You can't rely on shared information when the RAM is physically split between two devices (and there is no fast, reliable link in between). You can't rely on OOP in the Java sense to save you. And you usually can't get consistency across the distributed parts, due to unreliability.
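To make the ideological point concrete, here is a minimal sketch in Go (one of the languages discussed, standing in here for the rest) of the style they all push you toward: no shared mutable memory, only messages between isolated tasks. The counter example and its channel names are mine, purely for illustration:

```go
package main

import "fmt"

// counter owns its state outright: no other goroutine can touch
// `total`. The only way to interact with it is to pass a message.
func counter(inc <-chan int, read chan<- int) {
	total := 0
	for {
		select {
		case n := <-inc:
			total += n
		case read <- total:
			// a reader just picked up the current total
		}
	}
}

func main() {
	inc := make(chan int)
	read := make(chan int)
	go counter(inc, read)

	for i := 0; i < 10; i++ {
		inc <- 1 // "increment" messages instead of a shared variable
	}
	fmt.Println("total:", <-read) // prints: total: 10
}
```

An Erlang process with its private mailbox gives you the same shape natively; the point is that nothing here assumes the two sides share memory at all.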
Rather, see the different languages as stakes in the future of computing. They are different for a reason, and they gamble by being different. Personally, I like Erlang because it has what I see as a good view of the problem at hand, and a novel way of solving it. Yes, it can be made better in so many ways, but I think Erlang sits at the right point on the "Worse is Better" scale. It actually solves a lot of my real-world gripes.
Node.js plays different cards in the game. I really like the mix of continuations, non-blocking I/O, V8 and Javascript on the server side. Go plays it differently again, taking a much more low-level approach to the problem at hand. Go feels like an updated C with channels, and it has structural subtyping to boot. Haskell is just awesome because the language needed very little change to support concurrency. The semantics of Haskell are on a different level than those of most other languages, and with Haskell, it is the very fabric of computation you are bending to your desire.
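Since the post name-checks Go's channels, a short, self-contained sketch of that style may help: concurrency written as a pipeline of communicating sequential processes rather than callbacks or locks. The stage names (gen, square) are hypothetical, not from any library:

```go
package main

import "fmt"

// gen emits the given numbers on a fresh channel, then closes it.
func gen(nums ...int) <-chan int {
	out := make(chan int)
	go func() {
		for _, n := range nums {
			out <- n
		}
		close(out)
	}()
	return out
}

// square reads from in, squares each value and forwards it.
func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		for n := range in {
			out <- n * n
		}
		close(out)
	}()
	return out
}

func main() {
	// A two-stage pipeline: gen -> square -> main.
	for v := range square(gen(1, 2, 3, 4)) {
		fmt.Println(v) // 1, 4, 9, 16
	}
}
```

Each stage owns the channel it produces and closes it when done, so the downstream range loop terminates on its own; that is the "updated C with channels" feel in practice.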
In the Chess endgame, a piece like the knight becomes less effective: with fewer pieces left on the board, its cunning ability to jump over other pieces is not as powerful. Time will tell which piece was the knight. But if you did not bet on every approach, you would never know whether you had merely hit a local optimum.