Fast Facts on Node.js (bostinnovation.com)
25 points by sliggity | 2010-12-28 | 32 comments




I find a number of the fast facts to be horribly misleading due to their brevity...

"Node.js runs everything in ‘parallel’, meaning that all of your tasks can be run at the same time. For example, let’s say you have a webpage that has 100 lines of code with 10 database calls. With a web server like Apache, these 10 database calls must be made in succession. While running node.js, you can run all 10 of these database calls at once, which makes the webpage load faster."

errr... maybe...

"Current web servers must open many connections to interact with their operating system (OS), while node.js only needs to open 1 connection. Each connection costs the system memory, so using fewer connections is more efficient and faster for the entire system."

There are many lightweight polling web servers like nginx, lighttpd, etc.


Hey Spooneybarger, I wrote that article. I by no means intended to mislead; I just wanted to keep it simple for a more mainstream audience to digest.

In programming and technical environments there are thousands of factors that come into play, so rather than include all of them, I kept it brief.

Is there any different way you'd phrase the above?


I'd get rid of the 'node.js runs everything in parallel' BS unless you can back it up with facts and numbers. It reads like complete rubbish atm.

Also I have no idea what you're talking about with 'connections to the OS'. Sounds like more FUD to me.

  The code that makes up node.js is carried out by the V8
  javascript engine.  This engine (read: the component that
  processes javascript so that it can be understood by your
  operating system) is the same one Google uses in Chrome,
  which makes the processing of node.js very, very fast.
It should also be noted here that it makes it very, very fast "when compared to other JS engines"; however, it is still pretty slow when compared to a ton of other languages/runtimes. It's just misleading marketing speak to omit that.

Looks like I was given some bad info, updating now

With regards to the 'connections to the OS', I was attempting to paraphrase this chunk from the official site:

"Node's goal is to provide an easy way to build scalable network programs. In the "hello world" web server example above, many client connections can be handled concurrently. Node tells the operating system (through epoll, kqueue, /dev/poll, or select) that it should be notified when a new connection is made, and then it goes to sleep. If someone new connects, then it executes the callback. Each connection is only a small heap allocation.

This is in contrast to today's more common concurrency model where OS threads are employed. Thread-based networking is relatively inefficient and very difficult to use. See: this and this."
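
For context, the "hello world" server that passage refers to is essentially this (a minimal sketch; exact port and output details may differ between Node versions):

  var http = require('http');

  // One process, one thread: each incoming request fires this callback,
  // and because nothing in it blocks, many connections can be in flight
  // at once while the process sleeps between events.
  http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
  }).listen(8124);

  console.log('Server running at http://127.0.0.1:8124/');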


Pretty much any web server can do that, though. Even Apache can easily be set up to do that, so it's not really unique to node.js.

Definitely a good design decision, but as I say, not unique to node.js.


I think it is perfectly valid to heavily simplify things for a less-technical audience. The scientific models we teach kids in schools are simplified to the point of sometimes appearing outright wrong; however, we don't go accusing science teachers of lying to or misleading our children.

As long as there are no deliberate deceptions, and links are provided to resources that give a fuller picture, there is no problem. If all the resources surrounding a technology are written at the lowest possible level of understanding, that technology will never be able to gain traction outside the world of hackers.

To the OP: maybe add a disclaimer to the effect of "Some of these points are simplifications; please read elsewhere if you're interested in finding out more."


There's simplifying, and then there's misleading.

The only thing node.js does that other frameworks don't is use JavaScript as its language.


It also has a standard library that includes absolutely no blocking code, and contains no language structures that can block. I was under the impression that this was its key feature, and also largely unique?
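
For example (a minimal sketch; the file path is just a placeholder), the core fs API hands you results via callbacks instead of waiting:

  var fs = require('fs');

  // readFile returns immediately; the callback runs later when the
  // read completes, so the log line below prints first.
  fs.readFile('/etc/hostname', 'utf8', function (err, contents) {
    if (err) throw err;
    console.log('file contents:', contents);
  });

  console.log('readFile has already returned');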

I don't think any stretch of the imagination would say it's unique. Good design choice, but it exists in many places.

Not even close to unique. As I've said elsewhere, I haven't really got a problem with Node.js, but the hype is toxic. It has some nifty features but the standard hype-pack contains several lies about how unique Node.js is, or how performant it is. For the most part Erlang dominates it on all straight-up feature points, for instance, but it isn't Javascript. It very, very isn't Javascript, for both better and worse. But there are several languages that actually fail to block much better than Node.js.

In fact, Node.js isn't what I personally would call "non-blocking", as it only has the weakest form of it. Node.js has cooperative multitasking; writing an infinite loop still takes everything down. Erlang, Haskell, and a growing number of other languages have preemptive multitasking, while still having sane concurrency compared to lock-based threading; infinite loops are still bad but will only lock up that local-running-unit (thread, process, whatever). That's non-blocking; within reason (modulo bugs, deliberate hostile attempts, etc), nothing you do in one thread will block another.
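
To make the cooperative-multitasking point concrete, here's a minimal sketch (hypothetical routes, nothing from the article) of one CPU-bound handler freezing every other request:

  var http = require('http');

  http.createServer(function (req, res) {
    if (req.url === '/spin') {
      // Tight CPU-bound loop: while it runs, the single event loop is
      // frozen and no other request gets serviced, not even '/'.
      var x = 0;
      for (var i = 0; i < 1e9; i++) { x += i; }
      res.end('done: ' + x + '\n');
    } else {
      res.end('hello\n');  // normally instant, but queued behind /spin
    }
  }).listen(8000);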

If you need to shuffle various things around, Node.js is OK, but if you actually need to compute something that may take significant fractions of a second, you're going to have some sort of problem. Maybe that problem is bad latency on your other requests; maybe your problem is a sudden and bizarre need to chop up an otherwise-straightforward computation into many little pieces so other things get a chance to run, which is easier said than done (chop too fine and you lose performance; chop too coarse and you lose latency on other responses; getting this right is very tricky); maybe you're going to have to manage worker OS processes. But it's going to poke out somewhere.
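
A hedged sketch of that chopping-up approach (the chunk size is an arbitrary tuning knob; setTimeout(fn, 0) is used here purely to yield back to the event loop):

  // Sum a big array without monopolizing the event loop: process a
  // fixed-size chunk, then yield so pending I/O callbacks can run.
  function sumInChunks(items, chunkSize, done) {
    var total = 0, i = 0;
    function step() {
      var end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) { total += items[i]; }
      if (i < items.length) {
        setTimeout(step, 0);   // give other callbacks a turn
      } else {
        done(total);
      }
    }
    step();
  }

  var data = [];
  for (var n = 0; n < 1000000; n++) { data.push(n); }

  // Too small a chunk wastes time on scheduling; too large a chunk
  // hurts latency for concurrent requests, which is exactly the trade-off above.
  sumInChunks(data, 10000, function (total) {
    console.log('sum =', total);
  });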

Again, I don't really have a problem with Node.js, but the hype needs to be taken out back and shot.


It'd be great to kill the hype; unfortunately it seems to have always been there, it just moves around... Java -> Ruby on Rails -> node.js, etc.

If that is the path the hype has taken it certainly seems to leave a trail of thriving, highly productive industries and communities in its wake.

I disagree. The companies chasing the hype are bubble companies.

The companies providing value, creating profits are using old boring tried and tested unfashionable technology.


Thank you for setting me straight.

great call, Jonnie. will add that disclaimer

"It should also be noted here, that it makes it very, very fast "when compared to other js engines", however, it makes it pretty slow when compared to a ton of other languages/runtimes. It's just misleading marketing speak to omit that."

Actually, I'd disagree on both counts. V8 isn't faster than contemporary JavaScript engines anymore (although it's a tad faster than Safari's Nitro engine, perhaps). However, V8 is much faster than most of the other languages used for web development: Ruby, Python, PHP, and Perl. The only exception is Java.


> The only exception is Java.

Most languages can compile down to JVM bytecode. I'd say the JVM is a pretty quick beast.


Exactly. It's not about being faster than other JavaScript engines; it's about compiling down to machine code and being way faster than all the other scripting languages.

Unless you use Java, C, C++, etc., you would do better with Node.js.


Still bytecode-interpreted, not native, but Node.js wasn't better: http://blog.mysyncpad.com/post/2073441622/node-js-vs-erlang-... (at least CPU- and memory-wise)

I personally have no problem with the language he uses here. For a non-technical audience, this is just fine. And it's technically correct and not misleading at all.

Just take out the "runs everything in parallel" and replace it with "is able to handle more requests and events while it's waiting for slower operations to complete, such as a database query, whereas in the traditional model, each request is handled by a dedicated process, which must wait for each database query to complete before issuing the next one"
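
Something like this sketch illustrates the difference (fakeQuery is a stand-in for any callback-style database driver; the 100 ms delay is arbitrary):

  // Stand-in for an async database driver: each "query" takes ~100 ms.
  function fakeQuery(sql, callback) {
    setTimeout(function () { callback(null, 'rows for: ' + sql); }, 100);
  }

  var queries = ['SELECT 1', 'SELECT 2', 'SELECT 3'];
  var results = [];
  var pending = queries.length;

  // All three queries are started immediately, so the whole batch takes
  // roughly 100 ms rather than 300 ms; nothing waits for the previous
  // call to finish, and the process is free to serve other requests.
  queries.forEach(function (sql, i) {
    fakeQuery(sql, function (err, rows) {
      results[i] = rows;
      if (--pending === 0) { console.log('all done:', results); }
    });
  });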


There are many lightweight polling web servers like nginx, lighttpd, etc.

Question from a newbie: is there any disadvantage to polling?

Allow me to digress a little. When I look at the benchmark comparisons between Apache and the new, lightweight crowd, it seems there is no way a modest static site could have served anything reasonable back in 1995.

I realize that in 1995 there was way less traffic, but some humble sites did get thousands of visits per hour. It looks like this can very well choke an Apache server with 2010 hardware!

I guess my question could be rephrased thus: Why didn't the Apache folks take this lightweight path fifteen years ago, when it mattered much more?

Any comments will be welcome.


also, if anyone is an expert on node, please email me at kevin at bostinnovation dot com so I can fill in some of the holes in this article

Stop into #node.js on irc.freenode.net, and see what feedback they have (if you haven't already).

Fast Fact: due to a V8 bug, a Node.js process currently only supports a max of 1.9 GB of RAM

http://code.google.com/p/v8/issues/detail?id=847


That's fine if you are running a 16-core machine with 16 GB of RAM, isn't it? Just have one Node.js process per core.
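
A rough sketch of that workaround (server.js and the port range are hypothetical; in practice you'd stick a reverse proxy or load balancer in front):

  var spawn = require('child_process').spawn;

  // Launch one Node process per core, each serving on its own port,
  // so every V8 heap stays comfortably under the ~1.9 GB ceiling.
  var cores = 16;  // match the machine
  for (var i = 0; i < cores; i++) {
    var child = spawn('node', ['server.js', String(8000 + i)]);
    child.stdout.on('data', function (chunk) { process.stdout.write(chunk); });
  }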

Yep, easy enough to work around. Seems like a needless limitation, however. Would be nice if someone on node.js core (or the V8 team at Google) would try to fix that bug.

Also, I wonder what other bugs are going to emerge as a result of trying to use a client-side engine on the server.


Do note that V8's garbage collector is fairly slow (think old Java GC), so having more instances of Node with smaller heaps is a good idea anyway.

Argh, I was afraid of that. Can we have something that is a hybrid of refcounting and stop-the-world GC? I read a nice academic paper a few years ago about a collector that combined the benefits of the two.

Depending on your workload, 1 node per 2 cores is a better rule of thumb. Also, split the networking across multiple NICs.
