Why are most browsers developed in C++? (programmers.stackexchange.com)
66 points by marpalmin | 2013-07-31 10:29:42 | 134 comments




Be sure to go down to the "Ask 'when' before 'why'" answer.

Interesting. With the recent influx of Go rewrites, I wonder if anyone would consider writing a browser in Go just as an experiment.

Has Go got a good UI library?

It does not, and this needs to change before Go becomes viable for applications like this.

GC pauses might be an issue with Go (they don't matter that much in a Go server app, but in a client app like a browser things are very different).

I had the same doubt.

I rewrote quite a bit of the Chromium browser in Go, achieving a 10x reduction in lines of code (I stopped this project before I could reliably evaluate performance). Since Chromium uses a multi-process model, this was pretty easy: I implemented the Chromium IPC protocol in Go (I wrote a Chromium utility to emit a JSON description of the IPC protocol), and then told the privileged and unprivileged Chromium processes to use my Go subprocess rather than the original logic. My Go logic ran on both Windows and Mac OS X.

Mind sharing the code if it's public? :)

Belongs to my employer :(

Also, when I said 'quite a bit', I was thinking about the data-plumbing logic --- not UI and WebKit logic. It doesn't really make sense to rewrite WebKit.


The Rust language is being designed to be ideal for building a Web rendering engine. Mozilla's Rust-based engine experiment is called Servo:

https://github.com/mozilla/servo


Web browsers must be some of the most demanding applications to develop: they have to do very many different things; they must be very efficient; they must be very reliable; requirements change very rapidly.

If claims made by certain language X apologists are to be believed, one can infer that language X should permit developing a web browser in shorter time than C++, with performance within say 20% of C++ (or even superior to C++, depending on X and the apologist), with fewer bugs, and with a more maintainable codebase as the result. Actually delivering such a web browser would be a rather more convincing argument than empty talk or even glorified Fibonacci programs. Hence, I'm very much looking forward to future progress by the Rust guys.


There are a lot of poor or outdated answers to the question (arguing a lack of accelerated graphics, for example).

The number one reason current browsers are implemented in C++ is momentum. A browser has a lot of parts that do a lot of things; they're big code bases.

JIT compilation technology on the JVM (and elsewhere) these days is pulling within 20-50% of static C/C++ code.

If you were coding from scratch these days, would you start with C/C++? I doubt it, especially when you know you can get most of the same performance with other platforms.

By far the most important reason to choose a modern VM environment is security. Except in a vanishingly small number of cases, everything should always be bounds checked. The "native" part of a browser should be as small as possible, with a highly constrained and thoroughly checked API. Take away manual memory allocation and use after free goes away. Take away pointers and buffer overflows go away. You want as much of the browser code as possible to be running in a managed environment.

In my opinion every line of native code carries risks that don't exist in managed environments. Yes, properly written C code won't exhibit those problems -- but it seems to be extraordinarily difficult to do that. Security flaws are still being found in browsers, decades later.

And yes, security flaws exist in systems like the JVM. Those mostly come from native code as well, but some of them are because of the design. The JVM's "native part" is just too big. Too much is done there that doesn't need to be.

With a massive native code base it's just a huge problem to verify everything.

So Rust and Go are quite interesting. It's critically important that these languages remove "unsafe" features from something like C/C++, and lose almost nothing in the process.

And around the corner, environments like the JVM can auto-vectorize on the fly (http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7116452), knocking down yet another performance-parity barrier.


Performance doesn't just include CPU time, but also memory footprint. You might eat a similar amount of processor time, but you're not going to have a similar memory footprint with any kind of GC'd language, and if the GC happens to be a stop the world GC, you're likely to see pauses in the browser that will irritate the user.

Before the inevitable "memory is cheap" argument is made, I'd rather have my client app be a good neighbor. Having a GC'd process on a client munch through memory like it's infinite is just bad manners and makes some bad assumptions. (On the flip side, a GC'd process that tries to be a good neighbor by freeing memory aggressively becomes a bad neighbor in terms of CPU time, which has additional consequences in terms of battery life.) The JVM, for example, tends to only run its GC when it needs to. It is a horrible neighbor in this respect because it just takes all the memory it needs and only runs its GC when it really, really has to. This is one of the reasons it's relatively fast, but as a consequence puts a lot of pressure on the system. In such a scenario it's possible other processes will OOM and be killed, or the system will start swapping memory pages, or other processes will have to run their GC more aggressively (as V8 does in low memory conditions) which puts even more pressure on the system. All this adds up to very selfish apps that bog down a system and can become virtually unusable. These selfish apps are also unsuitable for use in poorer parts of the world where incomes are lower and additional memory is potentially unaffordable.

I think it's probably okay to take the attitude that memory is cheap when you control the environment and you're paying for the memory, but in any other circumstance, shipping software that gluttonously gobbles memory is probably unacceptable and the #1 reason why C++ makes good sense as a language choice for writing a browser.


I found this whole discussion irritating because not a single person mentioned that, given the memory demands of a web browser, you might not want GC. You can say all this BS about momentum, but in my mind avoiding GC is probably the best and most relevant reason in 2013. Thank you for mentioning it.

Shows how dependent on GC a large part of the community of developers has become - they are incapable of imagining a world without it.

(PS: I am amused when being 20-50% slower is described as closing the performance gap.)


I see your point, but even if you avoid GC you have other alternatives nowadays.

Such as?

Probably referring to reference counting, but that's not exactly a new idea. Still makes for a performance hit using reference counting all over the place.

I hope not! Reference counting is horrible in terms of CPU time!

Citation?

There is no a priori reason to think that it isn't horrible in terms of CPU time. Anything that has to do more work just to manage memory is going to consume cycles, and usually a lot of them. In simple utility programs it might not be observable but in larger applications with more objects, or on slower hardware (phones, for example) the cost is palpable.

> We find that an existing modern implementation of reference counting has an average 30% overhead compared to tracing, and that in combination, our optimizations are able to completely eliminate that overhead. This brings the performance of reference counting on par with that of a well tuned mark-sweep collector. [-3]

Even the best tuned mark and sweep GCs are a drag on performance.

> Reference counting is expensive - every time you manipulate pointers to an object you need to update and check the reference count. Pointer manipulation is frequent, so this slows your program and bloats the code size of compiled code. [-2]

> "Unfortunately, reference counting is expensive in both time and space". [-1]

Also, the slide entitled "Why reference counting is slow" in some recent lecture notes goes into some detail. [0]

[-3] http://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-2...

[-2] http://ocaml.org/tutorials/garbage_collection.html

[-1] http://www.rtsj.org/RTJPP/errata.html

[0] http://cs.nyu.edu/courses/fall12/CSCI-GA.2110-001/lectures/G...


If you use something like shared_ptr in C++11, it does come with some drawbacks (due to not having strict ownership), as mentioned by Bjarne Stroustrup[1]:

Please don't thoughtlessly replace pointers with shared_ptrs in an attempt to prevent memory leaks; shared_ptrs are not a panacea nor are they without costs:

- a circular linked structure of shared_ptrs will cause a memory leak (you'll need some logical complication to break the circle, e.g. using a weak_ptr)

- "shared ownership objects" tend to stay "live" for longer than scoped objects (thus causing higher average resource usage)

- shared pointers in a multi-threaded environment can be expensive (because of the need to avoid data races on the use count)

- a destructor for a shared object does not execute at a predictable time, and the algorithms/logic for the update of any shared object is easier to get wrong than for an object that's not shared

[1] http://www.stroustrup.com/C++11FAQ.html#std-shared_ptr



If you use naive reference counting, yes. But reasonably tuned implementations of deferred reference counting are competitive.

Not really. Nothing mature. I mean Mozilla is working on Rust for exactly this reason. But the immaturity of that is why we don't have a Rust based browser at the moment.

Otherwise, C++ is really our only choice for a mature, modern language without GC.


I was referring mainly to rust, but you are right in what you say.

Then I agree in that Rust is a wonderful language with a lot of potential. But I'm not sure it's yet a practical language to develop a large piece of software in.

Objective C is also all of those things.

Objective-C is close, in that it's a superset of C. But it has portability problems, and its insistence on dynamism makes it not obviously well suited for the same problems. At least not more than just working in C.

There's C. There's Ada. There's Object Pascal. There's Objective-C. If you're a GNOMEhead, Vala can be pressed into service.

I seriously think that anyone who finds themselves reaching for C++ to solve a task should try Ada instead. The semantics of their code will be clearer and their code will be more readable, therefore maintainable.


I didn't downvote you, but, oh holy crap, Ada! Free documentation is non-existent, performance is lacking compared to C++ [-1] (yeah, yeah, benchmarks, etc), the community is minuscule and pretty much limited to stuff like critical real time systems (read: avionics), and the price of the books (the complete, current ones) is a barrier to entry.[0] From what I've read of others' experiences, there aren't many happy Ada developers. This isn't exactly conducive to a successful FOSS project! (On a side note, I do mostly like the language's syntax and its influence on Ruby is very obvious.)

I'm not sure anybody would use Objective-C if it weren't for Apple, any more than people would choose to use JavaScript if it weren't for it being the only language available in the browser.

It's a bit of a chicken and egg problem, but the one thing you really need for a successful OSS project are developers. Do any of the Object Pascal implementations actually have a community?

A codebase as large as a browser written in C would be a horrible, unmaintainable mess IMHO.

[-1] http://benchmarksgame.alioth.debian.org/u32/ada.php

[0] http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3D...


The Ada community is actually growing. Ada has a bad rap because back in the day Ada was a DoD requirement, which meant that vendors could ship compilers that conformed to the spec, but otherwise sucked rocks. With the advent of GNAT it's no longer the case that your compiler is bound to suck. I learned Ada primarily with the help of free resources. The ARM is also free.

Granted, the sort of person who is likely to use Ada is not a l33t hax0r, but someone who's been around the block a few times on major engineering projects. Still, GNAT has bindings to the likes of Gtk and OpenGL, a direct result of increasing interest in Ada outside of critical systems engineering.

It's a bit of a chicken and egg problem, but the one thing you really need for a successful OSS project are developers. Do any of the Object Pascal implementations actually have a community?

FreePascal/Lazarus does. It's mainly Europeans (French and Germans primarily) who once did PC development with Turbo Pascal.

I'm not sure anybody would use Objective-C if it weren't for Apple, any more than people would choose to use JavaScript if it weren't for it being the only language available in the browser.

I would (and do). Its Smalltalk-based object system is much easier to wrangle than C++'s.

A codebase as large as a browser written in C would be a horrible, unmaintainable mess IMHO.

Yes, point against C. C++ has the advantage that it can turn even small codebases into horrible, unmaintainable messes. :) This is the reason why I converted an entire game engine to Objective-C.


I concur - with a lot of browsers lately running on severely memory-constrained systems like smartphones/tablets and lower-end laptops ... currently I have 3 tabs in Chrome using 300+ MB each and about 40 using 50-60 MB.

So memory right now is important. And browsers are using way too much already.


Just as an aside, the reason Chrome is using so much memory is that each tab runs in its own process. This affords some security and stability benefits (one process dies, the rest are fine), but it obviously comes with a major price. I see this as a major architectural flaw in Chrome.

Chrome has done this since the beginning but didn't use that much memory back then. So clearly processes aren't the only answer.

Chrome's codebase also got more complex so now every new process consumes a lot more memory. It's a way of achieving concurrency, but it's not necessarily the best way in terms of memory footprint.

I think the processes are not the main culprit. We are using way more massive DOMs and a lot of JS these days. And there haven't been solid memory optimization efforts for JS VMs as far as I know. The focus was purely on speed the last few years.

I'm under the impression there was for SpiderMonkey, but I haven't followed it very closely. [0]

[0] http://blog.mozilla.org/nnethercote/2011/11/01/spidermonkey-...


you're not going to have a similar memory footprint with any kind of GC'd language

Cost of GC doesn't bother me.

Java's basic overhead for all data structures does.

Class String is a mistake. Better to have string scalars compiled down to char[], avoiding the class wrapper.

Massive overuse of HashMaps by most frameworks is huge drag too. Using typed collections (vs generics) like Trove helps quite a bit.

Our study group had a really good session on optimizing memory usage. Sorry, I can't quickly find the slidedeck we used. I used jamm to profile various strategies. Super fun.

I recently saw something in the newsfeed about JVM and optimal memory usage. Bit packing, word boundaries, ordering, etc. (It's more low level than I typically work, so I didn't bookmark it, sorry.)


> Cost of GC doesn't bother me.

I generally work on the JVM and I do enjoy it, but if you're looking for rock-solid 60fps-all-day-every-day, the cost of GC without question should bother you a great deal.

I would never implement a browser in a managed language and I would do everything in my power to avoid gating rendering on the completion of execution of its scripting system for similar reasons.


JVM != GC, although it is one of the best implementations out there.

Rock-solid FPS doesn't preclude GC. What it requires is a separate rendering layer that isn't subject to any GC pauses (or a game-like rendering loop that's always on time). The managed code manipulates the declarative level of the animations, and the low level handles the frame-by-frame details.

On relatively small heap sizes (under a GB) GC on recent PC hardware usually fits into the equivalent of a single frame -- under 30 milliseconds. Concurrent collectors would rarely need anything more than that, and wouldn't stop the world in any case.

The question here is, can you build a stable high performance rendering system on the JVM? If not, what would you have to change to be able to do that?

Correctly constructed you wouldn't have to change much. And JMonkeyEngine shows you can do a lot with what's there right now. There are some excellent discussions on its site on the topic of GC.


> Rock-solid FPS doesn't preclude GC. What it requires is a separate rendering layer that isn't subject to any GC pauses (or a game-like rendering loop that's always on time). The managed code manipulates the declarative level of the animations, and the low level handles the frame-by-frame details.

I don't think this is true. It's considerably more than rendering--it's reactivity. You need to handle audio streaming without pause, be able to process and dispatch input events seamlessly as well (I don't even know how you'd do this), so on and so forth. You'll essentially need to pull everything except actor logic out of the Java layer. Even some of that will be latency-constrained.

If anything, I think Unity is on the right path here: embed a sufficiently optimized virtual machine inside an engine that does all the hard work. But Unity still suffers from serious perf issues with nontrivial scenes. Like I said in another reply to you, object pooling is still heavily used in Unity because of the costs of object creation and destruction.

> On relatively small heap sizes (under a GB) GC on recent PC hardware usually fits into the equivalent of a single frame -- under 30 milliseconds. Concurrent collectors would rarely need anything more than that, and wouldn't stop the world in any case.

Sure, but that's too slow. The human eye can easily, easily, detect a dropped frame. Concurrent collectors (you mean Azul and friends?) are a possibility but we'd need a portable one that's freely licensed. If you've got one in your back pocket...gimme. =)

> The question here is, can you build a stable high performance rendering system on the JVM? If not, what would you have to change to be able to do that?

This is a tricky question. My instant reaction--"you can't"--feels right for the current state of the JVM. I do think something like Azul or another fully concurrent collector helps. But even then the JVM just has its limitations. Like, you're going to blow up your cache iterating a list of whatever your "GLbuffer" analogues are, unless you decompose all your objects. In most games way more of your code ends up being engine code than game logic code and so you're kind of hosing the main reason to use the JVM: a mostly-clean object-based system.

At that point, I'd just write C++ and embed a scripting language. (Recurring theme!)

> And JMonkeyEngine shows you can do a lot with what's there right now.

JMonkeyEngine is actually a major reason why I'm not using the JVM--if they can't get it good enough, I sure won't, right? IMO, you don't get enough from using the JVM (unless that's all you know, which'd just be sort of a shame) to make up for its limited platform reach and its perf issues. What you can do with it, right now, relative to just sucking it up and writing some C++ (and I do mean "sucking it up", I hate writing C++), just isn't enough.


Thank you for the follow up(s).

No contest on your position regarding high fps and using a GC language like Java. I definitely don't have your experiences.

I equate VRML browsers with HTML browsers. General purpose, casual use, low cost of content production, etc. Just like with an HTML browser, even if you can make a fast twitch game with VRML, you probably shouldn't, for so many reasons.

My project was a virtual world construction kit based on a modular toy system. Think LegoCAD. At the time, VRML was ideal.


If you said that HTML browsers should be that, I'd totally agree with you, but (unfortunately, IMO) they're not. They're app platforms now, and so performance becomes really critical. Like, stuttering animations make users mad at the browser, not at the (possibly stupid) code within it.

My last project was a now-defunct deformable-terrain world game[1], based on voxel fields and surface nets. We were running into problems with Lua's GC just on our actors, nothing else in the world. With the amount of data we were forced to push around, I can pretty confidently say that an all-managed environment would have killed us (because we tried it first in C# and it did).

[1] - An early video, before the team broke up: http://static.largerussiangames.com/voxels/videos/rigidbodie...


I was probably a little too JVM-centric in my post.

What's a "stop the world GC"? I joke, of course, but I really haven't seen anything like that in a long, long time.

You're right about memory, but you're wrong when you propose C++ as a solution. C++ doesn't solve any of the problems you outlined. You can implement a solution to those problems in C++. A C++ program doesn't come with an inherent advantage in memory handling. Many of the same techniques (creating and managing pools of memory) can be and are being done in applications (like Cassandra) on the JVM (off-heap memory).

What you're really deciding is whether you want to solve those problems in a VM, or solve them at the application level.

Garbage collectors give you two things -- a reliability backstop, and (when you take pointers away) increased security. The behavior of GC is something that can be tuned (often dynamically) for a particular environment.


> What's a "stop the world GC"? I joke, of course, but I really haven't seen anything like that in a long, long time.

Actually, many modern GCs are still "stop the world", including many generational GCs. The alternative generally has overhead elsewhere that hurts performance. GC has a lot of tradeoffs.

> A C++ program doesn't come with an inherent advantage in memory handling. Many of the same techniques (creating and managing pools of memory) can and are being done in applications (like Cassandra) on the JVM (off-heap memory).

Sure, but it sure is a lot easier in C++ since the entire infrastructure doesn't assume GC is operating. I don't even know a way to reliably allocate a temporary object in Java that won't trigger GC. I could guess some things that would probably cause the optimizer to allocate it on the stack, but that precludes having the temporary's lifetime outlive the stack. (It must be said, however, that I'm not really a Java programmer.)

> Garbage collectors give you two things -- ... and (when you take pointers away) increased security.

As you just said, increased security is from taking pointers away, which is completely orthogonal to GC.


> As you just said, increased security is from taking pointers away, which is completely orthogonal to GC.

That's not what he said. Removing pointer arithmetic [1] is part of getting memory safety, but not the entire story. Avoiding dangling references is another part of memory safety, and GC is one way to accomplish that; it's not orthogonal.

[1] You cannot actually eliminate pointers entirely; you can disguise them, but if you want a dynamic heap and not just static memory, you need to be able to reference memory within the heap. When people talk about eliminating pointers, they generally mean unsafe C-style pointers.


> Avoiding dangling references is another part of memory safety, and GC is one way to accomplish that; it's not orthogonal.

It's the removal of `free` and `delete` semantics that avoids dangling references, not GC. While GC is a good alternative for many of these problems, GC doesn't actually help memory safety at all. Unless you consider memory leaks to be a matter of "memory safety".


Avoiding premature free()s is precisely what garbage collection accomplishes in this context, because you will never have to call free() to avoid running out of memory.

Correct me if I'm wrong, but I'm under the impression the JVM has a stop the world GC and uses this by default (although it does have other GCs available). I'm also under the impression that V8 and the Erlang VM, as well as the stock VMs for Ruby and Python, etc at least stop execution during the mark phase! Go has a fairly basic GC and stops execution to do its housekeeping.

Yes, it's easier to write safe applications in a managed language, but it comes at a performance price due to the GC.

It's far easier to write an app in C++ that performs well. There's no overhead of a virtual machine (there goes the safety...) and manual memory management tends to also yield better results in terms of footprint and CPU time when done properly. C++ gives you the tools to implement a fast process with a small memory footprint; managed languages do not unless they're using a null GC! C++ also gives you the tools to write a sloppy heap of shit, but then again, Java affords such a facility too!!

Garbage collection simply isn't free. Maintaining the lie that a system has infinite memory comes with a price.

I'm not making the point that GC is evil, and I write code in managed languages frequently, but I am making the point that it's really only appropriate for situations in which you're paying for the hardware, and using a managed language makes good business sense.


I agree with you, but I do want to point out one thing: there exists a happy medium between unsafe, no-overhead manual memory management (C++) and safe, GC'd managed memory (Java). For example, unique pointers in Rust enforce compile-time memory safety, without imposing any runtime overhead whatsoever and without forcing the programmer to manually allocate and free. In other words, there does exist such a thing as safe, no-overhead, automatic memory management, if you're willing to accept the single-ownership restriction of linear types (which in practice doesn't seem very onerous at all).

Here's an old, but good thread relevant to the discussion:

http://gcc.gnu.org/ml/gcc/2002-08/msg00552.html


Keep in mind though that a 20-50% slowdown is pretty big in the browser world, especially given that those numbers represent a lower-bound for how much slower a GC browser would be. Would you trade your Chrome/Firefox browser for something 20-50% slower? I wouldn't, and I don't think anyone else would.

Keep in mind that using a less hostile language boosts the performance returns you get from optimization work per work-hour spent. And reduces risks of breaking stuff while doing so.

"Would you trade your Chrome/Firefox browser for something 20-50% slower?" IE?

I'm just kidding. I found after I upgraded from IE 7/8 (only had for testing) that 9 was pretty usable (haven't used past that).

Just out of curiosity though...depending on where it slowed down would people notice? If it still displayed as quickly but was slower to startup (or other examples).

I admit I don't know much about how browsers actually work (I am assuming it is more than just displaying html).


In the context of a browser, I doubt that the standard 20-50% difference applies. That difference is what you see when measuring JVM code against optimized C++. In the browser world, that kind of pure speed doesn't happen because achieving security means layering all of the bounds checking and such back into the code, erasing one of the major reasons why the JVM code is slower.

In C/C++ you have the option of not performing those kinds of checks to get speed. In a secure browser, you don't want to take that risk unless it's absolutely necessary.


I disagree.

There's a reason your OS, your browser, your shell, your word processor, etc. are written in C and/or C++, not to mention your JVM, Python, Perl, your favorite libraries, etc. It's not "momentum", it's "track record". Yes, you can shoot yourself in the foot with C/C++. Experts know this and manage not to maim themselves.

Java has improved ... a lot ... no doubt, but the "20-50%" is a tad hyperbolic. Yes, by repeatedly calling the same function (whose internals are in C) with Java you can sometimes get to within 20%. And, of course, carefully crafted Java routines can beat naive, sloppy C sometimes.

Back in the day, Java proponents promised us the Javagator [ http://www.wired.com/science/discoveries/news/1998/04/11458 ].

Don't hold your breath for a competitive Java browser. The speed and small footprint of C/C++ will continue to be its edge, giving it almost all the market share.


There's another huge problem with writing a browser in Java (or any other JVM-based language), which is that your browser would be dependent on the user installing the JVM, which is a separate product from a separate company that has a separate release cycle (and lots of security updates to deal with). It's hard enough to support a browser like Firefox or Chrome without dealing with issues like the user having Java 1.5 installed but the current version of the browser requires Java 1.6 or above. For a non-technical user who barely knows the difference between their browser and their operating system, there are huge obstacles in supporting software built on top of the JVM.

I feel that additionally there is some user resistance to installing and using the JVM. It may be minor, but I do know some people who won't touch Eclipse with a 10-foot pole because of that.

You could just ship the JVM with the browser. Most browsers auto-update silently in the background nowadays anyway.

According to the programming language shootout it seems that's close to the right range -- the median Java 7 benchmark runs about twice as slow as C++. It also meshes well with my experience -- though I can only get "close" to C++ performance if I use mostly primitive types.

> I doubt it

I do, I like writing C++. The toolchains and infrastructure around it support really concise clean code since C++11.

I wouldn't probably want to do job-work with it, because it is easy to fuck up C++ (ie, the unsafe features) and trying to debug someone else's incompetence is a pain in the butt.

But I really like Qt/KDE and their support libraries, and if you use a modern IDE, at least a cmake-quality build system, a memory profiler + debugger, and the modern language and syntax, it isn't any harder than Java or C#. I'd say it's easier; lambdas and global scope let you do non-OO when it is appropriate.

Though I'd rather have Rust / its successor, where you get something like C++--: you add the good stuff (smart pointers, lambdas, generics) and remove the bad stuff (name mangling, ugly syntax, unexpected behavior, the horrible char/string format the standard uses, textual inclusion).

D is close, but since the entire std depends on garbage collection and it uses a runtime to enable mixins and other fancy features, it isn't a drop-in replacement. You need deterministic compile-time assembly generation that does the minimum magic behind the scenes. You don't want the compiler to hide anything, but you also want it to present everything intuitively, with a bend toward naturally promoting best practices, rather than crazy hacks in the C++ world like include guards (#ifndef/#define/#endif), d-pointers, and their ilk.

I think that is actually the flaw in the Rusts / Gos / etc. When you put in garbage collection, mixins (or other code generation at runtime) you obfuscate the implementation which instantly makes it unsuitable for a lot of the reasons you use native code in the first place. The processor pipelines and assembly architectures themselves are already vague and unintuitive enough, throwing on language runtime features that you have to debug make it a much more difficult proposition.

I think inheritance is basically the limit of how much behind-the-scenes magic you can do (complexity-wise) and still expect people to use the language like it's just a fancy assembly. It is easy to understand and debug vtables; it is much less easy to debug the generational shifting aspects of a good GC, and even harder to debug a code generator producing runtime-injected code. Those kinds of things make great libraries to link in, knowing they aren't in the base language and aren't cluttering up your execution space with pervasive hooks everywhere to support a "clean" language syntax. Yes, it means using a GC in C++ requires you to write something like gc::new(<construct object here>) when you want to construct an object, and the GC itself requires an initializer (i.e., gc::start() in main), since C++ doesn't have autorun at start (which is good, in my book: it means the only code running under the hood is the code you told to run).


Just FYI: the Rust team is in the process of moving Garbage Collection out of the core language and into the standard library.

If by "these days" you mean ~6 years ago, then the answer is "yes, you'd start with C/C++". At least, that is what the Chrome team did.

Actually, webkit dates to 1998.

KHTML dates to 1998.

But the Chrome effort started in 2006 and shipped in 2008.

Chrome contains millions of lines of code, not counting WebKit. Almost all of that is C++ and was started 6-7 years ago.

I think all of your arguments actually point in the other direction, at least at the moment. Browsers are huge code bases and have, at least at their core, very high quality code. So for starters, the security advantages are actually on the side of low-level languages, since you don't import the problems of the VM. (Granted, most VMs are also really high quality code, but if you do your memory management yourself, you can fine-tune it for your use case.) Additionally, the 20-50% speedup is perhaps not the factor of 10 of yesteryear, but it is still a speedup. And last but not least, a browser is a runtime environment: you need memory management for JS anyhow, so you can also use it for the rest of your code.

So it is possible that at some point languages other than C/C++ will be the languages of choice for large projects. But frankly I think those will not be GC languages, but languages where you somehow get the GC from the paradigm. (I am currently thinking of Haskell. Functional languages can do really interesting things with GC, since all their data is local.)


> So Rust and Go are quite interesting. It's critically important that these languages remove "unsafe" features from something like C/C++, and lose almost nothing in the process.

I have no idea why people keep bringing up Go as a C++ replacement. I feel that any C/C++ programmers who would find Go suitable for their needs would have already moved on to Java or C#.

Can anybody explain to me what makes Go favorable to a C++ programmer?


I agree. I saw mandatory GC in Go and ruled it out as a potential C++ replacement. Rust may be on the right track though.

Indeed. I was about to make a similar comment. I think this comes from the hype surrounding Go, that it is a "systems" language. In reality, it is no more a "systems" language than Java. These days, it seems that Go lies somewhere between the Perl/Python/Ruby bunch and Java. It is not a replacement for C and C++ in their domains (OSes, kernels, file systems, embedded, high-performance games, etc.). Rust is attempting to take on C++ in those segments, as you pointed out.

Most browsers are developed in C++ because, for the timeframe in which the projects started, C++ was the performant answer.

It still is.

Consider not just PC-based browsers, but the one on your phone. Every JIT compile costs watts, not just time. There's a relatively large push back to C++ because of this (power usage). A 20% reduction in performance may not be noticeable in user interaction, but it is surely noticed as "my battery just died!".

It's nice to have a common code base. C++ across multiple platforms reduces my cost, even if I have to pay for good programmers who understand things like smart pointers.


The browser in my phone is part Objective C and part C++, as part of the project descended from Konqueror, via WebKit. I'd contend the C++ parts are C++ for legacy reasons, aka, "It was the performant solution at the time KHTML was written".

Very little mobile phone software these days is written in cross-platform C++, and of what there is, it's mostly games. Most iPhone software is still written in Objective-C; most Android software is still written in Java. For cross-platform code, I'd say Xamarin or Unity C#, or Adobe's product line, are still beating out C++, even counting the games.

Chrome is about 40% C++, and is now the default browser for Android.

C#/Xamarin would probably be what I'd standardize on for a true cross-platform non-game app. JavaScript/Unity3D or C#/Unity3D would probably be what I'd standardize on for games, with Futile being looked at hard for 2D games.

Companies with a huge existing codebase (e.g. EA) likely find porting an existing engine cheaper than writing a new one, and will of course stick with their historical C++.


> Most iPhone software is still written in Objective C.

Objective C shares many of the advantages of C/C++.

> I'd say Xamarin or Unity C#

Speaking of Unity: while much of the game-specific code is written in C#, the vast majority of CPU time is spent in C++ code.

Similarly, on the Web there is more JavaScript code running in the browser than browser code. But most of the CPU time is spent in the browser's C++.


>Speaking of Unity: while much of the game-specific code is written in C#, the vast majority of CPU time is spent in C++ code.

And much of the operating system specific code is written in C. I've always found this "My tight loop is in the language I like" argument a little weird.


I'm not sure I understand what you are saying.

I was simply saying that since we were talking about the performance of browsers (an application that pretty much is the tight loop), bringing up examples of applications that are not in such a tight loop may not be the most relevant without also discussing where their "tight loop" is.

Does that perhaps clarify my comments?

Also, keep in mind that in a well-designed system, the "tight loop" should be in application code, not OS code. The OS should generally prioritize latency and low overhead over raw throughput.


> Companies with a huge existing codebase (e.g. EA) likely find porting an existent engine cheaper than writing a new one, and will of course stick with their historical C++

Can you envision an alternative for C++ with similar performance characteristics, if historical investment and reliance is taken out from the equation?

I can only think of something like Rust, because it allows such a fine-grained control over the platform which is essential for all sorts of low-level code which implement an abstraction, such as game engines.


> Chrome is about 40% C++, and is now the default browser for Android.

Chrome is 100% C++. Did you ever sync its source code from the repo? (I wonder what the other 60% is in your opinion -- JS?)

edit: to be fair, there is a little Objective-C and Java, just to glue with Mac/iOS and Android


http://www.ohloh.net/p/chrome

The metric is probably off from the automated calculator. It's likely counting the .h files as C.


This 40% metric could only be true if they are counting the third-party code (ffmpeg, WebKit, etc.), but then we can't really call it Chrome.

Also, there are a lot of tools written in Python (gyp, grit, ninja, etc.), but these don't end up in the executables or libraries, so I wouldn't count them (at least from the perspective of this thread) as Chrome itself, since those tools are used in other projects as well and come with the source only to build things (it would be like counting the GCC compiler as part of the source).


There was a browser written in Java in the past:

http://en.wikipedia.org/wiki/HotJava


I rank discontinuing HotJava as one of Sun's biggest missed opportunities.

The stackexchange commenter moans about graphics performance in Java. I helped write the Magician OpenGL bindings for Java in the '90s (the Java API side was all me, which Sven of JOGL basically copied). My VRML browser was just as fast (fps) as the available VRML browsers written in C/C++. With JDK 1.1.

Nowadays, Java and OpenGL go together like peas and carrots. Exhibit A is MineCraft.

A good buddy of mine does all his projects in Java and OpenGL. The latest is the awesome 2D skeletal animator called Spine at http://esotericsoftware.com. I couldn't imagine him doing it on top of any other stack.


> My VRML browser was just as fast (fps) as the available VRML browsers written in C/C++. With JDK 1.1.

With respect, I find your claims dubious. I would certainly buy that your top framerate was competitive. I also expect that your framerate was significantly more stuttery and inconsistent due to garbage collection (or else written in such a devolved form of Java that you really might as well write C++ anyway).

The problem is not Java's execution speed. It's that garbage collection is the death of responsiveness.

> Nowadays, Java and OpenGL go together like peas and carrots. Exhibit A is MineCraft.

Minecraft's performance characteristics are pretty bad for the fairly trivial rendering work being done. (It's improved, but it's still not great.) And writing code with LWJGL/JOAL is gross relative to similar code in C++--I have done both. You don't get anything for tying yourself to the JVM except, in a weird and mostly self-destructive way, the 'freedom' from understanding your object lifetimes and deterministic destruction.

I've used libgdx and XNA/MonoGame extensively and have gone back to C++ because both are fairly limited in their usefulness. libgdx helps by hiding a lot of nasty API issues--and Mario and company are super good at what they do--but what it gives you is generally taken away by the limitations of the JVM in a client context (limited platform support, the infuriating Dalvik/Hotspot divide, That Frigging GC Again...). Writing up some pretty minor glue code between windowing, input, audio, and graphics isn't that bad. And, perhaps more importantly, you'll actually understand how it works. (I've spent the week fighting with OpenAL. I'm glad I did. I've learned a lot and can recognize the failure cases and can do something about them.)

> A good buddy of mine does all his projects in Java and OpenGL. The latest is the awesome 2D skeletal animator called Spine at http://esotericsoftware.com. I couldn't imagine him doing it on top of any other stack.

Spine's pretty impressive, but there's not much in it you couldn't do with ease with Cocoa/Obj-C[++] or Qt or even WPF and .NET. It's also a misleading example, though, because tooling generally has much looser latency requirements than user software and so the severe weaknesses of JVM client applications are less readily apparent than in an application that needs to hit 60fps all day every day.


> expect that your framerate was significantly more stuttery and inconsistent due to garbage collection

Static scenes, simple eventing. Compile the scene graph to meshes, load textures, then move the camera around. No stutter.

The trick is to avoid the GC, not thrash the heap, don't create a lot of short-lived objects.

Implementing LOD would have sped up my browser quite a bit.

The other idea was to compile VRML's DEF prototypes to Java byte code, avoiding VRML engine overhead, in which case my Java would have smoked C/C++.

My conclusion was not that my stupid simple browser was amazing, but rather the "native" C/C++ implementations were terrible. By comparison, my browser was about 5 times faster than LiquidReality, another Java-based browser.


And these days you can go ahead and create quite a few short-lived objects without worrying about it, as long as they're elided by escape analysis.

With judicious use of off-heap memory for regular structures it's simple to erase much of the crap in the heap, and reduce already-tiny GC times even further.

Never dropping a frame is another goal entirely, though, and for that I still believe you need a separated rendering/animation mechanism.


They all have roots going back to a time when C/C++ were the "it languages" for UI and systems development. Java was nowhere near ready for that sort of thing; Mozilla tried anyway, but Rhino is the only thing from that project that really took off. No other language even came close: Tcl/Tk had the UI part, but its performance wasn't up to par, and most other languages either didn't have the performance, or didn't have the UI capabilities, or called out to Tk to get the UI capabilities and thus inherited Tk's problems.

Mozilla tried the low-level guts/high-level UI approach anyway, using JavaScript because they had to implement an engine for it anyway. But it took them a few tries to really get it right, and its false starts in this category remain infamous to this day. The others stuck with C/C++ for everything because it's what worked at the time, and continue because it works for them.

Mozilla's at it again with Rust, though, and that looks like it could be an interesting project.


IMHO it's pretty simple: low-level performance. Even 25% slower than the output of a good C++ compiler means a browser goes from first to last place in many benchmarks. People expect the page to scroll at 60 FPS and that rules out GC pauses longer than 16 ms. Browsers are a mature market and unless the proposed replacement has the performance of C++ it's not worth the risk. Horizontally scaling server software has greater flexibility for performance; hence the explosion of languages on the server side while the client side is largely confined to languages developed in the 80s.

Parallelism may be the key to disruption here however; if a new browser can scale better to the multi-core present and future it may be worth taking a small hit over C++ in the sequential case. That's Servo's bet (although Rust strives for C++-level performance even in the single-threaded case).


In 2013 I don't think it has anything to do with the merits of C++ vs other languages. You can surely write a better browser from scratch in a language like Haskell; a project this important could even implement their own JVM if they want to use Scala and need to guarantee some performance characteristics. Like how Facebook wrote their own PHP compiler/optimizer.

A browser that breaks on non-standard markup is worse than useless. Legacy compat is so critical and so complex that a rewrite is just not an option. Lots of money and time is invested in battle-tested security etc, you can't just throw that investment away. Again, like how Facebook is still written in PHP.


> You can surely write a better browser from scratch in a language like Haskell.

This isn't something quite as self-evident as you make it.


Having written web browsers / browser engines and compilers for functional languages, I don't think you could do this with anything that exists today. And if you did it, the result would not be idiomatic Haskell; it would just have one huge monad threaded through the entire thing with no pure portion. You would also probably have to drop down to unsafe manual memory management for performance reasons.

I think you are too optimistic. Even today C++ is probably the best choice for writing a general purpose browser from scratch.

> People expect the page to scroll at 60 FPS and that rules out GC pauses longer than 16 ms

How do people writing games in Java or C# deal with this? Gaming is an area where people expect smooth, high speed performance.


Object pools: they allocate objects upfront and re-use instances instead of creating and releasing them.

This is one trick, among many. Most of which negate many of the advantages of GC in the first place.

The advantages of memory-safety remain.

Some advantages of memory safety remain. When using pools, you don't get protection against use-after-free or leaks. It does somewhat mitigate the consequences of those, however: it's much harder for a use-after-free to become a security exploit in a pool in a GC'd language, for example.

Sounds like you have a specific kind of object pool implementation in mind? Invalid object refs don't have to break the memory safety, cf. weakrefs.

An object pool can also be implemented without unsafe building blocks or other language support, as a pool of objects that are manually recycled after the pool lifetime is up. Then a bug could let you confuse a previous-generation object with a current-gen one, but it would not break memory safety either.


Java and C# are not used in the kinds of games that push hardware to the limits; really complex games are still written in C++.

Last time I looked at big games, the pattern for the majority of AAA titles was a C++ engine with Lua or C# scripting on top.

You'll notice that most games are not written in Java or C#. It's much more common in small games that have a lot more cycles to spare.

And even still, it often becomes a problem. Generally it's solved by being very careful with object allocation as to not require GC during gameplay. Or some other trick to force GC to take less time than budgeted.

Also keep in mind, a lot of games are designed to run much closer to 30 FPS these days. Which gives you a lot more time to deal with GC.


Not true. I see what cannot be more than a couple hundred elements on this page. Many games best that in foliage alone.

No kidding. My test code for rendering a Tiled TMX 2D tilemap creates and manages more rendering objects than this page has to.

> How do people writing games in Java or C# deal with this? Gaming is an area where people expect smooth, high speed performance.

Short, mean answer: poorly.

Longer answer: they sacrifice the benefits of managed languages. They forego immutability and controllable object lifetimes for the use of object pooling to prevent the garbage collector from going nuts. And many still target 30fps in order to minimize the effects of variable latency.

I used to do this; I'm porting my stuff from XNA/MonoGame to my own OpenGL/GLFW/OpenAL engine in C++ because I grew unsatisfied with the performance and limitations inherent in the process. Many of the biggest benefits of managed languages are blunted with games in other ways, too. The CLR, for example, should be a huge boon for scripting--the DLR is a fantastic concept, for example. But dynamic code requires codegen, and System.Dynamic doesn't work in Xamarin's iOS product. So if you want to bring a game with scripting from Windows/OS X to iOS, you're boned. (Java has the same problems; Groovy, while a great scripting language, doesn't run on Dalvik or on IKVM-over-MonoTouch. Rhino's LiveConnect doesn't work there either.)

On the other hand, I use AngelScript and expose C++ objects and methods to it, computed statically at runtime but executable via interpreted scripts, for the best of both worlds without breaking my back to do it. Memory management just isn't that hard and the performance and compatibility of native code just can't be touched right now.


Object pools were good JVM thinking 10 years ago. It's been a very long time since they made any sense for memory-based objects.

Unity manages to do quite a bit with the CLR on a cross-platform basis, and that looks like Javascript to me.

Horrors! AngelScript has Garbage Collection! But since it didn't come from the JVM, I guess it's OK.

http://www.angelcode.com/angelscript/sdk/docs/manual/doc_mem...

> Memory management just isn't that hard

Every victim of a use-after-free just cried out in anguish. Your world weeps at 60 fps.

Keep in mind the original question: What's a good language for implementing a browser? The number one requirement for a browser today is not performance. It is security. A decade of effort has been put into the major browsing engines to snatch pseudo-victory from the clutches of determined hackers. A great deal of that effort has been necessary because of the desire for "speed", and the fallibility of those who believe memory management "just isn't that hard".


> Object pools were good JVM thinking 10 years ago. It's been a very long time since they made any sense for memory-based objects.

I guess that's why libgdx very recently added pooling, right? And (OK, it's Dalvik, but still) why Android uses view recycling all over the place? Why recommended practice for ListView is to re-use existing objects whenever possible?

I'm not pulling the use of pooling out of my ass here. This is what you do for high-performance managed stuff, in games and otherwise. Everybody I've ever worked with, and I've worked with some pretty heavy hitters (some of my former coworkers at TripAdvisor are insanely plugged into the Way Of The JVM), has kept pooling in the toolbox for latency-sensitive problems.

> Unity manages to do quite a bit with the CLR on a cross-platform basis, and that looks like Javascript to me.

Well, for one, it's not. It's statically typed sort-of-JScript. It bears little relation to JavaScript; it was originally called UnityScript, and the name was changed largely for buzzword-bingo purposes. (It's really very frustrating to work with in Unity, both because it's not real JavaScript and because it doesn't play nicely with Boo and C# scripts a lot of the time.)

Unity also has notable performance problems for nontrivial scenes due about 50/50 to expensive creation on the front end and garbage collection on the back end. One of the most popular Asset Store items is--wait for it--an object pooling system.

> Horrors! AngelScript has Garbage Collection! But since it didn't come from the JVM, I guess it's OK.

It does, but only if you're allocating within it. I literally never allocate non-stack data in it. I use it as a logical glue layer within my application, which is not garbage collected.

> Every victim of a use-after-free just cried out in anguish. Your world weeps at 60 fps.

I literally can't remember the last time I hit a use-after-free in my own code. If you understand your object lifetimes, RAII will usually (almost always?) put you in the right place. In the worst of cases, where you've found yourself in a thoroughly opaque situation and can bear the perf hit, you have tools like boost::shared_ptr (though in my current project I don't use it anywhere).

It's certainly possible to make mistakes, but it is easier to do the right thing than most managed-languages people are willing to admit. I used to be one. It's why I was pretty terrified of working in C++ for so long. As it happens, I think former-me was pretty wrong about that.

> What's a good language for implementing a browser? The number one requirement for a browser today is not performance. It is security.

I'm of two minds about this. Sure, security is a critical concern. But user experience isn't to be ignored, and I really do feel that user-detectible pauses are to be avoided at all costs. Writing better native code is, to me, a more feasible option than an imperceptibly fast general-purpose garbage collector.

(As an example, I grow frustrated when scrolling a big list on my Nexus 4 when I can watch the scrolling get stuttery. In my experience, it's almost always due to someone not following best practices and re-allocating instead of re-using existing objects.)


Use-after-free bugs that are so blatant that they occur in normal use are rare, but real UAF bugs are discovered adversarially, and it is absolutely not simple to test them or, for that matter, avoid them.

I'd certainly agree that they exist in the wild, but I'm less convinced that they're particularly hard to avoid if you understand the lifecycles of what you're working on. That browsers are a tough problem, with a difficult-to-internalize object lifecycle and a bunch of edge cases, makes this tougher, for sure; my stuff is games-focused, so yeah, I have a relatively simple problem set.

But, all things considered, I'd rather Somebody Else (because Somebody Else makes browsers, I don't) spend developer time making a fast system secure rather than running a slightly more secure system up against the hard walls imposed by any managed language one would care to name.


Integer overflows, which require no mitigation other than getting a computer to count correctly, are extremely pernicious; they pop up even in systems that were deliberately coded defensively against them. There's an LP64 integer bug in qmail! And that's just for counting. Avoiding UAFs requires you to track and properly sequence every event in the lifecycle of an object -- and, in many C++ systems, also requires you to count properly.

No, I don't think they're straightforward to avoid.

I think it seems straightforward to avoid memory lifecycle bugs because it's straightforward to avoid them under normal operating conditions. But attackers don't honor normal conditions; they deliberately force programs into states where events are happening in unexpected ordering and frequency.

I don't even know where to begin with the idea that it's acceptable to sacrifice a couple memory corruption vulnerabilities in the service of a faster browser.


> No, I don't think they're straightforward to avoid.

Fair enough, and you certainly have more experience on the topic than I do. Thanks for the explanation.

One quibble though, and maybe it was poor phrasing on my part:

> I don't even know where to begin with the idea that it's acceptable to sacrifice a couple memory corruption vulnerabilities in the service of a faster browser.

It's not really an either/or though, right? I mean, you can protect against some forms of attacks through a managed environment (though the environment itself adds its own attack surface). But you trade performance to get it. A managed language isn't going to solve all your problems, and it is going to mess with your performance in inconsistent and essentially intractable ways. I'm not saying "throw in a couple memory corruption vulnerabilities if it'll make it faster", I'm saying that you can fix memory corruption vulnerabilities. You can't fix inherently slow.


> They forego immutability and controllable object lifetimes...

How/where does immutability come into play here? Isn't it orthogonal to GC? Genuine question.


It's a side effect of object pooling. If you're re-using objects, you're by necessity not using immutable objects.

Thanks for the explanation. :)

So what you're saying (or implying, as I stubbornly deduce from the references to GC, sue me ;)) is that it is unfortunate that we can't use... gasp Java?

Oh dear.


Or the many other GC'd languages game devs might like to use: Python, Ruby, JavaScript, Scala, Clojure ...

Yes, it is unfortunate we can't use those languages and achieve the same performance as manual memory management in C/C++.


> GC pauses wouldn't really be a problem if development in the past twenty five years had been focused on hardware assisted incremental garbage collection

Hardware assisted GC isn't free. It comes at a large cost of silicon* and power. And makes a lot of assumptions and puts a lot of restrictions on your GC. And it still doesn't solve the caching problems related to automatic memory management.

* The silicon has to come out of the die budget somewhere, likely decreasing performance on other operations, or removing other special-purpose instructions altogether (SIMD?).


The hardware features required for hardware-assisted GC are actually pretty simple. See https://www.usenix.org/legacy/events/vee05/full_papers/p46-c... for a description. Other features like hardware-assisted write barriers (simple) and transactional memory (much more complex) can be used to increase performance.

> Even 25% slower than the output of a good C++ compiler means a browser goes from first to last place in many benchmarks.

Do most people care about that, though? Like, go to a page of browser benchmarks, then uninstall their current browser, download the best one, and install it? Or is the browser that came bundled with the OS fast enough for them? (Not to mention exporting and importing all their bookmarks.) Maybe I've been spoiled by having fast browsers, but I can imagine I could live with that if I gained other benefits. Most of the websites I visit are either very "static" pages for mostly reading stuff, or videos. I am pretty content as long as webpages load in around one second (I can probably live with more than that too) and my videos buffer at a respectable rate.

If I got a browser that was 25% slower but, on the other hand, much less prone to buggy behaviour, that sounds like a great deal to me as a user. Chrome is annoying me right now because of the relatively recent behaviour of freezing the UI frame when I open a new tab: I can type in text, but it's effectively hidden from me, and I only get to press enter and see what I wrote after up to five seconds. Another problem (very common in Chromium) is the whole webpage being unresponsive until the entire page has loaded, instead of loading things like text first and letting me navigate what's there. If these things could actually be fixed for me, the lazy user (i.e., without me changing browsers or fiddling with stuff I shouldn't need to fiddle with in order to have a performant browser), then that sounds like a good deal to me.


Would you rip out the guts of a modern browser and replace them with brand-new code that you claim is 25% faster?

No way. You'd be laughed out of the room. How exactly would you get a radical expansion of the attack surface past a security audit?

You might do it in a game, where security is (usually) not a primary concern.

Anyone who would sacrifice browser security for a 25% increase in performance is a fool. Browsers need to be secure, and then performance needs to be "good enough".


> Would you rip out the guts of a modern browser and replace them with brand-new code that you claim is 25% faster?

We do it all the time.


I'll bite. Which browser is that?

Why not? What other high performance languages were out there (besides C) when these browsers were created? New browsers can be written in different languages (for example Rust).

Maybe a more interesting question is: Would you use C/C++ to develop a browser today?

Probably yes, if it's C++11 and UI is in Qt/QML for example.

How about trying an OS-agnostic multiprocess architecture with IPC and/or shared memory, embedding a scripting language like JavaScript, with GPU acceleration and video streaming, in another language -- could that have the same performance as C/C++?

Developers should be language/technology agnostic and just use the right tool for a given problem/solution, the same way you use a screwdriver for screws and a fork to eat..


A browser would have to compete against Firefox and Chrome. Do managed languages come even close in resource usage (CPU and RAM) and consistent performance for projects at that scale?

Low-level performance and control, with a lot of abstraction features.

Because what else would you write it in?

Why a C-like language? Power. Browsers are CPU and memory hogs, and C is simply the best way of squeezing out more performance.

Why an OO language? Well, while OO is overused, an HTML document looks a lot like a bunch of objects, doesn't it?

