Happy to answer questions (when I wake up; getting this ready has been a lot of work, needless to say) :)
It goes without saying, but as this is the very first nightly (not by any means a full-fledged release), expect severe bugs, crashes, and missing functionality. Many of your favorite sites will be broken. Don't expect to use this as your everyday browser. We'd love feedback on what issues folks hit the most, so we can prioritize—especially if you're a Web developer!
Looks like visiting a YouTube video linked to via reddit.com/r/videos crashed it. The whole UX is locked up, so I can't click the "Click to submit report" button either. screenshot [1]
Either way, I'm already impressed by how zippy it is :D
There's a more serious error in the URL parsing that caused this, due to some weird stuff Google is doing with punycode. I haven't looked into that yet. But YouTube no longer crashes -- so you can now enjoy window-shopping on YouTube!
Yes, this is using WebRender by default. The WebRender 2/WebRender Next branch can render most content just fine at this point, but we went with the mainline branch for the initial nightlies, as it's more mature and stable.
We're hoping for days, but it needs a lot more testing. We're (unfortunately, due to some font-related issues) still building Windows with the mingw toolchain, which makes build & packaging more complicated.
I miss SVGs, <audio>, YouTube embeds, and the layout is off.
Sometimes the mouse wheel doesn't work; scrolling got stuck once. The biggest problem is that the keyboard layout isn't right.
I tested on OS X -- nice work overall!
There is a fix in the pipeline for keyboard layout, and we are aware of the mouse wheel issue. Please file bugs for layout being off -- we know that this sometimes happens, but it's good to have testcases. As for embeds, that's not a priority now, but we should eventually get to them.
The Windows version is not available because mozjs, the JavaScript engine (SpiderMonkey), currently fails to compile on Windows, and the Servo developers don't know when it will be fixed. Once it can be built on Windows, a Windows version may come out soon.
This is not true. Servo and SpiderMonkey compile fine on Windows using the mingw toolchain. For the MSVC toolchain we don't have that fully working, but obviously SpiderMonkey is not the problem there, since the official Firefox builds use MSVC. Mostly we have to fix the build glue.
The Windows port didn't ship yesterday because of technical glitches post-install. Servo works fine, but after installation it had some strange behavior we couldn't track down in time. It should come in a few days.
> It's made by system developers and for people comfortable with debugging low level issues like kernel panics caused by faulty drivers.
Fixed that for you :P
PS. It is considered bad etiquette to ignore the issues of a Windows user and simply "recommend Linux" in a tech community like this, especially when a Windows build has already been promised. You don't know the reasons why someone is using Windows (e.g. because of their employer's policy).
I'm sorry to inform you, but Windows is considered harmful, so this is good etiquette now, also known as "word of mouth". You can download and run Linux in a fraction of the time you will need to argue that this is bad for a user.
Download Linux. Download Servo. Run Linux. Run Servo. Profit.
Yeah I wish Linux desktop was that great. I run Xubuntu for dev work (Rust actually). Even really trivial things, like making the edges of the windows easy to grab to resize -- they get totally wrong: the target area is 1px or so. (Yes I know, Alt+Right click is an option, and I should really just use XMonad.) I go search for this issue. Sure enough, many years ago people were talking about this issue, and it's just one excuse after another. Never resolved; maybe it's a theme issue (with all in-box themes), maybe it should be solved somewhere else. Whatever. End result, out of box, it's difficult to resize a bloody window. I think that sums it up pretty well.
I've less and less love for MS (though Visual Studio...), but even with having to install crappy hacks to fix Win8, it's _still_ a smoother end-user experience than Linux desktop.
With Win10's Linux layer, assuming there's an accelerated X server for Windows, I don't think I'll keep even a VM around much. (If I can turn off Win10's spyware.)
The Linux philosophy is to do one thing, and do it well, this is why they get impeccable codebases, but bad UI/UX design.
Jokes aside, I stopped fighting desktop environments and embraced Ubuntu's Unity some years ago; in the end I learned to appreciate it, and nowadays I feel more comfortable with Ubuntu than with OS X.
I still prefer Windows over Ubuntu, but because I have added several utilities to my workflow like Everything Search Engine or WinSplit Revolution (now defunct), not because of Windows' own merits.
I just wish there could be a way to have OS X's hotkeys in Windows, since CMD + <anything> is more comfortable to press than Ctrl + <anything>. I also miss having the behavior CMD + Tab / CMD + ` and the ubiquity of CMD + ,
Hey, stop that. This topic is about nightly builds and contributing to an open source project. Linux is great for that because it's made by developers and for developers, not because it has a polished UI/UX. Linux UI is rough. Accept that.
This specific branch was to ask for an ETA for the Windows nightly, and you hijacked it. I advised you to refrain from hijacking it and you began spitting anti-Windows nonsense like a freshman who drank the Linux Kool-Aid for the first time.
This is not the place to discuss operating systems, and people expect you to respect their preference.
Linux has its technical merits, and because of them it is the most used OS in servers, supercomputers and niches like web development, but it also has its weaknesses, and because of them it isn't the most used OS in homes or corporations.
OTOH, there already exists a Linux distribution which is more popular than Windows; it's named Android, you've probably heard of it. It is a disaster in comparison with Windows: it gets locked down, bloated, and abandoned by phone manufacturers; phones get declared obsolete ridiculously fast and people are left without even security updates; you don't even have the ability to "format" a phone unless someone exploits a vulnerability to gain root access.
People blinded by Linux usually overlook the disaster behind Android because they're too accustomed to focusing their attention on the issues of Windows, but no OS is perfect and people will choose the OS that works better for them; you must learn to accept and respect their decision.
It looks like you are a Windows fanatic. Replace "Windows" with "C++" and "Linux" with "Rust" and reread my comments, please. I am talking about development and developers, nightly builds and contribution. You are talking about users. This version of Servo is not for users. This topic is not for users. Stop blaming me for your own error.
Thanks for the suggestion, but I'm already using Linux where it makes sense for me (my website, GitLab instance, etc). I asked about the Windows version since that's what I'm using most of the time, being a .NET dev.
> expect severe bugs, crashes, and missing functionality.
I really don't mean this in a snarky way: wasn't this kind of thing supposed to be obviated by the switch to Rust? It often seems that every other post about Rust on HN says that if it compiles, it works.
No language can fix missing functionality, that seems pretty clear.
Your compiler doesn't know what you're trying to do, so you can't prevent all bugs.
That leaves crashes, and ... well it may also depend on what you mean by a crash. The UI becomes unresponsive? Might have absolutely nothing to do with memory issues or even code written in Rust. Maybe your Rust code is fine but you're trying to open and write to the same file from two places, which you don't realise because the user's file system is case-insensitive but you've only tested on a case-sensitive one, etc.
It's also multi-threaded and while languages can significantly help improve the safety of multi-threaded code, I don't think anything can stop you from creating a deadlock or putting your system into an inconsistent state.
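For example (a toy Rust sketch, nothing from Servo): the classic lock-order inversion compiles cleanly and is perfectly memory-safe, yet never finishes.

    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::Duration;

    fn main() {
        let a = Arc::new(Mutex::new(0));
        let b = Arc::new(Mutex::new(0));

        let (a1, b1) = (a.clone(), b.clone());
        let t1 = thread::spawn(move || {
            let _ga = a1.lock().unwrap();              // takes A first...
            thread::sleep(Duration::from_millis(50));
            let _gb = b1.lock().unwrap();              // ...then waits on B
        });

        let (a2, b2) = (a.clone(), b.clone());
        let t2 = thread::spawn(move || {
            let _gb = b2.lock().unwrap();              // takes B first...
            thread::sleep(Duration::from_millis(50));
            let _ga = a2.lock().unwrap();              // ...then waits on A: deadlock
        });

        // Memory-safe, data-race-free, and stuck forever: both joins hang.
        t1.join().unwrap();
        t2.join().unwrap();
    }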
And finally, they may not even particularly expect those things, but putting out early nightly builds to a wider audience and expecting nothing to go wrong would be cavalier. A typical warning on many early releases is "don't blame me if this destroys all your files".
To clarify, a panic in Rust is roughly the equivalent of an unhandled exception. Usually this is due to laziness rather than a real bug. For example, Servo might encounter an IMG tag and start to load the body. The result might be stored in an Option because something could have gone wrong during loading (peer disconnected etc.), but the developer didn't feel like implementing robust image handling that day and instead called .expect to extract the image. This throws a panic and exits the application if the image is not there.
This is just a hypothetical example, I bet image loading is one of the things Servo does fine. Just to give you an idea of when Rust might panic.
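A minimal, made-up sketch of that pattern (none of these names are Servo's actual API): the code compiles fine, but panics at runtime the moment the Option is empty.

    struct Image; // stand-in for decoded pixel data

    // Pretend loader: network errors, decode failures, etc. all collapse to None.
    fn fetch_image(_url: &str) -> Option<Image> {
        None
    }

    fn main() {
        // The lazy shortcut described above: panics with this message on None,
        // instead of e.g. rendering a broken-image placeholder.
        let _img = fetch_image("http://example.com/logo.png")
            .expect("image should have loaded");
    }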
I do things like this all the time in Haskell when quickly prototyping new code and then swiftly kick myself because of it. Safe languages are nice, but there's really no way to prevent a developer from saying "Yeah fuck it, this case will never happen anyway" and then calling fromJust on a Maybe that turns out to be a Nothing and throws a runtime exception.
Well, there is a way: Don't allow partial functions like fromJust or head. But a) it's going to make prototyping so tedious that no one will bother using this hypothetical language, and b) there are problems that you just cannot solve without partial functions (e.g. foldl1).
We were basically panicking when an invalid URL was given to us. There was a comment there noting this -- which means that this was probably written when it wasn't so important to handle all the cases, and more important to handle some cases so that we can test out various ideas. We're still sort of in that stage, and you may see other comments like this throughout the code :)
And importantly, a panic is not a result of a memory safety bug, and those are the crashes (with their potential security risks) that Rust is aiming to get rid of.
It would be quite nice if there was a synonym for expect (say unimplemented_expect) that meant 'I haven't bothered to implement the proper error handling yet' rather than 'error handling shouldn't be required'. That would really enhance auditability of in-progress code.
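Nothing like this exists in std today, but as a rough sketch of the idea (the trait and method names here are made up), an extension trait gets you something greppable that still behaves like .expect():

    // Hypothetical helper: reads as "TODO: real error handling" at the call site.
    trait UnimplementedExpect<T> {
        fn unimplemented_expect(self, what: &str) -> T;
    }

    impl<T> UnimplementedExpect<T> for Option<T> {
        fn unimplemented_expect(self, what: &str) -> T {
            self.unwrap_or_else(|| panic!("error handling not implemented yet: {}", what))
        }
    }

    fn main() {
        let port: Option<u16> = std::env::var("PORT").ok().and_then(|p| p.parse().ok());
        // Same effect as .expect(), but easy to audit for later.
        let port = port.unimplemented_expect("reading PORT from the environment");
        println!("would listen on {}", port);
    }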
Python has a nice built-in exception for this: NotImplementedError. Its intended usage was related to abstract base classes, but it's taken on this secondary use case recently.
Yeah, I use that, but it obviously requires more work than a simple unwrap or expect, so the temptation is to just use those and think you'll remember to flesh it out later :-)
>Maybe your Rust code is fine but you're trying to open and write to the same file from two places, which you don't realise because the user's file system is case-insensitive but you've only tested on a case-sensitive one, etc
That sounds too precise to be just an example :) We've all been there!
Haha, yes, this kind of thing has bitten me a few times. I remember getting SVN really confused where the server thought files were different but my machine thought they were the same (or similar). I think I had to make a whole new repo.
It's now on my mental list for "but it works on my machine", checking filenames and their cases.
In pure Rust that never uses "unsafe", maybe. (Though even then, you only get memory safety; it's still possible to e.g. go into an infinite loop and use all CPU and/or memory).
Rust intercepts most stack overflows and aborts the program when they occur. There are some more difficult cases where legit stack overflows are still possible, the resolution to which is blocked on LLVM support on most platforms (notably, Windows is the best here).
Well, ultimately any useful program will use a library that uses unsafe. All I/O is unsafe at the bare metal, so while you can safely wrap it, it is still unsafe internally.
When talking about the usage of unsafe code in Rust, we generally ignore encapsulated unsafe code in libstd, since it has the same safety guarantees as the compiler itself. All unsafe code encapsulated in a safe API has these guarantees, but you trust libstd more :)
Any reasonably large program will have at least some amounts of unsafe code. And that's not a big deal. The point of Rust is to very clearly isolate the unsafe parts, so that you only need to audit 1% or less for memory safety.
If it uses any external libraries (for example, for image handling) then I believe it does by definition, since the API wrapper you end up writing ends up being a small shim that wraps the actual library call in unsafe (because object code is unsafe, as the term is used in Rust).
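As a small concrete example of that shim pattern (using libc's strlen rather than an image library, just to keep it self-contained and runnable), the unsafe block shrinks to a single audited line behind a safe API:

    use std::ffi::CString;
    use std::os::raw::c_char;

    extern "C" {
        // libc's strlen: counts bytes up to the first NUL terminator.
        fn strlen(s: *const c_char) -> usize;
    }

    // Safe wrapper: CString guarantees a valid, NUL-terminated buffer, so the
    // unsafe call can't read out of bounds. Callers never see `unsafe`.
    pub fn c_str_len(s: &str) -> usize {
        let c = CString::new(s).expect("no interior NUL bytes");
        unsafe { strlen(c.as_ptr()) }
    }

    fn main() {
        println!("{}", c_str_len("hello servo")); // prints 11
    }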
> It often seems that every other post about Rust on HN says that if it compiles, it works.
I'm always careful not to say that "if it compiles, it works"--no language can guarantee that (well, except those that prove your program correct). Instead I like to say "if it compiles, it will fail for a not-stupid reason". :)
A very general question: what's the high-level plan for Servo regarding Firefox? I see a lot of requests/issues dealing with stuff that doesn't seem related to a layout engine. Is the plan to have browser.html as a temporary incubator for Servo and eventually migrate it to replace FF's layout engine, or is it to "grow" an entire new browser around this new engine?
There are efforts to make various components work in Firefox. The largest effort is "Stylo", which replaces Firefox's selector matching with ours. I'm not sure what the future is on Firefox using Webrender or our layout; though I suspect it depends on how well stylo works.
The plan seems to be to let Servo continue to evolve as a testbed for new ideas (like webrender), and share components with firefox whenever ready. An independent Servo product is not likely in the near future, since there's a lot of work to make it fully web compat. In the far future ... well, we can't tell :)
I feel like mobile (Android, really) needs a fast, secure and extensible browser, I hope that Servo can help Firefox achieve those first two goals so that it could be my browser of choice
browser.html is just the browser chrome (er, not to be confused with the browser Chrome), it's an independent Mozilla project to create a frontend entirely in web technologies, which, since these are all standardized, should theoretically work with any browser engine, not just Servo.
As for Firefox, later on this year we should see the integration of Servo's style engine ("Stylo") into Gecko, and going forward we should expect to see the two codebases share even more components. However, wholesale replacement of Gecko with Servo in Firefox isn't on the table at the moment.
Things like history management. While tabs in browser.html are just iframes, browser.html needs more access to the contained documents to be able to fully do its job. For security reasons, you can't get at an iframe's current location/title/favicon if the loaded document is from another domain. We need this information to show a proper title in the title bar and the tabs list, and to show a back button and things like that.
In addition, the browser needs to act as a sort of broker between sites and the OS. Storing cookies and session data, showing notifications or dialogs, and entering fullscreen mode are just a few examples of this. (Note that most of these things aren't even implemented yet. Browsers are complex applications in their own right, even after accounting for the browser engine.)
Parsing CSS, the selector matching to apply style rules to nodes, and the cascade of properties to other nodes.
Servo's style engine is parallelized and scales nearly linearly with number of cores, and style time is a non-trivial amount of the total rendering time.
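This isn't Servo's actual code (it has its own parallel DOM traversal), but a toy sketch of why styling parallelizes so well: computing each element's style only reads shared, immutable rule data, so it maps cleanly onto a data-parallel iterator -- here using the rayon crate.

    use rayon::prelude::*; // requires the rayon crate

    struct Element { tag: String }
    struct ComputedStyle { display: &'static str }

    // Selector matching + cascade would go here; it only reads shared,
    // immutable stylesheet data, so every element can be styled independently.
    fn compute_style(e: &Element) -> ComputedStyle {
        ComputedStyle { display: if e.tag == "span" { "inline" } else { "block" } }
    }

    fn main() {
        let dom = vec![
            Element { tag: "div".into() },
            Element { tag: "span".into() },
            Element { tag: "p".into() },
        ];
        // Scales with the number of cores, much like the real style system.
        let styles: Vec<ComputedStyle> = dom.par_iter().map(compute_style).collect();
        println!("styled {} elements", styles.len());
    }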
I noticed that the Servo nightly build is already about as big as Chrome or Firefox. As it's not yet a complete browser, I expected it to have a lighter executable. Do you know what led to this? Is it a matter of build artifacts or optimization? Is Servo going to grow even larger while achieving feature parity with other engines?
It looks like Servo could be a great alternative to Chromium Content -- one of the things that keeps me from (ab)using Electron/NW/CEF is the massive initial overhead, and a WebRender-based UI with a lower barrier to entry would change the whole scenario.
Mozilla are lucky enough to be able to afford a code-signing certificate and I'm pretty sure their build process can be adjusted to sign binaries by default.
It's unfortunate that a LetsEncrypt style project can't be done for code signing, due to malware/admin overhead.
There's no need to do that. Just right-click on Servo.app and select "Open". A dialogue will pop up asking you if you're absolutely certain and off you go.
Debuggers are special--the OS won't let unsigned binaries control other processes, no matter how they came to exist on your system.
Outside of that, Gatekeeper applies to executables fetched from the Web, can be disabled (harder on Sierra), and is easy enough to bypass--and you only have to do it once per executable. If signed, the certificate does need to be trusted to count--otherwise, it'd just be a fancy checksum.
Alright. This sounds like something we should be eventually doing. Not sure if we should prioritize it right now. GPG-signing the binaries for extra verifiability is another thing we could do in the meantime, though it doesn't fix the OSX issue.
If you install Servo as of now and try to launch it you'll get a pop-up saying it's not signed (and it won't launch). So, to launch, you need to go to Applications, right-click on servo and choose "launch". Then it'll let you say "yes, run this unsafe executable". Might reduce number of users who can figure this out.
Gatekeeper is nice in some important ways, but seeing as it isn't free, I'd also be quite happy with just a SHA1 or similar hash to verify a binary with.
"./servo: error while loading shared libraries: libEGL.so.1: cannot open shared object file: No such file or directory"
I guess it should be an easy fix. But if there are instructions to copy/paste, that would really help. If someone has an answer, it would be great if you could please post it here.
Edit 1: I think the problem is because I am trying to run it on ubuntu 14.04 server. Also I am using xming and putty x11 forwarding. I installed libegl using 'sudo apt-get install libegl1-mesa-dev'. Now I am getting a new error.
Xlib: extension "XFree86-VidModeExtension" missing on display "localhost:12.0".
thread 'main' panicked at 'Failed to create window.: NoAvailablePixelFormat', ../src/libcore/result.rs:785
note: Run with `RUST_BACKTRACE=1` for a backtrace.
So I ran with RUST_BACKTRACE=1 and this is the stack trace.
Xlib: extension "XFree86-VidModeExtension" missing on display "localhost:12.0".
thread 'main' panicked at 'Failed to create window.: NoAvailablePixelFormat', ../src/libcore/result.rs:785
stack backtrace:
"Servo is a modern, high-performance browser engine being developed for application and embedded use."
Are embedded devices one of the goals for Servo? Does Servo have any embedded-specific design goals? I'm interested to know which parts of Servo target embedded use. I can see that the page says Android builds are coming soon; I guess that explains it.
Ah, the description is ambiguous then. "Embed a browser in another application" isn't very clear either. Is Servo like a framework where devs can build apps on top (something along the lines of Electron)?
Or is it really about embedding a browser inside your application? That one can do today with the Qt framework, where you can embed a browser inside your application. But Qt itself moved from Qt WebKit to Qt WebEngine. So 'embed a browser' in your application sounds old to me.
> ""Embed a browser in another application" isn't very clear either. Is Servo like a framework where devs can build apps on top (something along the lines of Electron)?"
"Embed a browser in another application" is perfectly clear. Servo is designed so that it can be linked with other applications, and displayed within windows that the host application controls.
WebKit is also embeddable in this way, but other browser engines are not as mature in this area. There was talk of Servo using the same API that WebKit uses for embedding, probably to help developers transition from WebKit to Servo:
Servo is a browser engine like Gecko or Blink. However, while those two engines are really made for use in a specific browser (Firefox and Chrome respectively), Servo is easily used in arbitrary programs (like WebKit).
Embedding generally means using it as a part of another application, either as a webview or something like the chromium embedding framework.
However we do have ports for mobile hardware and some embedded hardware, and we think Servo has a lot to offer there. We've done some initial research into battery life stuff that looked really promising, and of course getting parallel speedups is much more noticeable on slower hardware than on desktops.
When I run it, I first see a webpage loading then it suddenly disappears and the whole page becomes white. There is no UI, but when I hover over the white nothing, it appears as if the webpage is still underneath as the cursor behaves as it would if there were text and hyperlinks.
I don't think we do autodetection yet aside from what Glutin already does (which is very basic). But you can put Servo into a HiDPI mode manually with the --device-pixel-ratio=2.0 argument, and you can supply whatever ratio you like.
We do detect Retina on Mac and have system DPI awareness on Windows, so once we figure out what to do on Linux we can probably add it pretty easily.
Is there an article somewhere comparing how Servo and Chromium threaded compositing differ? I guess there is some work on Chromium side to render pages using multiple threads, but I couldn't find more information and how it compares to Servo's mechanism.
The compositing is very different: Servo does almost everything on the GPU in "retained mode" as opposed to "immediate mode," which is much faster and similar to how modern game engines work. There's a good talk explaining it here: https://air.mozilla.org/bay-area-rust-meetup-february-2016/
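To make the retained-mode idea concrete, here's a toy sketch (nothing to do with WebRender's real types): layout emits a display list once, and the renderer just replays that list every frame, instead of re-running layout and painting code each time.

    // Toy retained-mode renderer: build once, replay per frame.
    enum DisplayItem {
        Rect { x: f32, y: f32, w: f32, h: f32, color: u32 },
        Text { x: f32, y: f32, text: String },
    }

    fn build_display_list() -> Vec<DisplayItem> {
        // In a real engine this comes out of layout and only changes when the
        // page changes -- not on every scroll or animation frame.
        vec![
            DisplayItem::Rect { x: 0.0, y: 0.0, w: 800.0, h: 600.0, color: 0xFFFFFF },
            DisplayItem::Text { x: 10.0, y: 20.0, text: "Hello, Servo".into() },
        ]
    }

    fn render_frame(list: &[DisplayItem]) {
        // Replayed every frame (on the GPU in the real thing); no layout here.
        for item in list {
            match *item {
                DisplayItem::Rect { x, y, w, h, color } =>
                    println!("fill rect ({}, {}) {}x{} #{:06X}", x, y, w, h, color),
                DisplayItem::Text { x, y, ref text } =>
                    println!("draw {:?} at ({}, {})", text, x, y),
            }
        }
    }

    fn main() {
        let list = build_display_list();
        for _frame in 0..3 {
            render_frame(&list);
        }
    }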
We're planning some more in-depth blog posts on how Servo works, and this was already one of the topics I wanted to cover.
The gist is that Servo runs most subsystems on separate threads (including sandboxed and cross origin iframes for example), and so each tab or page is not fighting over the same JS engine's time. On top of that, painting/compositing is combined in WebRender and is parallel as well.
Chromium doesn't have multithreaded layout, and restyle/layout/painting runs on the DOM ("main") thread.
Retrofitting multithreaded, off-main-thread restyle/layout/painting onto an existing engine is an enormous undertaking, almost on the level of rewriting from scratch. This is much of the reason why Servo's rendering is from scratch, in fact.
I started it and it worked, nice to see this progress!
Two things though:
1. When I opened the tab sidebar, clicking on a tab closed it.
2. I cannot reproduce this, but Servo hung when I closed the window: it stayed open, but did not re-render on resize. Using Gentoo with Xorg, running dwm as the window manager.
Is the scrolling behavior on OS X custom implemented, just like on Firefox? Imo it hits the uncanny valley unfortunately, especially for edge bounce back. Otherwise, excited to try this out more.
I think the browser.html team will be very interested in this feedback. I sat with them as they pored over Cocoa disassembly and source trying to replicate the behavior as best they could. We definitely have some things to improve. Because we are using our own rendering system to draw on the GPU, we can't use Apple's implementation of this stuff, which I believe requires buying into Core Graphics/Core Animation completely.
Note that I don't think Firefox has overscroll at all, except maybe on mobile.
There are a couple of bugs around it, where the bounce back stutters a bit. Also, in general, we have the same physics as Mac OS X, but the constants are known to be a bit off. We'll need to tweak those.
Looking forward to using Servo one day, keep up the good work.
First thing I noticed almost immediately after launching is that everything breaks when dragging the window across monitors (that have different densities) - OS X. I dragged from my Macbook's display to an external monitor with a lower DPI and everything went huge and most of the UI got cut off. Dragging back to the Macbook display does not resolve the issue and instead introduces some weird redraw flickering with garbage data.
You misunderstand if you think Servo is scheduled for production, and you're putting peer pressure on the devs. But sure, Firefox is great already and can only get better. Speaking of which, how are the memory requirements of Servo looking so far, in relative terms?
It's definitely slower to actually load pages — I'm guessing that there are still a lot of network optimizations that more mature codebases have accrued that Servo hasn't yet — but holy cow are pages buttery smooth once they do load (and even while they're loading, which is unusual). Comparing Chrome and Servo in terms of UI jank felt pretty shocking, in Servo's favor. Kudos, it looks like Rust and WebRender have paid off.
I have some more involved projects planned for the network stack that aren't yet ready to be picked up though. I suggest working on some bugs above, getting familiar, and then asking in IRC for something in the network stack.
It's amazing to see actual effort taking place to build a new browser engine from scratch.
I would say that since the late '90s, there hasn't been a single new engine made. Everything we have now is a development based on KHTML, Gecko, Presto or Trident.
By now, HTML and the standards surrounding it have become so big that starting from a blank slate nowadays is next to impossible. The existing engines have it much easier as they are allowed to grow with the spec, but a new engine has to do everything from scratch.
This makes it even more impressive to see how far Servo is already (yes. I know - there's probably some overlap with some existing Mozilla code, but still).
Huge congrats to everyone involved. We need unafraid visionaries like you people to actually move the web forward.
> EdgeHTML is a proprietary layout engine developed for Edge. It is a fork of Trident that has removed all legacy code of older versions of Internet Explorer and rewritten the majority of its source code with web standards and interoperability with other modern browsers in mind.
"rewritten the majority of its source code with web standards and interoperability with other modern browsers in mind"
I think coldtea is focusing on the highlighted part. A rewrite like that might change so many behavioral aspects, especially user- or developer-visible ones, that it can be considered a new engine rather than a mere clean-up of a behaviorally identical old one. It certainly reuses some code and techniques, but the differences might be major.
At the same time, depending on how the rewrite was carried out, it may be nothing like a completely new project. Rewriting function by function is different than rewriting component by component, which is also different than rewriting from scratch.
Rewriting function by function will give you largely the same program using largely the same algorithms, but possibly with more or fewer bugs, or cleaned-up code.
Rewriting component by component will allow new ways those components are implemented, but the interface between components will likely be the same.
Rewriting from scratch could yield anything (Servo could be classified as a way to rewrite Gecko from scratch, I believe).
All of these could be said to have had the majority of their source code rewritten, but I would only really consider the last one to be "new" without knowing further details. Of course, the devil is in the details...
That's why I'm focusing on the behavioral aspect: what does the software do? A browser fork that behaves very differently from the prior one, to the point that you have to change existing code to get the same effect, is essentially a new app at the behavioral level. However, it might have the same name and lots of the same code inside. At that level, it's old code in an old app. We largely define our components by their interfaces and behavior. So that's what I'm focusing on.
Sure, but if it behaves significantly differently, then I think that precludes it from being a function-by-function rewrite. At the same time, a component-by-component rewrite can be a complete rewrite (but it depends a bit more on the details, I think). If the mainly user-visible portions are rewritten, but much of the utility code and other components are not, it may appear behaviorally different but be largely the same code base. It's much harder to know, looking in from the outside.
For a simplistic example, say someone provides a "rewrite" of grep. Maybe the options are all different looking and sounding, so it may appear to be significantly different. But if the core of the matching algorithm and its capabilities are largely unchanged, is it that much of a rewrite? To the outside user it may superficially appear so, but someone comparing the source from before and after may have an entirely different opinion, and even that may change if you come from a context of focusing on a particular aspect.
As applied to Trident/Edge, we may have a case where the person speaking was involved in a project to rewrite one major component of the browser, and so is speaking in that context. Maybe that component is responsible for about 60% of the code and functionality, but it still relies on quite a bit of largely unchanged additional libraries. It's very subjective as to whether you think that qualifies as a rewrite of the project, and depends quite a bit on what was not rewritten, and what you think of that code.
Let's look at the grep example again. If it did the same thing but with a new interface, then it would be the same thing, because the behavior is otherwise the same. Now, let's say you changed what the pattern matching itself produced, so that you no longer got the same results from the same text -- it matched totally different patterns unless you changed the text you are inputting to get the same results again. And the interface was different. Is that still grep?
If he's talking about one component, maybe there's not anything new underneath. He said the engine (rendering) would be more standards-compliant and compatible with other browsers. That's basically what Opera and Mozilla did many years ago while IE didn't. If you blocked out the name, nobody using the browser would think they were using the same app when they saw the rendering hit those aspects and totally distort the presentation, or had to redo their websites to display correctly. I remember reading many complaints from web developers about making stuff work with several different engines whose rendering disagreed.
So, if the Edge engine is different enough to cause that, then it seems like a different engine at the behavioral level, given that the same input leads to different output that fails acceptance. Nobody would think it was the same app unless the UI told them.
I mean, I still think sameness or newness is an open issue. Do we judge it by function and component, as you added? Or by interface and behavioral spec, as I was looking at? One might also look at data formats. I think it will be best to technically just compare in various ways to be accurate. The feature-driven development field probably has some insight into this. However, users will look at them as different, at least across versions, if they can't do what they were used to or compatibility breaks. The two Python versions are possibly a good example.
> Let's look at the grep example again. If it did the same thing but with a new interface, then it would be the same thing, because the behavior is otherwise the same.
That sort of depends on how you define "behavior". More specifically, if it matches something equivalent to but different from regular expressions (even if it's just a character transliteration to regular expression syntax), then does the behavior change? Is that still grep? Maybe, if grep decides that the new grep2 should have an easier-to-learn matching language. It's not like we haven't seen that before.
> I mean, I still think sameness or newness is an open issue.
Sure! The vast majority of cases it probably is an open issue, because different people have different criteria. I was just pointing out that the wording from MS doesn't necessarily imply a rewrite as many people might define it, because we have very little context to go on in the statement, and different people have different ideas about what is the browser. The rendering engine? The UI? The JS VM? All of them combined?
I can totally see someone responsible for the rendering engine saying "we rewrote most of the source code" and the JS VM guys scratching their heads wondering what the hell they are talking about, because they definitely didn't rewrite most of their code, and they think their component is a significant portion of the browser...
I'll mostly agree with that. It's harder to do looking at assembler. However, at the behavioral level, you can just black-box it to see if it passes acceptance tests for desired outputs. If it does, it might be the same for your needs. If it doesn't, it's clearly a problem.
Your point applies totally if we're talking the How instead of the What.
> By now, HTML and the standards surrounding it have become so big that starting from a blank slate nowadays is next to impossible.
It's a two sided coin: HTML now defines a lot of the basic web platform parts in enough detail that you can just implement the spec and have a web browser that works, whereas in the 90s (and most of the 00s) browsers spent huge amounts of resources on reverse-engineering each other. Pretty much the only way to be web compatible then was to start with something already mostly web compatible (consider the fact that Apple's first two or three years of work on WebKit were essentially spent pretty much just fixing web compat bugs!).
Honestly, from what I see, the amount you do is relatively minuscule compared with what Opera was doing ten years ago, while Opera had far more marketshare which is normally the best way to avoid web compat issues in the first place.
It's a rule of thumb in most software I found studying high-assurance. Inevitably, some academic or commercial team wants to robustly implement some standard (esp a protocol). They notice it's specified in a combo of English and implementation code. They start doing formal specifications of English spec. Every time, IIRC, they find various inconsistencies and such that already did or could lead to real-world problems. Showed me the value of formal specs but also important lesson: using informal specs will almost always require reverse engineering and extensive testing due to, at very least, English ambiguity or implementation deviations.
So most of the spec bugs I've come across don't have to do with English vagaries; they have to do with the reverse-engineering not being perfect. Converting the nebulous concept of expected behavior into procedural text is hard, and there are bugs.
So yeah, "implementation deviations" is usually the issue, but not really English ambiguity, at least with web specs. I put my pedant hat on when looking at proposed changes to web specs, and I believe the rest of the community does too :)
There have certainly been spec bugs that could be found from a formal specification, some of which are security related (most of the TLS bugs found by miTLS have been protocol bugs simply found as a result of encoding much of the semantics formally).
That said, while I certainly have a fondness for formal specifications, I also always end up feeling like they have a lot of downsides. They're often very verbose and hard to skim, and they're also another language that anyone implementing the specification has to learn prior to implementing it (or they can just wing it, but then your formal specification is worth nothing!).
The wisdom from the old days was to use relatively simple formal languages that were easy to train people on and use. B, Z, and TLA had good results and lasted over time. The other thing was to do the English spec, formal spec, and code side by side, with each team looking for inconsistencies. The developers usually picked up the basics of the technique rather quickly, but at least one specialist was always necessary to make the first use go smoothly.
So, that's still my recommendation. A subset of those, especially focusing on states and contracts, done in same fashion against English specs and clear code should take lower work than going all Isabelle, NuPRL, or PVS on a problem. Second application is also always easier. Reuse where possible helps as with code.
So, some difficulty and training involved, but probably easier than some of the frameworks, etc., that IT people learn today.
(As a disclaimer, for some context, I have only relatively passing experience with Z, and the majority of formal modelling of specs I've done is using CSP.)
From when I've considered trying to formalise parts of the platform before, some of the things that often get hit very quickly are the fact that most of the web platform is defined in terms of sequences of Unicode code points, and ultimately any formalisation is going to have to deal with that (be it some complex parser—like HTML—or some simple validation of it matching a regular grammar). It has always seemed to me that to get much advantage you need to be able to programmatically assert statements about it (including, most obviously, pre-conditions are satisfied at all call-sites), but once you've gone from 2^256 to 2^1114112 characters you end up with such massive state explosion that even many symbolic tools start to struggle quickly.
Now, maybe I've tried using the wrong tools, or maybe I'm structuring things wrong, but it'd be nice to have some way to do this.
Hmm. That's an unusual situation. I've seen descriptions that sound similar in B data validation. For now, I'll just make a note of this comment to bring up when I next bump into formal methodists who might have answers.
Having implemented SIP a couple of times, part of it is that the other implementations just flat-out misread the spec. Even when there's a formal syntax specified, they simply ignore it. For instance, SIP specifies a "lr" (loose route) parameter. The syntax is just that: lr. But many implementations get it wrong and require a value, like lr=1 or lr=true.
Even things as simple as line endings are implemented wrong (in HTTP too), and that can cause security consequences as proxies end up reading headers differently than the user agents. I've seen this live on the public Internet.
SIP and the IETF to some extent encourages this with Postel's Law, telling implementors they should guess what the intention of the message is. We need less of VB's On Error Resume Next and more panic-abort type functionality. Look at this insane document: https://tools.ietf.org/html/rfc4475 "SIP Torture Tests". The authors gleefully come up with ridiculous yet legal permutations of messages that are allowed under their arcane rules. The fact this exists should send the opposite message: simplify your damn protocol. And this is only at the parsing level!
I know people say that HTML could never have had strictness because it'd have been too hard, but I don't buy that. One common issue was getting nesting wrong, like <b><i></b></i>, or leaving unclosed tags. By removing the (silly, honestly) name from the ending tag and just using </>, a whole class of errors is removed. Add in a browser that just fails to render and explains exactly why, and people would quickly stop publishing pages that are broken.
> SIP and the IETF to some extent encourages this with Postel's Law, telling implementors they should guess what the intention of the message is. We need less of VB's On Error Resume Next and more panic-abort type functionality.
I don't think it's as clear-cut as that. I think the main thing is error handling must be defined: it doesn't matter whether it's panic-abort or whether it's defined how to deal with any stream of bytes. Don't let implementers choose what they should do.
RFC 7230, for example, has error handling sometimes as SHOULD return 400 Bad Request and sometimes as MUST return 400 Bad Request. But that only applies to servers and proxies parsing requests, which are really the relatively well-implemented part of HTTP. (The vast majority come from browsers, which are always syntactically valid; responses can come from arbitrary CGI scripts and you get all kinds of syntactic nonsense there.) Sadly, there's comparatively few normative requirements when it comes to parsing responses, even ones which are needed for backwards compatibility (at what point do you conclude you're just talking to an HTTP/0.9 server?).
> you can just implement the spec and have a web browser that works
Don't forget the amazing work being done at http://testthewebforward.org/ with the Web Platform Tests and your own work on the CSSWG tests! Beyond the specs, having a cross-browser set of tests that have especially good coverage on newer specs has been a huge leg-up for Servo!
It not only helps us ensure interop for those features that are tested, but (puts on former dev manager hat) it really helps quantify how many work items are left until a feature is closer to completion. I especially notice it when there are areas like CSS 2.1 where the tests are less comprehensive and it's harder to quantify what our progress/status is.
To be fair, having tests somewhat relies on having a spec. OK, so in theory we could've had a cross-browser set of tests with some requirement along the lines of "this web page relies on this behaviour", but ultimately you'd still end up with contradictory tests. :)
I wonder if it's worthwhile to spend some time working on the 2.1 testsuite to try and improve coverage at some point. Probably? Unlikely to find so many bugs in mature implementations, hence why so much effort has been spent elsewhere.
For too long it was considered OK to have a spec with either no tests, or tests that focused on the wrong thing ("is it possible to implement this spec" vs "do implementations behave in compatible ways"). It is still considered too OK for browser vendors to write their own tests for a feature and not share them with the rest of the community. However, pressure from other platforms is changing attitudes here as it becomes increasingly apparent that without a stronger degree of interoperability, developers will vote with their feet and target platforms with fewer bugs to trip over.
I think it's worthwhile! Lack of a comprehensive test suite means:
1) There seem to still be regressions in pretty fundamental stuff when new features are added. They're usually caught before things reach "release" trains, but I see it in bug trackers all the time.
2) Other layout engine managers I've talked with who want to do a layout system rewrite are terrified of anything that doesn't look like a large-scale refactoring because of this issue. I've had the "how are you testing and measuring progress / how do you know you're done with layout?" conversation with most of our engine peers at this point :-)
The main site is down due to domain issues. Googling will also turn up a blog and GitHub. Their Cobra engine was Java, IIRC. All that code might make a nice head start even if this project is dead or goes that way.
> By now, HTML and the standards surrounding it have become so big that starting from a blank slate nowadays is next to impossible.
One important mitigating factor is that newer standards often generalize older ones. For example, originally all the list elements had special case handling (and still might in other engines), but now it's possible to do all that with the user agent style sheet with new additions to CSS that generalized these things.
It saves an enormous amount of work to implement the web at a particular point in time as opposed to starting at the beginning and keeping up with the changes until that same point in time. Not only that, but all the failed branches that were already explored mean that a new engine avoids quite a bit of technical debt.
It's still quite hard. Here's a nice example of a layout bug on HN we thought would be simple to fix for the release, but turned ugly fast: https://github.com/servo/servo/issues/11821
This has been an interesting test. Thanks for your suggestion of the -c option for "can't load servo" -- that worked. Filed a bug on the HN login where tab autosubmits the form instead of changing textboxes. I can neither see the text I'm typing nor scroll down on HN. I'll check to see if there are open issues on those and file them if there aren't.
>I would say that since the late '90s, there hasn't been a single new engine made. Everything we have now is a development based on KHTML, Gecko, Presto or Trident.
Compared to KHTML, WebKit and Blink were pretty much 5 or more whole new projects on top of it. And even the latest IE is hugely rewritten compared to Trident.
If you start servo with the devtools port on, you can use the Firefox tools to get a remote console. This may or may not be working well at the moment. The JS console.log stuff will get output to stdout or stderr when you run as well.
There are discussions underway about how to bring in other tools, and how to get stuff like this into Servo or browser.html itself.
not working at all
./runservo (as per instructions)
thread 'main' panicked at 'Failed to create window.: OsError("GL context creation failed")', ../src/libcore/result.rs:785
note: Run with `RUST_BACKTRACE=1` for a backtrace.
./servo
welcome to servo, then it changes to a white window
Most likely this is due to your system only supporting OpenGL versions < 3.0. I had the same problem. Replacing '-w' with '-c' in the 'runservo' script might fix your startup problem.
We've gotten several reports of this, and this is exactly why we wanted to get some wider testing. Hopefully we'll figure this out and get a patch in soon. In the meantime, the suggested workaround will get you going I think at the expense of some rendering performance.
Just opened up the MacOS build. It is pretty impressive! But the autocomplete is just awful. Typing in "http://localhost:3000" at normal speed produced "http:///localhost:300000". Especially while it's in such a primitive developer-oriented state it would be much more useful to disable that silly autocomplete.
Edit: Additionally, it doesn't yet support native keybindings like Command left-right or Alt left-right.
I doubt that there's going to be much effort into improving the HTML based browser that they're using as a shell to the engine outside of making sure you can go to a specific URL. Things like hotkeys and autocomplete are nice, but not really necessary for developing on Servo. I could be wrong on that though
Well that's exactly the point (concerning autocomplete anyway). It would be nice to have if it worked... but right now it just gets in the way. And it's not even that relevant to the underlying engine, right?
The browser.html team works on the frontend, so they are very much interested in improving this. You're right that Servo devs themselves often use an even more stripped down test harness that you supply URLs to on the commandline, but that may change now that browser.html has gotten quite nice :)
We found some weird edge cases with autocomplete last night, but couldn't reproduce them well. If you have some time and can reproduce it, we'd love a bug report so we can try and figure it out.
I think the issue would be much better solved by not automatically completing and instead showing a grayed-out suggestion that can be selected by hitting the right arrow.
Anyway, what it is doing right now is seeing "http:/" and thinking <Ok, I can just add /sitethatobviouslyhasnorelevanceyet.com and everything will be fine!>. But while it was thinking, I already started typing the second "/" and didn't notice so I got in "lo" before I saw the result. At that point what's in the field is "http:///sitethatobviouslyhasnorelevanceyet.comlo". The issues here are serious enough but it's not entirely surprising you can't reproduce them well.
There are several issues at play here; native keybindings for input fields have not been fully implemented in Servo yet.
As for autocomplete, a few people have reported issues. In our testing we did not run into this issue. Right now we have an issue and a reproducible test case to tackle it:
https://github.com/browserhtml/browserhtml/issues/1157
I like how a few comments up you're saying how harmful Windows is and implying how easy it is to run Linux and Servo and here you are complaining about mismatched libraries.
It's because the Servo authors chose not to distribute libraries with Servo, and their chosen version is not the same as mine. It's known as "DLL hell", because it frequently happens on Windows with DLL libraries.
It looks like you misunderstood me. Linux is great for development. It's free. It's made by developers and for developers. It has a lot of development tools integrated into the OS. You can tweak, modify, and debug just about anything, including the kernel. It's well documented and has full sources. If you want to help the Servo developers, Linux is a great choice. If you want to USE Servo, then you need to wait until the developers release a version for your platform, because it is not ready for end users. It is sad that I need to explain that on the Hacker News site.
Congrats on the release! Servo itself has been getting most of the attention in this thread, and rightfully so, but I just wanted to mention that I really love the progress the team has been making on the browser.html side as well!
It's great to finally see a browser UI built from the ground up on top of standard web technologies and with first-class support for vertical tabs to boot!
While there is still a lot of work to be done on Servo, I wonder if other browser engines, like WebKit or Blink, will be able to benefit from the research Mozilla has done.
I know it's not the point, but having this publicly available in such a usable form - even with this bare minimum UI - is a huge milestone. A big barrier to adoption I hear from many potential users is, "Are there any big users of Rust yet? No? Why would we?" This answers that.
Half jokingly, all you need now is an emscripten target, so it can be run inside a browser, and anyone can try it out without needing to install an app :)
The thing I love about the programming community is that even though that's clearly meant as a joke, someone'll make it happen, just to see if they can do it :)