Well... actually the code contains a minimal 3D engine; it is not an illusion. You have the observer variable and could use it to navigate inside the map created in the initialization function. What is more 3D than that? :-)
Update: Thanks to the repliers. You are absolutely right: this is 3D rendering.
What I meant to say is that it is using Canvas 2D, instead of Canvas 3D, also known as WebGL. Antirez gave a wonderful explanation above[1]:
> And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level... Even selecting the color of every pixel requires non-trivial work in the inner loop.
I appreciate it is written onto a 2D plane. But it does look like 3D. This appearance of looking like 3D when a screen is 2D is complicated and very mathsy (not a word - I know). I've tried reading about it before but it is not straightforward.
It is relatively straightforward (at least conceptually) - you're just projecting 3D vectors down onto a 2D plane (your screen)[1]. Conceptually, this is just the bread and butter of vector algebra (first year physics undergrad).
[1] Of course, it gets more complicated if you want to be more realistic, like including a field of view (cone) rather than a screen-sized chunk (prism) of 3D objects, and so on, but these are just details.
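The projection described above can be sketched in a few lines, assuming a simple pinhole model (all names and constants below are illustrative, not taken from the demo):

```javascript
// Perspective projection: map a 3D point (camera space) onto a 2D screen.
// `focal` is the eye-to-screen distance; values here are illustrative.
function project(x, y, z, focal, screenW, screenH) {
  if (z <= 0) return null;          // points behind the eye can't be projected
  var scale = focal / z;            // farther points shrink
  return {
    sx: screenW / 2 + x * scale,    // put the origin at the screen center
    sy: screenH / 2 - y * scale     // flip y: screen y grows downward
  };
}

// A point twice as far from the eye lands half as far from the center.
var near = project(1, 1, 2, 100, 320, 240);   // { sx: 210, sy: 70 }
var far  = project(1, 1, 4, 100, 320, 240);   // { sx: 185, sy: 95 }
```

That division by z is the whole trick; everything else in a renderer is bookkeeping around it.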
In that sense, all games are rendering in 2D, because screens are 2D. They just do it in a much more complicated way.
What I'm seeing in this demo is 3D in every sense, the code is basically a very simple 3D engine, and it requires knowledge of computer graphics to understand it, so the top-level commenter here asked a valid question.
It looks like he's using a matrix to create cartesian coordinates in a model-view space and then performing some basic linear algebra on the points to animate it. For a primer on 3D worlds check out http://learningwebgl.com/blog/?page_id=1217 where they go over it using WebGL.
Awesome implementation; actually no advanced feature is used. This code could easily be ported even to assembler on any device where you can set pixels in RGB format.
And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level... Even selecting the color of every pixel requires non-trivial work in the inner loop.
> And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level.
Sorry if this is a stupid question, but wouldn't you normally expect low-level code, whilst more time-consuming to write, to be faster or just as fast as higher-level code?
For example, C is used as a standard for fast code since it's so low-level.
You would expect low-level code to be very fast in a low-level language because it runs directly on the metal. But in a high-level language, low-level code gets translated into other code, so what actually runs on the hardware is much more complicated. Using the high-level features of JavaScript actually leads to better-optimized low-level code.
If you are using C, you can afford to compute everything at low level as Notch is doing in this example. And you can afford writing pixel by pixel into a frame buffer that is later rendered.
However, if you use this approach in JavaScript you unfortunately get poor performance. And indeed this demo requires a fast computer to do what could otherwise be done in C, with direct access to the video frame buffer, on a computer maybe 50 times slower.
Even C coders rarely do these low-level things anymore, and let the graphics chip do a lot of the work when the goal is to produce a game.
So in JavaScript you instead usually want to call a library that implements a 3D engine in a lower-level language like C, with more direct hardware access.
But in this case, it makes sense to do all this in a low level way and in Javascript: it is just a demo that is full of interesting code and with an appealing visual output.
Yes, but when you write in a high-level language like JavaScript without taking advantage of the high-level features like WebGL (which are provided by the implementation and assumed to be fast), performance suffers.
In this case, Notch is writing code that doesn't necessitate the overhead of using JS. It could easily be rewritten in C and would probably run faster that way. Not that it isn't interesting, of course.
I respectfully disagree with your use of low level, though I do understand your analogy about the pixelbuffer being like a frame buffer.
But he's actually using JavaScript arrays, which get passed around inside a JavaScript VM and do not have a set size. Not very "low level".
I spent a moment and changed the arrays to Float32Arrays, which have set sizes and sit in memory without being fucked with by the VM, and sped the demo up dramatically.
Indeed, writing straightforward code with no dynamic memory allocations, and simple and predictable types, lets the VM do wonders with optimization and JITting. Good stuff.
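The change described above is mechanical; a minimal sketch of the difference (sizes and names are illustrative, not the demo's):

```javascript
// Plain Array vs typed array: a Float32Array has a fixed size and a known
// element type, so the JIT can compile the loop down to straight float math
// instead of handling boxed, resizable storage.
var n = 16;

// Before: dynamic array, grown element by element, may hold anything.
var plain = [];
for (var i = 0; i < n; i++) plain.push(Math.sin(i));

// After: preallocated, contiguous buffer of 32-bit floats.
var typed = new Float32Array(n);
for (var j = 0; j < n; j++) typed[j] = Math.sin(j);
```

The values match up to float32 rounding; for pixel and geometry work that precision loss is irrelevant, and the predictable memory layout is what buys the speedup.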
I would argue that this is terrible code. It's not using the appropriate tools (webGL), it's badly structured, uncommented and unmaintainable (seriously, why on earth would anyone name their variables zd, _zd and __zd?). The Minecraft source code is also of notoriously poor quality. I honestly don't understand how you came to your conclusion.
It's a handful of lines of code doing world generation, texture-map generation, and ray-casting rendering, with a few tricks like a simple lighting model that still looks good.
All this using nothing more than a frame buffer, so you could easily port this from a C64 to any other computer. This code contains a lot of knowledge vs. the use of pre-built APIs.
If a programmer reads this code and understands how every part works, he or she ends up knowing a lot more about computer graphics than before.
> This code contains a lot of knowledge vs. the use of pre-built APIs.
It contains a lot of general knowledge about 3D graphics, and absolutely zero specific knowledge about the environment in which it's running. All he's accomplished by doing it this way is showing off that he knows trig and preventing any of it from being hardware accelerated.
>If a programmer reads this code and understands how every part works
Correct me if I'm wrong, but does this procedurally generate the textures for each block? I can't see any code for loading assets, and the init function has some quite complicated color code. If so, that is awesome!
Yes, the procedure generates the texture of every "type" of block, in the first loops after the init() function. 16 different types of blocks are generated.
Then a random map is created, with the "center" set to empty blocks with high probability. Finally, a function draws the map on the screen at every tick of the clock.
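A stripped-down sketch of that texture-generation loop; the flat indexing mirrors the `texmap[u + v * 16 + tex * 256 * 3]` layout seen in the demo, but the grayscale coloring here is a simplification (the real code picks per-type colors and patterns):

```javascript
// Generate one 16x48 texture strip (top, side, bottom faces stacked) for each
// of 16 block types, into a single flat array indexed as x + y*16 + type*256*3.
var TYPES = 16, W = 16, H = 16 * 3;
var texmap = new Array(TYPES * W * H);

for (var type = 1; type < TYPES; type++) {       // type 0 (air) gets no texture
  var br = 255 - ((Math.random() * 96) | 0);     // base brightness for this type
  for (var y = 0; y < H; y++) {
    for (var x = 0; x < W; x++) {
      // Per-pixel noise on top of the base brightness (grayscale only here).
      var c = (br - ((Math.random() * 32) | 0)) & 0xff;
      texmap[x + y * W + type * 256 * 3] = c | (c << 8) | (c << 16);
    }
  }
}
```

Because each texture is just noise around a per-type base brightness, every block of the same type looks related but no two pixels repeat exactly.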
This would be a great demo to see running in a responsive programming environment, ala Bret Victor's amazing Inventing on Principle talk (http://vimeo.com/36579366). I want to be able to click on hex values and get a color picker that updates the demo in realtime, and be able to click on various magic numbers and drag a slider to change them.
Edit: I managed to get this running in livecoding.io, which does some of what Bret Victor was talking about (basic sliders and color pickers): http://livecoding.io/4191583. Not sure why it's running so much slower though...
Thanks for the livecoding link. I assume there's some overhead in constantly checking for and applying code changes - that's probably the reason for the lower frame rate.
Good point, I hadn't thought about the fact they might be polling for changes. Might be better if livecoding used an event based model to check for updates...
Or if it works at all like Scrubby (http://nornagon.github.com/scrubby/) which was on HN a couple days ago, it might rewrite all constants into global lookup table indexes, which surely wreaks havoc on performance.
How do you suppose event-based models are notified of changes? Ultimately you are still polling, you're just abstracting it out of your higher-level design.
This "responsive programming environment" you speak of has been around for a really long time. Don't expect sliders and colorpickers to do cool stuff, but this should be a good jumping off point for you to start learning from: http://stackoverflow.com/questions/3305164/how-to-modify-mem...
Thanks, I'm plenty aware of gdb and debuggers:) (ex-google software engineer, etc etc) Go watch the video. Bret presents a very cogent vision for a dramatic improvement on most of today's standard engineering/design tools.
And here I'm going to point to Bret's later writing, http://worrydream.com/#!/LearnableProgramming. It's not about the sliders; it's about the understanding. A good example is the first large loop setting up the texture.
for ( var i = 1; i < 16; i++) {
var br = 255 - ((Math.random() * 96) | 0);
for ( var y = 0; y < 16 * 3; y++) {
for ( var x = 0; x < 16; x++) {
...
}
}
}
If this was a first introduction to programming it would scare off many people.
My interpretation of Bret's idea is that it would be better to have the ability to highlight the section of the texture being written by each section of the loop. It's not the sliders & live editing that are most important; it's about linking abstract control to meaning within the learner.
Yes - that is also my interpretation - you mouse over the canvas, and the block of code that "led" to the bit that you "highlight" in the canvas lights up, so you can directly navigate from the final output back to the source code (and vice versa).
Press play in the bottom left to have it go, and scrub numbers/colors to your heart's content!
Part of the reason it slows down in livecoding is the way the original code uses setInterval: every change re-evaluates the code, which pollutes the scope and starts way too many intervals running. I've added optional functionality to Tributary which gives you a run loop (using requestAnimationFrame) that doesn't pollute anything.
Very awesome:) It's _almost_ fast enough to feel like you're modifying it in realtime on my macbook air. This is a great demo of what might be possible with this responsive programming approach.
Such a great example of what can be done in javascript/canvas. As it is, I was completely blown away by how little code it actually took to do that. My only gripe would be, why couldn't he have used descriptive variable names, so I can better go through and understand it? :-)
This code uses zero canvas/JavaScript-specific functions. It uses basic trigonometric functions and uses the canvas only to write RGB pixels. Canvas is much more powerful and higher-level than that, but this is not a good example of it, as Notch just needed to set pixels.
Having watched Notch write Prelude of the Chambered (which I then ported to JRuby) it seems he really loves this direct pixel manipulation style - as do I!
The sad part, though, is that it doesn't particularly scale well with canvas (yet), and using sprites, shape primitives, or even WebGL is the way to get full speed at a regular resolution. (Imagine a 640x480 pixel field of randomized colors; you can get hundreds of FPS for that in Java without breaking a sweat. Not so in the browser, and most certainly not in JRuby... my PotC port struggled to hit 12fps with equivalent code.)
The title of "ChamberedTest" here makes me wonder if notch is planning to use JS and canvas in the forthcoming Ludum Dare 25 (the gamedev contest where he created Prelude of the Chambered). I hope so!
Thanks so much for this! I watched this back when ludum dare was happening, but after he moved his streaming channel to twitch.tv this part of the stream was lost.
> Having watched Notch write Prelude of the Chambered (which I then ported to JRuby) it seems he really loves this direct pixel manipulation style - as do I!
> The sad part, though, is it doesn't particularly scale well with canvas (yet) and using sprites, shape primitives, or even WebGL is way to get full speed at a regular resolution.
It doesn't just not scale well with canvas; it doesn't scale well with modern hardware. A single call to draw a textured polygon of any size is about as expensive as the call drawing a single pixel. A call to draw a stored model with thousands of polygons is about as expensive as the call to draw a single polygon.
When doing pixel at a time drawing like that you are completely ignoring the specialized hardware found in the GPU, which is vastly more efficient at pushing pixels than the CPU, even in the lowest-end Intel integrated GPUs of the newest generation.
If you want to do serious graphics, you really should just get used to OpenGL/DirectX.
I think the point was that this code can work in any environment where you can write to specific pixels (without magic WebGL transformation/rasterization operations or complex Canvas drawing routines).
And if that sounds crazy, my last implementation of the GoL was built for micro-optimization... my field space was a 1D array that held both the current generation and the swap space for the next generation. So I could see things being made slightly simpler by having the draw space semantically match the logic.
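For the curious, that 1D-array Game of Life layout might look roughly like this (a sketch, not the commenter's actual code): one flat array whose first half is the current generation and whose second half is the scratch space for the next.

```javascript
// Game of Life on a single flat array: the first W*H cells are the current
// generation, the next W*H are the swap space for the next one.
var W = 5, H = 5;
var cells = new Uint8Array(W * H * 2);
var cur = 0;                        // offset of the live generation

function step() {
  var next = (cur === 0) ? W * H : 0;
  for (var y = 0; y < H; y++) {
    for (var x = 0; x < W; x++) {
      var n = 0;
      for (var dy = -1; dy <= 1; dy++) {
        for (var dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          var nx = (x + dx + W) % W, ny = (y + dy + H) % H;  // wrap the edges
          n += cells[cur + nx + ny * W];
        }
      }
      var alive = cells[cur + x + y * W];
      cells[next + x + y * W] = (n === 3 || (alive && n === 2)) ? 1 : 0;
    }
  }
  cur = next;                       // flip: the scratch half becomes current
}

// Seed a "blinker": three cells in a row oscillate with period 2.
cells[cur + 1 + 2 * W] = cells[cur + 2 + 2 * W] = cells[cur + 3 + 2 * W] = 1;
step();                             // the row becomes a column
```

Toggling an offset instead of swapping two arrays avoids any allocation per generation, which is exactly the kind of micro-optimization the parent describes.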
"Notch coding in JS" is a bit more descriptive than "trying jsfiddle", especially since it looks like code posted after the fact. A minecraft-like env rendered on canvas with its own mini-3D-engine is amazing nonetheless :D
"Trying jsfiddle.net" was in reference to his previous tweet, the previous host he used crashed under the load from his twitter link. He wasn't "trying jsfiddle.net", he was "trying jsfiddle [as an alternative to the previous host that crashed]"
The first set of three nested loops procedurally generates texmap, which is a flat Array(16 * 16 * 3 * 16): 16x16 pixel textures, one for each of the top, sides, and bottom, and then 16 of those. The inner loop tests against i, which is the current block "type", and performs customizations to the procedurally generated textures.
The next set of three nested loops is generating a random "world" with a tunnel cut out of it.
The renderMinecraft function is performing a minimal perspective projection http://en.wikipedia.org/wiki/3D_projection for a single ray cast into the world. Each pixel is cast and then, for each object hit by the ray cast, the closest is found, and a texture mapped pixel is calculated and written into the frame buffer.
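A minimal sketch of that per-pixel ray cast (a simplified fixed-step march through a voxel grid; the demo itself steps per-axis, which is exact and faster, so treat the names and constants here as illustrative):

```javascript
// March a ray through a voxel grid and return the first solid block hit.
// `map` is a flat 8x8x8 array of 0 (air) / 1 (solid), like the demo's world.
var S = 8;
var map = new Uint8Array(S * S * S);
map[4 + 4 * S + 6 * S * S] = 1;     // one solid block at (4, 4, 6)

function castRay(ox, oy, oz, dx, dy, dz, maxDist) {
  // Naive fixed-step march; a real engine advances to the next voxel
  // boundary on each axis instead of taking tiny uniform steps.
  for (var t = 0; t < maxDist; t += 0.05) {
    var x = (ox + dx * t) | 0, y = (oy + dy * t) | 0, z = (oz + dz * t) | 0;
    if (x < 0 || y < 0 || z < 0 || x >= S || y >= S || z >= S) return null;
    if (map[x + y * S + z * S * S]) return { x: x, y: y, z: z, dist: t };
  }
  return null;
}

// Cast straight along +z from (4.5, 4.5, 0.5): should hit the block at z=6.
var hit = castRay(4.5, 4.5, 0.5, 0, 0, 1, 20);
```

One such ray per screen pixel, with the texture sampled at the hit point and brightness scaled by `dist`, is the whole renderer.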
It is worth noting that the lighting model is trivial: it only maps given face directions to given brightness levels. Other than that, brightness is adjusted by distance.
It is pretty cool to see how a trivial lighting model like that can produce a pretty-looking result.
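That kind of trivial lighting can be sketched as a lookup from face direction to a fixed brightness, multiplied by a linear distance falloff (the constants below are illustrative, not the demo's):

```javascript
// Trivial lighting: a fixed brightness per face direction,
// scaled by a linear "distant is less bright" falloff.
var FACE_BRIGHTNESS = { top: 255, bottom: 128, side: 200 };

function shade(face, dist, maxDist) {
  var br = FACE_BRIGHTNESS[face];
  var falloff = Math.max(0, 1 - dist / maxDist);  // 1 near, 0 at maxDist
  return (br * falloff) | 0;                      // 0..255 brightness
}
```

Multiplying a texel's RGB channels by `shade(face, dist, maxDist) / 255` gives the final shaded pixel; no normals or light positions needed.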
If you want to see just the brightness of faces, without texture, to more easily see how light is used, just change the line:
var cc = texmap[u + v * 16 + tex * 256 * 3];
Into:
var cc = 255+(255<<8)+(255<<16);
To also remove the "distant is less bright" effect just add:
I've recorded a screencast that digs into how the programmatic texture generation works: https://www.youtube.com/watch?v=WaZvDCmlERc (but I'm not touching the 3D.. yet ;-))
I personally do this for legibility and ease of comprehension. It kinda loses its value when dealing with such nicely rounded numbers but, as an example, 1000ms / 24 is easier for me to comprehend as 24 ticks per second than 41.6666ms.
Why is the framerate poor even when I shrink the viewing area down to 10x10? It seems to have the same stutter as the original large view. Is it because it's doing the same calculations but just showing fewer pixels?
Yes. It's the calculations and direct pixel setting that's slow, not the actual "rendering" of the canvas (which is accelerated like crazy in most browsers now).
I enjoy that everything is procedurally generated. The software rasterizing is pretty cool too. I don't mind that he uses short variable names; sometimes it's nice to have multiple lines line up perfectly. But this is just silly...
for ( var x = 0; x < w; x++) {
var ___xd = (x - w / 2) / h;
for ( var y = 0; y < h; y++) {
var __yd = (y - h / 2) / h;
var __zd = 1;
var ___zd = __zd * yCos + __yd * ySin;
var _yd = __yd * yCos - __zd * ySin;
var _xd = ___xd * xCos + ___zd * xSin;
var _zd = ___zd * xCos - ___xd * xSin;
I assume because there's four different "xd" variables differing only in the number of underscores prefixed ("xd", "_xd", "__xd" and "___xd"). (Same for "yd" and "zd").
I don't enjoy comparing the length of relatively similar lines. Why not use xa, xb, xc, etc... instead of xd, _xd, __xd, ___xd, yd, _yd, __yd, ___yd, zd, _zd, __zd, ___zd?
Very cool, but a few notes to anyone getting their hopes up on the powers of Canvas:
As you may have anticipated, this demo runs at less than 1 frame per second on my Nexus 7. :/
In terms of per-pixel manipulation of a Canvas, it is very slow, even with JavaScript's (not very well supported) typed arrays. Simply put, multidimensional loops (unrolled or not) will always kill performance in JS at any significant dimensions. I learned this the hard way by trying to write a pixel-based GUI, and even that (with some crazy optimizations) could not render fast enough for all devices. If you are just doing blits, Canvas works very well (especially since this operation often uses the GPU).
For now, I think the best option is still WebGL; even though it is not widely supported yet, mobile devices are beginning to pick it up (BlackBerry, for example).
My iPhone 5 is considerably smoother and faster than my iPad 3. That's surely also related to the iPad's insane pixel count, but the iPhone also loads and switches apps faster, before CPU-based rendering is a factor.
Note also that he's got his canvas set to half the CSS size, i.e. each canvas pixel is 4 screen pixels on a normal display. On a retina display each canvas pixel will be 16 screen pixels. So if you tried to manipulate real pixels this way on those devices it wouldn't be pretty. (Retina version: http://jsfiddle.net/RFLaW/)
While this is cool, isn't this exactly what WebGL is for? I know drivers/support is an issue, but Chrome also ships with a multicore software renderer (SwiftShader) which can probably get a lot further than a JS putImageData engine.
That's faster than the JS demo (even the improved one) in any browser. It's sad that browsers still don't have at least the same performance as flash :(
The DOM is not being heavily abused; it's just a canvas element and nothing else. The demo is so CPU intensive because the code is drawing the rasterized image pixel by pixel on the canvas, instead of relying on the GPU and using WebGL which would, AFAIK, result in much better performance.
One positive effect of doing this is that the code is _very_ portable and doesn't require any crazy feature in the browser or the PC (for example, I'm able to see the demo on my work computer even though WebGL doesn't work on it because of a lack of drivers).
How is that a bad question? Yes, he could have looked it up, but it wouldn't have hurt to have "Minecraft creator Notch trying jsFiddle" as the title, would it?
Interesting for an untrained programmer to tweak and see what happens each time - changing colours, speeds, sizes, reversing the flow, changing textures, etc. Accomplishes so much with such concise code.
It's a good exercise to work out how to stop it from moving "forward" and get the mouse to control the camera. It's a little tricky, but you can do it with about 10-20 more lines of code.
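A sketch of the sort of change meant here. The variable names below are hypothetical (the demo's actual globals aren't spelled out in this thread); the idea is to freeze the observer position and drive the rotation angles from the mouse instead of the clock:

```javascript
// Hypothetical camera state: stop advancing the position each frame,
// and update the view angles from mouse movement deltas instead.
var camX = 32.5, camY = 32.5, camZ = 32.5;   // fixed observer position
var yaw = 0, pitch = 0;                       // view angles, in radians

function onMouseMove(dx, dy, sensitivity) {
  yaw += dx * sensitivity;                    // horizontal look
  pitch -= dy * sensitivity;                  // vertical look (inverted dy)
  // Clamp pitch so the camera can't flip over the vertical.
  var limit = Math.PI / 2 - 0.01;
  pitch = Math.max(-limit, Math.min(limit, pitch));
}

// The render loop would then derive its sin/cos rotation terms from
// `yaw` and `pitch` rather than from the elapsed time.
```

Wiring `onMouseMove` to the canvas's mousemove event (feeding it `movementX`/`movementY`) is the remaining handful of lines.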
This is actually quite fluid on iPhone 5. I asked a co-worker to try this on his Galaxy S3 (quad-core, no LTE) and it was really slow there. And in Chrome it was even slower than that on the S3.
I believe Firefox for Android does, and as far as I know so does the stock Android browser! Perhaps the pixel writing that Notch is doing is not well optimised on Android?
Please do NOT write code like this. It was originally written for the Java4k
competition, which focuses on executable code size, and is as a result almost
intentionally poorly written. It got even worse when I ported it to JS.
It was mainly meant as a test for me to see what HTML5/JS can do, and an
exercise in porting code over.
It would run a bit smoother if it was written in C++ (mainly due to not having
to rely on the GC, and having it precompiled gets rid of the warmup time), and
modern OpenGL would help quite a lot as well. A lot of the stuff done in CPU
now could be moved to a shader, which both makes code simpler, and gets rid of
the slow JNI calls required now.
Minecraft is slow mainly because 1) there are a LOT of polygons, combined with
2) the difficulty of making an efficient culling algorithm in a dynamic world.
If someone solves #2, very interesting things could be made with a game similar
to Minecraft.
No, they don't get merged because the textures are all pulled from the same atlas
to avoid texture swapping. With GLSL, they could be merged, saving quite a lot of
polygons. For a while, I did attempt switching the atlas from a 16 x 16 grid to a
1 x 256 grid and merging horizontal runs of the same texture, but the resulting
texture size was so tall that some graphics cards (on low end computers) would
automatically downsample it.
The problem with the occlusion culling is not about knowing what parts are static,
but rather figuring out what occluders there are. It would be very beneficial not
to have to render caves under ground below the player, for example, or not to
render the entire outside when a player is inside a small closed house. Figuring
this out at runtime on the fly as the player moves around is... expensive.
There's been quite a bit of analysis here as to what's happening; if anyone could do a breakdown (or provide good examples), it would be awesome for people like me who are amazed by the power of the maths involved here.
Tooting my own horn here, but here's my own little "Minecraft" renderer in Neja, circa 2005 ( http://www.youtube.com/watch?feature=player_detailpage&v... ). Over 4 years before Minecraft, and even then that was nothing new. It was kinda cool for JavaScript, in 2005, but all things considered, it was pretty lame compared to Ken Silverman's voxlap for instance.
The "cool" thing about this demo was that Canvas was not widespread back then, so I generated all the chunky effects on the fly as a 24-bit BMP image, then made a data: URI and updated an IMG tag.