Notch trying jsFiddle (jsfiddle.net)
659 points by vasco | 2012-12-02 22:29:43+00:00 | 143 comments




So cool.

Can anyone explain how this works? I've never got to grips with 3D.

Looks like he's ray casting it, no 3D code implemented, just the illusion itself.

Well... actually the code contains a minimal 3D engine, it is not an illusion, you have the observer variable and could use it to navigate inside the map created in the initialization function. What is more 3D than that? :-)

There is no explicit geometry, polygons, or GPU acceleration, but raycasting in 3D is certainly 3D.

Update: Thanks to the repliers. You are absolutely right: this is 3D rendering.

What I meant to say is that it is using Canvas 2D, instead of Canvas 3D, also known as WebGL. Antirez gave a wonderful explanation above[1]:

And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level... Even selecting the color of every pixel requires non-trivial work in the inner loop

[1] http://news.ycombinator.com/item?id=4862910


I appreciate it is written onto a 2D plane. But it does look like 3D. This appearance of looking like 3D when a screen is 2D is complicated and very mathsy (not a word - I know). I've tried reading about it before but it is not straightforward.

It is relatively straightforward (at least conceptually) - you're just projecting 3D vectors down onto a 2D plane (your screen)[1]. Conceptually, this is just the bread and butter of vector algebra (first year physics undergrad).

[1] Of course, it gets more complicated if you want to be more realistic, like including a field of view (cone) rather than a screen-sized chunk (prism) of 3D objects, and so on, but these are just details.
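The projection idea described above can be sketched in a few lines of JavaScript (an illustration of the general technique, not code from the demo; the function name and parameters are mine):

```javascript
// Perspective projection: a 3D point is scaled by focal/depth, so
// farther points land closer to the screen centre.
function project(px, py, pz, width, height, focal) {
  const scale = focal / pz;      // the "perspective divide"
  return {
    x: width / 2 + px * scale,   // shift so (0, 0, z) maps to the centre
    y: height / 2 + py * scale,
  };
}

project(0, 0, 10, 320, 240, 100); // → { x: 160, y: 120 }
```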


In that sense, all games are rendering in 2D, because screens are 2D. They just do it in a much more complicated way.

What I'm seeing in this demo is 3D in every sense, the code is basically a very simple 3D engine, and it requires knowledge of computer graphics to understand it, so the top-level commenter here asked a valid question.


It looks like he's using a matrix to create cartesian coordinates in a model-view space and then performing some basic linear algebra on the points to animate it. For a primer on 3D worlds check out http://learningwebgl.com/blog/?page_id=1217 where they go over it using WebGL.

Awesome implementation. Actually, no advanced feature is used; this code could easily be ported even to assembly on any device where you can set pixels in RGB format.

And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level... Even selecting the color of every pixel requires non-trivial work in the inner loop.

AWESOME code.


If the creator of Redis approves this, it's definitely awesome! =)

I totally agree. You probably meant "code operates at such a _high_ level" :)

low level = close to hardware, high level = high abstraction, like JS


No I really mean low level :-)

Javascript is high level, but here notch is using it as a low level framebuffer.
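The "dumb framebuffer" pattern being described looks roughly like this (a sketch with my own helper name; the commented-out canvas calls mirror the ones that appear in the demo):

```javascript
// Write one RGBA pixel into a flat byte buffer, the way a framebuffer
// is addressed: 4 bytes per pixel, rows laid out one after another.
function setPixel(data, w, x, y, r, g, b) {
  const i = (y * w + x) * 4;
  data[i] = r;
  data[i + 1] = g;
  data[i + 2] = b;
  data[i + 3] = 255;  // fully opaque
}

// In the browser, the buffer comes from the canvas and is blitted back
// once per frame:
//   const ctx = document.getElementById('game').getContext('2d');
//   const img = ctx.createImageData(w, h);
//   ...setPixel(img.data, w, x, y, r, g, b) for every pixel...
//   ctx.putImageData(img, 0, 0);
```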


Ah ok, thanks for clarification

> And indeed it is pretty impressive how fast the rendering happens given that the code operates at such a low level.

Sorry if this is a stupid question, but wouldn't you normally expect low-level code, whilst more time-consuming to write, to be faster or just as fast as higher-level code?

For example, how C is used as a standard for fast code since it's so low level.


You would expect low-level code to be very fast in a low-level language because it runs directly on the metal. But in a high-level language, low-level code gets translated into other code first, so what actually runs on the hardware is much more complicated. Using the high-level features of JavaScript can actually lead to better-optimized low-level code.

If you are using C, you can afford to compute everything at low level as Notch is doing in this example. And you can afford writing pixel by pixel into a frame buffer that is later rendered.

However, if you use this approach in JavaScript you unfortunately get poor performance. And indeed this demo requires a fast computer to do what could otherwise be done in C, with direct access to the video frame buffer, on a computer maybe 50 times slower.

Even C coders rarely do these low-level things any more, and let the graphics chip do a lot of the work when the goal is to produce a game.

So in Javascript instead you want to usually call a library that implements a 3D engine in a lower level language like C and with more direct hardware access.

But in this case, it makes sense to do all this in a low level way and in Javascript: it is just a demo that is full of interesting code and with an appealing visual output.


Yes, but when you write in a high-level language like javascript without taking advantage of all of the high-level features like webgl (which are provided by the implementation and assumed to be fast), performance suffers.

In this case, Notch is writing code that doesn't necessitate the overhead of using JS. It could easily be rewritten as C and probably run faster that way. Not that it isn't interesting, of course.


I respectfully disagree with your use of low level, though I do understand your analogy about the pixelbuffer being like a frame buffer.

But he's actually using Javascript arrays which can get passed around inside a javascript VM, and do not have a set size. Not very "low level".

I spent a moment and changed the arrays to Float32Arrays which have set sizes and sit in memory without being fucked with by the VM and sped the demo up dramatically.

http://jsfiddle.net/uzMPU/2612/
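The change amounts to something like this (a sketch; the fiddle above uses Float32Array, while integer typed arrays, as suggested downthread, fit block ids and packed texels even better):

```javascript
// Plain Arrays hold boxed, resizable, possibly-holey values; typed
// arrays are fixed-size blocks of raw numbers the JIT can index directly.
const size = 64 * 64 * 64;

// before: const map = new Array(size);
const map = new Uint8Array(size);                  // block ids fit in a byte
const texmap = new Uint32Array(16 * 16 * 3 * 16);  // packed RGB texels

map[0] = 300;  // out-of-range values wrap: 300 % 256 = 44
```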


Side by side, this is amazingly smoother than the OP.

For me, the difference is only visible using Firefox. Chromium even renders the inefficient original version smoothly.

The magic of Just In Time compilation perhaps?

If possible, the map array should be a Uint8Array and texmap a Uint32Array (the JS equivalents of uint8_t and uint32_t buffers)

Interesting -- this is slower on my iPad.

Indeed, writing straightforward code with no dynamic memory allocations, and simple and predictable types, lets the VM do wonders with optimization and JITting. Good stuff.

I would argue that this is terrible code. It's not using the appropriate tools (webGL), it's badly structured, uncommented and unmaintainable (seriously, why on earth would anyone name their variables zd, _zd and __zd?). The Minecraft source code is also of notoriously poor quality. I honestly don't understand how you came to your conclusion.

The code is a direct port of Minecraft4k[1], it's optimised for space considerations.

[1] https://twitter.com/notch/status/275329867984302081


But that directly backs up my argument. From the horse's mouth:

"Code is awful due to the nature of the project"


It's a handful of lines of code doing world generation, texture-map generation, and ray-casting rendering, with a few tricks like a simple lighting model that still looks good.

All this using nothing more than a frame buffer, so you could port this easily from a C64 to any other computer. This code contains a lot of knowledge vs. use of pre-built APIs.

If a programmer reads this code and understands how every part works, he or she ends up knowing a lot more about computer graphics than before.


>This code contains a lot of knowledge vs. use of pre-built APIs.

It contains a lot of general knowledge about 3D graphics, and absolutely 0 specific knowledge about the environment in which it's running. All he's accomplished by doing it this way is show off that he knows trig and prevent any of it from being hardware accelerated.

>If a programmer reads this code and understands how every part works

But he never will, because the code is illegible.


I hope this simplified version helps. I removed texture mapping and camera rotation, plus simplified a few unusual constructs.

https://gist.github.com/4195130


JsFiddled version: http://jsfiddle.net/ter7G/

Correct me if I'm wrong, but does this procedurally generate the textures for each block? I can't see any code for loading assets, and the init function has some quite complicated color code. If so, that is awesome!

Yes, the procedure generates the texture of every "type" of block in the first loops after the init() function. 16 different types of blocks are generated.

Then a random map is created, with the "center" that is set to empty blocks with high probability. Finally a function traces the map on the screen at every tick of the clock.
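That generation step might look roughly like this (a paraphrase with my own constants, not Notch's exact code):

```javascript
// Fill a 64x64x64 grid with random block types, carving out mostly-empty
// space near the centre so the camera has somewhere to fly.
const size = 64;
const map = new Uint8Array(size * size * size);

for (let x = 0; x < size; x++)
  for (let y = 0; y < size; y++)
    for (let z = 0; z < size; z++) {
      const i = x + y * size + z * size * size;  // flatten 3D -> 1D
      const dx = x - size / 2, dy = y - size / 2, dz = z - size / 2;
      const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
      // near the centre: air (0); elsewhere: random block type 1..15
      map[i] = dist < size / 4 || Math.random() > 0.7
        ? 0
        : (1 + Math.random() * 15) | 0;
    }
```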


Ok, that's pretty cool :P

I've recorded a video explaining how it works: https://www.youtube.com/watch?v=WaZvDCmlERc

Wonderful

This was incredibly helpful - thank you.

Great. Good humor too.

well done, thank you

Great - thank you! (also thanks for your newsletters)

This would be a great demo to see running in a responsive programming environment, à la Bret Victor's amazing Inventing on Principle talk (http://vimeo.com/36579366). I want to be able to click on hex values and get a color picker that updates the demo in realtime, and be able to click on various magic numbers and drag a slider to change them.

Incidentally, Bret Victor's talk was the starting point for Khan Academy's CS curriculum that John Resig led (http://ejohn.org/blog/introducing-khan-cs/).

Edit: I managed to get this running in livecoding.io, which does some of what Bret Victor was talking about (basic sliders and color pickers): http://livecoding.io/4191583. Not sure why it's running so much slower though...


Thanks for the livecoding link. I assume there's some overhead in constantly checking for and applying code changes - that's probably the reason for the lower frame rate.

Good point, I hadn't thought about the fact they might be polling for changes. Might be better if livecoding used an event based model to check for updates...

Or if it works at all like Scrubby (http://nornagon.github.com/scrubby/) which was on HN a couple days ago, it might rewrite all constants into global lookup table indexes, which surely wreaks havoc on performance.

How do you suppose event-based models are notified of changes? Ultimately you are still polling, you're just abstracting it out of your higher-level design.

It's not polling; it's re-evaluating code when CodeMirror triggers an event for text changing (which uses the DOM underneath).

The problem is from calling setInterval every time the code is evaluated, leading to way too many function calls per second.
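One hedged way to avoid stacking timers across re-evaluations (my own helper, not livecoding's actual fix):

```javascript
// Keep a single handle to the running interval; clearing it before
// starting a new one means repeated eval passes never stack timers.
let handle = null;

function startLoop(clock, fps) {
  if (handle !== null) clearInterval(handle);  // stop any previous loop
  handle = setInterval(clock, 1000 / fps);
  return handle;
}
```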


This "responsive programming environment" you speak of has been around for a really long time. Don't expect sliders and colorpickers to do cool stuff, but this should be a good jumping off point for you to start learning from: http://stackoverflow.com/questions/3305164/how-to-modify-mem...

Thanks, I'm plenty aware of gdb and debuggers:) (ex-google software engineer, etc etc) Go watch the video. Bret presents a very cogent vision for a dramatic improvement on most of today's standard engineering/design tools.

And here I'm going to point to Bret's later writing, http://worrydream.com/#!/LearnableProgramming. It's not about the sliders; it's about the understanding. A good example is the first large loop setting up the texture.

    for ( var i = 1; i < 16; i++) {
        var br = 255 - ((Math.random() * 96) | 0);
        for ( var y = 0; y < 16 * 3; y++) {
            for ( var x = 0; x < 16; x++) {

                ...

            }
        }
    }
If this was a first introduction to programming it would scare off many people.

My interpretation of Bret's idea is that it would be better to have the ability to highlight the section of the texture being written by each section of the loop. It's not the sliders & live editing that are most important; it's about linking abstract control to meaning within the learner.


yes - that is also my interpretation - you mouse over the canvas, and the block of code that "led" to the bit that you "highlight" in the canvas lights up, so you can directly navigate from the final output back to the source code (and vice versa).

Here is a version adapted for Tributary: http://tributary.io/inlet/4199801/

Press play in the bottom left to have it go, and scrub numbers/colors to your hearts content!

Part of the reason it slows down in livecoding is because of the way the original code uses setInterval: every change re-evaluates the code, which pollutes the scope and starts way too many timers going. I've added optional functionality to Tributary which gives you a run loop (using requestAnimationFrame) that doesn't pollute anything.

Hope this helps!


Very awesome:) It's _almost_ fast enough to feel like you're modifying it in realtime on my macbook air. This is a great demo of what might be possible with this responsive programming approach.

I showed it to my mom, and she was like ":Ooohh mother fucker"

Such a great example of what can be done in javascript/canvas. As it is, I was completely blown away by how little code it actually took to do that. My only gripe would be, why couldn't he have used descriptive variable names, so I can better go through and understand it? :-)

This code uses zero canvas/JavaScript-specific functions. It is using basic trigonometric functions and is using canvas only to write RGB pixels. Canvas is much more powerful and higher level than that, but this is not a good example of it, as Notch just needed to set pixels.

Having watched Notch write Prelude of the Chambered (which I then ported to JRuby) it seems he really loves this direct pixel manipulation style - as do I!

The sad part, though, is that it doesn't particularly scale well with canvas (yet), and using sprites, shape primitives, or even WebGL is the way to get full speed at a regular resolution. (Imagine a 640x480 pixel field of randomized colors: you can get hundreds of FPS for that in Java without breaking a sweat. Not so in the browser, and most certainly not in JRuby... my PotC port struggled to hit 12fps with equivalent code.)

The title of "ChamberedTest" here makes me wonder if notch is planning to use JS and canvas in the forthcoming Ludum Dare 25 (the gamedev contest where he created Prelude of the Chambered). I hope so!


For those interested, here are the first three hours of the "Prelude of the Chambered" stream.

http://www.youtube.com/watch?v=rhN35bGvM8c&t=10m05s&...


Thanks so much for this! I watched this back when ludum dare was happening, but after he moved his streaming channel to twitch.tv this part of the stream was lost.

A hopefully ad-free version will soon be available here (I trimmed the first ten minutes where nothing was happening but infringing audio):

http://www.youtube.com/watch?v=xYRBBXq3s6c&hd=1


Please disregard. Other copyrighted stuff was instantly detected.

> Having watched Notch write Prelude of the Chambered (which I then ported to JRuby) it seems he really loves this direct pixel manipulation style - as do I!

> The sad part, though, is it doesn't particularly scale well with canvas (yet) and using sprites, shape primitives, or even WebGL is way to get full speed at a regular resolution.

It doesn't just not scale well with canvas, it doesn't scale well with modern hardware. A single call to draw a textured polygon of any size is ~as expensive as the call drawing a single pixel. A call to draw a stored model with thousands of polygons is ~as expensive as the call to draw a single polygon.

When doing pixel at a time drawing like that you are completely ignoring the specialized hardware found in the GPU, which is vastly more efficient at pushing pixels than the CPU, even in the lowest-end Intel integrated GPUs of the newest generation.

If you want to do serious graphics, you really should just get used to OpenGL/DirectX.


I agree this is not taking advantage of the GPU, but it's not making a draw call for every pixel either.

It looks to me like it's probably making a single call for each frame. Perhaps it mostly depends on how well the pixels.data[] call is implemented.


Notch gave a little more context[1]:

Spent most of today learning new stuff. Ported Minecraft4k. Code is awful due to the nature of the project, but here: http://jsdo.it/notch/dB1E

[1] https://twitter.com/notch/status/275329867984302081


Aren't the following canvas/javascript specific?

ctx = document.getElementById('game').getContext('2d');

pixels = ctx.createImageData(w, h);

ctx.putImageData(pixels, 0, 0);

EDIT: Thanks for the clarification!


I think the point was that this code can work in any environment where you can write to specific pixels (without magic WebGL transformation/rasterization operations or complex Canvas drawing routines).

Antirez meant to say that notch is using canvas as a dumb framebuffer.

He builds the raster image pixel by pixel instead of using higher level primitives (polygons).


I would never have thought canvas performance would be good enough for this.

Reminds me of when I wrote a Game of Life implementation in JavaScript in college using a bunch of tiny absolutely positioned <divs>.

You know, you could have just floated them...

And if that sounds crazy, my last implementation of the GoL was built for micro-optimization... my field space was a 1D array that held both the current generation and the swap space for the next generation. So I could see things being made slightly simpler by having the draw space semantically match the logic.
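The single-array trick described above can be sketched like this (my reconstruction of the general idea, not the original code), using two bit planes in one buffer instead of two separate arrays:

```javascript
// Game of Life on a torus: bit 0 of each byte holds the current
// generation, bit 1 holds the next one being computed.
const W = 8, H = 8;
const cells = new Uint8Array(W * H);

function step() {
  for (let y = 0; y < H; y++)
    for (let x = 0; x < W; x++) {
      let n = 0;  // count live neighbours (current generation = bit 0)
      for (let dy = -1; dy <= 1; dy++)
        for (let dx = -1; dx <= 1; dx++) {
          if (dx === 0 && dy === 0) continue;
          const nx = (x + dx + W) % W, ny = (y + dy + H) % H;  // wrap
          n += cells[ny * W + nx] & 1;
        }
      const alive = cells[y * W + x] & 1;
      const next = alive ? (n === 2 || n === 3) : n === 3;
      if (next) cells[y * W + x] |= 2;  // write into bit 1
    }
  // commit: shift the next generation down into bit 0
  for (let i = 0; i < cells.length; i++) cells[i] >>= 1;
}
```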


That sounds...fun? ;)

"Notch coding in JS" is a bit more descriptive than "trying jsfiddle", especially since it looks like code posted after the fact. A minecraft-like env rendered on canvas with its own mini-3D-engine is amazing nonetheless :D

To be fair, that's the description notch used in his tweet: https://twitter.com/notch/status/275331530040160256 :)

"Trying jsfiddle.net" was in reference to his previous tweet, the previous host he used crashed under the load from his twitter link. He wasn't "trying jsfiddle.net", he was "trying jsfiddle [as an alternative to the previous host that crashed]"

Can someone dissect this code and explain what it does?

The first set of three nested loops procedurally generates texmap, a flat Array(16 * 16 * 3 * 16): 16x16 pixel textures, one each for the top, sides, and bottom, times 16 block types. The inner loop tests against i, which is the current block "type", and performs customizations to the procedurally generated textures.

The next set of three nested loops is generating a random "world" with a tunnel cut out of it.

The renderMinecraft function is performing a minimal perspective projection http://en.wikipedia.org/wiki/3D_projection for a single ray cast into the world. Each pixel is cast and then, for each object hit by the ray cast, the closest is found, and a texture mapped pixel is calculated and written into the frame buffer.
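Based on that layout, a texel lookup into the flat texmap would go something like this (helper name mine; layout inferred from the description above):

```javascript
// texmap is a flat array of 16 block types x 3 faces x 16x16 texels.
// u, v index within a face (0..15); face: 0 = top, 1 = side, 2 = bottom.
function texel(texmap, u, v, face, blockType) {
  return texmap[u + (v + face * 16) * 16 + blockType * 16 * 16 * 3];
}
```

This matches the `texmap[u + v * 16 + tex * 256 * 3]` expression quoted elsewhere in the thread if v is taken to already include the face offset.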


It is worth noting that the lighting model is trivial and only maps given face directions to given brightness levels. Other than that, brightness is attenuated by distance.

It is pretty cool to see how a trivial lighting model like that can produce a pretty-looking result.

If you want to see just the brightness of faces without texture to see more easily how light is used, just change the line:

    var cc = texmap[u + v * 16 + tex * 256 * 3];
Into:

    var cc = 255+(255<<8)+(255<<16);
To also remove the "distant is less bright" effect just add:

    ddist = 255;
Before:

    var r = ((col >> 16) ...

I've recorded a screencast that digs into how the programmatic texture generation works: https://www.youtube.com/watch?v=WaZvDCmlERc (but I'm not touching the 3D.. yet ;-))

    setInterval(clock, 1000 / 100);
Is there any reason for writing 1000/100 instead of 10?

edit: Thanks guys!


It's a good way to remind you that the figure is in ms, and to easily manipulate the interval as a fraction of a second.

It just makes it clearer that he wants it to run at 100FPS: (1000ms / desired FPS)

The usefulness is more obvious when you want 60FPS, which is not easily represented without rounding.


It was probably for readability, i.e. "100 times per second".

Similarly, people will write (10 * 60 * 60) if the value is seconds and the programmer intends to indicate 10 hours.
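Spelled out with named constants (my naming), the intent reads directly:

```javascript
const FPS = 100;             // desired ticks per second
const TICK_MS = 1000 / FPS;  // 10 ms between clock() calls
// setInterval(clock, TICK_MS);  // as in the demo
```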


100 frames per second. He probably changed that during development.

Probably just to emphasize that clock will be called 100 times per second (which will result in 100 fps).

I personally do this for legibility and ease of comprehension. It kinda loses its value when dealing with such nicely rounded numbers but, as an example, 1000ms / 24 is easier for me to comprehend as 24 ticks per second than 41.6666ms.

Why is the framerate poor even when I shrink the viewing area down to 10x10? It seems to have the same stutter as the original large view. Is it because it's doing the same calculations but just showing fewer pixels?

Yes. It's the calculations and direct pixel setting that's slow, not the actual "rendering" of the canvas (which is accelerated like crazy in most browsers now).

Yep, the hard part is the calculations required for every pixel being displayed.

It does get better framerate changing the variables: http://jsfiddle.net/wATAp/


Part of the reason it's slow is that it doesn't use typed arrays. This may be much faster for you: http://jsfiddle.net/uzMPU/1573/

Yes, at least 2x here. I'll pull up Chrome's FPS display later.

Interesting, and pretty damn cool!

I enjoy that everything is procedurally generated. The software rasterizing is pretty cool too. I don't mind that he uses short variable names; sometimes it's nice to have multiple lines line up perfectly. But this is just silly...

    for ( var x = 0; x < w; x++) {
        var ___xd = (x - w / 2) / h;
        for ( var y = 0; y < h; y++) {
            var __yd = (y - h / 2) / h;
            var __zd = 1;

            var ___zd = __zd * yCos + __yd * ySin;
            var _yd = __yd * yCos - __zd * ySin;

            var _xd = ___xd * xCos + ___zd * xSin;
            var _zd = ___zd * xCos - ___xd * xSin;

Can you explain why?

I assume because there's four different "xd" variables differing only in the number of underscores prefixed ("xd", "_xd", "__xd" and "___xd"). (Same for "yd" and "zd").

I don't enjoy comparing the length of relatively similar lines. Why not use xa, xb, xc, etc... instead of xd, _xd, __xd, ___xd, yd, _yd, __yd, ___yd, zd, _zd, __zd, ___zd?

I think the underscores stand out better.
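For what it's worth, here is the same rotation sketched with descriptive names in place of the underscore prefixes (behaviour unchanged; the names are mine):

```javascript
// Turn a screen coordinate into a world-space ray direction by applying
// a pitch rotation (yCos/ySin) and then a yaw rotation (xCos/xSin).
function rayDirection(x, y, w, h, xCos, xSin, yCos, ySin) {
  const screenX = (x - w / 2) / h;  // was ___xd
  const screenY = (y - h / 2) / h;  // was __yd
  const screenZ = 1;                // was __zd

  const pitchedZ = screenZ * yCos + screenY * ySin;  // was ___zd
  const dirY     = screenY * yCos - screenZ * ySin;  // was _yd

  const dirX = screenX * xCos + pitchedZ * xSin;     // was _xd
  const dirZ = pitchedZ * xCos - screenX * xSin;     // was _zd

  return [dirX, dirY, dirZ];
}
```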

Here's a rendering of the texture map so you can see what it generates directly: http://jsfiddle.net/cXvKa/

Very cool, but a few notes to anyone getting their hopes up on the powers of Canvas:

As you may have anticipated, this demo runs at less than 1 frame per second on my Nexus 7. :/

In terms of per-pixel manipulation of a Canvas, it is very slow, even with javascript's (not very well supported) typed arrays. Simply put, multidimensional loops (unrolled or not) will always kill performance in JS with any significant dimensions. I learned this the hard way by trying to write a pixel-based GUI, and even that (with some crazy optimizations) could not render fast enough for all devices. If you are just doing blits, Canvas works very well (especially since this operation often uses the GPU).

For now, I think the best option is still WebGL; even though it is not widely supported yet, mobile devices are beginning to pick it up (Blackberry, for example).


Runs about 6-7 fps on my iPhone 5.

Nice :) Shows you how fast ARM is progressing...

But only 4-5 fps on my iPad 3. Hmmmph, thought that'd be faster.

My iPhone 5 is considerably smoother and faster than my iPad 3. That's surely also related to the iPad's insane pixel count, but the iPhone also loads and switches apps faster, before CPU-based rendering is a factor.

Note also that he's got his canvas set to half the CSS size, i.e. each canvas pixel is 4 screen pixels on a normal display. On a retina display each canvas pixel will be 16 screen pixels. So if you tried to manipulate real pixels this way on those devices it wouldn't be pretty. (Retina version: http://jsfiddle.net/RFLaW/)
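Sizing the backing store for high-DPI screens follows a standard pattern (a sketch; the helper is mine):

```javascript
// One canvas pixel per device pixel: backing store = CSS size x DPR.
function backingSize(cssW, cssH, dpr) {
  return { w: Math.round(cssW * dpr), h: Math.round(cssH * dpr) };
}

// In a browser:
//   const dpr = window.devicePixelRatio || 1;
//   const { w, h } = backingSize(320, 240, dpr);
//   canvas.width = w; canvas.height = h;  // backing store (device pixels)
//   canvas.style.width = '320px';         // CSS size stays the same
//   canvas.style.height = '240px';
```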

While this is cool, isn't this exactly what WebGL is for? I know drivers/support is an issue, but Chrome also ships with a multicore software renderer (SwiftShader) which can probably get a lot further than a JS putImageData engine.

It's just a demo. It's for fun.

Obligatory flash port: http://wonderfl.net/c/sqL5

That's faster than the JS demo (even the improved one) in any browser. It's sad that browsers still don't have at least the same performance as flash :(

It's faster because he made the Flash version use strongly-typed arrays.

This shows how far the browser has to go... Was it just me, or did your browser start to choke while rendering this?

It rendered it fine, but I could tell that the DOM was being abused heavily. I don't mean that in a bad way, but it just felt very 'heavy'.

On the flip side, it's awesome that browsers can do this.


The DOM is not being heavily abused; it's just a canvas element and nothing else. The demo is so CPU intensive because the code is drawing the rasterized image pixel by pixel on the canvas, instead of relying on the GPU and using WebGL which would, AFAIK, result in much better performance.

One positive effect of doing this is that the code is _very_ portable and doesn't require any crazy feature in the browser or the PC (for example, I'm able to see the demo on my work computer even though WebGL doesn't work on it due to lack of drivers).


I've recorded a screencast that digs into how the programmatic texture generation works in this code: https://www.youtube.com/watch?v=WaZvDCmlERc

Enjoyed that immensely, thank you!

Great analysis of what's going on. Thanks.

great work.

it'd be also great to do an analysis of the geometry data as well, if you can!


Thanks a lot for taking the time! Any chance you can do one for the game loop and the rest of the code?

who the heck is Notch?

Notch created a PC game called Minecraft, where you mine blocks and build things. It became very popular and made him a multi-millionaire.

This demo has a little bit of minecraft in it. :)


thank you very much!


How is that a bad question? Yes, he could have looked it up, but it wouldn't have hurt to have "Minecraft creator Notch trying jsFiddle" as the title, would it?

Funny how he writes it so much like Java. I see so many different styles with JavaScript.

Interesting for an untrained programmer to tweak and see what happens each time - changing colours, speeds, sizes, reversing the flow, changing textures, etc. Accomplishes so much with such concise code.

It's a good exercise to work out how to stop it from moving "forward" and get the mouse to control the camera. It's a little tricky, but you can do it with about 10-20 more lines of code.

I'm shocked and pleasantly surprised that js/canvas can putpixel fast enough for this. It must be around as fast as assembly on a 486!

Part of me can't decide if that's awesome or tragic.


This is actually quite fluid on iPhone 5. I asked a co-worker to try this on his Galaxy S3 (quad-core, no LTE) and it was really slow there. And in Chrome it was even slower than that on the S3.

Doesn't any web browser on Android use a JIT?


I believe Firefox for Android does, and as far as I know so does the stock Android browser! Perhaps the pixel writing that Notch is doing is not well optimised on Android?

Why is init called twice? Once at the end of the js and then again in the html.

The one in the html should not be there.

http://jsfiddle.net/uzMPU/3165/

using window.requestAnimationFrame just because we can!
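The pattern amounts to replacing the fixed setInterval with a self-rescheduling frame callback (a sketch; assumes a browser providing requestAnimationFrame and a render function like the demo's clock):

```javascript
// requestAnimationFrame aligns rendering with the display's refresh and
// pauses automatically in background tabs, unlike setInterval.
function startRenderLoop(render) {
  function frame() {
    render();                      // draw one frame
    requestAnimationFrame(frame);  // schedule the next
  }
  requestAnimationFrame(frame);
}
```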


Wow! This is incredibly smoother here on Firefox 18! (slow machine)

Notch has made a couple of comments on reddit:

http://www.reddit.com/r/programming/comments/146v69/how_notc...

    Please do NOT write code like this. It was originally written for the Java4k 
    competition, which focuses on executable code size, and is as a result almost 
    intentionally poorly written. It got even worse when I ported it to JS.
    It was mainly meant as a test for me to see what HTML5/JS can do, and an 
    exercise in porting code over.
http://www.reddit.com/r/programming/comments/146v69/how_notc...

    It would run a bit smoother if it was written in C++ (mainly due to not having 
    to rely on the GC, and having it precompiled gets rid of the warmup time), and 
    modern OpenGL would help quite a lot as well. A lot of the stuff done in CPU 
    now could be moved to a shader, which both makes code simpler, and gets rid of 
    the slow JNI calls required now.
    The main reason why Minecraft is slow is mainly 1) There's a LOT of polygons, 
    combined with 2) The difficulty of making an efficient culling algorithm in a 
    dynamic world.
    If someone solves #2, very interesting things could be made with a game similar 
    to Minecraft.
http://www.reddit.com/r/programming/comments/146v69/how_notc...

    No, they don't get merged because the textures are all pulled from the same atlas 
    to avoid texture swapping. With GLSL, they could be merged, saving quite a lot of 
    polygons. For a while, I did attempt switching the atlas from a 16 x 16 grid to a 
    1 x 256 grid and merging horizontal runs of the same texture, but the resulting 
    texture size was to tall some graphics cards (on low end computers) would 
    automatically downsample it.
    The problem with the occlusion culling is not about knowing what parts are static, 
    but rather figuring out what occluders there are. It would be very beneficial not 
    to have to render caves under ground below the player, for example, or not to 
    render the entire outside when a player is inside a small closed house. Figuring 
    this out in runtime on the fly as the player moves around is.. expensive.

Here is the code in an App Studio project:

http://blog.nsbasic.com/?p=1060

Runs fine on an iPhone 5!


There's been quite a bit of analysis as to what's happening here; if anyone could do a breakdown (or provide good examples) it would be awesome for people like me who are amazed by the power of the maths involved.


Tooting my own horn here, but here's my own little "Minecraft" renderer in Neja, circa 2005 ( http://www.youtube.com/watch?feature=player_detailpage&v... ). Over 4 years before Minecraft, and even then it was nothing new. It was kinda cool for JavaScript in 2005, but all things considered, it was pretty lame compared to Ken Silverman's Voxlap, for instance.

The "cool" thing about this demo was that Canvas was not widespread back then, so I generated all the chunky effects on the fly as a 24-bit BMP image, then made a data: URI and updated an IMG tag.

