Hi! I'd be happy to cover any of the fundamental questions you are referring to. I tried to cover them last time, but if something is still missing, I'd be more than happy to fix it :) Would you be so kind as to specify your concerns?
Regarding the questions, I've just put a small summary of updates at the top level of this thread. Regarding real-world applications, Luna will be well suited to (but not limited to) interactive data science, creating custom data visualisations, microservices management, rapid prototyping of data processing networks (including IoT systems), and of course graphics or sound processing.
As well as things like graph algorithms for dealing with Google-scale data. We might even put skills like map/reduce, GPU programming, and some as-yet-undetermined cloud management API into this bucket too.
Alright, sorry for the confusion. Our commercial offering (SpaceBase) is very much targeted at MMOs and real-time LBSs. However, like many start-ups we really do like building cool stuff. And while it is indeed the case that Galaxy will soon be offered as a component of SpaceBase, it has a very different design from other memory-grid projects/products, so we decided to open-source it and let the community explore other possible uses.
My post was meant to be an introduction to a series of very technical blog posts discussing theoretical and practical aspects of distributed systems. The post was not meant to serve a clear commercial purpose, so I was trying to steer the discussion away from commercial uses and more to its CS aspects. You know, we really find this stuff interesting. Some of my future posts will discuss the more theoretical sides of Galaxy and will drill very deeply into its design and algorithms, while others will discuss how SpaceBase will make use of Galaxy to help MMOs build huge, rich worlds, and LBSs track lots of moving objects in real-time. To be more concrete and give just a taste, I'll say this: when SpaceBase runs on top of Galaxy, objects are transferred from one node to another to create a dynamic area-of-responsibility for each node. This means that each node will be responsible for processing all objects in some region of the game world (or real world for LBSs). But the regions are dynamic - namely, they shrink and grow to accommodate non-uniform load, so that small busy areas will be split over several nodes, while large, relatively vacant ones will be handled by just one.
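For intuition only, here's a toy 1D sketch of that kind of load-adaptive partitioning. To be clear, this is entirely my own illustration, not SpaceBase/Galaxy code: the `Region` class and the split/merge thresholds are made up, and real spatial partitioning would of course be 2D/3D and distributed.

```python
# Toy 1D illustration of load-adaptive regions (my own sketch, not
# SpaceBase/Galaxy code): each node owns an interval of the world; a busy
# interval is split across nodes, and quiet neighbours are merged back.
from dataclasses import dataclass

MAX_OBJECTS = 100   # assumed per-node load threshold (split above this)
MIN_OBJECTS = 20    # assumed underload threshold (merge below this)

@dataclass
class Region:
    lo: float
    hi: float
    objects: list   # x-coordinates of objects this node is responsible for

def rebalance(regions):
    """Split overloaded regions and merge adjacent underloaded ones."""
    # split pass: any region over the load threshold is cut in half
    split = []
    for r in regions:
        if len(r.objects) > MAX_OBJECTS:
            mid = (r.lo + r.hi) / 2
            left = [x for x in r.objects if x < mid]
            right = [x for x in r.objects if x >= mid]
            split += [Region(r.lo, mid, left), Region(mid, r.hi, right)]
        else:
            split.append(r)
    # merge pass: fold an underloaded region into its left neighbour
    merged = []
    for r in split:
        if (merged and len(r.objects) < MIN_OBJECTS
                and len(merged[-1].objects) + len(r.objects) <= MAX_OBJECTS):
            prev = merged.pop()
            merged.append(Region(prev.lo, r.hi, prev.objects + r.objects))
        else:
            merged.append(r)
    return merged
```

Run `rebalance` periodically and you get the behaviour described above: a small busy area ends up spread over several regions (nodes), while a large, quiet area collapses back into one.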
Hi all, Head of Product for Lambda here. The research team that built text2bricks and I will be on hand to answer any questions you might have about the project, 1-Click-Clusters (which we just launched!), or anything else related.
All of the models are available (linked in the article) and you can even play a fun game we made using the model!
It took me a while to realize it, but Jax is actually a huge opportunity for a lot of scientific computing. Jax was originally developed as a more flexible platform for doing machine learning research. But Jax's real superpower is that it bundles XLA and makes it really easy to run computations on GPU or TPU. And huge swathes of scientific computation basically run large scale vectorized computations.
When I was in astronomy (about a decade ago) I did large scale simulations of gravitational interactions. But at the time all these simulations were done on CPU. Some of the really big efforts used more specialized chips, but writing code for them was a huge effort.
But today with Jax, if you want to write an N-body simulation of a globular cluster, you can just code it up in numpy and it'll run on a GPU for free and be about 1000x faster. From what I can tell though, very few people in the sciences have caught on yet.
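To make that concrete, here's a minimal sketch of what I mean (my own toy example, not anything from a real code base): a direct-summation N-body step written against jax.numpy. The softening length, units, and the leapfrog integrator are all assumptions; the point is just that plain numpy-style array code becomes XLA-compiled and GPU/TPU-ready once you wrap it in `jax.jit`.

```python
import jax
import jax.numpy as jnp

G = 1.0           # gravitational constant in simulation units (assumed)
SOFTENING = 1e-3  # Plummer softening to avoid singular pairwise forces

def accelerations(pos, mass):
    """Pairwise gravitational accelerations, O(N^2) direct summation."""
    # disp[i, j] = pos[j] - pos[i]
    disp = pos[None, :, :] - pos[:, None, :]
    dist2 = jnp.sum(disp**2, axis=-1) + SOFTENING**2
    inv_dist3 = dist2**-1.5
    # zero out self-interaction on the diagonal
    inv_dist3 = inv_dist3 * (1.0 - jnp.eye(pos.shape[0]))
    return G * jnp.sum(disp * (mass[None, :, None] * inv_dist3[:, :, None]), axis=1)

@jax.jit
def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick leapfrog step, jit-compiled via XLA for CPU/GPU/TPU."""
    acc = accelerations(pos, mass)
    vel_half = vel + 0.5 * dt * acc
    pos_new = pos + dt * vel_half
    vel_new = vel_half + 0.5 * dt * accelerations(pos_new, mass)
    return pos_new, vel_new

# Toy "globular cluster": 1000 equal-mass bodies with random initial conditions.
pos = jax.random.normal(jax.random.PRNGKey(0), (1000, 3))
vel = 0.1 * jax.random.normal(jax.random.PRNGKey(1), (1000, 3))
mass = jnp.ones(1000) / 1000.0

for _ in range(100):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```

The exact same code runs unchanged on CPU, GPU, or TPU depending on what JAX finds on the machine, which is the "for free" part.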
This is a real kick in the butt; I've been working on a similar project for physics since last summer. There's a lot I can translate from my experience as a physics PhD and co-founder of Taipei Hackerspace.
Thank you very much for the very detailed answer. The strengths of the project are much clearer to me now.
Something I wonder is whether there are any plans to make blabs capable of handling the heavy computations needed in scientific applications (presumably via some sort of FFI or external solvers), or whether they are only intended for "interactive exploration"?
In general it was built for math and signal processing (a broad field). Within those fields, it was initially designed for real-time signal processing (image analysis, communication, decryption). It turns out that makes it a pretty good fit for other things as well (like neural nets). Here is the publication list showing some of the applications (for later; the server is flooded right now): http://parallella.org/publications
Oh cool, I enjoyed the posts on differential dataflow and incremental/streaming systems on your new site! If you don't mind my asking, how did you get into independent research?
I used it a few months ago to develop some HPC code for analyzing the timelines of a cosmological instrument (ESA Planck/LFI), and I was quite happy with the result. I even wrote a blog post about my experience (http://ziotom78.blogspot.fr/2015/01/lfi-data-analysis-with-f...).
I interned last summer at Esperanto. Glad to see it on the list! It's definitely interesting. Some colleagues worked on porting OpenJourney (a fine-tuned Stable Diffusion) to the ET-SoC-1, and it was fast, and more efficient than Nvidia GPUs!
The first three especially would give you something to sink your teeth into, because they are decidedly non-local, and it would be pretty tricky to get that to work in a coarse-grained parallel environment with a high cost of communication between nodes.
Most of the projects being done right now fall into the 'embarrassingly parallel' group (and it would be nice to see that change).
There are a lot of similarities between a good scientific platform and a good creators' platform: text processing, image/signal processing, 3D modeling/simulation, low latency, input/output devices, and even things like storage.