I wrote a little hobbyist AI project [1] with no neural networks at all and was delighted with how good the results were. I definitely think the field is ready to start incorporating some different approaches.

[1] https://littlefish.fish

Feel free to play around with it.
This is a really nice example of a simple, practical, "non-deep" NN. I'm wondering, are there any good learning resources you could recommend for starting a project like this?
It makes me want to try a personal project that makes use of a tiny, lightweight NN as well. I see giant AlphaGo neural nets mentioned so frequently that I'd forgotten they could be lean :)
I'm not sure what point you're trying to make. Do you think a neural net is not capable of generating the code in the gist? Because it's pretty easy to do that. The harder part that we're still trying to figure out is getting that code to do something meaningful.
I remember doing something like this (without the neural networks, that is), but my results were very, very bad. If I find that project on my old laptop, I'll post it to GitHub.
This is cool. The documentation is very entertaining, although not something you'd show to your boss. Looks like it implements a lot more than just neural networks.
Shameless plug: For a minimal neural network implementation in ANSI C, check out: https://github.com/codeplea/genann Sometimes lack of features is a feature.
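A minimal sketch of what using it looks like, based on the XOR example in the genann README (treat the epoch count and learning rate as illustrative):

    #include <stdio.h>
    #include "genann.h"

    int main(void) {
        /* XOR truth table: four input pairs and their expected outputs. */
        const double inputs[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        const double outputs[4] = {0, 1, 1, 0};

        /* 2 inputs, 1 hidden layer with 2 neurons, 1 output. */
        genann *ann = genann_init(2, 1, 2, 1);

        /* Plain backpropagation, one sample at a time. */
        for (int epoch = 0; epoch < 500; ++epoch)
            for (int i = 0; i < 4; ++i)
                genann_train(ann, inputs[i], &outputs[i], 3.0);

        for (int i = 0; i < 4; ++i)
            printf("%g XOR %g -> %g\n", inputs[i][0], inputs[i][1],
                   *genann_run(ann, inputs[i]));

        genann_free(ann);
        return 0;
    }

The whole library is a single .c/.h pair, so this should compile with just `gcc main.c genann.c -lm`.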
I've been having a lot of fun with https://github.com/harthur/brain (neural network implementation in JavaScript) lately. Perhaps it's not something you would use on a real, live site, but it is fun to prototype neural network stuff in the browser with easy access to the DOM, canvas, and WebGL.
When I looked for a small ANN library with few external dependencies that could be statically linked, I settled on FANN [0].
It worked reasonably well and solved my problem as well as I'd hoped. It is rather limited in features, though: no training on the GPU, single-threaded by design, etc.
tiny-dnn appears to offer a lot more choices in network architecture and parallelization options. I would definitely have tried tiny-dnn first if I had known about it.
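For comparison, the FANN workflow I used looked roughly like its getting-started XOR example (sketch from memory; check the current docs for exact signatures):

    #include "fann.h"

    int main(void) {
        /* 3 layers: 2 inputs, 3 hidden neurons, 1 output. */
        struct fann *ann = fann_create_standard(3, 2, 3, 1);

        fann_set_activation_function_hidden(ann, FANN_SIGMOID_SYMMETRIC);
        fann_set_activation_function_output(ann, FANN_SIGMOID_SYMMETRIC);

        /* Train from a data file: at most 500000 epochs, report every
           1000 epochs, stop when the error drops below 0.001. */
        fann_train_on_file(ann, "xor.data", 500000, 1000, 0.001f);

        fann_save(ann, "xor_float.net");
        fann_destroy(ann);
        return 0;
    }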
I started learning machine learning relatively seriously. But I've always had issues with frameworks where the team goes out of its way to be clever rather than straightforward, and to me ML frameworks are plagued with that. And I hate Python. A lot. Mostly because of the whitespace-based formatting.
So, I decided to make my own neural net in C#, just for fun; it'll never be released. I spent a solid month learning anything and everything I could about how brains work in the animal kingdom, then built out a neural net according to what I'd learned. My cells aren't really similar to most of the conventional types out there, but it does work fairly well with numerical data. If I spent more time on it, say a solid year instead of spare time over two months, I think it could be respectable.
What I really learned from this project was optimization to the extreme. I spent a lot of time testing different ways to accomplish the same math and pull out as much performance as possible; I'd guess that for every hour of coding, I spent four or five hours researching, testing, and optimizing, mostly because it all runs on the CPU instead of the GPU. I never got into CUDA, and I never will. It's not that I've never optimized before; the difference this time is that I spent the time to find out whether conventional wisdom was actually correct. I also discovered a bunch of methods in C# that I never knew about.
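To give a flavor of what I mean (a toy sketch in C rather than C#, with made-up names; the point is timing two versions of the same math instead of trusting folklore):

    #include <stdio.h>
    #include <time.h>

    #define N 4096
    #define REPS 100000

    /* Straightforward dot product. */
    static double dot_naive(const double *a, const double *b, int n) {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }

    /* Same math with two independent accumulators, which can help the
       CPU pipeline; whether it actually wins is what the timing shows. */
    static double dot_two_acc(const double *a, const double *b, int n) {
        double s0 = 0.0, s1 = 0.0;
        int i = 0;
        for (; i + 1 < n; i += 2) {
            s0 += a[i] * b[i];
            s1 += a[i + 1] * b[i + 1];
        }
        if (i < n)
            s0 += a[i] * b[i];
        return s0 + s1;
    }

    int main(void) {
        static double a[N], b[N];
        for (int i = 0; i < N; ++i) {
            a[i] = i * 0.001;
            b[i] = 1.0 - i * 0.001;
        }

        double sink = 0.0; /* keeps the compiler from eliding the loops */
        clock_t t0 = clock();
        for (int r = 0; r < REPS; ++r) sink += dot_naive(a, b, N);
        clock_t t1 = clock();
        for (int r = 0; r < REPS; ++r) sink += dot_two_acc(a, b, N);
        clock_t t2 = clock();

        printf("naive:   %.2fs\ntwo-acc: %.2fs\n(sink=%g)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sink);
        return 0;
    }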
I don't do development for work anymore (and, God willing, never will), so this was just a distraction/curiosity project for me. In reality, I wish I had taken the time early in my career to do a project like this. Anyone fresh in dev should do a three-to-six-month pure optimization project to learn, for themselves, what works and what doesn't. Conventional wisdom really is only the tip of the iceberg.