
I read the OP. Okay.

I have a good track record in AI, about the best: I was right, and for the right reasons! I got fairly deep into expert systems and said clearly at the time that I thought they were, in a word, junk. And history has shown that I was correct. In AI, being that correct gives one about the best track record!

Then I did some more: One of the main problems we were trying to solve was monitoring computer server farms and digital communications networks. I worked up a quite general approach, as some math complete with theorems and proofs, starting with some meager assumptions quite reasonable in practice, and totally blew the doors off anything AI was doing. Got to select the false alarm rate, in small steps over a wide range. And used Ulam's result on 'tightness' to show that the technique was not trivial. Programmed it. On some real data, it worked fine. Published it. It was successful, on the real problem, better than unaided humans or expert systems could hope to do. And it had some solid math guarantees, from theorems and proofs from meager assumptions. That's also relatively good for AI. But I didn't claim it was AI and instead just claimed that it was useful, progress on the real problem.
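To give just the flavor of how a selectable false alarm rate can work (a toy Python sketch I'm writing here, NOT the actual method or math I published; the score, the distribution, and the numbers are all made up for illustration): collect a detection score during known-normal operation, set the alarm threshold at an empirical quantile of those scores, and the chosen quantile is, roughly, the false alarm rate.

    import numpy as np

    def calibrate_threshold(normal_scores, false_alarm_rate):
        # Threshold chosen so that roughly `false_alarm_rate` of normal scores exceed it.
        return np.quantile(normal_scores, 1.0 - false_alarm_rate)

    # Made-up stand-in for a server-farm health score during normal operation.
    rng = np.random.default_rng(0)
    normal_scores = rng.gamma(shape=2.0, scale=1.0, size=10_000)
    thr = calibrate_threshold(normal_scores, false_alarm_rate=0.01)

    # On fresh normal data, alarms should fire at about the selected 1% rate.
    new_scores = rng.gamma(shape=2.0, scale=1.0, size=10_000)
    print("observed false-alarm rate:", float(np.mean(new_scores > thr)))

Adjust the quantile in small steps and the false alarm rate follows; the real work, of course, is in the assumptions and the proofs, not in this toy.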

So, I've had, in comparison, two successes! Soooooo, if the author of the OP can give opinions, I should be able to also. So, will do that here and now!

First, I would like to see AI research compare itself with, say, baby animals: mammals, yes, but also some reptiles and maybe even some insects. I'd like to see our AI systems do as well as the baby animals.

Second, I'd like to have our systems, the ones that do as well as baby animals, learn as well and as fast as those animals do as they grow up. A good goal might be a kitten at 3 months.

Third, I'd like to have our systems learn English as fast, as easily, and as well as a 3-year-old human.

So, in summary, for research in AI, I'd like to see work that has some promise of achieving these three goals.

For now, I'll stop here!




And what did the AI cite as sources for these conclusions?

Sounds like you learned a lot about AI actually :)

Especially for the time


Anyone with a better grasp of the AI domain than mine care to comment on this piece?

I did a senior reading/research class in AI back in 1989, as my final class I needed to graduate with a BSCS. The idea originally was to do a survey of the methods known at the time and determine which was the best. This included things like e-mycin, the first generalized expert system toolkit.

I ended up with the premise that each of them had their relative strengths and weaknesses, and it would actually be best to use all of them, but only in their own areas of strength. Then have them use something akin to a shared blackboard where they could all read the results from the other systems and write their results as well. That the sum of all the available algorithms working together, each in the areas in which they were best, would result in better outcomes than any one algorithm could achieve on its own.
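Decades later, the shape of the idea can be sketched in a few lines of Python (just a toy illustration I'm writing now, not the code from that class; the specialists and their rules are made up):

    def rule_based_step(bb):
        # Hypothetical specialist: fires only when its preconditions are on the board.
        if "symptoms" in bb and "diagnosis" not in bb:
            bb["diagnosis"] = "rule match for %s" % (bb["symptoms"],)

    def planner_step(bb):
        # Hypothetical specialist: acts only once a diagnosis has been posted.
        if "diagnosis" in bb and "plan" not in bb:
            bb["plan"] = "mitigate: " + bb["diagnosis"]

    def run(blackboard, specialists, max_rounds=10):
        # Let each specialist read and write the shared board until nobody adds anything.
        for _ in range(max_rounds):
            before = dict(blackboard)
            for step in specialists:
                step(blackboard)
            if blackboard == before:
                break
        return blackboard

    print(run({"symptoms": ["fever", "rash"]}, [rule_based_step, planner_step]))

Each specialist contributes only in its area of strength, and the shared board is how the results get passed around.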

My professor was not impressed. I only got a C.

Now, the story of how my dad had to work his ass off to finally get the College of Engineering to force the professor to actually give me a grade that he owed me, when all the professor really wanted to do was focus on his new job at one of the big airlines -- well, that's a story for another time.


Nice. Are you a professional AI researcher?

Finally, someone talking sense about AI.

haha, this is a project my current Ph.D. advisor worked on (he's the lead author of that study).

He likes to say AI=avian intelligence...


I studied AI at University, and it was mostly about search.

I learned Prolog, which was a real eye-opener about functions that can find paths backwards, a kind of reverse search.
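Just to give the flavor of that "reverse search" in Python rather than Prolog (a toy sketch of the idea, with a made-up graph): instead of asking where you can get to from a start node, you flip the edges and ask which nodes can reach a given goal.

    # Made-up toy graph; edges point from a node to its successors.
    EDGES = {"a": ["b"], "b": ["c", "d"], "d": ["e"]}

    def reaches(goal):
        # Build the reversed graph, then do an ordinary search from the goal.
        reverse = {}
        for src, dsts in EDGES.items():
            for dst in dsts:
                reverse.setdefault(dst, []).append(src)
        found, stack = set(), [goal]
        while stack:
            node = stack.pop()
            for pred in reverse.get(node, []):
                if pred not in found:
                    found.add(pred)
                    stack.append(pred)
        return found

    print(reaches("e"))  # {'a', 'b', 'd'} (order may vary)

In Prolog the same path relation can often be queried in either direction without writing the reversal by hand, which is what made it such an eye-opener.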

I also learned about Bayes, and have come up with an algorithm (myself) which is similar to Bayes, but it's top secret at the moment.


I wouldn’t worry too much on a lot of these points.

I won’t say the math behind AI is simple, but it’s mostly undergrad level. You can get up to speed on it if you really want to. The hard part is writing fast implementations, but many others are already doing this for us.

We do not have a grand theory of AI or a deep understanding, but every year we make improvements in model interpretability, and you can "debug" models if need be.

Lastly, the author is right, the best models are closed source, but open source is hot on its tail. There are plenty of good local LLMs and they get better every month. Unfortunately it still is out of reach for a hobbyist to train a good LLM from scratch, but open source pretrained models can mitigate this for now.


I started to work on something similar, though it's way behind your project. I really believe AI models can help us humans learn better! Do you have a blog or any other writeups on how you approached these problems?

Can you explain a bit more? I'm taking Andrew's courses and he's so inspiring, but I want to know if he's been right or wrong in the past about AI.

I put it online for you at http://paulgraham.com/lib/paulgraham/lm.tex

I haven't re-read it, but I don't remember it being particularly good. This was not published in Artificial Intelligence, but in AI Expert, a popular magazine I wrote articles for to make a little extra money in grad school.


Thank you, that was an excellent summary!

On a somewhat related note, it seems clear that AI research and breakthroughs are occurring at breakneck speed. I wish there were a place where you could see expert commentary like yours, in layman's terms, on interesting or important papers that stand out.


Quite possible. I do have a recent PhD in AI, but it's a big field and I can't claim to know more than a small percentage of it.

The problem with the dataset-first approach is that humans are still providing a large part of the intelligence: defining the domain, defining what good performance on it looks like, carefully designing model architectures, collecting and labeling large datasets, etc. This is fine for narrow task-specific problems, but it is not really the be-all and end-all of AI, and it does not even seem to work well on all well-defined tasks. As an example of another kind of inference, how about mathematical reasoning? I purposely pick one here that is seemingly very formal; it should be possible for a computer to do it. Mathematicians are somehow able to invent conjectures and prove theorems without first being exposed to terabytes of labeled mathematical facts. The scientific method is kind of an even messier version of this. Or to take something laypeople do, people can usually learn games to at least a passable level from just a handful of playthroughs, not AlphaGo-style millions of plays (imagine if you had to play even 1,000 games of MTG before you got the basic hang of it...).

All this kind of stuff is quite well-represented in the literature though, if you mean the scientific literature. Pop-press AI writing tends to cover a pretty specific subset of what's going on at AI conferences.


Never claimed any expertise on AI research. Thanks for the suggestion nonetheless.

Methodology? Isn't AI magic? \s

You know how to read, great for you. But that doesn't say much, as all AI that was ever attempted was trained and supervised by people.

Plus, this was black-box testing, and I found it to be an interesting read nonetheless.


Is there a handy list of generally recognized AI advancements, and their owners, that you would recommend reviewing? Or perhaps, seminal papers published? I'm only tangentially familiar with the field but would be curious to learn about the clash of the Titans playing out. Thanks!

Good point, and my original opinion should at least have what you said as fine print. Personally, I am finishing up a PhD in CS. When I started my PhD, deep learning was already all the rage, but I chose another subfield, and that did not pan out so well. Hence, I still think AI (maybe AI + X) is a good choice in general.
