
The AI isn't creating a new recipe on its own. If a language model spits something out, it was already available and indexable on the internet, and you could already search for it. Having a different interface for it doesn't change much.



I think you are missing the conditional, contextual nature of language models. They mix things together in coherent ways and adapt to the request. Google can't create something that doesn't already exist, and pre-written code examples on the internet will never adapt to your needs.

But I agree with you that everything they do seems intelligent because 'intelligence' was in the training data. Not much different from us: raise a human removed from society (take their intelligent training data away) and they will accomplish almost nothing on their own.


Also, it doesn't seem like such a leap to couple these language models with dedicated computation systems and the like. Think of training a model to feed prompts to Wolfram Alpha to actually compute the results and then report back, roughly along the lines of the sketch below.
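
A rough sketch of what that coupling could look like. This is purely illustrative: query_model() and compute() are hypothetical stand-ins, not a real API; a real version would call an actual LLM and an external engine such as the Wolfram Alpha API.

```python
import re

def query_model(prompt: str) -> str:
    """Stand-in for a real language model call (hypothetical)."""
    # A real implementation would call an LLM API here.
    return "COMPUTE: integrate x^2 from 0 to 3"

def compute(expression: str) -> str:
    """Stand-in for a dedicated computation system, e.g. Wolfram Alpha."""
    # A real implementation would call the external engine here.
    return "9"

def answer(question: str) -> str:
    # Ask the model, telling it to delegate anything it can't compute itself.
    reply = query_model(
        "Answer the question. If a calculation is needed, respond with "
        f"'COMPUTE: <expression>' instead of guessing.\n\nQuestion: {question}"
    )
    match = re.match(r"COMPUTE:\s*(.+)", reply)
    if match:
        result = compute(match.group(1))  # delegate the actual math
        reply = query_model(
            f"Question: {question}\nComputed result: {result}\n"
            "Report the answer to the user."
        )
    return reply

print(answer("What is the integral of x^2 from 0 to 3?"))
```

The host program does the routing; the model only has to decide when to hand off and how to phrase the result.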

It's called a "language model" for a reason: the AI is modeled after existing human language.

I wouldn't say I'm particularly hyped about "AI", but I would say that language models have gotten to the point where they are boring and useful enough to integrate into my daily workflow.

I use Kagi Ultimate, and I have gone through a valley of language model usage. I started using them a lot when I first got Ultimate to play around with them and understand their limitations. I stopped using them as much in favor of plain search after I hit those limitations. Models have since gotten considerably better, and in many cases they are more useful than regular web searches, so I have started using them a lot again.

I also run Llama 3 locally on a 7900 XTX to process information that I don't feel comfortable sharing with / can't share with external APIs. That's definitely not the greatest, but it's good enough to be useful in a pinch.
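
For what it's worth, driving a local model like that takes only a few lines. This is a minimal sketch assuming a local runner such as ollama serving a llama3 model on its default port; adjust the endpoint and payload for whatever server you actually use.

```python
import requests

def ask_local(prompt: str) -> str:
    """Send a prompt to a locally hosted model so the data never leaves the machine."""
    # Assumes an ollama server on its default port with a llama3 model pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Summarize the attached incident report in three bullet points."))
```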


the issue is that the large language model doesn’t understand what it’s saying, cannot reason or come up with novel ideas, yet it can convince a human that it _is_ doing so.

so then if the world begins to prefer AI-generated content, any question you ask the internet will only show you AI-generated answers, which can only be based on the training set. Over time, a system with no new inputs just ends up being static generated from static, albeit static that is convincing enough for a human to accept.


Both had a program running the real rules of Go. We don't have a program that can run the real rules of logic and text to train a language model from scratch.

By the time languages have the DWIM feature, AI will already have taken the whole job.

It's a language model. It models language not knowledge.

yes but it's still very much just a language model, not a knowledge model.

A language model is just a language model. It may well be an important part of an AI at some point, but it’s not going to be the whole thing.

The point I was trying to make is that we can get better language models with more computation, not that we should stop researching new ideas and architectures. In hindsight, perhaps the language in my post wasn't sufficiently clear on this. See my PS. You and I are not in disagreement :-)

Until a language model can develop a generalized solution to a real-world phenomenon, it's not even close to AGI. The current iteration of ML algorithms is useful, yes, but not intelligent.

I'm not questioning the AI revolution, I'm questioning the applicability of large generative language models to information retrieval. We need a different kind of model for that, not just a few tweaks here and there to this one.

Either the language model would need to know what it's doing or the host program would have to know what the AI is doing. Both seem out of reach. The latter seems more doable since you could hack something up for simple scenarios, but you'd effectively have to match the capabilities of the neural network in a classical way to handle every case (which would render using a neural net moot).
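
To make the "hack something up for simple scenarios" point concrete, here is a toy sketch of a host program that classically checks one narrow class of model output (arithmetic claims of the form "expr = number"). Anything outside that narrow class falls through unchecked, which is exactly the scaling problem described above. The function names are illustrative only.

```python
import ast
import operator
import re

# Safely evaluate a small arithmetic expression without eval().
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_eval(expr: str) -> float:
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def check_claim(model_output: str) -> str:
    """Host-side check for claims of the form '<expr> = <number>'."""
    match = re.fullmatch(r"\s*([0-9+\-*/. ()]+)=\s*([-0-9.]+)\s*", model_output)
    if not match:
        return "cannot verify"  # outside the narrow scenario we handle
    actual = safe_eval(match.group(1))
    claimed = float(match.group(2))
    return "ok" if abs(actual - claimed) < 1e-9 else "wrong"

print(check_claim("17 * 4 + 3 = 71"))                 # ok
print(check_claim("The capital of France is Paris"))  # cannot verify
```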

It is difficult to see an argument that the output of a language model is not derived from the language model, other than that people would prefer it weren't.

Besides the AI being able to interpret the questions, which is the entire point of a language model, isn't this just the same as someone googling the answers to the exam?

A language model isn't Skynet :)

The idea that effective language models won't completely change search is laughable.

But the idea that language generation is the application that will do it is about as ridiculous.


> why no one had hooked up a learning AI to Wikipedia/dbpedia and the rest of the Internet.

This is because Natural Language Processing is apparently #@!$%@^ hard. Sure, it's easy for a computer to extract things from a pile of assertions, and it's not even that difficult to work with fuzzy/probabilistic assertions. But turning ordinary everyday English (or Spanish or whatever) into a pile of assertions (with appropriate certainties) is something that's still being figured out.
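
A toy illustration of why it's hard: a naive pattern that turns "X is Y" sentences into assertions works on textbook examples and immediately produces nonsense on ordinary prose. This is a deliberately crude sketch, not how real NLP systems work.

```python
import re

def extract_assertions(sentence: str):
    """Naively turn 'X is/are Y' sentences into (subject, relation, object) triples."""
    match = re.match(r"^(.*?)\s+(is|are)\s+(.*?)\.?$", sentence.strip())
    if match:
        return [(match.group(1), "is-a", match.group(3))]
    return []

# Works on the textbook case...
print(extract_assertions("A dog is a mammal."))
# [('A dog', 'is-a', 'a mammal')]

# ...and falls apart on everyday English.
print(extract_assertions("The problem with dogs is that walking them in the rain is miserable."))
# [('The problem with dogs', 'is-a', 'that walking them in the rain is miserable')]
```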

