A.I. Is Learning What It Means to Be Alive (www.nytimes.com)
12 points by bookofjoe | 2024-03-10 17:48:46 | 24 comments





>It took humans 134 years to discover Norn cells. Last summer, computers in California discovered them on their own in just six weeks.

>The researchers did not tell the computers what these measurements meant. They did not explain that different kinds of cells have different biochemical profiles. They did not define which cells catch light in our eyes, for example, or which ones make antibodies.

>When the machines were done ... they could classify a cell they had never seen before as one of over 1,000 different types. One of those was the Norn cell.

>The model gained a deep understanding of how our genes behave in different cells. It predicted, for example, that shutting down a gene called TEAD4 in a certain type of heart cell would severely disrupt it.

>GeneFormer recommended reducing the activity of four genes that had never before been linked to heart disease. Dr. Theodoris’s team followed the model’s advice, knocking down each of the four genes. In two out of the four cases, the treatment improved how the cells contracted.

>“You can bring a completely new organism — chicken, frog, fish, whatever — you can put it in, and you will get something useful out,” Dr. Leskovec said.

>“It’s going to force a complete rethink of what we consider creativity,” Dr. Quake said. “Professors should be very, very nervous.”

Wow. We are so simple that even a computer understands.


> When the machines were done ... they could classify a cell they had never seen before as one of over 1,000 different types. One of those was the Norn cell.

So, unsupervised classification. Ten years ago we were using K-means clustering to do the same thing, but I didn't know we were learning what it means to be alive!
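
For the curious, a minimal sketch of that kind of unsupervised pipeline (synthetic data, made-up cluster count, scikit-learn's KMeans -- not anyone's actual setup):

    # Minimal sketch of unsupervised "cell type" discovery via K-means.
    # The data is synthetic; a real pipeline would start from a
    # cells-by-genes expression matrix (e.g. scRNA-seq counts).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Fake matrix: 5 hidden "cell types", 600 cells each, 200 genes.
    n_types, cells_per_type, n_genes = 5, 600, 200
    centers = rng.normal(0.0, 3.0, size=(n_types, n_genes))
    X = np.vstack([rng.normal(c, 1.0, size=(cells_per_type, n_genes))
                   for c in centers])

    scaler = StandardScaler().fit(X)
    km = KMeans(n_clusters=n_types, n_init=10, random_state=0)
    km.fit(scaler.transform(X))  # no labels: never told what a "cell type" is

    # Classifying a never-before-seen cell = nearest-cluster assignment.
    new_cell = rng.normal(centers[2], 1.0, size=(1, n_genes))
    print("new cell assigned to cluster",
          km.predict(scaler.transform(new_cell))[0])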


No, you don't understand! It uses words in plausibly English ways! Everyone knows that natural language is composed of meaning!

The sequence prediction arguments are a lot more convincing than they were a few years ago, if you care about realized progress. Sequence prediction is intelligence; the philosophers can deal with the fuzzy edge cases, which matter less.

Markov models were intelligent? Neat!

No, they aren't efficient enough. And the "typical" toy versions pretty much don't work. LLMs demonstrated the feasibility of sequence predictors in a much stronger way.
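
To make "toy versions don't work" concrete, here's roughly what an order-k Markov character predictor looks like (illustrative sketch, not anyone's actual model); the context table scales like alphabet_size**k, which is the efficiency wall:

    from collections import Counter, defaultdict

    def train(text, k=3):
        """Count next-character frequencies for every length-k context."""
        counts = defaultdict(Counter)
        for i in range(len(text) - k):
            counts[text[i:i + k]][text[i + k]] += 1
        return counts

    def predict(counts, context, k=3):
        """Most likely next character, or None for an unseen context."""
        dist = counts.get(context[-k:])
        return dist.most_common(1)[0][0] if dist else None

    corpus = "the cat sat on the mat and the cat sat on the hat " * 50
    model = train(corpus)
    print(repr(predict(model, "on the")))  # -> ' '

    # A length-k context has alphabet_size**k possible states, so the
    # table (and the data needed to fill it) explodes as k grows.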

> aren't efficient enough

Unless you can point to a hard boundary, a line that distinguishes "enough" from "not enough", this doesn't mean anything.


Please define consciousness [impossible].

What are your thoughts on this part, which had a 50% "novel correct" rate:

>GeneFormer recommended reducing the activity of four genes that had never before been linked to heart disease.

?


Literally every gene expression model ever published has come with some claim of novel biological discovery. If they didn't, they wouldn't be published in the biological literature -- nobody serious cares that a classifier was applied to a dataset.

Good papers will go so far as to take a random sample (or even better: all) of such "recommendations" and validate them using experiments. Then you attempt to quantify the success rate, the sensitivity, selectivity, etc. Did they do that here? We go back to the article:

> When her team put the prediction to the test in real cells called cardiomyocytes, the beating of the heart cells grew weaker.

OK. How many predictions did they test? How many failed for the one that succeeded?

...and later:

> Dr. Theodoris’s team followed the model’s advice, knocking down each of the four genes. In two out of the four cases, the treatment improved how the cells contracted.

So we have two fairly classic examples of cherry-picked experimental validation. How many predictions did they validate? Did they just select the ones they thought made sense? All we know, from this, is that they selected some subset of results to test, and failed roughly half the time in the latter experiment. That's not a great batting average for this kind of work.
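
To put a number on how little 2-out-of-4 pins down (my framing, not the paper's): an exact Clopper-Pearson 95% interval for the underlying hit rate is consistent with almost anything.

    # Exact (Clopper-Pearson) 95% CI for a binomial proportion, 2/4 hits.
    from scipy.stats import beta

    successes, trials = 2, 4
    lo = beta.ppf(0.025, successes, trials - successes + 1)
    hi = beta.ppf(0.975, successes + 1, trials - successes)
    print(f"2/4 -> 95% CI for the true hit rate: [{lo:.2f}, {hi:.2f}]")
    # -> roughly [0.07, 0.93]: consistent with a near-useless model
    #    and a near-perfect one alike.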

None of this is new. We've been applying ML to these problems for decades. Wrapping this kind of stuff up in fuzzy claims of AGI is just another way of escaping scientific rigor via application of hype [1].

[1] To be clear, I'm not accusing the researcher of this. For all I know, the underlying paper is good. But this article is dumb.


>Wow. We are so simple that even a computer understands.

You are wildly underestimating how complex we truly are.

We don't understand (nor do computers, if they "understand" anything) how Caenorhabditis elegans works, let alone humans, who are a tad bigger than these worms.

Knowing what individual parts do absolutely does not mean knowing how all, some, or even two of those parts work together.

Scale also changes things drastically: if something is twice as big as something else, that doesn't mean it's only twice as complicated. Nonlinearity.
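
Back-of-the-envelope (302 is C. elegans's neuron count; the doubling is hypothetical):

    def pairwise(n):
        """Possible pairwise interactions among n components."""
        return n * (n - 1) // 2

    for n in (302, 604):
        print(f"{n} parts -> {pairwise(n):,} possible pairwise interactions")
    # Doubling the parts roughly quadruples the pairs; higher-order
    # interactions grow faster still.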


I only briefly attended medical school (I'm a dropout)... I have reduced several cadavers to buckets of slop: we are much simpler than most believe =D

----

Do you have any thoughts on DishBrain's complexity?

DishBrain: https://www.perplexity.ai/search/What-can-you-GK9aohldRJqh9o...

Deeper dive into DishBrain (from HN, yesterday): https://news.ycombinator.com/item?id=39651909


Maybe we should have introduced those researchers to regression analysis earlier.


New York Times pushing clickbait nonsense again.

Today's AI is pattern matching. It's good at that.

I wonder if that's all human intelligence is: just pattern matching, with biological quirks and urges thrown into the mix.

