Good catch, I don’t think they mention that in the main article (at least in reader view).
In this case I guess the “AI” label is relatively well applied (compared to the scope of ALL things I’ve seen it haphazardly slapped on), but yeah, still - people need to stop calling any kind of data analysis “AI”.
I don't see it as a question of whether the research uses AI, but rather what the subject of the research is. Putting all research under the 'AI' banner because it happens to use that technique in some of its data processing seems like a miscategorization.
> Also, I've noticed the definition of the word "AI" has grown to encompass pretty much any type of software that does something with data.
Afaik AGI research is still a separate thing, so it's not really that misleading to use "AI" for applications like this where machine learning is involved, resulting in a very specialized "artificial intelligence" that can spot otherwise hidden patterns.
Confused by this whole interaction. The idea is to use AI + automated data collection to detect and surface patterns and correlations of behavior in order to improve quality of life.
Did you think this was somehow directed at you, personally?
Wait, how? It seems to me that it gives people an easy way to focus on one area of AI, saying "Guess we didn't need fundamental AI research after all".
I somehow don't see it as anything more than AI that learned from a dataset of unknown quality/origin, making shit up, while users pretend that because a computer did it, it couldn't be wrong.
That is correct. It is actually all about marketing. For example, there shouldn't be a huge difference between something that works intuitively and something that uses AI/ML. But somehow parading it as something built on AI gets it a lot of airtime and credence.
"We are a small - as-of-yet unincorporated - nonprofit providing pro bono consulting on algorithmic and policy issues arising from the proliferation of: Statistical inference / Automated decision making - often called 'AI', ..."
From the author's website, listed at the bottom.
Imagine if journalists and other people posting to the www called "AI" what it actually is, instead of constantly portraying it as something futuristic to capture people's imaginations.
Even terms like "statistical inference" and "automated decision-making" could probably be explained using more common language to be comprehensible to the general public.
This isn't AI. It's a random forest with FOUR inputs and ONE output. The fact that they put a huge blue AI brain in their paper to represent this algorithm tells me this is mostly hype.
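For a sense of scale, a model like that fits in a handful of lines of scikit-learn. This is only an illustrative sketch with synthetic data; the paper's actual inputs aren't specified in this thread, so the features and target below are hypothetical stand-ins:

    # Minimal sketch: a random forest with four inputs and one output.
    # Assumes scikit-learn; the data and target here are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    X = rng.random((200, 4))                  # 200 samples, four input features
    y = (X[:, 0] + X[:, 2] > 1).astype(int)   # one binary output

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:5]))               # predictions for the first five samples

Whatever you want to call it, the whole thing is about ten lines.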
Well... it is nice, I guess, but in my opinion it's too dumbed down to be of actual practical use. Sure, it's great publicity for Reaktor and Finland and whatnot, but I'm dubious of its actual benefits. Teaching basic statistics would be much more useful than "AI", but I guess that lacks the hype factor, so marketing-wise it just wouldn't be as cool.
It's as if there weren't enough snake oil salesmen promoting their businesses as "AI-powered" already. And what does "AI" even mean in the current context? A bag of machine learning algorithms? I always understood AI to mean general AI rather than what the label gets applied to nowadays.