I'm from the camp that says the singularity is a continuous process.
When you travel away from the Sun, the light from it gets dimmer and dimmer and dimmer, until it becomes single photons appearing with longer and longer delays between them, carrying on average the energy of that 'dimmer' light.
Fly towards the Sun and there isn't a sudden cutoff point where it goes from dark to bright, or bright to incandescent, it's a continual increase.
Same with life: lots of people born, lots of intensity around birth and infant children and parents and education; the intensity diminishes with age as people spread out and dissipate, everyone looking back in at the source of new life while getting further and further from it, more and more separated from each other, at lower energy levels, with less and less 'happening'.
Singularity? Moving towards the intensity, with technological development and more intricate, faster, and simply more connectivity all over, more encoding of ideas into information patterns.
Unless you want to imagine it like flying towards the Sun becoming falling towards the Sun, becoming a specific point where everyone burns up and dies. :/
the whole premise of a singularity is that we will hit a point of exponential technological growth, extrapolated from the accelerating pace of technological development over human history and usually based on the idea that we'll make a self-iterating AI which will lead to runaway intelligence. The "singularity" specifically is intended as a tipping-point where the acceleration of technology becomes faster than human capability (i.e. the approaching-vertical line on an exponential curve).
Yep, this is why the singularity is a zero-sum game... I expect many entities to vie for the title, but the first one there will subsequently dominate... which is why it's my intention to program my consciousness into my own AI which seeks to become the singularity, and therefore my digital consciousness version will become the ultimate god of the universe.
The singularity cult is real. The geek rapture will occur when they blog about someone else making God in a Box hard enough to drown out the normal charlatans. Please fund their blogging, it is the most impactful way you can spend your money.
One way he profits is in the expectations of self driving cars. If people expect it in 3 years and expect it to be 10x better than humans [to quote Musk], then there is huge value in any company that can provide it.
Tesla has a market cap larger than Ford. Tesla also has been on the edge of bankruptcy every couple years and recently raised $1 billion in capital.
Hard to time that kind of thing. Not worth it when you risk losing your shirt. You can lose an infinite amount of money shorting, whereas going long you might only lose all of what you put in.
Options solve it in the sense that you limit your downside, but you can still very well lose your premium. Even if you're 100% convinced that $TSLA is overvalued, markets can remain irrational longer than you can remain solvent, and your puts may very well expire worthless.
Yup. But it may be that some people with enough information and time to watch the stock and relevant data on it literally second by second could make a killing. And some people might be in a position to influence the time of the turning point and stimulate an avalanche of a fall.
Broadly, no way do I believe that many people, not even fleets, especially not individuals, will put up with the charging time of a battery powered car. It's just so darned much more convenient to fill up a 20 gallon tank with gasoline.
My guess is that the promise of electric cars was boosted by both the anti-fossil fuel push and the self-driving push. IMHO, with Trump, the anti-fossil fuel people just lost, big time, for a long time. E.g., we won't have carbon taxes, the theme will be "drill, baby, drill", and US gasoline prices are on the way down.
For self-driving, my view is that the lawyers will make the insurance rates too high and more lawyers will pass some very restrictive laws. Then self-driving will be for some special cases and/or will need a lot of highway engineering that will take a lot of time and money and won't happen for years.
For more, no way can Tesla do something significant that GM, Ford, Chrysler, Toyota, Honda, BMW, MB, Kia, etc. can't do just as well or better. E.g., there were electric cars ~100 years ago, and the story has been the same since then, or as a Ford exec once said, "You build me a good battery, and I will build you a good electric car." For a while there was some hope for a capacitor, which can be charged as fast as the electric power connection permits, but that approach seems to have flopped.
Finally, I'm guessing that Musk has had a lot of subsidies, and I have to guess that Trump will turn off that faucet. Net, I see no real hope for Tesla. IMHO, Tesla is going down, soon. So, some smart and/or lucky short sellers stand to make a bundle.
> Broadly, no way do I believe that many people, not even fleets, especially not individuals, will put up with the charging time of a battery powered car. It's just so darned much more convenient to fill up a 20 gallon tank with gasoline.
This may not hold true in the near future. With charging stations capable of putting out 350kW, we will have drastically reduced charging times. [0]
> Finally, I'm guessing that Musk has had a lot of subsidies, and I have to guess that Trump will turn off that faucet.
Actually, Tesla doesn't lose much if ZEV credits go away... In fact, the way things stand, other auto manufacturers are able to fully monetize their ZEV credits, while Tesla only makes 50 cents on the dollar. [1]
Sure, the grid can provide the power for fast charging, but charging a battery heats the battery, and charging it too fast heats it too much. That's my concern. If Tesla can make the charging time short enough to be convenient enough, then I'm wrong.
The charging time of the electric car is overnight in your garage, while you're asleep. So the vast majority of the time for the vast majority of people, that's not a relevant concern. On road trips, 30 minute breaks every few hours aren't the end of the world for most people.
For self-driving, there's no way that's going to be held back once it's better than a human driver. The demand is going to be incredibly high, and insurance companies will probably charge human drivers more - they tend to be very good about accurately assessing risk and charging accordingly.
Those other car companies tend to be very lacking on the software side of things.
Have you ever driven a Tesla?
Feel free to short it, but I think it's a very bad idea.
For charging overnight, my concern is that (A) if you drive very much during the day and (B) the house has only a 100 A circuit breaker box, then the charging time will still be too long, that is, overnight is not long enough. The back of the envelope arithmetic I did on overnight charging at 100 A boiled down to driving a relatively large golf cart not very far or fast. I could be wrong. Yes, a Tesla is a lot bigger than a relatively large golf cart.
> Have you ever driven a Tesla?
No, but I fully accept that a powerful electric car would be a total hoot to drive. So, it would be quiet, smooth as silk, and be the fastest accelerating wheel driven vehicle so far. The key, of course, has been known for 100+ years: A series wound electric motor has, from the view of the math, infinitely large torque at stall, that is, before the armature has started to rotate. Then, before the armature is rotating very fast, the torque is still very high.
Also, you get to put a separate motor at each wheel and do without a transmission, drive shaft, differentials, etc. So, you get rid of a lot of mechanical parts. Also get rid of the often corroded exhaust system.
The torque and power you can get from a carefully designed electric motor now are astounding.
Moreover, there may be some good opportunities to adjust dynamically, many times a second, the motor torque separately at each wheel to eliminate wheel spin, which actually hurts acceleration.
Again, my main concern is charging time: E.g., for summer driving, you will want A/C, and that can take a lot of power; in the winter, you want heat, and that also takes power.
Do a lot of high performance driving -- and a Tesla is to be regarded as a high performance car -- accelerating onto Interstate highways, passing cars, trucks, and farm equipment on two lane roads, driving at 90 MPH, etc., and you can use a lot of battery energy. Once when I drove from my home in Maryland to my office at FedEx in Memphis, I drove my Camaro nearly the whole way at about 90 MPH -- didn't get stopped. Once when I was driving at night from Maryland to Indiana to see my fiancée, I drove my Firebird nearly the whole way at 90 MPH -- didn't get stopped. I was lusting for pencil beam high speed driving lights, an air horn that could be heard for miles, and a more powerful fuel pump for pulling long hills at 100 MPH at full throttle at about 5000 RPM (Camaro 396, Turbo 400 transmission, second gear, 2.56 rear axle ratio) -- did that daily driving home from the Johns Hopkins University Applied Physics Lab!
When Dad and I went fishing in Arkansas, I did the driving, before dawn, while he slept, and I put our Buick on 100 MPH and kept it there. People driving near the Rocky Mountains can want a lot of battery energy to climb hills.
Mostly just for commuting to work, I've put 200,000+ miles on each of three cars, a high end Camaro, a Buick Turbo T-Type, and a slow one, an SUV.
So, to me, a car means a lot of driving, maybe fast driving. So, that would make me concerned about Tesla charging time. That a Tesla can be considered a high performance car gives me more concern about charging time if that performance were used very much, e.g., as I have used high performance cars in the past.
My other concern about a Tesla would be the lifetime of the battery after a few thousand full rechargings as might be needed for 200,000+ miles of high performance driving.
For high performance driving, I'm lusting for a supercharged Corvette, and I'm happy that all it needs for energy is premium gasoline! :-)!
Well, I think 100 A at 120 V should theoretically be enough - 12 kW. The biggest Tesla battery is 100 kWh, so assuming no charging losses (bad assumption, I know), you should be full in 8ish hours from empty. In reality, you won't want to use your whole rated electrical capacity on one thing, but on the flip side, you can go higher voltage, or upgrade your breaker, assuming your wiring isn't too marginal. Tesla sells a high voltage high amperage charging station for your garage for not too much. I think most charge on a 240 V or greater line.
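For what it's worth, the same arithmetic as a tiny Python sketch (the values are assumptions, and it ignores charging losses and the taper near a full pack):

    # Rough charge-time arithmetic: energy divided by charging power.
    # Ignores charging losses and the slowdown near a full pack.
    battery_kwh = 100.0   # the largest Tesla pack mentioned above

    def hours_to_full(volts, amps, kwh=battery_kwh):
        return kwh / (volts * amps / 1000.0)

    print(hours_to_full(120, 100))  # ~8.3 h at 120 V / 100 A (12 kW)
    print(hours_to_full(240, 50))   # ~8.3 h on a typical 240 V / 50 A circuit
    print(hours_to_full(240, 100))  # ~4.2 h if a whole 240 V / 100 A service fed the car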
But yeah, charging up mountains at 100 mph will probably give you less than 3 hours :-D
Re driving fun, the torque is great, but I think the biggest thing is 0 throttle lag. Less of a problem with a supercharger than my old twin turbo, but still not instant like this. It's worth taking on a test drive, that's what really convinced me that this was the future. That and the 0-60 times of a 7 seater on par with an M5 (at the time), and the relative mechanical simplicity, and the better marginal economics, etc.
You make a point: Of course, my house here in NYS is drawing 240 V from the electric utility. So, IIRC the 100 A of the main breaker is for 100 A at the 240 V, not just the 120 V. So, just have an electrician run a 240 V line from the breaker box to the garage for the car, say, the same way as there is a 240 V line to the kitchen electric stove and the electric clothes dryer.
But, the 100 kWh is about 134 HP for an hour, 67 HP for 2 hours, or 34 HP for 4 hours. That's not a lot of juice.
Apparently gasoline, burned, releases about 33.4 kWh per gallon. So, the Tesla battery has the energy of burning 3 gallons of gasoline.
If a gasoline powered car is 33% efficient, then the 100 kWh of a Tesla is about like a 10 gallon tank of gasoline.
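Spelled out as a quick sketch (the 33% engine efficiency is assumed, and the electric drivetrain is treated as lossless):

    # Same back-of-the-envelope arithmetic; the engine efficiency and the
    # lossless electric drivetrain are assumptions.
    battery_kwh = 100.0
    kwh_per_gallon = 33.4      # approximate energy content of a gallon of gasoline
    engine_efficiency = 0.33   # assumed thermal efficiency of a gasoline engine

    raw_gallons = battery_kwh / kwh_per_gallon                                # ~3 gallons of raw heat
    equivalent_gallons = battery_kwh / (kwh_per_gallon * engine_efficiency)   # ~9 "useful" gallons
    hp_hours = battery_kwh / 0.746                                            # ~134 HP for one hour

    print(round(raw_gallons, 1), round(equivalent_gallons, 1), round(hp_hours))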
Hmm -- my old Chevy S-10 Blazer has a 20 gallon tank for gasoline, and I can fill it in a few minutes.
But if I only want to zip around Manhattan a little during the week, enjoying the fantastic acceleration, and take a 2 hour round trip to the Hamptons on weekends, then it appears that a Tesla will work!
IIRC, the time I used my Firebird to blast overnight from my place in Maryland to my fiancée in Indiana, the whole trip was about 9 hours. At the speeds I was driving, I might have been using 100 HP. So, the Tesla battery might have lasted a little over an hour. On that trip, even with gasoline, I had to stop a few times. A Tesla would have been a bummer, making me late to get to my fiancée and, in particular, truncating my manhood -- OUCH!
The car was a Firebird 400 4 speed with, sadly, a 3.23 rear axle ratio. The first gear was 2.52, and fourth was 1.0. Bummer. Now, wisely, much wider ratios and more gears are common, e.g., 8 gears in the Corvette automatic. So, such a car can try to compete with a Tesla for standing start acceleration, yet on a highway in one of the higher gears it can cruise along at maybe 1200 RPM and get maybe 28 MPG. Gasoline powered cars have been improving, too!
It just hit me: maybe a Tesla is also using the electric motors for braking and, then, also charging the battery! Nice.
Trains and big boats commonly have a Diesel engine driving a generator and the generator driving electric motors -- "Look, Ma, no transmission -- or a perfect one!" Could also have a battery in there! Use the battery around town, and charge the battery with the engine.
Tesla funding is driven more by proven demand for the product (huge order backlog with paid deposits) than any belief in autonomous vehicle technology. That plus historically low interest rates which have caused investors to do silly things in search of higher returns.
I'm very interested in this topic, but I'd hold off on discussing until the other posts are out. The current post is largely just a summary of what future posts will say, and doesn't cover much by itself.
Yeah, terms like "machine learning" and "AI" have basically become buzz words which, to most laymen, probably encourage the idea that we're on the cusp of creating Data from Star Trek.
Unfortunately, the reality is that.... sorry... it's mostly just statistical algorithms based around regression and intermediate calculus. State-of-the-art "deep" neural networks are not really anything like the absurdly parallel, asynchronous biological networks that power our neocortex - rather, they're basically an application of matrix multiplication designed to "learn" a function by iteratively minimizing an error value using gradient descent. It's still very unclear what, if anything, this algorithm has in common with how a human brain actually operates. It turns out that these kinds of statistical algorithms can work pretty well when you have petabytes of data to learn from. But we're still not anywhere near the unsupervised learning capabilities demonstrated by a human infant.
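To make the "matrix multiplication plus gradient descent" point concrete, here's a minimal, framework-free sketch of a tiny network learning XOR; everything in it is generic and illustrative, not any particular library's API:

    # Minimal "deep learning" loop: forward pass = matrix multiplies plus a
    # squashing function; backward pass = chain rule; update = gradient descent.
    # Illustrative sketch only.
    import numpy as np

    rng = np.random.default_rng(0)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # toy inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # hidden layer weights
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # output layer weights

    lr = 1.0
    for step in range(10000):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        p = sigmoid(h @ W2 + b2)
        loss = np.mean((p - y) ** 2)              # the "error value"

        dp = 2 * (p - y) / len(X) * p * (1 - p)   # backward pass (chain rule)
        dW2 = h.T @ dp;  db2 = dp.sum(0)
        dh = dp @ W2.T * h * (1 - h)
        dW1 = X.T @ dh;  db1 = dh.sum(0)

        W1 -= lr * dW1;  b1 -= lr * db1           # gradient descent step
        W2 -= lr * dW2;  b2 -= lr * db2

    print(loss, p.round(2).ravel())   # loss typically shrinks; outputs approach 0, 1, 1, 0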
> It's still very unclear what, if anything, this algorithm has in common with how a human brain actually operates.
Not very much at all. Most psychological evidence shows that human beings seem to operate off composable, hierarchical generative models and probabilistic inference. Deep neural networks are basically just huge continuous circuit approximators.
>composable, hierarchical generative models and probabilistic inference. Deep neural networks are basically just huge continuous circuit approximators.
What is human intelligence, if not the result of a colossal, distributed function optimisation process driven by evolutionary forces over millions of years?
true, it's the result of an optimization process, but intelligence itself (whatever it is...it's a suitcase-term) isn't as simple as that. Or maybe it is, I don't know!
"Optimization" describes the result, not the implementation. Unless you consider intelligence to be average-suboptimal, it's entirely reasonable to say that it's about optimization without committing you to any claim about how intelligence is implemented.
While I don't take exception with the word machine learning (it's reasonably well defined), I agree with most of the points here. Some are grossly exaggerated (e.g. needing petabytes of data to do machine learning). In my research on medical time series, we get strong results with a few thousand examples (megabytes). Same for natural language datasets which are modestly sized. But yes, absolutely agree that function approximation for supervised learning and some kind of artificial consciousness as portrayed in the media are very far apart.
Let's say an infant is 3 years old; that's 26,280 hours. Let's say a baby sleeps about 2/3 of the time - call it 8,600 hours of awake time.
8,600 hours of 4K video at 60fps (I would say this is lower quality than the human eye) is 2.657 petabytes.
That's not including the sense of touch or smell, which I am sure are comparable in the amount of data processed by a human.
We're looking at 5+ petabytes of data that a human has to learn from by age 3 - what's wrong with taking petabytes of data in order to learn meaningful things?
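Spelling that out (the video bitrate below is an assumption chosen to roughly reproduce the 2.657 PB figure):

    # Back-of-the-envelope data volume for a 3-year-old's visual input.
    hours_in_3_years = 3 * 365 * 24          # 26,280
    awake_hours = hours_in_3_years / 3       # ~8,760 if 2/3 is spent asleep

    bytes_per_second = 86e6                  # assumed ~86 MB/s for 4K @ 60 fps
    video_petabytes = awake_hours * 3600 * bytes_per_second / 1e15
    print(round(awake_hours), round(video_petabytes, 2), "PB")   # ~8760 h, ~2.7 PB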
They won a bunch of awards for them but most of the technologies, if they came to be, were brought to you by people other than AT&T.
Same thing with the next steps in AI. They might happen, but they aren't going to come from the companies who currently have large marketing departments looking for something to hype.
I generally agree with this. I used machine learning almost exclusively in the article. Yet "AI misinformation" seems more appropriate in the title since the misinformation, generally, is in reference to "artificial intelligence".
Yup, we've had several AI winters. Now we have an AI Spring of Hype. Then we will have a Summer of Hope. Then a Fall of Failure. Then another, long AI Winter.
I think AI is accurate, but it's actually short for "aggregated intelligence". Most AI algorithms can't actually generate "intelligence" as we know it. Rather, they aggregate the intelligence of the humans who contributed to their training data, and then generalize it to situations where those humans are not present. If the humans are stupid (as we saw with Microsoft's Tay chatbot), the AI is going to be pretty stupid as well. If the humans are racist, the AI is going to be racist as well.
Not just expert systems: any supervised learning system also fits into this category (including neural nets), as do Markov chains, collaborative filtering, Bayesian inference, anything that involves counting actions that humans take.
From all I can see, nearly all the present AI boils down to one word -- hype.
From what is really going on, nearly all of it looks like (1) in some cases a lot of new data, (2) some new, faster processor hardware, e.g., based on graphical processor units although the x86 processors are astoundingly fast anyway, able to manipulate the new data, (3) for what manipulations to do on the data, some tweaks of some of the work of L. Breiman and his CART -- Classification and Regression Trees, (4) fitting with S-shaped (neural) sigmoid functions, and (5) the radar, etc. engineering of autonomous vehicles. E.g., the neural networks might be able to simulate the operation of a neuron in a worm.
What I don't see is (A) much progress in better methods for how to manipulate the data, that is, the basic applied math, and (B) progress in working with concepts and causality -- in the history of science, progress with concepts and causality did well where the new methods would need a Nevada full of disk drives of data. E.g., space flight navigation is based, first, on Newton's law of gravity and second law of motion, not on fitting massive amounts of data via deep learning.
Uh, when the AI people use classic regression analysis as in Draper and Smith, etc. and the IBM Scientific Subroutine Package, SPSS, SAS, R, etc. to find some regression coefficients, they claim that their machine learned the coefficients. Gee, I didn't see that in, say,
C. Radhakrishna Rao, Linear Statistical Inference and Its Applications, Second Edition, ISBN 0-471-70823-2, John Wiley and Sons, New York, 1967, or Breiman's CART.
Again, the rest I see looks like hype.
E.g., it appears that there is a basic, clever, publicity, hype idea: Whenever do some technical work, give it a catchy name. Then just use the name in the hype and ignore what is really going on in any math or science.
E.g., a while back I published some work in statistical hypothesis testing that is both multi-dimensional and distribution-free. The intended use was for high quality hypothesis testing of zero day problems in computer networks and server farms. Alas, I neglected to give the work a catchy name!
The OP seemed surprised that a big, famous company might say things where they know better. Why not? Just assume that they are trying to sell something to some people with money.
Similarly for the news media: They want eyeballs for the ad revenue. That the newsies are willing to write junk to get eyeballs goes back at least to Jefferson's remarks as in
We have long had a good filter to apply to the writing of the newsies: Does the writing meet common high school term paper writing standards for solid references and primary sources? Rarely is the answer yes.
My view of the printed news is that it can't compete with Charmin, not even with cheap, house-brand paper towels. For the electronic versions, they are not useful even as fire starter, shredding for cat litter, or wrapping dead fish heads -- that is, are useless.
So, don't read them.
And don't debunk them, either -- debunking wastes your time and is something the newsies long since have ignored. The newsies have no shame.
Ignore the newsies. They ignore the debunking efforts.
My startup manipulates some data, and how is from some applied math I derived. From what I've seen of AI, my work would qualify as quite good and innovative AI -- besides the work is solid theorems and proofs. Still, I see no good reason to give my work a catchy name or call it AI. E.g., I'm not trying to fool anyone. Or, why would I want to associate my good work with a lot of hype to fool people?
I can say with high confidence that there is not even one venture capital person in the US who would invest even 10 cents in my work, call it AI or not, before they see usage significant and growing rapidly, and then they are not investing in the math or the AI but just the traction and its rate of growth.
Besides, I'm a solo founder with a meager burn rate so that by the time I have the traction the VCs want, I will be nicely profitable with plenty of cash for growth just from retained earnings.
Or, my back of the envelope arithmetic is that with common ad rates from ad networks, a $1000 server kept half busy 24 by 7 would generate $250,000 a month in revenue. For just one server, a cheap Internet connection, just one employee, that's a heck of a profit margin and plenty of cash for 10 more servers. Half fill those, say, in two spare bedrooms I have, with some window A/C units, some emergency power supplies and an emergency generator, and I will have annual revenue more than a VC seed or Series A equity check. And, "Look, Ma, 100% owner of just an LLC and no BoD!".
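For context, here is one set of purely hypothetical numbers that would roughly pencil out to that figure; the actual ad rates and traffic behind the estimate aren't stated, so every value below is an assumption:

    # One hypothetical way the $250k/month figure could pencil out.
    pages_per_second = 50        # assumed load for a modest server "kept half busy"
    ads_per_page = 2             # assumed
    cpm_dollars = 1.0            # assumed revenue per 1,000 ad impressions

    seconds_per_month = 30 * 24 * 3600
    impressions = pages_per_second * ads_per_page * seconds_per_month
    revenue = impressions / 1000 * cpm_dollars
    print(round(revenue))        # ~259,000 dollars/month under these assumptions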
Sure, once I have the traction, VCs will call me, and then I will check and tell them about all the times I sent them e-mail they ignored and explain that my plane has already left the ground, has altitude, and is climbing quickly and it's too late to buy a ticket.
Sure, I typed in all the 25,000 programming language statements myself and implemented my math derivations in 100,000 lines of typing -- lots of in-line comments! The code is all in just Microsoft's Visual Basic .NET with ADO.NET for getting to SQL Server and ASP.NET for the Web pages with IIS for the Web server plus a little open source C called with Microsoft's platform invoke.
I wrote my own Web page session state store using two collection classes and some TCP/IP sockets with class de/serialization -- sure, could have used REDIS, but my code is so short and simple writing my own was likely easier. Besides, now I'm about to copy that code, rip out most of it, and get a log file server that I will like a lot better than what I'm using now from Microsoft.
Otherwise the code looks ready for at least first production, to, say, well past $250,000 a month in revenue. At IBM's Watson lab, I wrote AI code that shipped as a commercial IBM Program Product, and what I've written for my startup is more solid (got an IBM award for some of the code I wrote in an all night session -- let one of our programmers be done in the next afternoon instead of two weeks and got a MUCH nicer result for the customers -- trick was to do some things with entry variables to keep some run-time code on the stack of dynamic descendancy).
My project needs data, and I have a lot but need to get more.
Curiously, there's some good news: My development computer was crashing about five times a day, apparently a hardware problem, maybe on the motherboard. I did mud wrestling with it and, then, shopping for parts for a new computer. But my old computer now, for no good reason, is no longer crashing! The computer has plenty of free disk space for some more data. So, for now I get to set aside all the system management mud wrestling of getting a new computer and getting all the software moved to it and running, can review the most critical parts of my code a third time, write the log server, write some code to make working with SQL Server easier, get some more data, and do an alpha test, a beta test, and get some publicity. Maybe I will even go live with my development computer, get some revenue, and get a really nice first server.
The problem is important; the math is solid; the code is solid; I suspect a lot, at least enough, people will like the results (it's intended to please essentially everyone on the Internet), etc., but no VC in the country wants anything to do with my work now. Nothing. Zip, zilch, zero.
Lesson: To VCs, nothing but nothing matters but traction.
So, the flip side of that lesson is an Opportunity: Be a solo founder where the traction the VCs want is enough for profitability and plenty of cash for organic growth.
The founder of Plenty of Fish was a solo founder who eventually sold out for $500+ million. Some old remarks by A16Z confirm the possibility of a one engineer unicorn -- at
"This is the new normal: fewer engineers and dollars to ship code to more users than ever before. The potential impact of the lone software engineer is soaring. How long before we have a billion-dollar acquisition offer for a one-engineer startup?"
I agree!
I saw the need -- obvious enough. I cooked up a new solution with a new UI, UX, and some new data to be just what some new math I derived needed.
Easy enough -- I've worked harder single exercises in Rudin, Principles of Mathematical Analysis. The work was easier than my Master's paper, my Ph.D., and any of the papers I've published. I wrote the code for the math and checked it various ways including with programming those calculations again in another language to check. I designed the data base tables, the Web pages, and wrote the code for the Web pages. Then the code for the session state store. Then I got sick, then got well, then my computer got sick and got well, now I'm back to progress. So far, being a solo founder, nothing particularly difficult. No reason to need a co-founder.
You give your work a catchy name because you want other people to help develop it, via attention, promotion, word of mouth, etc so other people can find it and fund it.
To do so you speak in their terms and their terms are catchy one-liners. Don't be bitter about it, some people just don't have the level of concentration to deal with our specifics.
I can accept Newton's second law of motion and Einstein's general relativity, but so far I'm less accepting of deep learning for finding some coefficients for fitting some data or neural networks for fitting some sigmoid functions to some data. Apparently the sigmoid functions have some nice theoretical properties and can be useful in approximating some complicated data; e.g., IIRC D. Bertsekas at MIT used such fitting to approximate the optimal value functions in stochastic optimal control. Or, have some big data and a lot of computing cycles? One way to soak them up is stochastic optimal control, possibly with some sigmoid fitting. To me, that's more promising for the future than what I've seen in the current AI based on Breiman, etc., and I'm a really big fan of Breiman.
What is going on in such cases -- learning, neural -- is at best some applied math, but the learning, neural, and intelligence are trying to suggest, hint, hint, that the math is somehow close to what intelligent humans with biological brains do.
The effects of the hints are strong enough to give lots of newsies another of their favorite means of getting eyeballs and clicks, that is, there are big threats, the sky is falling, nearly all humans will lose their jobs, and the robots might take over as in some movies.
Yes, somewhere in a buried paragraph there may be an admission that the learning is likely nothing like what humans, kitty cats, puppy dogs, whales, or crows do, but that disclaimer, if you want to call it that, is hardly a drop of water on the hype inferno.
At one time some IBM publicity got some housewives to believe that IBM's computers were "gigantic electronic human brains".
I know; I know; such hype is not the worst problem in the world and, really, is affecting me hardly at all. But when I was in an AI group at IBM's Watson lab, still I was embarrassed to say artificial intelligence. I knew, yes, that the meaning was supposed to be just a computer doing something that would require human intelligence if a human did it, e.g., play chess well, but that the approach of the AI chess, a big tree search with alpha-beta pruning, etc., was nothing like what humans did playing chess. But with this definition some 0-1 integer linear programming for scheduling at an airline should be far above merely intelligent since it is far above what an unaided human could do -- although humans invented the applied math that is 0-1 integer linear programming.
I call the crucial core technology for my startup not AI but just some applied math. The world is awash in good applied math -- statistical hypothesis testing, the fast Fourier transform, numerically exact matrix inversion (number theory techniques), Wiener filtering, deterministic optimal control, and hundreds more -- all without a single claim of anything intelligent in any of the books, papers, courses, etc.
From what I currently understand in the CNN/deep learning arena, sigmoids are avoided in favor of ReLU, due to a number of reasons, chiefly that of avoiding gradient fade during backprop.
I'm surprised that you don't mention this (or don't seem to be aware of it), though? Is this intentional, or is there another reason?
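A quick numerical illustration of the gradient-fade point, nothing framework-specific, just the derivatives:

    # Why ReLU over sigmoid: the sigmoid's derivative is at most 0.25 and falls
    # off fast away from zero, so a deep chain of them multiplies many small
    # factors ("gradient fade"); ReLU's derivative is 1 for any positive input.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_grad(z):
        s = sigmoid(z)
        return s * (1.0 - s)

    def relu_grad(z):
        return 1.0 if z > 0 else 0.0

    for z in (0.0, 2.0, 5.0):
        print(z, round(sigmoid_grad(z), 4), relu_grad(z))
    # sigmoid': 0.25, 0.105, 0.0066 ...; ReLU': 1.0 for any positive z.
    print(0.25 ** 10)  # even the best case through 10 sigmoid layers: ~1e-6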
Oh you don't have to preach here, I am SOOOOOO converted. I spend way too much time pissing against the wind in reddit.com/r/futurology where it seems waitbutwhy is mandatory biblical reading. Then they pretend they like science at the same time :S.
Still, my point is that I think IRRESPECTIVE of how we present it or what we do THEY will always jump to that conclusion.
Remember that these are people that think vanilla programming is the work of a "computer" as opposed to it being an echo of the mind of a software engineer.
> in the history of science, progress with concepts and causality did well where the new methods would need a Nevada full of disk drives of data. E.g., space flight navigation is based, first, on Newton's law of gravity and second law of motion, not on fitting massive amounts of data via deep learning.
If you're thinking about conceptual reasoning vs. purely data-driven reasoning, you'd really get a lot of value from reading Ayn Rand's theory of concepts. Yes, she's very unpopular amongst modern philosophers, so all I'll say is you should read her writings and judge for yourself.
Welcome to the club. Now you know how Economists, healthcare professionals, skateboarders, basket weavers and anybody else whose domain knowledge runs deeper than the average joe's feels whenever their area of technical expertise becomes the subject du jour for the public at large.
Well OTOH, in the land of the blind, the one-eyed man is king - this goes for a lot of subjects. If you can give a general introduction to AI or the latest update, you can write for thousands of online outlets, each of which attracts some views and therefore money.
>If you can give a general introduction to AI or the latest update, you can write for thousands of online outlets, each of which attracts some views and therefore money.
Can you also put into words why? Sincere question, not being snarky; I also know the feeling of something feeling off but not knowing how to express it in words (yet).
In more established fields it tends to be that people outside the field exaggerate things because they can't put them into the context of the field, i.e. they lack domain knowledge. In "tech" it seems like it's the people in the field that exaggerate things because they can't put them into the context of the world.
Not OP, but also from an economics background - the issue with economics in public perception, for me, is that it became a modern "religion"/ideology and moved far from proper science. Especially "mainstream" economic theory.
So the issue is not that it's getting misinterpreted due to lack of understanding. The issue there is that it's getting unscientific and politicised purposefully.
Great book on this is James Kwak's "Economism" - the abuse of purported economic insight for political purposes.
As just one simple example, good old Heckscher-Ohlin trade theory (a neat general equilibrium model with 2 countries, 2 goods, and 2 factors of production, capital and labor) "shows" that free trade is a good thing, leading to a Pareto improvement.
But of course, that's predicated on a whole host of assumptions that might or might not hold, and furthermore predicated on the assumption that the "winners" compensate the "losers", through redistribution.
So, the economic case for free trade more or less includes the case for redistribution and compensation of those negatively affected by it - but that's often conveniently left out by proponents.
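Not Heckscher-Ohlin itself, but a stripped-down Ricardian-style example (all numbers invented) shows the flavor of the gains-from-trade result, and also why it is silent on how the gains are distributed inside each country:

    # A deliberately simple two-country, two-good example (numbers invented) of
    # the textbook gains-from-trade result; note it says nothing about how the
    # gains are split inside each country -- that's the redistribution caveat.
    need = {"home":    {"cloth": 1, "wine": 2},   # labor hours per unit of output
            "foreign": {"cloth": 2, "wine": 1}}
    labor = {"home": 100, "foreign": 100}

    def autarky(country):
        # In isolation, each country splits its labor evenly between the goods.
        L = labor[country]
        return {g: (L / 2) / need[country][g] for g in ("cloth", "wine")}

    home_a, foreign_a = autarky("home"), autarky("foreign")
    world_autarky = {g: home_a[g] + foreign_a[g] for g in ("cloth", "wine")}

    # With trade, each country fully specializes in the good it makes more cheaply.
    world_trade = {"cloth": labor["home"] / need["home"]["cloth"],
                   "wine":  labor["foreign"] / need["foreign"]["wine"]}

    print(world_autarky)  # {'cloth': 75.0, 'wine': 75.0}
    print(world_trade)    # {'cloth': 100.0, 'wine': 100.0} -- more of both goods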
Right, but this is a political problem. H-O (or Ricardian trade) is about the simplest model of trade you can come up with to show the concept.
It's not like economists don't know there are problems in distribution of wealth -- one of the most discussed papers last year (see these podcasts [1] [2]) talks about the effects of China massively expanding trade with the US in early 2000s on some parts of the country.
Pretty much everyone has known for half a century what trade does, there's a political and logistical problem in redistribution, though (counties tend to get devastated, and people don't like to move).
Interestingly, trade has extremely similar labor market effects to automation.
Popular economics and popular "AI" as described in the article both seem like situations where everybody with an idea to push cites popularisation of research conclusions without understanding the limitations to the models and people with actual expertise in the field are often happy to play along with these hyperbolic, caveat-free popularisations because it helps their end goals.
Sure, the economics profession and its popularisers might be a little more incentivised by political aims, and AI research and AI's hypers and commercialisers a little more by money, but both fields suffer from the fact that whilst researchers agonise over tradeoffs between tractability and predictive accuracy, and between fitting and overfitting, and wonder whether the class of problem they're looking at is even soluble, the people with the most confidence in their assertions tend to get the column inches, even if they barely know what they're talking about.
I only wonder whether it will ultimately lead to similarly widespread middlebrow dismissals[1] of the entire field of AI...
[1]for the avoidance of doubt, not an accusation I'm levelling at the poster above
Is your "economics background" a BA, by any chance?
The people who say economics "became religion" are usually those whose only exposure to economics is through secondhand knowledge (ie. they read about it in media) or took a handful of undergraduate classes.
Take a look at what modern economics research looks like [1]. Seriously, read __any__ of those articles and come tell me with a straight face it's not doing normal science (come up with theory, test with empirical data).
Economics is the most scientific it's ever been; most graduate curricula, and even some undergrad ones, are veering towards the applied statistics arm of economics because it's what's been most successful in the last 20 years. Granted there are empirical problems in, say, macro, but that's mainly due to lack of data.
Economics is also probably the most politicized it's been in a long time at the moment, I agree with you in that. Apart from a few venues like Planet Money or Freakonomics, there isn't much pop-economics like there is pop-science in other fields like physics. Moreover, the incentive to politicize economics is much greater than other sciences.
Sure. The first NBER paper in [1] features the following jewel (p.34):
"B. Calibration
To quantitatively decompose the contribution of different factors to the growth of shadow banks and fintech firms, we first have to calibrate the model to the conforming loan market data."
I can tell you with a straight face that is not normal science. Economists themselves increasingly recognize so-called "calibration" is a farce.
If the paper gets traction and the model specification is indeed not robust, the first thing you'll see in a few months is something like "paper xyz: comment" driving holes in the methodology getting even more traction. Empirical microeconomics is fairly open about methodology flaws and critiques.
Also, by the way, you see pretty much the same type of thing in a ton of fMRI neuroscience, medical and psychology studies (even the ones you'll later see on NPR or ted talks). You shouldn't ever believe any one empirical result in basically anything except maybe CERN particle physics type work.
Your response, and the ostensible fact that my critique went completely over your head, probably show you should definitely spend more time familiarizing yourself with the discipline before going around defending it.
You should start with 'calibration' in economics. No, it is not quite what you (seem to) think it is. No, it is not quite "the same type of thing" as p-hacking and low-powered studies in psychology. (Whose poor reliability, by the way, is almost common knowledge by now.)
My point is that it's very easy to check "model calibration" (eg. "Plug values from outside data"). Just run the code with different values.
Because it's so easy to check those things (assuming the data is not proprietary or whatnot) I'd argue it's a much lesser problem than the "garden of forking paths" in lab experiments where it's much harder to test robustness of the result.
Moreover, I don't think anyone intelligent is foolish enough to take the coefficients in an econometric study literally; at least I would hope not.
Yup, I'm quite familiar with what modern economics research looks like. My problem with it starts when I get to "We now use formal econometric techniques", when they take a very limited model with a huge number of assumptions and then draw some conclusions from it.
The math checks out. No questions about that (although there are always stories about reproducibility of results). But after that there's a huge gap when you try to extrapolate the conclusions of the model to the real world, precisely due to the assumptions/limitations of the model.
I am rather sceptical about this approach - it's formal to the point of "my model of my virtual world totally works due to the ideal nature of this world and mathematical logic", or "we managed to fit our model to a carefully selected dataset". Yes, it's formal 'scientific methodology', and yes, small scope research is the practical way to publish paper after paper (the only real 'currency' in the scientific world), but again it seems to be just a 'safe haven' for economists, where they are abstracted from the real world enough not to interfere with real politics. As with biology in Darwin's time, when it was fine to be a clergyman and study biology (but mainly in the taxonomy sense - listing all 'god's creatures' and not thinking about the origin of species).
Also, "The people who say economics "became religion" are usually those whose only exposure to economics is through secondhand knowledge (ie. they read about it in media) or took a handful of undergraduate classes." - yup, the same as some Nobel laureates like Krugman or Hayek.
Can we agree that the state of a scientific field is as good as the decisions taken based on the achievements in that field?
> Can we agree that the state of a scientific field is as good as the decisions taken based on the achievements in that field?
Certainly not, or else climate science would still be in thorough disagreement about whether or not there's climate change and/or how anthropogenic it is.
Even if you have massive amounts of evidence pointing to something, if it goes against entrenched interests, nothing is going to be done.
Also, you need to differentiate Krugman the political commentator and Krugman the economist. He's much more of the former these days, posts labeled "wonkish" are generally from the latter. The fault is on him, though, for discrediting himself that way.
> Certainly not, or else climate science would still be in thorough disagreement
Looking at Europe, I see the decisions, policies and agreement. Looking at the USA, I see a conscious decision to ignore the science's predictions when it suits some political goals in the short term, and not to ignore them when it's vital - like when planning military strategy in the Middle East.
The US (and Canada and Australia) are bad on climate change because they have entrenched interests in fossil fuels which leads to political lobbying.
Europe doesn't have those specific problems, but it has others.
Politicians will do whatever has political benefits to them. Economic benefits, or even reality, are purely secondary. You might get the occasional exceptionally altruistic politician, but the average one will act on his incentives.
This is why you see stuff like the Reinhart & Rogoff paper, which never passed peer review, touted for austerity politics even long after it was retracted. Austerity is popular with a part of the population and politicians will use whatever to justify whatever they're trying to do.
Judging anything on its political success is nonsense
This is actually very similar to the issues HCI suffered from a few decades ago, and which indirectly led to Interaction Design trying to separate itself from it. The former was too dogmatic about trying to apply quantitative models to everything, and the latter consciously said "nope, we're going to go look at the humanities, anthropology, and all other fields known for qualitative research and see what we can learn from there to make better human-oriented designs".
(mind you, this was before the term "UX design" got diluted to "graphic design for webpages and buttons on touch interfaces")
In Danish we have a word "fagidiot" which means an idiot of your field. It's the kind of thing that makes it impossible for someone who's been in the army to enjoy a movie if they don't use the machine gun properly, or for a designer to appreciate the credits at the end if the typography is not kerned properly.
The things that matter to someone who spent a lot of time in any given field are rarely the important things to anyone else.
Edit: And no it's not pronounced with a hard g but with a soft g.
Similar, but more similar to lots of the people mentioned in the original post, is the German phrase "gefährliches Halbwissen", or 'dangerous half-knowledge': when you know just enough to seem knowledgeable about a subject, but actually have only a very basic surface knowledge.
(ps. I'm not a native German speaker, so sorry if I misrepresented the phrase, but this is how it was explained to me)
On a deeper level, economists generally don't know what they're talking about. They create linear models of nonlinear systems, they assume Gaussian distribution when fat tails are common and their assumptions about rationality don't align with people's actual behavior. Trusting the models of economists to manage economies gives us the ability to trade a little bit of efficiency for a large amount of risk.
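A quick illustration of the Gaussian vs. fat-tails point, using scipy for the tail probabilities (the t distribution with 3 degrees of freedom is just one stand-in for a fat-tailed distribution):

    # Tail probabilities under a Gaussian vs. a fat-tailed Student-t(3).
    from scipy.stats import norm, t

    for x in (3, 6):
        p_gauss = 2 * norm.sf(x)     # two-sided probability of a move beyond x "sigmas"
        p_fat = 2 * t.sf(x, df=3)    # same threshold under a fat-tailed distribution
        print(x, p_gauss, p_fat)
    # At x = 6: ~2e-9 under the Gaussian vs. ~9e-3 under t(3) -- a factor of millions.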
Um, no. At this point it's some PhD candidate claiming to start a series of blog posts. Do you know how many blog posts there are claiming to have a second part?
Either do something or don't. Don't waste other people's time publishing your grandiose plans. If you want accountability, tell people you actually know, not the internet.
Not so much his fault really, but I'm a bit disappointed that this got upvoted so much.
I only read the first three paragraphs and decided it was probably yet more self-indulgent rambling on the interwebs, came here to read the comments and find out if there was anything interesting here. Apparently not.
The AI and Singularity hype irks me, because I'm genuinely in agreement with Peter Thiel's argument that technological progress is actually decelerating relative to how it was moving 60 years ago once you look beyond the advancements we have experienced in information technology and finance.
True that. 1880-1950 (70 years) took us from train-and-telegraph to jet-and-atom bomb. Horse and carriage was the top-tier tech for personal transportation in 1880, and they were just starting to understand that diseases are often caused by these little blobby creatures called germs.
1950-2020 (70 years) will take us much less distance.
Almost everything we're developing seems to be about information processing. If you look at the technologies that actually do things in the physical world, it seems like almost nothing has changed since the 60's. Cars, trains, bridges, rockets, sewers... all more or less the same.
Compared to what, though? Are we assuming a linear rate of development? In evolution, organisms change gradually with occasional bursts, and the same seems to be true of human tech evolution. There are periods of stagnation with tiny increments or even regress in places, then there are rapid bursts of unpredictable change, often enabled by information from one area reaching another previously unrelated one.
Peter Thiel has made some other interesting observations:
- that our language about the ‘developed' vs ‘developing' world is excessively bullish about globalization while implicitly pessimistic about technology.
- that government has changed from thinking that progress can be achieved via planning, into thinking that it's more just there to watch random forces & statistics evolve the world. This change in mindset away from planning for innovation has made it impossible to achieve grandiose missions like the Apollo space missions. As a result, if there is going to be a government role in getting innovation started, people have to philosophically believe again that it's possible to plan.
- that environmentalism has induced a deep skepticism about anything involving the manipulation of nature or material objects in the real world. This skepticism explains why computer tech has been able to advance so much but not physical technology like transportation.
- that peer review and grant approval processes are too political, science has suffered because it is hard to find scientists who excel at both science and politics.
- that a shift from manufacturing to non-tradeable services has led to a political class (such as lawyers) that is weirdly immune to globalization and mostly oblivious to it.
in this podcast http://www.talkhouse.com/comedian-tim-heidecker-talks-with-a... Adam Curtis argues that we no longer subscribe to grand visions and as a result we're floundering as our internet-induced bubbles send us into ever more fragmented echo-chambers
We're never going to achieve anything great again... it's the beginning of the heat-death of human progress! I think that the recent idolatry of the individual has weakened our ability to act en masse. Previous generations were more cohesive.
WRT skepticism of modification of the natural world - it's so stupid. There is no "natural state" of things. Everything is modifying everything else. We probably even got the ideas to modify ourselves and our environment as early humans from other animals (ants & termites - cities, beavers - dams etc).
The thing to understand about nature is that it is made up of complex systems that have evolved into finely balanced equilibria. Our reductionist science only works well in linear systems where variables are cleanly separable, which complex systems by definition are not. Thus tinkering with nature under the assumption that it is a linear system is likely to have lots of unintended consequences. That is why science's track record in promoting health is so abysmal, and our agriculture is completely unsustainable.
The only way to really evolve a complex system is to make a lot of small gradual changes, constantly course correcting as you go.
Once you cut down all the trees you've got yourself a big ass problem, doesn't matter if a world without trees is more "natural" or not.
Case in point, all the CEOs and big-money managers who benefit from the mining and extracting businesses buy their (big) residences in pretty "natural" places, surrounded by water and lots of greenery. I'm pretty sure they and their kids don't live in the areas affected by their companies' non-natural dealings.
I disagree with that. The majority of opinions in previous generations were discounted. A small percentage of people in power said: "We want this, therefore we will have it, and the rest will be damned".
We look more at the negative effects that our overreaching plans have on groups in the minority and on the environment itself. There is no such thing as a grand vision that doesn't have an unwanted effect on some group. It's much harder to unilaterally enforce a vision when everybody gets to voice their opinion.
>- that government has changed from thinking that progress can be achieved via planning, into thinking that it's more just there to watch random forces & statistics evolve the world. This change in mindset away from planning for innovation has made it impossible to achieve grandiose missions like the Apollo space missions. As a result, if there is going to be a government role in getting innovation started, people have to philosophically believe again that it's possible to plan.
It's pretty bloody ironic to hear Peter Thiel say we need state planning.
And, I believe, it will stay like this for the foreseeable future. Any really new technology, that is anything that isn't an iteration on what we've had since the 60's, is going to require a leap in the ability to manipulate energy and the harnessing of forces hitherto beyond our grasp.
As scientific knowledge advances on more fronts, extracting ever more marginal gains by spreading our intellectual capital more thinly across the expanding frontiers of knowledge, maybe AI will help reduce the decline in the rate of advancement.
A thesis like that isn't exactly career-making pop sci blockbuster material though.
Yep. But... what could be better than a car, I mean the concept of "mechanized individual transportation"? Same question with the train, "mechanized mass transportation". A sewer? What could be better?
While each of these is "not perfect", it's tough (at least for me) to come up with something conceptually better right now.
And also, I think we have to remember that many of these were possible because of oil. So, I think that if we don't find a significantly better energy source, the material world will just evolve very slowly.
When I think about it, it's obvious that new stuff happens in knowledge/information space : it has a very strong innovation/energy ratio...
> A world where you could walk to the things you need because they're close?
That was the world before cars - it's cars and access to good infrastructure that changed the world to its current state where local businesses can't compete anymore.
We've made tremendous progress in that direction since 1960. What's your point?
> A world where you could walk to the things you need because they're close?
That's not a technology problem; much of the developed world is like that and more of the world is getting to be like it with progressive urbanization. The US made a deliberate social choice to focus on auto-centric development that inhibits that a bit in the US, but that's a social problem, not a technical one.
Better than a car? Mechanized individual transportation that isn't bounded in the 2d plane.
I think Elon also asked the question of what could be better than a train and threw out one possible answer: Hyperloop
Sewer? It doesn't take a lot of creativity to see how a sewer could be better. How about micro waste treatment directly inline with the sewer so now it is not carrying waste, but clean water, and just pipe it directly back in.
Low return-on-investment - they did all the science they wanted back then. We could do it again (and multiple countries are considering it), but, why? The moon landings were part science (mission achieved), part reputation (done).
Actually the point has been made that we have actually lost a lot of the knowledge and expertise that existed back then, and that it would take considerable time and money to accumulate that knowledge together to do it again.
To reuse a point brought up quite often in this context, making predictions about when and if AGI might arrive based on extrapolations of current technological trends is like trying to predict when the atom bomb would have been created based on a chart of conventional explosive yields leading up to 1940. Undiscovered discoveries are not predictable, certainly not by extrapolating from the current progress. This applies to Kurzweil's highly optimistic predictions as much as it does to negative ones.
Whether or not it's likely we'll see AGI any time soon is, IMO, a matter of wild-ass-guessing more than anything else. There are no experts in AGI right now, only philosophers, because the field doesn't even exist. So nobody has any remotely educated guess about what algorithmic discoveries need to happen and whether they're in reach for human understanding, or when we'd have the computational power to exploit such algorithms if discovered.
All we can really say is that it's theoretically possible, we're living proof of that. Also that there must be a lot of intelligent needles in the algorithmic haystack if evolution was able to stumble across one without a clear fitness mandate to do so (most of the creatures that thrive don't have what we consider intelligence, so it's not that evolutionarily important). I'd probably bet on it being more a matter of "when?" than "if?", but I wouldn't make any assumptions about timescale other than "not this year, probably not next".
Actual real research in AGI was a thing when I was doing AI research in the late 1980s early 1990s (I got out just as the last AI Winter came along in the early 1990s.)
Consider things like SOAR - that was explicitly an architecture for general intelligence. Whether it was any good or not is another question - but there was an active good-old-fashioned AGI field back then.
Mind you - one of the reasons I left AI research to do web stuff from '92/'93 onwards was I was pretty sure the symbolic approaches in favour back then wouldn't scale up and that there really was only intelligence on one side of a computer screen.
The idea of the atom bomb was well established prior to 1940, the question was who would get there first, not if it was possible.
1934 was a turning point in this respect, by the 1940's it wasn't a question any more of whether or not atom bombs could be created, it was more a question of 'how' than 'if'.
It's problematic that the "experts in AGI" are generally self-designated, and have mostly produced little that is concrete, either in the form of theory or engineering. We should be concerned about long-term outcomes of technological progress, but I'm not convinced the current conversation centering on the term "AGI" is productive.
> most of the creatures that thrive don't have what we consider intelligence, so it's not that evolutionarily important
I dispute that this is because they don't possess intelligence. Whenever we find an animal displaying some ability we had said required intelligence — language, tool use, self-awareness, solving abstract puzzles, driving a car — we seem to change our definition of intelligence. Just as we have done with A.I., although probably because we want to eat the former and make slave labour of the latter. ;)
I don't think it's reasonable to equate 'technologies that existed since the 60s' with 'technologies that are ubiquitous today'.
I'm in the bottom 15% of my country (UK) by income and I have access to things like: near-instantaneous hot water at all times, affordable next-day delivery services from a pocket-sized device I carry around with me, international air travel that is affordable for me, efficient fridge/freezer technology, fresh groceries from around the world at every local convenience store.
And yes, things look a lot less impressive if you filter out information-technology-related things. But advancements will always look less impressive if you filter out the most impressive advancements. The fact that we are close to being permanently connected across geographical boundaries isn't something to be dismissed out of hand. The level of effectiveness and miniaturisation of communications devices (e.g. 'true wireless' earphones with mobile internet) is approaching practical telepathy in terms of how it can be used.
> I'm in the bottom 15% of my country (UK) by income
Dude, no offense, but you're talking about the center of an empire that's literally the biggest that the world has ever seen.
That your country is wealthy and can provide for your basic necessities has more to do with this little historical fact than it does with technological advances.
That's a fair point, but it does align with what I was trying to get across.
That being that achieving 'technological progress' with a milestone that applies only to the super wealthy is only one part of technological progress. Bringing it to millions of people at an affordable cost is also something worth considering.
Former empire. Yes it's still rich, yes the boundary of the bottom 15% of the UK is the same as the top 9.7% of the world, but most of the former empire violently rejected the UK in the second half of the 20th century (and most of the rest before then). This collapse caused economic damage that was solved in part by joining the EU, so yay irony for bringing up the long-gone Empire just as my leaders are trying to get it back. ;P
For the rest of the world… In the last 70 years, Ebola and HIV were not only discovered but partially treated; Smallpox has gone from "a vaccine exists" to "wiped out"; Rinderpest didn't have a vaccine and has been wiped out; Polio has gone from no-vaccination-possible to almost eradicated (37 cases in 2016); Guinea worm — mostly affecting the poor — has gone from multiple millions of cases per year to 25 individual cases.
For wider medicine: the first organ transplants were kidney in 1950, pancreas in 1966, heart and liver both in 1967, ovary in 2005, penis in 2014; DNA has gone from "???" to fully readable and partially editable; ultrasound, pacemakers and IVF were invented, and antidepressants have improved.
In industry, robotics relies on computer power, so I disagree with anyone who dismisses the development of computing power; likewise, telecommunications have gone from "phone calls within your town are expensive" to "let's have a live video chat across 8-time zones".
The entirety of high-temperature superconductors happened after I was born, never mind going back as far as 70 years; Kevlar, Nomex, and carbon nanomaterials were all made for the first time in the last 70 years.
The poor of the world don't have everything we have, with the exception of vaccines for illnesses we wish to eliminate, but even the stuff we discard often contains things that could not be bought at any price 70 years ago.
Excellent perspective. Even setting aside the (comparatively narrow) computing field, there have been considerable inventions, especially in medical science.
Almost everything you listed there was already becoming common by the 1960s.
Step back and forget about informational technology. Look at the world of atoms, not at the world of bits.
Since the decommissioning of the Concorde, our fastest commercial means of transportation has actually been getting slower, not faster. Man hasn't reached farther out into space than the moon missions of the 1960s. The first half of the 20th century brought us: antibiotics, electricity, automobiles, air travel, rockets, space travel, satellites, radio, reliable clean drinking water, indoor heating, laundry machines, dishwashers, widespread indoor plumbing, and massive improvements in sanitation and literacy.
The second half of the 20th can't even come close to this level of technological growth.
Not sure I'd consider stuff like the vast majority of human knowledge sitting in my pocket to be less useful than the dishwasher because it has fewer moving parts, or that most people being able to afford to fly is less of an achievement than the megarich being able to fly 50% faster.
Recent decades might be a disappointment compared with the millennialist interpretation of exponential curves, but that's only the same as the disappointment a Victorian idealist might feel with the lack of stuff that happened in the twentieth century as a whole, given that neither utopian socialism nor the Second Coming has occurred yet.
>Not sure I'd consider stuff like the vast majority of human knowledge sitting in my pocket to be less useful than the dishwasher because it has fewer moving parts, or that most people being able to afford to fly is less of an achievement than the megarich being able to fly 50% faster.
Not sure anybody told you to consider that.
What they did ask you to consider is that technological progress, outside of the digital realm, has slowed down.
The fact that a relatively low income individual has access to a large number of goods and services which were once considered exclusive to the wealthy is a clear indication that technological progress has continued at a steady pace.
While 'slowed down' or 'sped up' are difficult to quantify, the fact that 3 billion human beings have gained access to smartphone technology over the last decade seems to support the idea that technological progress has in fact sped up.
Increasing air transportation speed is a very narrow application of technology and hardly constitutes a meaningful gauge of technical innovation.
>The fact that a relatively low income individual has access to a large number of goods and services which were once considered exclusive to the wealthy is a clear indication that technological progress has continued at a steady pace.
No, it's just a clear indication of market efficiencies and/or better engineering.
It doesn't say anything about what the grandparent asked for: the rate and magnitude of new scientific/technologic discoveries.
One era (early 20th) is the industrial revolution; one era (approx. 1960-now) is the information revolution. It is true that the industrial revolution has slowed down and mostly focused on incremental improvements, but it seems strange to discount the information revolution gains completely. There's been a lot of information revolution gains.
That said, I would argue that medicine too has made some pretty significant advances in the 2nd half of the 20th century... mostly in surgery techniques (transplants are 2nd half 20th century), scanning techniques (NMR and CT both were 2nd half 20th century), and pharmaceuticals. Significant vaccines (polio, measles, mumps) were 2nd half of the 20th century developments. Genetic science has made huge gains as of late. Etc.
But half of the "world of atoms" things the grandparent listed were also market efficiencies or better engineering. Trading average speed for vast improvements in energy efficiency in aircraft in order to make flight accessible to the masses certainly involved vastly more progress of a technical and scientific nature in the late twentieth century than the cited early twentieth century example of wider replication of well-understood principles of plumbing to give the masses indoor toilets.
"Market efficiencies and/or better engineering" are often enabled by technological breakthroughs.
Consider the efficiencies generated by Amazon's highly automated warehouses; their level of automation wouldn't have been feasible in, say, the 60's. Substantial technological progress on multiple fronts has been required.
Actually it isn't. As Peter Thiel would say, globalization is the copying of existing technology, it is not the manufacture of new ones.
One reason we have stagnated can be revealed in the bias of our new language. Where we once used the terms "1st World" and "3rd World" we now use "Developed" and "Developing" -- language that is excessively bullish about globalization while implicitly pessimistic about technology.
>Technological progress, as long as you don't count all of this massive technological progress (which when left out conveniently makes my point), has been slowed down.
Edit: I should be less combative and provide some actual content. Take a look at semiconductors from the 60s vs now. They are clearly related to 'the digital realm' but are purely physical technology. The fast computers we have now (which enable our digital realm) are at the tip of a screaming bullet train of physical technological progress.
Consider cameras that can shoot 10,000 frames a second at ultra-high-definition resolutions; in the 1970s we had shitty auto-developing film as the height of consumer tech.
Battery tech has gotten so advanced that you're literally carrying around a firestorm's worth of energy in your pocket and only notice in the rare case that things go spectacularly wrong; compare that to the heavy, inefficient NiCds of your parents' day.
Semiconductors are at 14nm. That's not for scientists, that's not for researchers, that's for any joe with 300 bucks. 70 years ago, the idea that a pace like the one described by Moore's law could even exist would have been met with skepticism (remember, it's a marketing "law", not a physical one).
We have rockets that can land themselves, cars that weigh half as much as they did 70 years ago and are twice as safe, a logistic network such that exotic foods are available year round world wide.
It just kills me to watch people post on the Internet, more than likely from their mobile supercomputers, saying tech has clearly slowed down because "they've got a feeling".
1. The important point to emphasize here is that deceleration does not mean progress isn't happening. It simply means that the groundbreaking progress we witnessed in the first half of the 20th century is not being replicated as quickly anymore.
2. Technological progress is measured by the outcomes it brings. Whether those outcomes were easy or hard to achieve doesn't matter for this measurement. Insane advancements in physics are not technological progress unless they enable us to vastly improve our existing capabilities to do things as humans.
Thiel argues this by pointing to a tech we're all excited about: self-driving cars.
Thiel argues the original invention of the car was still a bigger innovation.
electricity -> cars -> airplanes -> rockets -> space travel
radio -> tv
laundry machines -> dishwashers
antibiotics -> vaccines
^^^All first 3/4 of the 20th century. Self-driving cars would merely be a close match to any one of them. For the next 50 years to match what we saw at the start of the 20th century, we are going to need at least 4 or 5 other technological revolutions in the world of atoms for the comparison to even be close.
It may be, but there isn't a real good metric for societal improvement. And it seems like progress would always have ebbs and flows as different technologies have new opportunities that are squeezed -- and then other technologies look plumper, so they are squeezed, and so on.
And that's fine. I don't think anyone is complaining today about timepieces not keeping even more accurate time. Clocks today (in whatever device they are embedded in), while not 100% accurate are pretty good. The ROI in improving them further just isn't there -- in part because the good they'd serve humanity isn't there.
I think there are several quality metrics for gauging societal improvement:
- healthcare access and outcome statistics
- poverty & homelessness statistics
- education levels
- access to clean water in sufficient quantity
- access to healthful foods in sufficient quantity
- levels of environmental contaminants
- delta in income of top and bottom economic tiers
And so on. Toss in a few bullet points intended to flag totalitarian/authoritarian/fascist tendencies, worker exploitation, and the like and I think that while likely not a comprehensive blueprint that's more than enough to get started. Especially considering how poorly so many industrialized, first world, "democratic" countries in the world do on so many of these basics.
That said I agree there's no ROI on improving household items like clocks. The problem is there's also no obvious ROI on resolving homelessness.
What needs to happen is for VC and other sources of investment to stop chasing easy wins and realize huge money can be made by moving more out into the world of atoms.
How do you know moving into the world of atoms will lead to huge economic wins? It's easy for you to say someone should chase less risky investments when it's not your money being invested.
Go get to know a few VCs who have lost several million or more on an investment. Do that sometime and then express your optimism for the future.
> Since the decommissioning of the Concorde, our fastest commercial means of transportation has actually been getting slower, not faster.
Our means of travel have also gotten vastly more efficient.
Prioritizing progress in average efficiency over speed of a showpiece that is used for a very small share of actual travel isn't the end of progress, and it is very much change in the world of atoms, not bits.
Flying has gotten twice as cheap (if you look at average ticket prices adjusted for inflation) from 1970 to now. Also, a point about Concorde: it's more fuel efficient to fly slower (look at the drag coefficient [1]).
So from the point of view of affordability and mobility for the general public, it's good progress.
>Flying has gotten twice as cheap (if you look at average ticket prices adjusted for inflation) from 1970 to now.
Which is consistent with the gp's point that progress has slowed down.
It had gotten 100 to 1000 times cheaper, faster, and more efficient to, for example, travel intercontinentally between 1900 and 1970 than it has between 1970 and now.
Oil prices are essentially capped at $60/barrel for the foreseeable future due to unconventional extraction techniques -- not long ago oil production was thought to have peaked. Solar power can be economical without subsidy. Robotic and minimally invasive surgery. Battery cost becoming viable for mass-market cars. Gene sequencing. Sensors for pennies. New materials (cheaper carbon fibre, metal alloys, semiconductors, graphene/carbon nanotubes). Most of the things mentioned as 'first half of the 20th century' coming to the next 5 billion people.
For aerospace, how about affordability of a ticket over time (Ryanair vs Concorde)? Space travel: number of journeys per vehicle or time astronauts are able to spend in space (was minutes in early space flight, unlimited now). Transportation speed is a straw-man, the industry optimises for cost not speed.
No, because you're failing to distinguish between game-changer tech - transistors, DNA sequencing, operating systems, powered flight, all as classes of original and unexpected inventions - and refinement tech, which is made of game-changer inventions made smaller, cheaper, and more widely available.
Game changer tech changes what can be imagined. Refinement tech changes what can be bought by consumers.
There's been plenty of refinement over the last few decades, but not nearly as much original game changer invention as in the previous decades.
I learned a lot from researching the topics mentioned in this post, and hopefully someone else can also benefit from the following information.
tl;dr: Indoor heating significantly predates 1900. Though relevant inventions largely predate 1900, widespread adoption was indeed between 1900 and 1950 for electrification, automobiles, air travel, radio, washing machines, and indoor plumbing. The years after 1950 include most of the developments and use of the Concorde, moon missions, antibiotics, rockets, and dishwashers. I can only describe the inclusion of "space travel" and "satellites" in a list of things from the "first half of the 20th century" as trolling. The remainder of my post is what I found for each topic.
The Concorde flew from 1969 to 2003. The last time humans traveled past low-Earth orbit was 1972. (Space exploration itself started in 1957, and since then unmanned space probes have explored no shortage of interesting targets.)
Penicillin was identified in 1928 but first made available to civilians in 1945. Of the ~50 antibiotics on the WHO Model List of Essential Medicines, I found 7 discovered before 1950: penicillin G, penicillin V, chloramphenicol, dapsone, 4-aminosalicylic acid, streptomycin, and pyrazinamide. The final one was not used until after 1950.
The basics of electricity, including arc lighting (1802), incandescent lightbulbs (1878), motors (1832 or even the 1740s?), telegraphs (1774), power stations, alternating current, etc were all invented and in use before 1900. Of course, the period of electrification in now-developed countries was roughly from the mid-1880s until around 1950.
The basics of automobiles also predate 1900, but mass production began around then. Many major developments in automobiles indeed happened between 1900 and 1950.
Air travel via balloons predates 1900 by more than a century, but both rigid airships and (controlled, powered) heavier-than-air flight developed between 1900 and 1950.
Rockets were first developed by the 13th century, and found significant and widespread military use before 1900 (e.g., the Star-Spangled Banner was written in 1814). Modern rocketry probably began in 1926 with the first liquid-fueled rocket launched by Goddard. Germany used the V-2 rocket throughout WWII, and this presumably is the reason for its inclusion in "the first half of the 20th century".
As mentioned previously, the use of rockets for space exploration (and thus also satellites) did not begin until 1957 with the launch of Sputnik. Human spaceflight began in 1961.
Early radio predates 1900, but indeed radio largely developed between 1900 and 1950.
I had difficulty summarizing the history of water treatment; it seems that many inventions predated 1900 but indeed the period between 1900 and 1950 was the main significant period of development in the United States.
Indoor heating predates the invention of writing. Almost every development in the history and widespread use of heating predates 1900.
Washing machines predate 1900, but the development and widespread use of electric washing machines was indeed between 1900 and 1950.
Dishwashers also predate 1900. Significant aspects of their development happened between 1900 and 1950, but their widespread use was after 1950.
Indoor plumbing predated 1900 by millennia, but indeed the widespread period of adoption in the United States was between 1900 and 1950.
I do not know how to summarize the improvements in sanitation, but I would probably say that 1900 to 1950 did not feature particularly notable improvements in literacy.
Very fair points. Peter Thiel actually argues the slowdown began in 1970, which would capture nearly every technology mentioned.
The period of comparison is really the start of the industrial revolution until the 1970s. That roughly 100-year period was dramatic, and arguably, we are not matching its rate of progress right now outside of information technology.
Again, that's information technology. That's the one area that still moves as fast as things did back then, while the rest, I'm arguing, hasn't kept up (in terms of speed).
None of the above are new developments (from the last decade or so) -- except those that concern computers (which is in accordance with what the parent said).
We can consider them "successful new developments" or whatever else we want.
We just can't consider them important new discoveries in the sense that the creation of things like the steam engine, air travel, jet engines, etc. were (to stay with transportation).
If you wish, the former are basically incremental engineering/efficiency improvements upon the latter.
What the parent observed as lacking/slowing was non-incremental improvements.
Not sure about your bridges point - I see the bridges over the Forth most days which are from the 1880s, 1960s and the new one opening this year and they all look pretty different to me (cantilever, suspension & cable stay respectively).
Would something like the Millau Viaduct have been possible in the 1960s?
> Would something like the Millau Viaduct have been possible in the 1960s?
That's two questions in one.
The crossing of large spans was a solved problem in Roman times (pillars and arches).
To do it exactly like that did not require anything new in terms of materials, but there would have been quite a problem getting such a design approved without the computational homework done that it would work well in all weather conditions. Which in turn would likely lead to it being overdesigned.
Computers are excellent at such simulations leading to a higher degree of confidence in designs that have never been tried before or that push the envelope in some manner.
We can manipulate matter in a number of ways unimagined in 1970. For example: trapping atoms with lasers, artificial proteins, metamaterials, high vacuums, near-absolute-zero temperatures, high-energy plasma, photonic memory, microwave atomic control, ion traps, scanning tunneling microscopes, high-temperature high-tesla superconducting magnets, gene sequencing and splicing.
I think it depends where you look at technology and how closely...
Also look at additive manufacturing - we can make things that were literally impossible to fabricate five years ago with a 200k sintering setup. 4D printing is pretty cool too!
In your examples, consider a modern turbofan blade: a single crystal of titanium grown in a xenon atmosphere and then heated white hot before being blown up like a balloon so as to allow cooling during operation! Another mind-blowing sector is the change in medical scanning - CT, PET, MRI - all new since 1970.
I think it's also important to consider and analyze technological advances in the context of various periods in the historical development of humanity.
For much more than half of the 20th century, nations were at war with each other, the British Empire was declining (and there had to be a replacement fitting for the times), and so on. Man's taste for blood has not necessarily gone away with the 20th century; it has simply found expression in soft power and other psychological means of domination (e.g., Hollywood movies whose central theme is American Exceptionalism). This fresh awareness that actionable information is what may determine the fate of nations has also polarized technological progress in favour of advancements in understanding data and processing large volumes of it as efficiently as possible. To be fair, Claude Shannon and Alan Turing made outsized contributions to this space.
Just as there were Economic Consequences of the Peace (thanks, J.M. Keynes), I think we will also have to chronicle The Effects of Peacetime on Technological Progress.
A flying car would be nearly magical for someone from 1880.
But a smartphone to someone from 1950? It would seem pretty fancy, but not impossible to imagine. The guy from 1950 already knows the basic tech behind it - it has a screen, it uses wireless data, and is powered by computers.
I think you have that backwards. A century ago people imagined flying cars but not smartphones. (See Albert Robida.)
Now, we would find flying cars "nearly magical" (myself included, having been recently amazed by modern vertical takeoff fighter jets) but apparently aren't impressed by smartphones (myself not included, even before having recently learned about semiconductor fabrication techniques).
Robots and remote drones do things in the physical world, and in pretty much every area of life from medicine to scientific research to manufacturing to search and rescue to war. While there was some of that around in the 1960s, the actual use and effect on the way people work and live has been radically transformative.
Do you think this could have something to do with entrenched interests that have subverted disruptive technological progress? I have a feeling there have been at least a few cases of this preventing the public from enjoying the benefits of the "future living" that the 60s envisioned.
I used to think this way too. And actually I would get a little depressed, as a software engineer, because I worried that I wasn't doing much useful work only creating software all day.
But then one day I had a profound insight: For nearly all of human history, the ONLY agent of change has been software. That is, our bodies have remained exactly the same for the last 80,000 years and the only thing that's changed is our culture, knowledge, languages, etc. We've used those changes in software and information to create new physical tools but the only thing that's taken us from the beginning to today is slow, steady improvements in software and "information processing".
I actually think it's a high-leverage phenomenon that most of our innovation is now focused so much more on pure information. It's hard to see how this could fail to accelerate change.
It's entirely possible that progress is being made in areas not immediately visible to the public.
You see the horses replaced with cars - what do you see of the advances in sewage management, or plastics recycling?
Also, an electric car is still a car, but it's very different from a combustion-engine car, or from a coal-fired steam engine for that matter. Does the fact that a horse is so dissimilar to any of them mean an electric car and a petrol car are essentially the same thing? That seems a fairly superficial way to measure change.
There were many advances in chemistry, medicine and biology, materials and manufacturing, but not all of them translated into something that's truly transformative as far as society as a whole is concerned.
Agreed. The one thing I would add, though, is that while a lot of the technology we are using now has roots in the first half of the century, the second half has been big in terms of refining said technology and making it available and accessible to an increasingly higher percentage of the human population, across geographies and economic levels. I think that in itself is a big step forward.
That's the problem. Globalization is merely the copying of existing technology, but many in the west have mistaken it for real progress.
This is one reason why economic growth in the west is stagnant. Technological progress is what fuelled our strong growth half a century ago. Without it, politics becomes a zero sum game.
That's ridiculous. Everyone has a personal computer in their pocket and can communicate with anyone in seconds across the world. The world's libraries are at your fingertips by mail order or on your screen.
We may not be on the cusp of a singularity, but that's nothing to be ashamed of.
We should be proud we haven't seen a major world war in 70 years.
I think it all depends on which technology. They all seem to follow a sigmoid curve where progress accelerates and then tapers off. The fastest improvement may be say 1860 for sewers, 1960 for jet engines and now for computer tech.
Perhaps it was a greater achievement to get the first 10^4 people online than the next 10^9. More of the same-or-slightly-smaller.
I certainly got more delta-value out of pre-web Internet (email, Usenet, chat) and Web 1.0 (online papers and libraries, nearly-free publishing) than Web 2.0 onwards (Facebook, etc). Maybe it's not accelerating constantly.
I think you're really underselling things like 3G data plans which are now worldwide. That didn't exist 10 years ago.
Even just the widespread use of SMS has helped medical teams assess and treat diseases in remote areas.
When people complain about some lack of technological advancement, it sounds to me like they're just not aware of what we have accomplished. And I am not surprised that Thiel is in this category. He has a JD, not a PhD.
There's a big difference between technological advancement and engineering & standards development. Deploying a cell network is an engineering and logistical task, not a feat of invention.
Except this is talking about deploying 3G globally. Once the problem of a device moving from LA to SF to NY is solved, it's nothing new to add in Johannesburg and the whole of Swaziland. What's more, this problem was solved decades ago; it's not even a 3G problem.
No, I'm staggered by the wonderfulness of modern networks. It's just that taking the Internet to the masses is slightly less staggering to me than getting from POTS (analog telephone) to the Internet. And I do have some idea what is involved.
I'm in the camp that abstains from social media, but people are almost always going to find distractions, be it social media, movies, shows, games, novels, or consumables. In moderation, that's perfectly fine. Not everyone wants to work a crazy amount of hours or considers what they are doing from 9-5 fulfilling.
I actually think distraction from our IT devices is a major reason why things are slowing down.
Without our smartphones, spending hours tinkering would be how most people ended up spending their time. It sets one up much more so for a maker mindset over a consumer mindset.
>because I'm genuinely in agreement with Peter Thiel's argument that technological progress is actually decelerating relative to how it was moving 60 years ago
Is it possible we just covered most of the low-hanging fruit in the 1800s and 1900s, and now the only remaining discoveries that are really ground-breaking are the really difficult ones, hence the slower rate?
Excellent video. I highly recommend everyone working in or with IT to see it.
I think he is right.
PCs and smartphones are mostly recreating things from the past, and not the best things either. However, this isn't because we've invented everything there is to invent. It has more to do with how research in computing is currently funded and how companies approach invention and innovation.
You probably should watch that video. This is one of its core subjects. Kay makes a valuable distinction between invention and innovation, and what you describe falls into the "innovation" bucket. It's incremental. Yes, our cellphones are more powerful than the mainframes of old, but do we utilize them to the same extent?
With enough work one can convince oneself that each 'discovery' or 'invention' is a trivial follow-up of existing technology. DNA sequencing is in the post-laser era. Can you build a convincing argument as to why it's not new but the laser is new?
I've always thought of - and described - PCs as mini mainframes.
Even the software we run on them is mainframe derived - Unix and VMS.
The mobile/tablet is truly the first "personal computer", which we've barely tapped yet. I am hopeful, but at the moment we haven't gone much beyond what we could do with very efficient punch card sorting.
There's an (underrepresented) idea in the history of tech called a Kondratiev wave [0], which says that global economic changes happen in a series of predictable peaks and valleys. If the recent trends in computing really took off in the late 1990s, we should expect the wave to stagnate between 2015 and 2030, and recede into negative growth around 2030-2050.
I don't think this is foolproof, but it _is_ a tool and a perspective for analyzing history, and one that runs counter to the most common narrative of ever-increasing acceleration in technological development. The physics way of saying this is that velocity is always positive (we're always going up or staying near-neutral), but the acceleration of the current wave isn't spectacularly greater than that of earlier ones. If we really are talking about the rate of change of economic/tech growth (the rate of change of market size/capability), then we're talking about acceleration!
Anyway, this was a whole rant, but my personal take is that we tend to underestimate the amount of turbulence in history, leading us to believe that the past was a stable, peaceful and simpler time, while the present day is going nuts. This kind of thinking has fueled messianism, apocalyptic cults, futurism, dreams and nightmares for centuries, and will continue to if we don't blow ourselves up. We forget that even in places like Ancient Rome the rate of change in tech and human growth was still positive and rapid, that during the Dark Ages science and technology were still exploding in the Middle East, that the differences between 1650 and 1750 were vast in terms of human thought, that between 1850 and 1950 they were seemingly greater. There was no time of mythological stasis before the present day, and it's unlikely that there ever will be as long as we're still human.
It's possible, even plausible, but I think it's not so. My guess is that in the decades around 1900 America and western Europe were considerably more dynamic -- there was less overall resistance to innovation. To pick a tiny example, driver's licenses weren't required in all states until, iirc, the 1950s. Radio had decades of development before the FCC. Some doctors tested new treatments on themselves. Immigration from Europe to the U.S. was unlimited. I'm not promoting going back to those days -- I don't want to get into that argument -- but I'd like to reach better understanding of how much the growth slowdown since 1970ish is about the intrinsic difficulty of innovating vs. how much is about the way we collectively think and coordinate.
>Once you look beyond the advancements we have experienced in info tech and finance
And medicine, and materials, and semiconductors (this board in particular loves to forget that the computers we all use are built on top of something), and antennas; the list goes on. What was once highly theoretical PhD knowledge is now a byline in a junior-level microelectronics course, the cutting edge having evolved to be unrecognizable to a person 60 years ago. It took us 40 years to go from a computer that could play chess to a computer that could win; it took less than twenty to go from winning chess to winning Go (an exponential jump in complexity). In the same amount of time we've gone from mapping the human genome to editing it. Rechargeable battery technology has gone from powering a vacuum for 30 minutes to driving a car for 400 miles.
Never forget that exponential change looks linear on the small scale.
For instance, let's say you have a small circle with a small bump on it. The circle represents current technology, and the bump is a change in technology (i.e. innovation).
Now take a much larger circle that has many small bumps on it. The bumps look tiny in comparison, but in aggregate, they are much larger than the bump from the smaller circle. In fact, at some point, they will be bigger than the old circle, even though it just looks like an un-smoothed circle.
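To make "exponential looks linear on the small scale" concrete, here's a toy numerical sketch (plain Python; the growth rate is made up): over a short window an exponential curve is nearly indistinguishable from its tangent line, even though the long-run values diverge wildly.

```python
import math

# Exponential growth: value(t) = e^(r*t), with an arbitrary growth rate.
r = 0.05  # hypothetical 5% growth per year

def exponential(t):
    return math.exp(r * t)

def tangent_at_zero(t):
    # First-order Taylor approximation around t = 0: e^(r*t) ~= 1 + r*t
    return 1 + r * t

# Over a short horizon the two are nearly identical ("looks linear")...
for t in (1, 2, 5):
    exact, linear = exponential(t), tangent_at_zero(t)
    print(f"t={t:>3}: exact={exact:.3f} linear={linear:.3f} "
          f"relative error={abs(exact - linear) / exact:.1%}")

# ...but over a long horizon the exponential dwarfs the linear extrapolation.
for t in (50, 100):
    exact, linear = exponential(t), tangent_at_zero(t)
    print(f"t={t:>3}: exact={exact:.1f} linear={linear:.1f}")
```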
Further, 60 years ago we were coming out of wartime, which meant that many innovations were able to come to market in a short period of time (they had been held up due to wartime limitations like rationing, material shortages, etc.).
What about genetic engineering? We can create new organisms by directly modifying genes, in fact we frequently do for drug production, etc. We can sequence genes and the cost is falling faster than Moore's law. We can introduce stem cells with different genes into the body (including brain) to cure genetic diseases. We'll be able to tweak genes in human zygotes soon.
Does this compare favorably to internal combustion or indoor plumbing? Seems to me that it does.
And AI is a huge force multiplier in all of these abilities.
That said, for most of the arguments that modern-day thinkers talk about, accelerating technological progress is totally irrelevant. You're most likely thinking of Kurzweil's school of thought around a singularity, which is nowadays far from the "norm".
If you're talking about the arguments around technological unemployment, it doesn't matter whether tech is accelerating - we could, for example, over the course of the next 30 years, replace 50% of human jobs. This might be "slower" in some sense than previous tech changes, but it doesn't change the bottom line. (Note: these are made-up numbers to illustrate a point; I have no good guess at the real numbers, but lots of smart people, including economists, think it's a problem we will face soon-ish.)
If you're talking about the arguments around "unfriendly AI", then again, it is totally immaterial whether tech in general is accelerating. All you need to believe in order to worry is that eventually, at some point, we will be able to create a machine that thinks, and that it is not automatically guaranteed to share our values.
> You're most likely thinking of Kurzweil's school of thought around a singularity, which is nowadays far from the "norm".
I wish this were true, but it's gotten a new lease on life with Nick Bostrom's book and the various high-profile people who have been influenced by it (like Elon Musk and Bill Gates), so there's a bit of a boomlet around singularity speculation again.
But that's where I disagree with you. I think Bostrom et al's take on things is correct (at least potentially), and has nothing at all to do with the Kurzweilian norm. Bostrom's argument requires far fewer background assumptions.
Since we are on the topic, how do the big firms (Google, Facebook, etc) view Singularity University? Is it considered positive, neutral, or negative? Would appreciate inside perspectives.
I'm sure many AI experts at these places view it negatively. These are not monolithic organizations, but massive companies filled with real people with varying opinions.
Here is my annoyance with AI Hype: people seeking extra tailwinds pitch their startups as "ML companies" even with the most tangential usage of ML. It drowns out real ML companies. Most people cannot tell the difference.
At a hackathon recently, someone pitched their app as an "ML-driven app" though the only ML in there was some 1-line language translation feature they were consuming off Watson REST services for a tangential feature on their app.
Meanwhile, my submission actually used a self-trained CNN on a custom dataset using TensorFlow, with changes to the start/end layers of the network. The image classification features were the core of the app; it wasn't just a wrapper over an off-the-shelf API. We actually tried multiple networks and went through the trouble of parameterizing everything.
At the end, I wonder how many judges actually understood the difference in effort/value to the two attempts at ML.
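For context, here's a minimal sketch of the kind of transfer-learning setup described above, assuming a recent tf.keras workflow; the dataset path, image size, and class count are hypothetical stand-ins rather than details of the actual project.

```python
import tensorflow as tf

# Hypothetical dataset location and shape; swap in your own labeled images.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)

# Pretrained convolutional base with the original classifier head removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained base; only the new layers train

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # new input preprocessing
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(5, activation="softmax"),       # new output head; 5 classes assumed
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```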
Difficulty level 1: The only ML in there was some 1-line language translation feature they were consuming off Watson REST services for a tangential feature on their app
Difficulty level 2: Self-trained CNN on a custom dataset using TensorFlow and changes to start/end layers on the NN
Difficulty level 3: Custom-rolled 12-layer CNN trained with novel hand-labeled data.
How many people do you think know the difference between these, and the fact that each step is an order of magnitude or more harder than the last? That's not even getting into trying to apply research-grade stuff.
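For contrast, "difficulty level 1" is essentially a single HTTP call to someone else's hosted model. This is a hedged sketch only: the endpoint URL, key, and response shape below are placeholders, not any particular vendor's documented API.

```python
import requests

# Hypothetical REST endpoint and API key -- stand-ins for whatever hosted
# translation service an app like that might call; not a real documented URL.
TRANSLATE_URL = "https://api.example.com/v1/translate"
API_KEY = "..."

def translate(text, target_lang="es"):
    # The entire ML involvement is one HTTP call to someone else's model:
    # no data collection, no training, no tuning.
    resp = requests.post(
        TRANSLATE_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text, "target": target_lang},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["translation"]
```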
Why should people be able to tell the difference vs the outcome of those products, unless someone (investors?) is specifically just buying into the AI hype itself? (in which case, this is nothing different from when everything was "social")
I'm not saying people should be able to tell the difference, nor am I discounting the value of the final outcome. Just that pitching an app as ML-driven when it is barely 1% ML-driven is deceptive and misinforms the public.
Consider taking the side mirror from a Ferrari, putting it on a Ford Escort, and then pitching the Ford Escort as a race-car. The Ford Escort may be a great family car, but it is not a sports car. People who don't know the difference might come to think of it as a sports car, and might even question the wisdom of spending money on a Ferrari.
I get (and perhaps even agree with) the sentiment, logically at least, but realistically all marketing is deceptive. It's pragmatically more useful to understand the hows and whys of that and play along.
Most off-the-shelf ML tools are not that powerful - and they don't make a difference in how you differentiate as a business. If you have an ML team and know how to build models from scratch with novel data, then you can iterate on your model and expand, something you can't do if you don't own the model.
You should show your differentiation by differentiating. But in many cases the ML differentiation isn't the same as differentiating the value prop of the product.
I wasn't at the hackathon for the prize, but to find enthusiastic developers to recruit for employment, those that aren't usually found through traditional searches.
Effort doesn't have much relationship with value. Nor does the complexity of the core tech. People don't pay for implementation details; geek points don't mean prizes.
Effort where it maximizes return, leveraging existing assets where possible, to solve problems for real customers, is what's valuable.
Newly built tech is a rapidly depreciating asset; the depreciation comes not only from the progressing state of the art elsewhere, but also in maintenance costs and lack of network effects. Open sourcing to offset the depreciation is risky too.
Your focus on "effort" and "real" sounds a bit exclusionary, and a bit like a defense of your own job security.
The ML boom (specifically DL) is so interesting because of the applications: computers recognising images better than humans, for instance.
Surely outside of academia its a means to an end like anything else. If Team A wires together some high level Keras components with 30 lines of code, and does something really cool with it, is it less valuable?
If said team uses Keras as just a black box, I would argue that it is. Not having intimate knowledge of the complex algorithms you employ to build your services is not exactly an ideal situation to be in. If things don't work as expected, you can't just look it up on Stack Overflow, for instance.
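To make the "black box" point concrete, here's roughly what wiring together high-level components looks like, assuming a recent tf.keras: a working image classifier in a dozen lines, with no training, no custom data, and no visibility into the model's internals. The image filename is hypothetical.

```python
import numpy as np
import tensorflow as tf

# A complete classifier from off-the-shelf parts: a pretrained network used
# purely as a black box.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(path):
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    preds = model.predict(x)
    # Returns (class_id, label, probability) triples for the top guesses.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]

print(classify("cat.jpg"))  # hypothetical local image file
```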
Most of the people who have a strong machine learning or deep learning background are actually unaware of the large amount of existing research that applies to artificial general intelligence (AGI) because that group is just not mainstream.
But there are groups who are combining cutting edge neural network research with AGI and seriously trying to build general intelligence. Researchers who are working on narrow AI tasks and are familiar with this will continue to be incredulous right up to (and possibly beyond) the point where they see a system that seems intelligent to them.
Care to share links to those labs' websites/relevant papers? I'm a deep learning researcher, and I've been noticing this gap that you mentioned between the DL community and what everyone else in AI is doing, and think it might be worthwhile trying to bridge that gap.
Google for publications by Deep Mind, Ogma, Good AI, Open AI, Numenta, OpenCog, 'Towards Deep Developmental Learning', projects related to new DARPA program 'L2M'. Also search for projects labeled AGI.
I'm unfamiliar with most of those, but aren't Deep Mind and Open AI pretty standard sources of knowledge in the DL community these days? I say this as a practitioner who has read some things from both, and had the impression that while cutting-edge, they are both "traditional" DL-focused institutions at the moment, not so concerned with AGI.
Honestly I think trying to top down design general intelligence is a mistake. The only model of intelligence we have resulted from the gradual agglomeration and integration of narrow intelligences. Not only is it a demonstrated path, but at every point along the way it is useful. Nature has given us a great roadmap, all we have to do is follow it.
Nature does not always give the correct roadmap. Planes are not birds. Humans tried for a long time to mimic birds with flapping wings and failed, sometimes to their deaths. I think it may be the same with AI: trying to mimic human, or even animal, intelligence is the wrong path. We need to find "computer intelligence", which might be very different in its processes and uses.
But the border doesn't scale with window size, and I tend to keep my windows large on an already large monitor, so the relative effect was small and (subjectively) even pleasant. I can't get too mad at the design choice, but maybe it should scale in some way to window size.
A lot of start-ups are hyping themselves as deep-learning or something close to that in order to make themselves appear 'hot'. Recently I looked at a company that had absolutely nothing to do with deep-learning or even any kind of machine learning whatsoever and that still managed to sprinkle the various buzz-words with great regularity throughout their investor targeted docs.
What I don't get about this behavior is this: It won't work, and in fact is a net negative, so why do it in the first place?
Investing is a trust thing, if you break trust before you even get to talk to your potential investor there is no way you will raise money from them.
Sometimes it's just desperation. Sometimes it just actually works. You'd be amazed how much easier it is to pitch buzz word heavy software to non-technical people. Presumably most serious investors aren't that easily fooled, but surely some are. And some are simply cynically acknowledging that a company which exploits the zeitgeist for marketing purposes is more likely to succeed with its non-technical customer base.
What were the technical buzzwords that companies used to signal hotness in the industry before the ML hype train (startup or not)?
I know on the process & management side for large organizations, there's been a revolving door of Lean/Agile/6 Sigma/ISO 9000, each with their own set of certifications, colored belts, and army of advisors/consultants.
I like the idea of always being retrospective and asking what can be improved, but it's made me a bit cynical seeing management chase fads based on whatever a vendor tells them.
I tend to think this whole "hype marketing" thing has been going on for a very long time; I can't quantify how long.
I do know that if you look at various technology marketing around the decade after the release of Capek's RUR (1920) - almost everything that was remotely mechanical or automatic or electronic was referred to as a "robot".
So it has been going on at least that long, and I suspect longer.
Even in the midwest I've seen this a lot in the last year or two. A couple months ago I was suspicious about the CEO of a startup's claims of "machine learning", so I pressed him on it; turns out it was much less ambitious than his pitch made it sound. On the part of that CEO, he was probably making a smart pitch because most investors, _especially_ around here, aren't going to know enough to call this out, and are going to fully believe the company pitching has a magical ML-black-box giving them a significant edge over the competition.
IMO, the practical sorts, such as this author, who want everybody's vision for AI to be as narrow as their present-year work is, suffer from the reverse affliction to what they think futurists suffer from. I will call this affliction "rationality-signaling".
This is a very offensive epithet and I demand you rescind it immediately. Just because you have your own language does not mean you can make up hurtful words in English.
In German we have "Fachidiot", which is not pronounced like the Dutch 'vakidioot' at all, but many English speakers would do so nevertheless. You're free to rescind your own abomination of that word :D
Stop calling it "Machine Learning" and "Neural Networks" if you don't want the attention. Do we refer to code or computers as "machines"? That kind of language is just begging for Skynet references.
Any sort of elaborate processing network could be called "neural", or maybe even a fancy linked list. Something more reasonable, like "learning network", would keep the sensationalism down. I'm sure we can come up with better names...
I do, sometimes. I don't know where I picked up the habit, to be honest, but it might date back to the eighties or nineties. (I know I'm not the only one)
Fair enough, but take anything in computers and replace it with "machine" and it illustrates the point: RAM could be called "machine memory", or a network could be called a "machine communication matrix", or in the reverse, "Terminator 3: Rise of the Computers".
With the prevalence of VMs (virtual machines), I don't think it's a stretch to call physical computers "machines". Granted, I usually hear the term "computer" or "box" used instead. Besides that moot point, I completely agree with you.
I'm not sure which I find more annoying: the AI hypers, or the people who insist that machine learning is "basically just matrix multiplication". As a researcher, none of it makes a difference to me either way.
That's basically like saying that horses and buggies will just be tools in a larger toolset needed for space travel. What generalized AI will look like bears little or no resemblance to what we're doing today.
Computers don't really translate languages. They take a language sample in A and turn it into something close to B, and people then translate that something-close-to-B into actual language B.
I don't mean this as some abstract philosophical argument, but rather as an outgrowth of how they operate. Modern methods are much better than this, but even the simplest mechanical translation of each word in language A into one and only one word in language B lets people get some value. Improving without understanding, however, has some inherent limitations. Humans run into similar problems when they try to translate complex source material they don't understand.
Generally this is not a major problem, but it can be.
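As a toy illustration of that "simplest mechanical translation" baseline (the vocabulary here is entirely made up):

```python
# Each word in language A maps to exactly one word in language B.
a_to_b = {"the": "le", "cat": "chat", "sleeps": "dort"}

def mechanical_translate(sentence):
    # Unknown words pass through unchanged -- "something close to B" that a
    # human reader then has to finish translating in their head.
    return " ".join(a_to_b.get(word, word) for word in sentence.lower().split())

print(mechanical_translate("The cat sleeps soundly"))  # -> "le chat dort soundly"
```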
Author says (in comments) in support of Singularity being nonsense:
> Technology is not a quantity
Er, yeah, "technology" is a complicated thing, but it can be considered a quantity. Computing power grows, so does automation and material knowledge. Dismissing a useful concept as religion because it isn't mathematically rigorous isn't very objective.
The amount of glee at unemployment and disempowerment expressed during most AI discussions here is truly disturbing, especially given the nascent stage of things. The irony, set against the laments about H-1Bs and outsourcing, is stark.
It reveals not a genuine excitement about technology progress and a better world but a darker underbelly of a self obsessed and insular tech community composed seemingly of closet tinpots itching to climb up the food chain. How can this lead to any positive outcome?
Same old, only replaced by a new group. Technology can leap, but human mindsets remain stagnant around power and greed.
>see the recent Maureen Dowd piece on Elon Musk, Demis Hassabis, and AI Armaggedon in Vanity Fair for a masterclass in low-quality, opportunistic journalism
I finally read through most of the 8000+ words of it and it's not that bad - mostly a bunch of interview quotes from Musk, Kurzweil et al., and some of it is a bit sceptical, e.g.:
>When I mentioned to Andrew Ng that I was going to be talking to Kurzweil, he rolled his eyes. “Whenever I read Kurzweil’s Singularity, my eyes just naturally do that,” he said.