So after Tesla hired Jim Keller, a brilliant engineer, away from Apple (DEC Alpha 21164/21264, AMD K7/x86-64/HyperTransport, K8, K12/Zen, Apple A4/A5), the cards are on the table: they will play hard to win the AI race in cars.
Optimistically, Tesla could run more elaborate networks. AI being AI, that rarely equates to better decision-making, but it's still an option for the rare cases where it does.
Pragmatically, there may be no benefit to the quality of the decisions that the AI makes. It will simply drain less energy and make those decisions faster. Tesla could use the chip themselves to save time and energy on training the AI.
Both scenarios depend quite strongly on the quality of the CPU/TPU and don't strongly depend on determinism.
Computer vision algorithms are hardly ever non-deterministic. And hell, neural networks (which are the best simulation of the "human brain" we've got) are fully deterministic.
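The determinism point is easy to demonstrate. Here's a minimal sketch (illustrative only; the weights and inputs are made up) of a tiny fixed-weight network whose forward pass is a pure function of its inputs, so the same input always yields the same output:

```python
import math

def forward(x, w1, w2):
    # one hidden layer with tanh activation, then a linear output
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return sum(wo * h for wo, h in zip(w2, hidden))

w1 = [[0.5, -0.2], [0.1, 0.8]]  # hidden-layer weights (arbitrary)
w2 = [1.0, -1.5]                # output weights (arbitrary)
x = [0.3, 0.7]

# Run it twice: identical inputs and weights give identical outputs.
print(forward(x, w1, w2) == forward(x, w1, w2))  # True
```

No randomness is involved at inference time; non-determinism only creeps in through things like random initialization during training or non-deterministic parallel reductions on GPUs.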
Probably a whole lot; this guy is probably somewhere near the top of the Pareto distribution in any given team. I doubt his abilities are limited to general-purpose CPUs.
They also had Chris Lattner on board for a while, the guy responsible for LLVM and Swift. He quit/got fired after about half a year, though, citing irreconcilable differences with Musk as the main reason.
It is interesting that Tesla still maintains that cars sold today will eventually be self-driving. That implies the remaining problems are mostly software related. Meanwhile, they are still heavily investing in developing new replacement hardware. That continued investment is obviously the right move from a long-term business perspective. However, if I were a customer, it would have me questioning the capability of the current hardware and Tesla's commitment to it.
They will deliver some sort of slightly enhanced autopilot that will be able to drive autonomously in a few areas and require constant attention in all others, and announce a major victory.
If you think level 4 driving AI is just a software problem (particularly with a stripped-down sensor suite lacking lidar), and that all it will need to run is a year-old Nvidia Drive PX (or equivalent, whatever Tesla is putting into their cars), then you are very naive.
This could just be a strategic play by Tesla. I’ve heard NVIDIA’s hardware is extremely expensive. If Tesla can commoditize it by helping AMD, then it’ll help their bottom line. Interesting that this announcement stripped $5B off NVIDIA’s market cap, and added about $1B to AMD.
This makes little financial sense to me. At Tesla's scale, the cost to commoditize AI chips outweighs the gains from commodity AI chips, as they will not be dirt cheap. Tesla sells fewer than 100K cars every year. A decent datacenter with AI chips hosts tens of thousands of boxes.
By your own estimates, Tesla is effectively shipping 2-10 data centers' worth of AI hardware a year, with a budget of something like $500M ($5K of electronics in 100K cars).
Granted, AI is probably more of a $10-50M budget, but... Does it really not make sense to talk to chip houses if you're moving 100k units of special hardware at $50M/year for the foreseeable future, and looking to scale up?
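For what it's worth, the back-of-envelope numbers in this subthread check out. A quick sketch (all figures are the thread's estimates, not official Tesla data):

```python
# Back-of-envelope check of the figures in this subthread.
cars_per_year = 100_000
electronics_per_car = 5_000        # $ of electronics per car (thread's estimate)
total_electronics = cars_per_year * electronics_per_car
print(total_electronics)           # 500000000 -> the $500M figure above

ai_share_per_car = 500             # assume ~$500 of that is the AI chip itself
ai_budget = cars_per_year * ai_share_per_car
print(ai_budget)                   # 50000000 -> the $50M/year figure
```

The $500-per-car AI-chip share is an assumption chosen to land on the $50M figure quoted below; the point is just that both numbers are mutually consistent.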
$50M is a very reasonable budget for an AI chip. If they can take advantage of AMD engineers instead of hiring their own they can probably even get away with a smaller budget.
First, they don’t have to build the chips, they just need a credible threat that they’ll do so. Second, the NVIDIA kit is very expensive relative to the price of the car. That’s margin you’re leaving on the table by limiting yourself to one supplier.
Which doesn't mean they won't get there eventually. A project like this isn't going to happen overnight anyway. And the market seems to value Tesla as if they'll eventually sell multiple millions of cars per year, so it's not irrational to make business decisions with that kind of end state in mind.
The cost of batteries is dropping, and the cost of electric vehicles will follow, attracting more customers, which will make the market interesting to more manufacturers, which will in turn bring costs down (rinse, repeat). It's just a matter of time; if I had had enough resources, I would have invested in electric cars long before Tesla was born. Predicting their success is no different from saying in 2000 that one day all light bulbs would be replaced by LEDs and all film cameras by digital ones. Tesla is investing now to create its future success by being the first, or among the first, in that market.
Musk said that it would cost more to retrofit an AP1 Tesla to AP2 than it would to buy an AP2 Tesla outright. Perhaps they've designed the AP2 hardware with that in mind and down the line will offer hardware upgrades.
If self-driving turns out to need a new CPU/GPU/XYZPU to run the software, that's a pretty easy retrofit: take out one board and replace it with a new board of the same physical size. It's the sensors that can't be replaced, because they involve mounting and wiring in a lot of different places.
That's a good point, but given the current general trend I would expect a newer-gen board to be more power efficient. Of course, that's just speculation, since I don't have insider knowledge about this specific project.
Power per operation is certainly decreasing, though not as rapidly as in the old days. However, it is completely unclear how much compute is necessary for a sufficient level of cognition to allow L4 autonomy. Experimental cars (such as Waymo's or Uber's) have literally a data center in their trunk that draws many kilowatts of power. Whether Tesla can do it in at most a few hundred watts, without lidar, is _highly_ questionable.
I've seen working research cars with less compute in the trunk than a high-end gaming PC.
Multiple kW is more than normal car air conditioning can deal with; for scale, a resting human outputs about 100 W. Do these cars vent to the outside? Otherwise, if it's outputting multiple kW, your California-based research car is gonna get nasty very quickly.
My source is a chat with several engineers at companies I cannot disclose. I also used to work for a company that develops autonomous navigation solutions. If you have on the order of 10 lidars on your vehicle, plus the same number of cameras and other sensors, I can assure you a single Drive PX will not be enough for even basic processing of that data.
Also note that a high-end gaming computer can easily exceed 1 kW at peak, and if it is packed with, say, 4 GPUs it very quickly approaches 2 kW. The computers have custom cooling loops and vent outside; exact solutions vary between companies.
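To put that heat load in perspective, here's a rough comparison using the figures mentioned in this thread (all assumed, not measured):

```python
# Rough cabin heat-load comparison, in watts (figures from this thread).
resting_human = 100    # W, a resting passenger
gaming_pc_peak = 1_000 # W, high-end gaming PC at peak
four_gpu_rig = 2_000   # W, compute rig packed with ~4 GPUs

# A 2 kW compute rig dumps as much heat into the cabin as ~20 resting
# humans, which is why these cars need cooling that vents outside.
print(four_gpu_rig // resting_human)  # 20
```

Even the single gaming-PC figure is already ten passengers' worth of heat, well beyond what cabin air conditioning is sized for.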
And even with all that compute, nobody out there is anywhere close to level 4, let alone level 5. And Tesla promises to solve level 5 with a Drive PX, a few cameras, a radar (not a single lidar), and magical software. I'd really like to see this :)
Ford's in-house autonomous development team did a Medium post on what they were working with. They avoided hard numbers, but revealed a few interesting things:
>Of course, additional functions require additional power — a lot of it. A standard gas-powered car doesn’t have enough electrical power for an autonomous vehicle, so we’ve had to tap into Fusion Hybrid’s high-voltage battery pack by adding a second, independent power converter to help create two sources of power to maintain robustness.
Drivers communicate with each other all the time on narrow roads (non-verbally); I can't see current Tesla cars being able to do that without more hardware. I often wave a car on so it will move out of my way. A self-driving car could do that using a screen or some suite of LEDs, but Teslas don't have those in them yet.
Switching out motherboards is not a simple ask. There's a ton of other issues you'd have to test for on some non-trivial number of existing vehicles with the old-gen hardware. At some point you might get confident enough that you won't see heat, power, or signal issues to justify a mass replacement of the hardware, but it's not a quick-and-dirty swap.
It's entirely possible they've overspec'd the compute in current vehicles. Give yourself 10-20x the compute you think you'll need, because the cost relative to that of the vehicle is still small. Optimize in future generations.
I think you're grossly underestimating the cost of doing that to a fleet of vehicles sold with the promise of being self driving capable with the hardware package on board.
> Correction: GlobalFoundries CEO Sanjay Jha mentioned Tesla as an example of a company working with chip fabricators, but did not specifically say that it was a GlobalFoundries customer.
Think this bodes well for AMD’s datacenter aspirations with next gen cloud being heavily AI focused (and GPU)? Also what about Musk’s Neuralink?
https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs
I'm surprised that they went with AMD and not an ARM vendor. ARM is pushing into the ML space big time with its GPUs; there's even a Caffe fork, CaffeALC, for the ARM GPU. These GPUs are built for mobility and energy conservation, so they're somewhat of a natural fit for TSLA.