Or, the IoT market is massively over-hyped. Umm... smart cities, smart cars (used to be called embedded), but wearables... a $267bn market? No way. Security nightmare, drone stalkers, fridge attacks its owner, etc. (OK, exaggerating.)
There will be some cool stuff, sure. But IoT won't take over the world. I'll eat my hat in 2020 if I'm wrong, of course.
> IoT accounts for less than 5% of Intel’s sales. The group had revenues of $721 million last year
This isn't a surprise to anyone who's encountered any Intel IoT product. I think for most of us, the biggest question was why it took Intel so long to pull the plug.
Intel released products for IoT, then didn't update them forever (look at how many times they announced successors [0] [1] to the Quark), and then they wonder why no one was interested in building products with their solutions.
As an aside, this article is terribly written (grammatical errors everywhere, spelling mistakes like "msrket").
The I/O on the Quark was problematic as well [1]. Its SPI interfaces are basically the same speed as an 8-bit MCU's, which is a real problem given that SPI is commonly used to push a dozen or two megabits per second (the implementation is much, much easier than reaching for USB or PCIe, while plain serial is simply too slow).
[1] Disclaimer: this was when I looked at it around the time they were released. Perhaps it was addressed later.
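To put rough numbers on the kind of throughput SPI peripherals expect, here is a back-of-envelope sketch; the panel resolution, colour depth and frame rate are my own illustrative assumptions, not Quark or datasheet figures:

    # Rough SPI bandwidth needed to refresh a small TFT display.
    # Hypothetical panel parameters -- illustrative only.
    width, height = 240, 240       # pixels
    bits_per_pixel = 16            # RGB565
    frames_per_second = 20

    bits_per_second = width * height * bits_per_pixel * frames_per_second
    print(f"{bits_per_second / 1e6:.1f} Mbit/s")   # -> 18.4 Mbit/s, i.e. "a dozen or two"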
Yep, Intel had no idea what it was doing when it was designing its IoT product line. Normally, on its CPUs, Intel got to do whatever it wanted and everyone just dealt with it because it was Intel (and they were rather open to working with the industry on standardizing communication buses, slowly getting rid of all the bridges, etc.). However, on a microcontroller you have a lot more silicon than just the CPU and cache, so they had to go out and license a bunch of crap (and it usually is crap) peripherals that would never have met the standard for their consumer/server CPU product lines. Combine that with the reduced R&D budget and they couldn't even properly test half the interactions. The number of errata for even their simplest chip rivaled that of their new-generation processors when the tech drastically changed (Pentium -> Core 2 Duo -> i3/5/7).
What surprises me is that $721m seems like a lot for a product no one buys. I'm also surprised it's not worth employing 100 people bringing in $7m/yr/head. I would employ such people.
>Also surprised it's not worth employing 100 people bringing in $7m/yr/head.
$721m divided by 97 would only be applicable if the entire IoT group consisted of those 97 people, but it doesn't. I can see how the article makes it confusing and some might interpret it as if Intel is laying off all of its IoT employees. (I don't know how many Intel IoT employees there are in total, since Intel doesn't seem to have publicized that headcount.)
My guess is that since Intel recently discontinued x86-based IoT boards (Joule, Galileo, and Edison), many of the layoffs are directly connected to that.
Intel is still in the IoT game but pursuing other strategies to compete with the lower-power, more efficient ARM-based IoT market.
$721m revenue is not $721m profit. The division may have been operating at a loss. Big shipments may have been provided to retailers and booked as revenue, just waiting to return unsold or massively discounted.
How many useful die per wafer? Is there a ballpark yield for 14nm?
I too suspect this was not a profit-in-the-now division. They were probably playing a long-term game. I think they ceded the game to someone else .. but who?
> the biggest question was why it took Intel so long to pull the plug.
Everyone everywhere in business loves growth. If there's a new market that never existed before, or if you can take away a competitor's share, you win.
It makes sense for Intel and its competitors to invest in growth markets like IoT. But if its offerings don't meet the need or the R&D expense doesn't pay off after a few years, it makes sense to retreat.
In other words, innovator's dilemma, and Intel is right in the big fat middle of it. Big companies "don't have the patience" to develop "small markets" into big ones, even if the potential is great. If it takes many years to get there, then they won't be able to show their stockholders how much growth they've had from that particular industry.
Intel is going to face quite a few more innovator's dilemmas in the next few years, including in the notebook market, which I imagine Intel will eventually deem "unprofitable" because of AMD's and Qualcomm's aggressive offerings, and it will get out and run to its server profits.
Another major Intel "experiment", following the failure in the mobile market, bites the dust. Next-up, Intel's $16 billion FPGA bet? Machine learning chips seem to become more specialized, not less.
Designing, prototyping & manufacturing a machine-learning ASIC with decent performance costs $10m+, which is why FPGAs still have a large share of the low-volume and/or high-end market.
From what I heard at Intel's sites in Malaysia, IoT is used as a reason to open new departments or new turf for some second-class managers. No wonder it is missing the mark in what it is doing.
Intel's main problem is that in the age of commodity OSes, bytecode executable formats, bare-metal runtimes and SoCs that rival a Pentium-class processor, the processor doing the work is kind of irrelevant.
Sure someone still needs to write those AOT/JIT compiler backends and virtualization layers, but it has become a thin layer in the overall stack.
In practice that's just not true. There are a lot of manufactured products using $otherArchMachine, but good luck making a decent self-built video player using anything but x86 (Kodi on a Raspberry Pi is simply too slow; hell, even RetroPie cannot actually run N64 games on anything but the most modern Pi, whereas they're perfectly smooth on a 10-year-old x86 laptop).
Having a mini-ITX box for everything, including even NAS storage (I hate people calling this cloud storage), is so much better it's ridiculous.
I think there are two sectors: home and industrial. In the home sector there would be things such as smart air conditioning, smart central heating, and refrigerators. The industrial sector will probably have lots of use cases for small IoT devices with sensors.
I work for resin.io - we do IoT deployment & management, and the main customers reliably come from the industrial side. Building monitoring, robots in factories, product tracking and signage in retail, etc etc.
Personally, I think the consumer side is the noisy part, but not the part that I see changing the world most drastically. Making the world around you and businesses everywhere more automated, better controlled & more efficient is going to make more of a meaningful difference to the world than adding connectivity to your toothbrush.
Industrial IoT is quite a thriving industry. Things like machine monitoring and performance, sensor networks for process monitoring, and many other niche cases.
I work on an office-building IoT platform. We retrofit existing buildings with IoT sensor deployments so we can analyze occupancy, trigger cleaning based on usage, guide people to available workplaces, etc. This is a big trend in the building service provider business; they're all looking at IoT as a way of improving the QoS-to-cost ratio. Not much use for Intel or Cisco in that market though.
> decent self-built video player using anything but x86
The Pi chip was originally designed for set-top-box usage. Much of the die is video decoder, I believe. So if you have the right drivers and maybe an MPEG2 license it should be fast. https://www.raspberrypi.org/blog/new-video-features/
I'm afraid pjmlp is right, and it's baked into the business model - if you make a one-off payment for the thing you're not going to get updates because they want you to buy a new one. Manufacturers will keep doing this until they are forced not to.
Unfortunately, video players that are developed for specific acceleration platforms (i.e. some ARM board) suck. They're slow. They're buggy. They don't play half the formats you want them to play. Their interfaces are some buggy vendor-branded monstrosity.
Just a few years ago smart TVs, one example of such platforms, had overscan, some without the ability to disable it. (Making the screen look bigger by only showing the middle part of the video.)
I like mini-ITX systems and have a very nice mini-ITX HTPC myself, but it is not because I need an Intel CPU.
UHD blows out any dual-core CPU (maybe a quad can do it?), so dedicated hardware is an absolute necessity for that.
Even if you could decode video in software, the power efficiency is so much worse than using dedicated hardware that it is really impractical.
I am using the video decoder on an Nvidia GTX-1060 and I am paying a huge price for a lot of wasted hardware in order to get access to that. And I am forced to use Windows anyway since I can't get Kodi to sync the refresh rate properly on Linux and a couple hours of wasted time trying to fix that is the limit of my patience.
IMO fixed hardware based on ARM (which costs about $100 vs $1000+) is obviously the better solution for price/performance on video.
What I am paying for is the aesthetic (HDPLEX HD5 case) and the enjoyment of having something I built myself, not technical superiority.
Video decoding is sequential, so adding more cores is not going to help.
It might actually be easier for you to get a dual core which can decode everything you want. Any high-clocked desktop Core i3 should easily handle any reasonable bitrate 4k video in any codec on the market. If it doesn't then there's something very wrong with your video decoder software.
It all depends on how you define "reasonable" bitrate. I am running ~40mbit h265 videos. I have a Pentium G4400 which can only do a few frames per second and I have tested on an i5 also and it still drops frames on high complexity scenes.
More to the point, why would I want to burn 100 watts decoding this in software when I can do it in single digits with fixed function hardware?
[edit]
You may be talking about videos that can use Intel's own fixed function video decoding hardware, which will work just fine on an i3. I am talking about 10bit h265 which Intel Skylake didn't support when I was building my system. Kaby Lake supports it now so if you are building a new system Kaby Lake would be the way to go.
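If anyone wants to sanity-check whether fixed-function decoding is actually being used on Linux, here is a quick sketch with vainfo and mpv rather than the Kodi setup discussed above; the choice of hwdec backend (vaapi for Intel, nvdec for Nvidia) is an assumption about a typical setup:

    # List the codec profiles the VA-API driver exposes (from the libva-utils package)
    vainfo

    # Ask mpv to prefer hardware decoding; it logs whether hardware
    # decoding was actually enabled for the file being played.
    mpv --hwdec=vaapi video.mkv   # Intel iGPU via VA-API
    mpv --hwdec=nvdec video.mkv   # Nvidia via NVDEC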
Thought you were referring to h.264 and VP9/10 video, which are still a lot more common, even for UHD (in my experience). h.265 software decoders are still immature, in part because of the lack of content.
There is a big benefit if you depend on the Linux kernel. Generally you can use the latest long-term-support kernels on x86. For non-x86 devices you can very easily find yourself with a kernel that is over half a decade old, end-of-lifed, lacking support for recent hardware (e.g. wifi sticks), and with similar problems.
Strategically, selling off its ARM division was a big mistake; x86-based IoT obviously cannot make up for that and take off.
Intel has tried multiple times to crack the embedded market, so far without much success.
I'm now watching the Yocto Project, which Intel is the main force behind; it is a bad idea (over-engineered, to say the least) with a few good people running it.
It seems like people tend to say Yocto is a bad idea, but that's just their gut feeling (and mine as well) with no substance behind it except Yocto's complexity.
I have been trying to answer the same question for myself, and below is my personal experience as a learning-curve climber:
-- Yocto is complex without giving you much back in exchange. Pathnames are longer, directory names are counter-intuitive and do not emphasize the main task of a developer/integrator: working with the source code. Probably more bearable if one only checks out the whole tree, starts the build and goes to lunch... but then comes the next item:
-- Yocto is more resource demanding (again, personal, compared to some systems I used before): more memory, more CPU cores for the builder VM, more storage -- for the same build we did before.
-- Yocto fails its promise to decouple the delivery from the FOSS software: every once in a while, a public repo goes offline, and angry customers call back. One still has to deal with the public repos you pull into the build, no matter what the Yocto advocates told you -- might as well copy them into the tree already.
Buildroot suffers from some of the above as well (my BR experience dates from ~5 years ago, I am not a fan of it).
On the flip side, Yocto is probably cleaner than Buildroot for cross-building. It could also be easier for including new components in the build; I haven't tried that.
Over time, I worked out some personal indicators I use to gauge the building/versioning harness:
-- how many macros and environment variables the makefiles / scripts / recipes contain and refer to. The fewer, the better;
-- how easily I can navigate the source code: how long the pathnames are, how much typing it takes to go to another source file, how many additional pulls/checkouts/fetches/unpacks are required, etc.;
-- how "cross-clean" the build is: all required host-, target- and cross-tools need to live inside the same tree, and be built during the main build if necessary. Having a mandatory separate VM is usually an o_O sign.
FWIW, Buildroot now has the ability to build a complete cross-build environment (I've used ARM and PowerPC). This is a configuration option: Buildroot's own cross toolchain or an external cross-build environment. Buildroot does the Right Thing: all the required host-, target- and cross-tools live inside the same tree. This includes the option to build lib*.a (non-shared) cross libraries on the host, statically link against them for the target executable, and only install the target executable on the target (no lib*.a baggage carried onto the target file system).
I've created a couple of simple Buildroot packages and it was nearly trivial. The Buildroot preference is to use CMake for the package build support -- I went with that and was very happy with the level of effort required (low!).
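For reference, a minimal sketch of what such a package looks like with Buildroot's cmake-package infrastructure; the package name libfoo, version, and download URL are hypothetical placeholders:

    # package/libfoo/Config.in
    config BR2_PACKAGE_LIBFOO
            bool "libfoo"
            help
              Example CMake-based library (hypothetical).

    # package/libfoo/libfoo.mk
    LIBFOO_VERSION = 1.0
    LIBFOO_SITE = https://example.com/downloads
    LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
    LIBFOO_LICENSE = MIT
    LIBFOO_INSTALL_STAGING = YES

    $(eval $(cmake-package))

Add a source line for the new Config.in to package/Config.in, enable the package in menuconfig, and Buildroot drives the cross CMake build with its own toolchain.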
Yocto is not a bad idea in a vacuum. But Yocto doesn't have a good hardware platform for low cost IoT, so it is limited to a narrow category where Intel's mainstream CPUs can play. And to the slice of that market that won't go to Microsoft's IoT variant of Windows, nor to Android Things. I doubt there will be, for example, a Yocto-based car infotainment system.
Automotive Grade Linux is a Yocto based distro for exactly that. I even saw job postings on LinkedIn for "AGL experience!" After I dug in a little bit, I found that it was just built on top of Yocto/OE.
People in this thread are flaming Intel, but I suspect this is just the first of many major vendors/companies quietly dropping IoT (whilst proudly announcing their new AI/VR/TLA offerings!).
I guess TLA is supposed to expand to "three-letter acronym", meaning that it does not stand for any specific buzzword technology, but the whole buzzword-powered innovation cycle itself.
The IoT market is going to grow significantly and Intel drops it? Why not reboot it, after a "lessons learned" and some serious study of, e.g., the Beagleboard, Pi and Arduino ecosystems, to check what works (hint: price matters. Lots of easily driven 3.3v GPIOs matter. Battery life matters. Sometimes integrated connectivity matters, if it doesn't kill price or battery life).
Have they not learned from the Xscale fiasco that dropping a soon-to-be hugely important market is not the best strategic move?
And it's not like either of those predictions is/was particularly difficult to make.
They have the technical chops to do it right, their mainstream CPUs are ample proof of that, and this sector will get a lot bigger. Their top management either lacks the balls/vision to drive the idea to success, or is acutely aware that they would never, ever be profitable in a market where the end product is under $40 (sometimes under $5).
Why is it a given that Intel has to be there? It may be a market that they are in fact not geared toward. Just because there is a market doesn't mean they need to be there, doesn't mean they can make money, and doesn't necessarily mean their core market will go away.
Many times companies will try to get into new markets or channels as a hedge, just in case their cash cow goes tits up. I give them credit for seeing that it was right to bail.
Closing a $700m unit and laying people off is a pretty big admission of a mistake - they just see there's no viable course correction. Giving up is sometimes the best option.
Because their name stands for INTegrated ELectronics and they just failed at it.
> Closing a $700m unit and laying people off is a pretty big admission of a mistake - they just see there's no viable course correction. Giving up is sometimes the best option.
Agreed, not disputing that, just trying to analyze the reasons behind the failure.
How many times have we seen this: an exec runs with an idea, but it doesn't pan out fast enough. Other execs then have a stick they can use to embiggen themselves at the expense of their colleague. The end result is that the idea itself is now tainted and no one will touch it.
Intel rode the x86 horse too long and the horse is now tired. It has been tired for a while, but there were deep moats around Intel. Manufacturing technology was one, but others have been catching up as mobile provided the market and cash for sustained investments. Wintel was another, but Microsoft and Apple are both looking at ARM. On the server side, ARM, graphics chips and FPGAs are taking away business and are threatening to take even more.
Intel seems to know they need to break into new business. But doing that with the mindset and within the constraints of a quasi monopolist is very, very hard. Building something new requires not just resources but calendar time, commitment and freedom not just on the technical but also on the business model side. Focusing on your core strength is an excellent mantra until one is in a corner and by then it is hard to think different.
Intel's entire business is based on high margins. Trying to take that orientation and apply it to low-margin markets like mobile and IoT has been a complete failure for them.
I don't think they actually care (yet) because a few billion is peanuts to them as long as they have their high-margin business, but the wolves are at the door.
AMD is actually much better positioned because they have gone through a very painful restructuring and reorientation toward a business model that works with low-margins and they now seem to be coming out the other side.
Intel will eventually have to go through the same restructuring and it will be even more painful for them because they have a lot further to fall.
The only other high margin semiconductor business I see as a competitor to them is not AMD, it's Nvidia.
GPU technology from Nvidia is continuing to unlock new capabilities that make upgrading a CPU less important, and as more code becomes GPU-dependent it'll matter less what CPU architecture you are running.
That opens the door for ARM to enter the server market.
Ja, even now Nvidia could simply put an average general-purpose core into a GPU to run Windows and release something like libGPU to let devs access computation on the GPU directly.
I believe that they are acutely aware of that, but at the moment they have chosen "not to slaughter the hen that lays the golden eggs".
The hilarious thing is, Intel probably still has enough top talent to compete in an open ISA market, and they still have some edge in process technology. Then again, AMD can probably beat them in the medium to long run.
If they want to tend to these markets, they are going to have to give up on being the single source.
IMHO, the market is evolving toward tinkering. Many Arduinos and Raspberry Pis have been sold. Many people are buying 3D printers. Many Android phones have been produced at small scale. We love the idea that we can build (or fix) stuff ourselves.
This would mean an open market with fierce competition (and low margins).
I hope that this is the future, but Intel, like many chip makers, is fighting against it by blocking documentation of their products (using NDAs and/or huge prices).
I love that stuff, but I have a hard time seeing it move beyond early adopters. I just don't think that more than 5-10% of the population wants to mess around with the guts of their tech.
Some versions of those things may go mainstream, but only after they get to the point where people don't have to do the technical and creative work (e.g., click a button and your 3D printer makes the new iPhone case you want, so you don't have to wait for it to be delivered).
> I just don't think that more than 5-10% of the population wants to mess around with the guts of their tech.
Maybe 1%-2%.
Most "Internet of Things" devices just don't do very much. They lack any actuators.
There are lots of things that can be done to make heating, ventilation, and air conditioning more efficient and comfortable. But the good stuff requires installation. Mechanisms to open and close windows. Dampers. Sensors for humidity, CO2, CO, temperature, rain, and fire. Wired connections to everything. (Battery replacement is unacceptable in commercial environments.) All this is available from major HVAC vendors. Usually for too much money. (Window openers seem to start at $500 per window, so nobody buys them.)
A few years ago I went to an IoT meetup in SF, in the Dogpatch area. This was in an old industrial building. Skylights overhead had openable windows on manual chain falls. Outside windows overlooking the bay were openable. There were overhead fans and a standard HVAC system.
None of this was powered and controlled. So the place overheated and became uncomfortable with too many people inside.
In a space like that, as more people come in and the CO2 level goes up, the overhead windows and side windows should open and the overhead fans should go into reverse to bring down the CO2 level. Then the side windows should be adjusted to maintain temperature. As evening comes and the outside temperature drops, the side windows should close more. At some point, heat may be required. As people start to leave, the CO2 level will drop, the overhead windows start to close and the fan speed drops. When everyone leaves, and the motion detectors see no movement, everything closes up and the temperature is allowed to drop to 60F or so for the night.
Some hotels have systems like that in function rooms. Any space with a widely variable people load, such as a classroom or conference room, should be equipped with CO2, humidity, and temperature sensors connected to actuators which can do something about it.
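Here is a rough sketch of the control loop described above, in Python; every threshold, device name and the sensor/actuator API are invented for illustration -- a real building-management system would be far more involved:

    # Hypothetical sensor/actuator objects; thresholds are illustrative only.
    CO2_HIGH_PPM = 1000
    TEMP_SETPOINT_F = 72
    NIGHT_SETBACK_F = 60

    def control_step(sensors, windows, fans, heating):
        # One pass of the naive control loop sketched above.
        if not sensors.motion_detected():
            # Everyone has left: close up and let the space cool for the night.
            windows.close_all()
            fans.stop()
            heating.set_target(NIGHT_SETBACK_F)
            return

        if sensors.co2_ppm() > CO2_HIGH_PPM:
            # Crowded room: flush with outside air.
            windows.open_overhead()
            windows.open_side()
            fans.reverse()
        else:
            windows.close_overhead()
            fans.run_normal()

        # Trim the side windows / heating to hold temperature as it cools outside.
        if sensors.indoor_temp_f() < TEMP_SETPOINT_F:
            windows.trim_side(open_fraction=0.2)
            heating.set_target(TEMP_SETPOINT_F)
        else:
            windows.trim_side(open_fraction=0.8)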
It's been said already, but these products were destroyed by support-forum-as-documentation. You can get a light blinking on an Arduino in five minutes. On the Edison, it took all my free time for a couple days to even figure out what pins were what. During that time, the excitement of building something more complicated on it evaporated. We are programmers, not documentation detectives on a scavenger hunt. I hope anyone who threw a bureaucratic monkey wrench into a proper documentation hub uses their unemployment to reflect.
I completely agree. Unfortunately, so many semiconductor vendors are going this way (for ICs) as well.
For example, there are so many parts on the Texas Instruments web site for which you cannot get any support unless you are a high-volume customer. They even turned some of their product forums read-only. You can't even post a public question anymore.
Similarly, Xilinx had an official support channel (Webcase). A few years ago, they made the decision to only keep that support channel open to their Tier 1 (or whatever they call them) customers. We can no longer ask for direct support from Xilinx with the money we spend on their chips (~100K USD worth of FPGAs) every year.
Not sure what these companies are thinking. They might not realize, but some small companies get big. Some of the young engineers of today will be the CEOs of tomorrow. The people who are getting shitty support from Xilinx and Qualcomm right now are likely not to want to do business with them later. Furthermore, some big companies actually do listen to what the engineers have to say. IMO, big tech companies are failing to understand why you should try to have the technical people be on your side.
They are late to the party and their stuff is too complicated and consumes too much power. Also, BLE devices with an 8051 (a CPU from 1980) won the game.
Also, despite the hype, IoT was not such a big thing (except for the hobbyist market, where people (me included) get quite excited about building a temperature logger).
> I'm not sure what AMP is, but it's missed its mark.
AMP pages load super fast for me on mobile, so I'm not sure how you can say that if you're just basing it on not understanding the design choices behind AMP.
It would have been nice if the author had investigated why Google is recommending so many JavaScript libraries that seem so large, instead of just assuming it was a very bad idea. For one, AMP loads the JavaScript in a non-blocking manner, so the page appears to load fast, and if you've visited an AMP page before, the JavaScript will be cached (unless there are so many different versions...?).
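For context, the non-blocking loading comes from the AMP runtime and each AMP component being pulled in with the async attribute, so HTML parsing isn't held up while the scripts download; roughly like this (amp-carousel is just one example component):

    <!-- AMP runtime and components load asynchronously -->
    <script async src="https://cdn.ampproject.org/v0.js"></script>
    <script async custom-element="amp-carousel"
            src="https://cdn.ampproject.org/v0/amp-carousel-0.1.js"></script>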