"It's all fairly arbitrary stuff, and in the end, the point is to present something cool running on the hardware and within the nominal restrictions, even if you get tricky to do so."
Another good, detailed perspective on it. Appreciate it. I'll especially agree with the part I quoted. :)
I think by limiting the simulator to an obviously non-realistic output, they avoid the uncanny valley and any unrealistic judgements about how their operating system performs on lower-specced hardware.
Specific to the software discussed here: it's a bit odd to criticize software that is obviously not intended to run on MCU-based end devices, but rather on a more powerful gateway. (That said, there is overall too much focus on that part compared to the bits that run closer to the hardware.)
This is also what this article is actually about. It just takes a more "hermeneutic" perspective rather than a bottom-up approach based on an analysis of the hardware.
> elbow grease and ingenuity can remove theoretical hardware limitations
Of course, 'theoretical' hardware limitations aren't hard limits -- they're soft[ware] ones. However, if you find yourself battling concrete hardware limitations, what you need is to put your back into it and apply a little know-how.
Disregarding standards is not uncommon when you're actually running a high-performance system that faces attacks. Plenty of stuff goes into specs that just doesn't make sense in operation. Their explanation of why they're disabling ANY seems perfectly reasonable.
And there's nothing wrong with that. I think programming is fun, and hardware design can be too. However, if you're a company trying to sell a product based on specific claims, you should probably do more to convince potential users.
Indeed, architecturally it's just asking for trouble. However, it also lets users extend the system in ways that weren't previously planned for. Pros and cons...