I decided I could fix my own cars when I “realized” that ‘if someone else can do it, I can do it’ as well. Turns out I had to learn a lot, but I learned it.
Fixing cars is of course learnable; we all know that mechanics exist. But computing hardware is usually built by really complicated machines. I honestly don't know whether it's possible for a person to make a CPU.
Click on Nand2Tetris in the link above. You'll find out a good bit about it. Modern CPUs are enormously complicated. Machines from earlier eras tend to be of only middling complexity.
I think that Digilent FPGA board is discontinued, so it's a bit more work to get it running on another board. You will have to modify the UCF file to match the pinout of the other board. In particular, you need a board with SRAM, but a lot of newer boards have DDR memory instead.
I suppose it depends on how you define "from scratch". Pretty much anyone can build a desktop PC from parts in a single day. That's what would make the most sense in this context as well, probably. Target a simple CPU design with a simple instruction set that you can understand.
If you want to go further from scratch... Designing a motherboard wouldn't be too hard, though it probably wouldn't be very good or reliable. Same with a power supply. Designing my own CPU is something I could probably do with several months' worth of effort, as a CompE. But actually making a physical chip of it is another matter entirely. It would be expensive and you'd have to pay a manufacturer to do it. Physically making your own CPU at home is pretty much out of the question, but you could use an FPGA. I have no idea how to go about making RAM or hard drives "from scratch". Same for monitors.
From "scratch," as in raw materials, absolutely not. Even the most dedicated home-fabrication hobbyists ( http://sam.zeloof.xyz/first-ic/ ) have come nowhere near achieving the transistor density required to construct the "OberonStation's" CPU or memory, whether taped out (laid into a chip design using actual silicon gates) or built using a programmable gate array.
From a circuit board etched by someone else and PCB components purchased online and soldered together by a hobbyist, certainly, without too much difficulty (the package used by the programmable gate array requires a bit of specialty equipment, but is within reach).
One other point about Oberon is that I _believe_ that because the design predates open-source FPGA toolchains, a closed toolchain is still required to convert the Verilog (high-level design describing the processor logic) into a netlist (FPGA configuration), and it hasn't been taped out into a chip, either. So, the book covers the design of a computer end-to-middle - that is, without covering how the block / HDL level design becomes logic gates, and without covering how said gates are then manufactured. This is probably fine - the scope has to end somewhere. The linked NAND-to-Tetris articles cover the other end - from logic gates up to typical computing constructs (adder, registers, shifters, etc.).
It is very weird... this code is full of duplicative, hard-coded constants, something I would avoid in any software I write.
For example, take a look at the display module [0] -- this 190-line program contains over 20 instances of the constant "128", in lines like these:
sa := base + u0*4 + sy*128; da := base + v0*4 + dy*128;
What does this mean? See, there is only one display supported by this OS -- a 1024 x 768 monochrome one. Since it is monochrome, you only need one bit per pixel, and you can pack 8 of them into a byte. If the pixels are sent out serially and lines follow each other in the buffer, then the start of each line is (1024 / 8) = 128 bytes apart.
But instead of writing something like "const LineStride = Width / 8", the code just hardcodes the value all over the place.
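A minimal sketch of that refactor (Python for brevity; the names below are invented for illustration, not taken from the actual Display module):

  # Hypothetical constants -- not the real Oberon code.
  WIDTH_PX = 1024                             # display width in pixels
  BITS_PER_PX = 1                             # monochrome: one bit per pixel
  LINE_STRIDE = WIDTH_PX * BITS_PER_PX // 8   # 1024 / 8 = 128 bytes per scan line

  def line_start(base, y):
      # Byte address of the first pixel of scan line y in the frame buffer.
      return base + y * LINE_STRIDE

If the OS ever targeted a different resolution, only WIDTH_PX would change, and the stride would follow automatically instead of someone hunting down every hard-coded 128.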
Exactly. Recently, when Wirth couldn't find an OS with a device driver for his favourite mouse, he built his own hw + OS + userland combination so he could keep using the mouse he got as a souvenir of his Xerox sabbatical. (I believe the monitor and keyboard were off-the-shelf.)
He's not primarily a programmer, but a computer engineer. It wasn't his first computer system, and may not even be his last. The mouse stays the same size, but the computers themselves have shrunk dramatically over the years. Maybe when he turns 90 he'll ship-in-a-bottle build a computer inside his favourite mouse?
It wasn't designed for any other display. You can pretty much treat 128 as a symbol and leave it at that. Until you have more than two of those 190-line programs that target different displays, there's no point in refactoring it. -- my guess
Sure, but that’s an extremely shortsighted assumption. Engineering projects change scope over time. Maybe that particular arbitrary decision won’t change, but there must be 1,000 other decisions, some of which might be revisited at some point. There’s no need to artificially handicap yourself for zero benefit in return.
Never mind that you’re introducing the possibility for bugs: the compiler would catch if you type the wrong symbol name, but not if you type 129 accidentally.
And anyone reading or maintaining that code now has several additional needless WTFs they’ll encounter in the course of making sense of it.
There are two aspects here. First, "wasn't designed for" and "the future" usually mean changes will happen.
But more importantly, by giving it a name the code is more self-documenting. New and old programmers don't have to figure out why 128, or if 128 is right or not in this bit of code.
I often wonder: what if we decided to build a completely new, well-thought-out computing system (including the network) that is totally separate from any legacy system and 100% backwards incompatible? Not a bit less -- 100% backwards incompatible, with everything fresh. There would be no JPEG support (it would have its own image format), no standard TCP/IP stack -- a new protocol and network infrastructure, new display format, new IO ports, etc.
Why am I imagining this? I feel like computing systems, as they've evolved, have been built in layers upon layers, and just like biological evolution, it is impossible (or at least very difficult) to undo, because the marginal cost of unwinding and breaking backwards compatibility is too high. We get stuck in local maxima at each layer, and the energy required to escape would be too large.
I am not saying backwards compatibility is bad, far from it. I am saying that if we forced ourselves to cut the cord and build computing systems completely from scratch, I feel like we would have the hindsight to design it better, faster, and cheaper.
I think it is possible to build a better system from scratch, and there are likely space aliens that have such better systems.
But ultimately a computing system should do what its users want, and the market has favored backwards-compatibility for a long time. If anything, it's a little alarming how old software and hardware can quickly become useless or obsolete. For example, any web browser that doesn't support any encryption better than, say, SSLv3, is cut off from a significant fraction of the Internet.
> I think it is possible to build a better system from scratch, and there are likely space aliens that have such better systems.
I respectfully disagree. Alien systems can be thwarted by inverting some bits. [0] I would not have believed this had I not seen it for myself. [1] After all, seeing is believing. True, the alien code seems very insecure. Perhaps aliens write their code based on the historical documents. [2]
I think you underestimate how many smart people preceded you. Yes, in some cases there are new technologies that transform how we build things, and using those from the start might put you in a better place. But in between those inventions is a lot of hard work by really smart people that would be incredibly hard to match, let alone exceed.
To build a system, you would have to start at the bottom layers and then build up. To think you could design any of those layers (protocols, languages, tools, etc.) without making mistakes is naive and I think you would end up with a system that is layered just like current systems, but each layer being much less thought out. Yes, that is even with complete hindsight. I have watched many experienced teams fail on gen 2 of systems.
Don't get me wrong, there are many things I dislike in computers, starting with autotools :) But things like autotools are a lot easier to replace in Hacker News comments than in real life.
I had no intention of belittling the complexity, monumental effort and almost unimaginable amount of man-hours that would go into this -- it was purely a thought experiment.
Do you really think that it would be less thought out if we had lots of resources and smart people? I feel optimistic that we could bake in the lessons from the past and improve on the existing layers/stack/protocols.
I think the problem is that, even if you end up getting a proof of concept that attracts others, and they end up being 100x as productive as people using the Old Way, you will still be outpaced because the Old Way has 100,000x as many people working on it as there are on the New Way.
The odds are stacked against any specific New Way taking hold.
But we can also invert our perspective and ask: will there be a dramatic shift to a New Way in the next half century? Since 1970 there have been at least 3 major transitions in how people work with technology; it's easy to imagine more will come before 2070.
It's a good thought experiment, for sure. Do you have ideas of different directions for something fundamental? Outside the design of the computer system, I'm thinking about the problems we currently face with identity and trust. Maybe that's something that gets baked into a ground up reimplementation of TCP/IP.
I have. I think about UI a lot. I would love to see flat UI, not "flat" in the aesthetic sense, but flat in the hierarchical sense. Like Photoshop menus, fighter jet cockpits, control panels and 90's Porsche dashboards -- give me a dedicated button exposed up front, laid out logically in space: a toolbar group for all things audio, another one for display, and yet another one for devices. Of course, when the number of items is dynamic, dropdowns or another popup window are fine. No icons. Only text (sometimes abbreviated, like CTRL PANEL).
IMO hierarchical deep menus are logical, but they take up precious time when you need to do that action 80 times a day. The same benefits can be obtained by a well-designed layout (with borders, background colors and proper labels).
Most people would find that daunting. I think hiding stuff behind menus is not a good idea, even though on the surface it might feel like it makes things less complex. In fact, the complexity is the same; only the visual clutter is reduced. I argue that we're really good at dealing with visual clutter -- my professor's desk is a good example of that.
Yes, it's the classic "Should we do a complete rewrite?" question blown up to an epic scale. The answer is usually "no" or "rewrite one layer", but there are times when things aren't modular enough or the fundamental architecture is lacking.
There's a lot of scope to simplify the developer stack. The web was never meant to be used for apps and it shows, badly.
For the operating system layers: if you build a machine that doesn't expose an ISA, in which all code runs via a polyglot VM like GraalVM/JVM, then you can more easily design a competitive CPU, because suddenly you can use flat, direct physical address space mappings with an ISA that can throw out old instructions whenever you need to. No need for an MMU, TLBs and all the other page-table-walking logic, and IPC can be drastically simplified.
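A toy sketch of what "drastically simplified IPC" could mean when everything shares one address space and isolation comes from the VM/language rather than an MMU (all names are hypothetical; Python used only for illustration):

  from dataclasses import dataclass
  from queue import Queue

  @dataclass(frozen=True)
  class Message:                  # immutable, so handing out a reference is safe
      sender: str
      payload: bytes

  class Mailbox:
      # With no separate address spaces, "IPC" is just handing over a reference:
      # no copy into another page table, no TLB flush, no kernel transition.
      def __init__(self):
          self._q = Queue()
      def send(self, msg):
          self._q.put(msg)        # enqueue the reference itself
      def receive(self):
          return self._q.get()

  box = Mailbox()
  box.send(Message(sender="editor", payload=b"save-file"))
  print(box.receive().payload)    # b'save-file'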
Singularity / Midori was the most complete exploration of this concept, by Microsoft Research. Unfortunately they kind of went down a few bad R&D paths, in particular, by trying to match the performance of C++ code and relying too much on formal methods - i.e. all stuff interesting to implementors - instead of researching things interesting to developers.
That's why the right way to re-invent computing is top down, not bottom up. Take an existing Linux distro and start to replace bits one step at a time, starting with the desktop environment, the app and data models, network protocols etc. But all the drivers and filesystem goop remains the same - limited potential for innovation there. That said, if you want to, these days you can reimplement drivers in userspace using whatever framework you like, reducing Linux to basically a fat microkernel.
I think it is a nice thought experiment, but the Oberon project itself shows that even a few smart people can't really be expected to create useful solutions. Have you seen the filesystem? It's been a really long time since I looked at it, but as far as I recall you had to create a partition for each directory, essentially forcing you to anticipate the size of each directory you create. So yes, it's a nice system to run in a virtual machine, and you can learn a lot about how an OS is built. As for useful... not really.
If we had lots and lots of really smart people working on a new version of everything, though, my guess is that we would end up with pretty much the same stuff we have today. As another commenter said, a lot of software projects doing next-gen rewrites from scratch fail. The reason is simple: they focus on solving pain points and forget about the areas that were neatly addressed by the previous architecture.
The best approach IMO is to address the pain points by refactoring them properly into the architecture. That is often very complex on its own, but still has the best chance of succeeding.
One of the overriding themes which I attempt to convey in my teaching, blog, etc. is that the modern technology landscape is something we arrived at largely by chance. Nearly every decision made in the development of computing had a wide variety of alternatives at the time, but once things become ossified as a norm it is difficult to imagine another way. For example, I have recently mentioned a few times how hard it is now to see TCP/IP as anything but the "obvious design," yet in the early '90s there was healthy competition among network protocols offering various feature sets.
I think it is extremely important, from a teaching and engineering perspective, to view the modern computing environment not as some elegant design (which it most certainly is not) but instead as a concretion of "best option at the time" solutions to real problems.
It is a bit intoxicating to think about how things could be if computing really were approached as a single, monolithic, elegant design. But essentially every attempt to do so has failed. Much like how we think of computing as layers of abstraction, the computing industry is functionally constructed of layers of abstraction, and so it is difficult to really change anything at the lower levels.
On one hand, it’s fun to imagine what a truly full-stack rewrite of the internet would allow us to do, and what it would look like.
On the other hand, the current internet feels like a good approximation of the best trade offs for performance vs. usability in most areas, and I doubt a full rewrite would fundamentally change much about how we use the internet.
An Internet with a payment system built in from the ground up might be one that did not end up throwing its hands in the air and saying “I guess everything will be funded by ads, I dunno”.
An Internet where “keep engagement high so we can serve shitloads of ads” was not the default path to financial success would be an Internet without social media that happily pushes people into ever-more controversial positions, because that’s what keeps them around longer and lets you serve more ads. Decide for yourself what social changes that would create.
I don’t think this is really true. We’ve had the technological capability of micropayments for many years (since PayPal probably?), so it’s long been possible to charge for access to content on the web. If that were a model consumers preferred for text, it would be winning. Instead, most consumers will bounce from paywalled content if they can’t figure out a way in for free.
In areas like music and video streaming, paid services are actually popular, because they have a different cost structure and more consumers find it worthwhile to pay to access better content with no ads.
The problem is that most paywalled content wants to charge 50-100x what it would otherwise earn from ads. A typical newspaper would make maybe 10 cents off my visit if I didn't have my ad blocker, but somehow if I want to pay I need a $10/month subscription even if I only intend to read a handful of articles.
There's nothing preventing a newspaper from charging $10 once to pre-pay your account, with each visit debiting the equivalent of their ad earnings for that page view. It would negate the impact of payment processing fees while still allowing most people to read in a cost-effective way.
The problem is indeed greed, but I'm not sure the banks are the ones who are greedy here.
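A toy model of the pre-pay scheme described above, with made-up numbers (cents used to keep the arithmetic exact):

  TOP_UP_CENTS = 1000      # $10 paid up front
  AD_VALUE_CENTS = 10      # roughly what an ad view would have earned

  class PrepaidReader:
      def __init__(self, top_up=TOP_UP_CENTS):
          self.balance = top_up
      def read_article(self):
          # Debit the ad-equivalent value; refuse once the balance runs out.
          if self.balance < AD_VALUE_CENTS:
              return False
          self.balance -= AD_VALUE_CENTS
          return True

  reader = PrepaidReader()
  views = 0
  while reader.read_article():
      views += 1
  print(views)             # 100 page views from a single $10 top-up

The single up-front charge also amortizes the payment-processing fee over all of those views instead of incurring it on every article.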
> There's nothing preventing a newspaper from charging $10 once to pre-pay your account, with each visit debiting the equivalent of their ad earnings for that page view.
Sure there is: they'd lose an enormous amount of money that way, because the small segment of users who would pay for the average page view are a large portion of the users most valuable to advertisers.
If they charged enough to make up for the loss of the demographic willing to pay that price, it would be an even smaller group paying very high prices -- and even higher prices once you charged extra to recover the costs of developing and maintaining the separate payment and customer support system for that small customer base.
Lots of people who are in demographics more valuable than average see stats about the average a site makes per view (or per MAU, or whatever) and say, "I'd be willing to pay more than that."
What they don't realize is that they are worth more than that as advertising targets, and the people that bring the average down to what it is aren't willing to pay the average.
In this case the solution is regulation that makes targeted advertising unprofitable (like the GDPR), so that advertising is out of the question and payment is the only way to fund the website. Since payment is the only option, there is no longer a way to price-discriminate based on willingness to pay, and they need to keep prices down to stay competitive. Everyone wins.
Since before. PayPal's forerunners were founded in '98 and '99. I used micropayments in '93-'94. PayPal was notable for being one of the first new payment systems to get market penetration, but at the time at least it was not considered micropayments at all, as all of the micropayment platforms of the time (and there were many) were looking at providing the capability of paying amounts down to a cent or fractions of a cent.
Recently I ordered something online using an Apple Pay button; there were no address or payment forms to fill out, just a payment to confirm.
That was extremely low friction, but would still be too bothersome for a micropayment. If any such model wants to have success, it will need to be completely frictionless, with absolutely no interaction required. And it would need to work automatically everywhere, e.g. built into the ad services themselves. There are attempts (e.g. https://contributor.google.com/v/beta) but that's not quite as easy as it should be.
For what it's worth, I make around $20 a month off Brave BAT as a verified creator. It's essentially frictionless for people who set the wallet up.
I think Apple Pay includes the bare minimum friction to determine that you actually want to spend your money. I don’t think any less friction is ethical. I assume the Google program uses a cookie or similar technical means to indicate your membership in the program, which you previously opted-in to paying for through a flow with affirmative consent. That seems like an appropriate solution, and it shows that this model is supported by current browser tech. Websites are free to create their own membership programs, or band together and form cooperative programs, devising any system they like to set prices and divide up the revenues.
I don’t think the level of friction of Apple Pay is too high - below that level you effectively have a subscription, not a micropayment, since you’ve pre-authorised disbursement. I’m no fanboi but Apple Pay is as good as it gets.
I’m getting spam robocalls that I decline to voicemail. The next thing that happens is a popup with an image of my Mastercard, which I think is an Apple Pay screen, asking me to enter my PIN to pay.
What would happen if I had Apple Pay set to Face ID for confirm?
Is this really an Apple Pay request, or is it a look-alike?
You are absolutely correct. I was told that was the way to send a call to voice mail. I tried the double click, and it popped up Apple Pay, as you said.
> An Internet with a payment system built in from the ground up
Perhaps there could be a new digital currency for all and any transactions done online. Let's call it ICR. Websites and apps would only accept ICR, and pay out only in ICR, and people would convert between ICR and whatever they want, and that would be the only way to generate ICR, unlike cryptocurrencies and whatnot.
Yup, you would pay for websites AND have ads. Given the opportunity to put ads on something, it will get ads on it. That’s just a fact of life. You’re just throwing money away otherwise.
So much this. It’s embarrassing to see otherwise quality-products forced to prostitute themselves to the god of ads. For example, I enjoy Brooklyn 99 like the next guy, but the way their writing bends towards lame product-placement every other episode is really jarring.
Netflix and HBO aren't what I was referring to - I was referring to "real" over-the-air and cable television. Cable is paid for everywhere by channel/package. OTA is paid for in many places in the world through, e.g., a TV licence in the UK.
Ads are still all over, in contrast to the OP's assumption that built-in payment would have replaced ads.
TV sets themselves have ads built in. TVs and other hardware was sold for years at profitable margins, and then some genius realized they could undercut competitors by putting ads in the interface and making money later. After that, it was just a race to the bottom.
Which is why I'm starting to believe there should be regulations preventing this. I have no idea how they could work while avoiding disastrous second-order effects, but what I want to see is for the "ad-subsidized" business model to be legally impossible. Along with its cousin, the "razor and blades" model. This is on the grounds that these models are customer-hostile and make it impossible for less exploitative businesses to compete.
I think Kindle has proven that people are fine with paying less for something ad supported. I think the better option is just more businesses giving zealots the option to pay more for the same thing without ads.
I don't know how it worked out on the US market. It worked out well on the Polish market, because US Kindle ads were completely irrelevant and thus ignored as visual noise. I also recall there was a trick to get rid of them and keep the cheaper Kindle.
Such a payment system would need to work for the unbanked (including the underage). That would probably mean accounts that aren't tied to personal IDs, and governments hate that.
As bad as ads and tracking are, they truly enable everyone to use the internet, no matter the background, financial or legal status, age or nationality. In my view, that makes up for all the problems tracking causes.
Somehow advertisements serve a dual purpose. One side is the "beat your drum so people can find your great product" mindset -- or, depending on your point of view, the fightclubian impetus to "make them buy shit they don't need"; the other side is the profiling of the users.
If someone goes full-on "modern internet experience" without ad blockers, third party script blockers, cookie blocking, canvas fingerprinting blocking, etc. etc. the dataset collectable from the user behavior is rather large. And this allows for a detailed profile of the user to be built over time.
But what for? Why so much detail? Why such pervasive collection of online behavior and real-life interests? To sell more things to people? It seems like advertising would work just fine at a much coarser scale. Is it something more exotic, like building a "digital twin" of every citizen out there to predict what they will do and how they behave?
Or can it just be the mundane ads?
Anyway, I've wondered about those questions, namely why the profiling must be so pervasive and massive in scale. Can it not work at a lower resolution?
PS. The TV-series Colony contains one possible "answer" but although it is an entertaining and paranoid thought, the real-life plausibility of that answer is rather low :)
No, iOS is a lot of custom code, but it's also a lot of reused open source, a Unix kernel, POSIX, and the same instruction set as many others.
It's just a skin on top of the same old stuff everyone else uses, but with more control so you can't reach underneath to see much of the foundation.
That said, it's a fairly deep skin by this point, and getting deeper, but compared to the whole depth of the stack we're talking about, it's still just the skin.
I think this is a matter of perspective. I think Mac OS is just about as innovative on Unix as Linux is. Not in exactly the same ways mind you but definitely they’ve invested more than just wrapping a shell around POSIX.
Keep in mind the original post went so far as to talk about having different protocol stacks (not TCP/IP), different display protocols, and different IO ports (Apple devices do have some of the latter). And if JPEGs aren't supported through libjpeg, we're actually talking about a completely different computing paradigm, so it's not even a von Neumann architecture. In the face of that staggering amount of difference, the slight difference between iOS or current MacOS and Linux or Windows is very shallow. The fact that for many things you can easily throw a compiler on there, compile the same open source project, and have it link against the same already-existing libraries it would find on other platforms shows this.
While I don't know enough to comment authoritatively on it, I think with respect to this Apple might have taken a step backwards (at least with respect to percentage of system that is unique, if not in total amount of libraries and code) when it switched to OS X. OS 9 and its prior versions were an entirely unique and custom OS designed at Apple. The whole switch to OS X was taking a UNIX core for stability and building on top of it.
Apple systems are quite different than the norm in many respects, but the scope of this discussion is so large that there's really little they do that's different than everyone else from that perspective, because nobody does anything different. It's hard to start from scratch and ignore approximately 80 years of concerted computer science effort and come up with something useful and usable in any sort of time frame that makes it worthwhile.
What it is is irrelevant. How it presents itself to the user is exactly as a holistic, integrated, carefully designed computing ecosystem.
I’m a software engineer, I’m perfectly well aware iOS and MacOS run on top of a Unix kernel. But unless I go looking for it, they will never shove this in my face and make it my problem.
Meanwhile, I’m still dealing with Linux-loving colleagues who cannot get their sound or webcam to work reliably and consider this an acceptable state of affairs.
> Meanwhile, I’m still dealing with Linux-loving colleagues who cannot get their sound or webcam to work reliably and consider this an acceptable state of affairs.
That's largely the fault of manufacturers who refuse to properly document their devices and release functional drivers. I don't think anyone considers that an acceptable state of affairs, merely tolerable if it allows using an open software stack that you can freely modify.
It's the fault of the Linux kernel, which insists against all evidence that drivers aren't valuable IP that costs a lot of money to develop and that it should all be GPLd, meaning the manufacturers don't bother writing them for Linux.
> Meanwhile, I’m still dealing with Linux-loving colleagues who cannot get their sound or webcam to work reliably and consider this an acceptable state of affairs.
This isn't so much of an issue anymore in my experience. Older webcams and sound cards that work flawlessly with Linux often don't work with newer Windows versions, though.
I think part of the problem is that software is hard.
I've programmed for two decades. I write non-trivial software modules in a specific domain. It's not as if I was averse to analyzing things. And I try. When I can.
Yet. Design is damn hard.
One of the problems is that implementation is often an empirical process of requirements and constraints recovery.
I.e. I cannot create an elegant design before I've implemented anything, because some of the requirements are discovered only when the domain is explored using the code which I write.
One of the problems is that I'm not smart enough to design the system beforehand. And I don't have enough working memory to do so. And I presume many programmers work in similar environments.
I think in the '70s or '80s there was a vivid dream that one day programmers would have advanced CASE tools to help them design better programs. I wonder what happened to that dream. Maybe IDEs and software systems became so easy to use that everyone prefers to prototype in code and perform empirical constraint discovery, just like me...
The dream came true. In a modern non-software startup environment people often write code only as a last resort. I regularly see BizDev people setting up new business process automations with services like Zapier, Jira, Zoho, AutoPilot etc. There are plenty of CMS systems, customizable BPM, CRM, ERP etc where you can define UI and logic without writing a line of code and it just works. The cost reduction compared to the traditional coding is tremendous and enables small IT teams to run quite sophisticated businesses. Developers usually don’t see this, locked within their code-based programming paradigm, but this is already real.
Thanks for this! You are quite right. For the space that is dominated by "CRUD" apps there are quite a lot of alternative implementation pathways than pure coding.
And as most of the business value lives between the UI and the database, I understand why there is no driving incentive to create similarly high-quality tools for the numerical, data-intensive programming tasks that run on the customer's desktop. We work in a niche, hence we must be self-sufficient.
IMHO, the business value lies in the data and the way they are structured. The user interface is important to the users, but the data are what make the business run longer than its users. Basically, without the data you can't provide the service; or worse, with bad quality data, you can't provide a good service.
I've spent (in bizdev, e-gov) tons of time figuring out badly structured data, fixing data quality issues, handling tons of administrative tasks to access sensitive data, etc.
I think it's gotten to the point where the structure of the databases actually influences the structure of the government. That's because nowadays the databases, and the business rules they embody, are so complex that even the civil servants tend to forget how they work.
Sounds like a version of Conway's Law: "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."
The assumption was that the communications structure would come first and then the design would imitate it, but with really ossified hard-to-change systems it seems plausible that the departmental organization would shape around it.
> but the data are what make the business run longer than its users.
How often is that the case these days, though? Currently, when I subscribe to a service or buy a product tied to a service, I feel there's a 50/50 chance the company will shut down the service (or itself) while I'm still using it.
Let me tell you, mostly these solutions suck. There are several reasons for this: (1) leaky abstractions, (2) not having the correct abstraction, one that works in all envisioned cases without limiting you, (3) often you don't want to just use an API but to change a tiny thing, and that needs code, or you accept limitations, (4) the existing API is weird or has a weirdly structured result, (5) it is inefficient, (6) you depend on a third party, (7) you might have rate limits -- and probably more reasons.
If you've got capable developers, let them do their job and create a flexible solution (expecting changes, with a simple design, usually not an elaborate many-classes design) that actually fits your situation. Developers are (or should be) abstraction specialists. They should be able to find working abstractions in most cases.
Or live with the limitations and the spread of the belief that some things are simply not possible in various departments -- mental barriers that stop creative thinking, simply because the tools do not exist.
I think that's a bit harsh - a lot of these tools can be pretty powerful if they are used for the kinds of tasks that they are well suited for. However, stray into a task that is just that bit more complex than what they are good for and they can be an utter nightmare.
Of course, deciding whether a particular task is a good fit for a particular tool can itself be a tricky question.
Well, as a software engineer who wrote his first line of code in 1991 and as the CTO of a medium-sized company, I would disagree here. These solutions do not suck. They work, they are cheap and they are efficient in addressing the business needs. You don't always need a developer for simple automation, just like you don't need a computer to calculate 2+2. Abstraction leaks, third-party dependencies and the other things you mention are not problems that always need a solution. Developers often overthink scalability and invest time in perfecting code that will become obsolete in a year or two because of a change in the business model. That is where you find the art of engineering: in finding the right problem to solve, rather than working on the problem for which you know a conventional solution. Zapier is unconventional, but it takes 30 minutes there to do something that a development team would need a week or two for. The limitations do exist, but it often takes some courage to understand that in your specific case you can ignore them.
> They work, they are cheap and they are efficient in addressing the business needs.
In my experience, they are at most two of these things. Most of the time, I see lots of businesses bend over backwards to mold their processes to match the tooling they use, or expend lots of manual labour because their business flow isn't modelled efficiently in their tooling. Impedance mismatch is the #1 factor why many business automation projects end up costing multiples of what they were initially intended to save.
These problems aren't limited to off-the-shelf solutions, it could take an experienced developer months to really learn the business process and model it efficiently. Most businesses don't allow for that in their software budget, so you get either:
- something that works, and is relatively cheap, because it was cobbled together by someone from the business (see: excel sheets, access applications or nowadays, Microsoft Flow)
- something that works and is efficient, because it was built by a team of developers in close cooperation with the business
- something that's cheap and efficient but doesn't work, because it was built by a fly-by-night developer on a shoestring budget.
The only cases where you can have all three is when the business was built around a single piece of well-designed software, the CTO and CEO have been very conscientious about following the capabilities of the software, and the software manufacturer never decided to completely overhaul their interface for "reasons".
The biggest problem is that a lot of them are siloed, sometimes seemingly intentionally.
E.g. we have a bunch of stuff in Airtable. It's great. You can do amazing stuff with it, and it has a great API for most things.
Except for the change history and comments.
When you're aware of this, it is fine, but the moment your business processes start depending on using the comments and change history, you're locked in unless you're willing to use phantomjs or similar to log in and use the API (of course there is one) that their web client uses to access the comments, in a way that is a massive pain.
I'm happy for users to start building stuff and automating stuff. The bigger problem is when they don't design and don't understand the trade-offs. Often it works great for years, until it doesn't and you have a massive job untangling dependencies in some cobbled together solution.
Sometimes that is worth it.
Sometimes it causes you to incur costs far higher than what you saved initially, because knowledge often gets lost as these systems grow in complexity without any coherent thought behind them.
Very often, even if you end up using these tools, you'd still be far better off if there was still a review process to get someone to say "stop, if you do it this way you're creating a risky dependency, do it this way instead." Or to say "we really need the dev team to handle this."
It's indeed the job of a solution architect, or the CTO in smaller companies, to do this review and develop a strategy for when something needs to be replaced by custom code. It's also important to look at the costs not only in absolute terms, but relative to the IT budget. It's OK to spend 200K later instead of 50K now if company revenue is going to be 10x higher, or if your current budget is consumed by a more important project. The cost of money and resources can be different.
You've nailed it - I am an inexperienced software engineer, but when I lay down the architecture for a particular problem and think about it in excruciating detail up front, nothing goes according to plan. Reality hits you like a brick and new things emerge when actually writing code. Even ambiguities in the requirements only surface when actually implementing the logic. So no matter how good the requirements are, they'll definitely need additional refinement. Software engineering is really hard, I've recently discovered, and I value your decades-long perspective.
100% - and even the best developer may not understand all of a user's needs until they get their hands on a prototype. We should always expect to learn and pivot as we learn more about users.
Exactly. I've never seen a (non-trivial) requirements specification that doesn't contain holes, contradictions, imprecision, or requirements that will be dropped or altered during development.
I've found that FP, using function composition and other higher-order ways of combining functions, mitigates the design problem that you and many others complain about.
Essentially, in FP you design your entire program as a single immutable function expression that's made out of a pyramid of layers, constructed of smaller, closed, lego-like functions composed together. People in the FP world call it the point-free style.
The type signatures guide the architecture of your program and can easily be replaced, decomposed, or recomposed into higher or lower forms of abstraction without compromising your overall program. Design comes naturally, like building something out of legos: easily configurable, with minimal planning and little need to hold the entire program in your head. If the design is wrong, then, like legos, your program is easily reconfigurable.
The main factor that allows lego-like design is immutability. A lego that mutates state outside of its own scope is a lego piece that cannot be modular. FP, by making everything immutable, makes your program legos all the way down, meaning your smallest primitive can likely be broken down further to reuse in other places and recomposed to form other higher-order abstractions that are easier to reason about. In both OOP and procedural programming your designs can never have this level of flexibility, due to modules being tied together by shared mutable state.
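A minimal sketch of that compose-small-pure-functions style (Python rather than a dedicated FP language; the pipeline here is invented for illustration):

  from functools import reduce

  def compose(*fns):
      # Right-to-left composition: compose(f, g)(x) == f(g(x)).
      return lambda x: reduce(lambda acc, f: f(acc), reversed(fns), x)

  # Small, closed, lego-like pieces: each is pure and touches no shared state.
  strip = str.strip
  lower = str.lower
  words = str.split
  word_count = len

  count_words = compose(word_count, words, lower, strip)   # point-free pipeline

  print(count_words("  Hello Functional World  "))         # 3

Swapping a step (say, a different tokenizer) means recomposing the pipeline, not rewriting the other pieces.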
Of course there are costs to immutability: both low level costs such as memory management and high level costs such as graph algorithms that require mutability to be effective. There are ways to deal with these issues but they are not trivial. In the end all FP programs must "cheat" in some way and have a small portion of their program actually mutate something.
Overall though, in all my years of programming I have found FP to be the best overall answer to the design problems that most programmers run into. It's by no means a perfect answer but it is the best I've seen and unfortunately not applicable to all domains.
Agreed, FP is great, as you laid out, for a large category of tasks. Agreed it is not suitable for some fields, and hence not a panacea.
FP offers a better syntax for most of the programs I write, hence it makes the initial effort easier. Especially "compiler like" or "interpreter like" programs more often than not just fall into a pit of success.
But still - my original point remains. I am unable to design a program beforehand. The iteration is just faster with FP.
On one point I disagree:
" In both OOP and procedural programming your designs can never have this level of flexibility due in to modules being tied together by shared mutable state."
There is nothing that stops one from writing most OO or procedural programs in an immutable style. In fact, this style makes those programs better as well. Just try it. Never mutate an object: if a method must mutate something, let it return a new object instead.
This style is not applicable everywhere, and due to the lack of garbage-collected immutable data structures in the C++ standard library, for example, the area where the immutable style can be used is smaller than in an FP language.
And naturally you can't use it in those cases where you actually need to mutate existing storage. But, say, an array of 10 elements? Immutable! Most strings smaller than n * 1000 chars? Immutable! Etc.
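A small sketch of the "return a new instance instead of mutating" style in Python (class and field names invented for illustration):

  from dataclasses import dataclass, replace

  @dataclass(frozen=True)        # no setters: fields cannot be reassigned
  class Account:
      owner: str
      balance: int               # in cents, to keep the arithmetic exact

      def deposit(self, amount):
          # Instead of mutating self, return a copy with the new balance.
          return replace(self, balance=self.balance + amount)

  a1 = Account("alice", 1000)
  a2 = a1.deposit(250)
  print(a1.balance, a2.balance)  # 1000 1250 -- the original is untouched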
I agree with everything you said. To address the part we disagree though:
>On one point I disagree: "In both OOP and procedural programming your designs can never have this level of flexibility, due to modules being tied together by shared mutable state."
When I define OOP, I define it as mutable state. The main reason is that there's an equivalence when you do "immutable OOP":
object.verb(parameter)
is no different than:
verb(object, parameter)
Because both features are equivalent I would say neither feature is OOP and neither feature is really FP.
The difference is (mostly) syntactic sugar and it won't change the structure of your programs overall.
That being said:
object.setter()
has no equivalence to FP and is unique exclusively to OOP. Even the vocabulary: "setters" is unique to OOP. Hence following this logic, if you're doing OOP your code will have methods and "setters" that mutate or set internal state. If you're doing FP your code should be devoid of such features.
In fact if you do regular procedural C-style programming with immutable variables your code also becomes isomorphic to FP as assignment becomes equivalent to just creating macros for substitution in an otherwise functional expression.
immutable procedural:

  def add(x, y):
      doublex = x * 2
      doubley = y * 2
      return doubley + doublex

functional:

  def add(x, y):
      return y * 2 + x * 2

haskell (functional):

  add x y = let doublex = x * 2
                doubley = y * 2
            in doubley + doublex

also haskell:

  add x y = (y * 2) + (x * 2)
Due to the fuzziness of the boundaries, there are really only a few things that are hard differentiators between the three styles. When you examine the delineation and try to come up with a more formal definition of each programming style, you will find that FP is simply defined as immutable programming; OOP is defined as a programming style that uses subroutines to modify scoped external state; and procedural programming allows mutating variables in all scopes, from local to external to global. There's really no other way to come up with hard barriers that separate the definitions and stay in line with our intuition.
It's all semantics either way and most people do a bit of a hybrid style of programming without ever thinking about what are the real differences so it's not too important.
The thing that's sort of bad about OOP is that it promotes a style of programming where subroutines modify external state that's scoped. You tend to get several subroutines that modify some member variable, and this is what prevents you from ever reusing those methods outside of that shared state. <= This is in fact the primary area of "bad design" that most people encounter when doing OOP programming... shared state glues all the lego bricks together, making your code inflexible in the face of the inevitable changes and flaws that you can never anticipate. The other issue is that people never realize that this is what's wrong with their program. They blame their initial design as incorrect, when really that's analogous to saying that their initial lego project was incorrect and they should have glued the bricks together a different way. The primary key to fixing this design problem is to NOT glue your bricks together at all!
While you're capable of the above antics in procedural programming too, in practice most programmers tend to keep the mutations scoped within the boundaries of the function definition, hence keeping the function reusable. The downside is that if you have shared state within the scope of your function, it becomes harder to decompose your function into smaller functions, because the shared state glues all your internal instructions together.
It's easy to see this effect with a functional reduce. Higher-order functions like reduce allow you to break out the function that actually reduces the list, while a for loop with a mutating accumulator cannot be broken down into smaller functions.
compare the two below:
loop decomposed into two reusable functions (reduce and add):
  add = (acc, x) => x + acc
  m = [1, 2, 3].reduce(add, 0)
  // m is 6

not decomposable by virtue of the mutating accumulator:

  m = [1, 2, 3]
  acc = 0
  for (let i = 0; i < m.length; i++) {
      acc += m[i]
  }
  // acc is 6
"if you're doing OOP your code will have methods and "setters" that mutate or set internal state"
I don't think there is anything in OOP that "forces" you to explicitly mutate state. People just happen to do it that way, even if there was no need for that.
I.e. instead of mutating an instance of class Foo
void Foo.set(Some bar)
There are several patterns that make instance of Foo immutable. As an example:
1. Use the 'Factory' pattern to build state and disregard mutation altogether:
FooFactory fact; fact.AddBar(bar); Foo foo = fact.build();
This may appear to be mutating (the factory), but the point is that the mutations are confined to the factory, which is expected to have a limited scope, while the entity with the larger scope (Foo) is immutable, without setters.
2. If there is a need to modify existing state, instead of mutating the instance, return a new instance with the value modified. So instead of
void foo.Set(Bar bar)
use a method that creates a copy of foo, modifies its state, and then returns a new instance of Foo:
Foo foo.Modify(Bar bar)
Just removing mutability actually goes long way in making programs more legible akin to functional programs.
I agree functional style is often the best. Unfortunately we just don't have a functional language that could replace C++.
>I don't think there is anything in OOP that "forces" you to explicitly mutate state. People just happen to do it that way, even if there was no need for that.
I never said it forces you to do this. Please read my post carefully.
Let me repeat what I wrote. I'm saying that FP and OOP have blurry definitions that people have an intuition about but have not formally defined. Immutable OOP can be called FP, and vice versa. What is the point of having two names for the same style of programming? There is no point; that's why you need to focus on the actual differences. What can you do in OOP that makes it purely OOP, such that you cannot call it FP?
That differentiator is setters and mutators. If you use setters and mutators you are doing OOP exclusively and you are NOT doing FP. My response to your post is mostly about this semantic issue and why you need to include mutation in your definition of OOP. Otherwise, if you don't, then I can say all the patterns you describe are basically FP. You're following the principles of FP and disguising it as OOP, and the confusion goes in circles.
Please reread my post; I am aware of OOP and its patterns, so there's no need to explain the Factory pattern to me. I'm also in agreement with you, like I said.
I am talking about something different: semantics and proper definitions. You likely just glossed over what I wrote; that's why your response is so off topic.
There is also some ecology here: the hardware (CPU, GPU) has evolved in conjunction with the software. Maybe different hardware is needed to support a new software paradigm.
We could, sure, but nobody would use it, and at some level they'd be right, since the point of computers is to do stuff for us, and a computer that (intentionally!) doesn't work with anything else is an island, which is to say, not very useful.
It's possible to remove layers of cruft without throwing everything out. QNX and BeOS both implemented mostly POSIX compliance on top of radically different foundations. We've gotten rid of the ISA bus and that old pre-PS/2 mouse port, and mostly gotten rid of PS/2.
In terms of display technologies, we have DisplayPDF from Apple and Wayland on the Linux side.
You take huge risks by just throwing out everything at once. You want a modular system anyway, so run your revolution one module at a time. People don't notice your little revolutions, but that's kind of the point.
No, the USB HID API can support full 104-key rollover if you want. The urban myth has something truthful behind it: at some point Windows refused to work past a certain number of keys rolled over, and keyboard makers started limiting rollover to accommodate Windows.
USB-capable microcontrollers are a buck or two in unit quantity; most of the manufacturers' USB driver packages include a HID device as a standard example. It's like "Hello, World!" but for USB.
Hack? Nah. Engineer? Easily.
But the other commenter who noted that it was a BIOS limitation was right - it's not a Windows thing.
Where are you getting this stuff about Windows from? Everything I can find says it's because of BIOS limitations, nothing to do with Windows; see https://www.devever.net/~hl/usbnkro , for example.
Strangely enough, the most recent motherboard I bought (X570-based) has a PS/2 port for the first time in years. No idea why they seem to be making a comeback, but it is kind of nice to be able to directly plug my old Model M in again.
There is nothing wrong with PS/2 as an HID input. It could perhaps have better "quality of life" features, such as a reversible connector and proper hot-plug support, but that's it.
Interesting. I guess technically the board I bought is marketed as a gaming board, too, so that makes sense.
It makes me smile a little to think we've entered the era where 'old' tech is making a comeback in niche circles due to its own merits! I remember the keyjamming detection tool my cousin and I had to use to figure out how we could both play Star Control on the same keyboard. Good times.
I was just reading up about this a couple days ago. The PS/2 comeback for gaming keyboards stems from the fact that the PS/2 connector is interrupt-driven from the device, whereas USB is polled from the host. So in theory, PS/2 latency should be lower. But in practice there are numerous other factors that tend to outweigh any potential benefit: key debouncing time generally dominates the total latency, the low ~10 kbps throughput limits the effective minimum latency of PS/2 bus messaging, and even if the bus is interrupt-based, keyboards use polling internally to scan the state of the key matrix. It turns out that the implementation details of how a keyboard scans the matrix are generally a bigger factor in keyboard latency than which bus is used.
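Some rough back-of-the-envelope numbers (assumed typical values, not measurements):

  PS2_CLOCK_HZ = 10_000    # PS/2 clock is typically ~10-16.7 kHz
  BITS_PER_FRAME = 11      # start + 8 data + parity + stop
  print(1000 * BITS_PER_FRAME / PS2_CLOCK_HZ)   # ~1.1 ms just to clock out one scancode byte

  MATRIX_SCAN_HZ = 1_000   # many keyboards scan their matrix around 1 kHz
  DEBOUNCE_MS = 5          # a common debounce window
  print(1000 / MATRIX_SCAN_HZ + DEBOUNCE_MS)    # ~6 ms of scan + debounce latency

So the scanning and debouncing side of the keyboard tends to dominate, whichever bus carries the result.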
For people who weren't there, I think it's difficult to appreciate the kind of efficiencies which were gained by QNX and BeOS.
I first started running BeOS on my PowerPC Mac clone, back when those were a thing. There was a 1.44MB floppy disk circulating which contained pretty much the full OS, along with a web browser and various utilities and demo applications.
Inside 1.44MB. It was phenomenal.
Compared to MacOS on the same machine, raw computational benchmarks were about 40% faster, and perceptually, the gains were even better than that, with essentially zero UI lag.
So, starting from scratch can have a lot of benefits, and you don't even need to throw out that much legacy.
Yea, I was quad-booting Linux, Win2k, QNX, and BeOS in 2000-2001. The most impressive thing about both BeOS and QNX was their responsiveness. BeOS could be used for editing multiple video streams, whereas the same hardware running Mac OS really struggled to edit a single stream at full data rates.
My ethernet driver for BeOS was terrible. If I left BeOS running overnight, most of the time I'd awake to a little popup saying something to the effect of "The ethernet driver for your 3c509 has crashed. Click OK to restart the driver." Thank goodness for userspace drivers.
I'm a bit surprised that the iso9660 and UDF drivers are still in kernelspace on Linux, since I can't think of a modern use case where performance of CD/DVD is relevant. Around 2004, I had a corrupted CD-R backup that would kernel panic OSX, Linux, and Win2k3 x64. Unfortunately, there was sensitive financial data, so I wasn't about to submit the image as a repro.
> I feel like we would have the hindsight to design it better, faster, and cheaper.
Maybe I'm just pessimistic but I suspect the opposite would occur. It would be buggier, slower and more expensive once you started trying to operate at any kind of scale. The reason being that hidden within all the accumulated layers of cruft are a million fixes for specific security, reliability and performance issues discovered through real world usage.
That's a pity. Wouldn't it be nice if we somehow could separate the cruft from the craft?
What if some computational deity sifted through our collective codebase and distilled the essence, the minimal, purest kernel? Such a minimangel's task would be daunting and probably useless, since the moment we threw in some new use cases and requirements, NIH and patents, POCs and MVPs, we'd be back to the simmering mess in no time.
Those hidden snippets are not the cruft. Every ugly hack we would find is likely a required use case, and the whole mess is the craft. Maybe we should stop calling it a mess and accept that non-trivial software with on-demand requirements always takes this form...
So let's call it detail instead. You cannot separate the detail from the craft.
OK, but let's distinguish smart things that were done to work around past blunders from smart things that you would have done anyway.
Both are "craft", but only the latter is "necessary" and would thus survive the "total rewrite" posited in this "thought experiment".
For example, the x86 is full of crap; layer upon layer of smarts has been added to deal with design flaws that were impossible to fix without breaking backward compatibility.
Backward compatibility was practically important because of network effects (and indeed competitors lost the battle because they were not compatible with x86, or even with their own past selves). This practical constraint caused smart people to build smart tricks on top of that foundation.
Sure, it required a lot of smarts to do, etc. etc., but what do you call it when you end up tunnelling a SCSI command inside an ATA packet wrapped in another SCSI command?
Urbit runs as a VM with a translation layer for things like networking, file system, etc. Using a host OS is expedient to build a working OS that the user actually wants to use, since hardware support is an uninteresting implementation detail. Eventually, one could write a native hardware kernel in Nock/Hoon, and do networking without TCP/IP/DNS between Urbit computers.
Android bundles a Linux kernel (it won't run without one); it doesn't expect you to supply your own. It also doesn't try to reinvent DNS or TCP, the VM bytecode format, or any of the other things that Android leveraged to make itself into a useful platform so quickly.
The other difference is that Android has millions of users, Urbit, not so much.
I think it's what Urbit markets itself as, but the implementation is "just a VM" - with the potential to innovate further. The networking layer is where I thought they'd make a difference, but IMO they ended up settling for the same address-space problems we have today without trying to disrupt TCP/IP/BGP/DNS. I severely dislike the Ethereum decision.
Urbit disrupts IP/DNS in that it replaces IP and DNS addresses with cryptographic identities.
Identities aren't limited, as anyone can create a comet, an identity without a parent (a self-signed identity). There is a hierarchy of limited identities which serves as a routing network to propagate software updates and network packets, since in reality you need to choose some computer to receive software updates from and send network packets to. Comets could communicate without the galaxy/star/planet hierarchy, but they would have to choose some other way to route packets (a DHT or something).
Urbit can function without Ethereum. AFAIK galaxies (super nodes/identities) participate in Ethereum as a consensus mechanism voluntarily. Nothing is stopping them from changing that.
I have no idea why they chose Ethereum as a backing cryptographic protocol. It stains the project in my opinion.
I don't see it disrupting IP/DNS with cryptographic identities. I'd argue that Kademlia DHT does that in a more significant way.
Not sure what Urbit does uniquely in that regard, really.
I do crave some one-time use generated address that maps to a DHT/Kademlia cluster to provide deterministic privacy against p2p networks that doesn't depend on BGP autonomous centralized entities.
Ethereum is just used as consensus for key revocation and title transfer between galaxies. They don’t need to use Ethereum or can switch to another method of consensus, it’s just convenient at the moment to both distribute stars and planets and avoid having to build a blockchain or other method of consensus for the PKI.
DHTs do not have a solution to Sybil attacks. In reality, they depend on super nodes. That’s what Urbit’s spanning tree (galaxies/stars/planets) provides, a system of reliable routing. Nothing stops Urbit nodes from using a DHT to communicate, as the network protocol is addressed by identity.
Urbit is sacrificing accessibility by adding the complexity that Ethereum brings, on top of cryptocurrency's reputation, while also requiring the complexity of wallets and ERC-21 (or w/e it is these days) to prove you have address space.
That's bs in my opinion. It should be completely automated like Ygg does it and allow any peer to participate and connect and discover routing deterministically and dynamically.
I hear Urbit can switch at any time; yeah, that's easier said than done when you've built a lot of expectations on a significant piece of the stack, including the wallet and auth parts, which are pretty integral. Then shifting the conceptual model? Not likely.
> I do crave some one-time use generated address that maps to a DHT/Kademlia cluster to provide deterministic privacy against p2p networks that doesn't depend on BGP autonomous centralized entities.
It's not "one time use", but I believe BrightID[1] might be of interest to you. It allows applications to verify that you're a real person without looking into all your metadata and such. And it's not tied directly to your finances, etc like some other ID solutions are.
Many things about Urbit look great, but reading that site it talks about how the Urbit ID would be at the centre of your online life, payments, etc. Then later it explains how a company has been granted permission to sell the address space of these Urbit IDs.
Potentially it is better to purchase an Urbit ID directly, rather than indirectly purchasing an Apple ID by buying their hardware, or a Google/Facebook ID by donating your data.
Nevertheless, despite being presented as a free and open system, it seems like Urbit would centralise power with those selling Urbit IDs to an extent that Google, Apple, Facebook, etc have not yet achieved.
When we started moving to cell phones, you could still call a landline phone from one. When we created web browsers, they could hit gopher or ftp URLs. When Stroustrup created C++, he let C code freely interoperate, which let people migrate at their own pace, or even flip back and forth. When Windows was created, it let DOS programs run. When IBM introduces a new mainframe, they always have a way to run programs from the previous generation (back to the 1960s, at a minimum). In all of these cases, interoperability with the old system was essential for the growth of users for the new system.
How many computer users are there now? Three billion?
Break backward compatibility and interoperability, and you lock half the planet out - the half that uses computers. And the other half... if they decide to use a computer, are they going to use what everyone else uses, or are they going to decide that your completely incompatible new thing is the right answer? Even if (in some platonic sense) your new thing is the right answer, how are they going to know that?
Hardware is much more complex than software and also requires incredible amounts of money in order to fabricate. Not sure if that will ever change. We'll probably remain stuck with whatever is available on the market.
Starting from scratch is actually feasible on the software side though. The OSDev wiki has many examples. The most recent one I know is Serenity. Still a monumental task. It's probably a lot of fun if you have the time and motivation to stick with it. You can actually throw POSIX in the trash and create your own abstractions. Hopefully people aren't reinventing signals...
I think Linux is the best option for those who want to create something from scratch. The Linux kernel is unique in that it is completely separate from user space. There is a stable, documented kernel interface in the form of system calls:
So people can actually do things like trash all of GNU and come up with something entirely new. It's possible to rewrite the entire Linux user space in Rust or Lisp or whatever. It doesn't have to be POSIX compliant.
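To make that concrete: on x86-64 the stable interface is literally a syscall number plus arguments in registers, so a program can talk to the kernel with no libc at all. A rough sketch (syscall numbers and register constraints follow the standard x86-64 Linux ABI; error handling omitted):
  /* hello_raw.c - illustrative only: write() and exit() via raw syscalls. */
  static long sys3(long nr, long a1, long a2, long a3)
  {
      long ret;
      __asm__ volatile ("syscall"
                        : "=a"(ret)
                        : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                        : "rcx", "r11", "memory");
      return ret;
  }

  void _start(void)
  {
      static const char msg[] = "hello, kernel\n";
      sys3(1, 1, (long)msg, sizeof(msg) - 1);  /* __NR_write = 1 */
      sys3(60, 0, 0, 0);                       /* __NR_exit = 60, never returns */
  }

  /* build: gcc -nostdlib -static -o hello_raw hello_raw.c */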
The kernel even has this amazing header which lets people build freestanding Linux programs:
That header file is amazing. Thank you for the pointer to that. For anyone wondering, it lets you build programs without a libc:
> This file is designed to be used as a libc alternative for minimal programs with very limited requirements. It consists of a small number of syscall and type definitions, and the minimal startup code needed to call main(). All syscalls are declared as static functions so that they can be optimized away by the compiler when not used.
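Usage is about as minimal as it sounds. A rough sketch, assuming a kernel tree checked out at $LINUX (the header path and exact flags may vary between kernel versions):
  /* hello_nolibc.c - built against the kernel's nolibc header instead of a
   * libc.  The header lives at tools/include/nolibc/nolibc.h in recent trees. */
  int main(void)
  {
      const char msg[] = "hello without a libc\n";
      write(1, msg, sizeof(msg) - 1);   /* nolibc's static write() wrapper */
      return 0;
  }

  /* build (flags illustrative):
   *   gcc -Os -static -nostdlib \
   *       -include $LINUX/tools/include/nolibc/nolibc.h \
   *       -o hello_nolibc hello_nolibc.c -lgcc
   */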
> That header file is amazing. Thank you for the pointer to that.
Yeah, it's an awesome header! It comes straight from the kernel developers too. They use it for their own tools, apparently. It might be missing some of the newer or niche system calls but it has generic mechanisms which can be used to access everything.
I found it during my research for my own Linux system calls library project:
There are some additional details implemented by my project, such as process startup functions which handle the argument, environment and auxiliary vectors. I don't see the need to maintain it anymore though, since nolibc.h is so amazing and practical.
I even asked Greg Kroah-Hartman about nolibc.h on Reddit:
That still leaves you up a creek if you wanted to reimagine what syscalls looked like or if you wanted to build a mechanism that was totally different. That's ignoring that the Linux syscall surface area is quite large & complex with lots of legacy cruft on its own.
You can always patch the kernel to try out your ideas.
But really, the kernel is not where the innovation potential is. The way it works is really more or less decided by the way the hardware works. That's why the only real "novel" idea in the kernel space is microkernels, which date back decades.
Worth noting that the microkernel revolution kind of happened on Android, which is a re-think-computing-from-scratch kind of project. The drivers are all moved to userspace in Android, using their own IPC mechanism. The Linux kernel is left with less to do.
Yes, depending on how basic, or how long you want to work on it. Gary Kildall did CP/M. Tim Paterson did 86-DOS (later bought by Microsoft and turned into MS-DOS). Terry Davis did TempleOS. And I'm sure there have been other one-off operating systems written by single programmers.
Design and build? Yes, very much so. It'll be simple, but it's doable. How useful it becomes is a function of how much time you spend on it, and how many others you get interested in it.
It's in some way easier today than in the past. VMs really make testing much easier. On the other hand, the hardware has become more complex compared to some years ago when you could assume that all you needed was a little bit of assembly code to initialize the 32-bit mode of the x86 CPU and talk to the IDE interface.
I feel like IPFS is taking an interesting approach to this. They aren't trying to rebuild modern computing from scratch; rather, they are designing a tool which, as the name suggests, can scale basically without bound. I'm talking in particular about their focus on defining specs that are themselves parametrized by other specs. At the base of this is multiformats, a suite of specs for things like network addresses which allow you to wrap any current network protocol: TCP over IP over Ethernet, or DNS over HTTPS over QUIC over UDP over whatever. By using these parametrizable protocols they are lowering the marginal cost you mention and making it much easier to migrate to improved technology in the future.
It's been a while since I seriously looked into IPFS, but this is what I remember about it.
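For the unfamiliar, the self-describing addresses ("multiaddrs") look roughly like this (illustrative examples):
  /ip4/127.0.0.1/tcp/4001
  /ip6/::1/udp/4001/quic
  /dns4/example.com/tcp/443/wss
Each segment names a protocol and its parameter, so a new transport can be slotted into an address without changing the address format itself.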
> I often wonder what if we decide to build a completely new, well thought out computing system (including network) that is totally separate from any legacy system and 100% backwards incompatible. Not any bit less, 100% backwards incompatible, and everything would be fresh. There would be no jpeg support (it would have its own image format), it would not have a standard TCP/IP stack - a new protocol and network infrastructure, new display format, new IO ports, etc.
The iPad is probably the biggest such attempt to do most of those things. :)
Yes. ZFS has a couple of layers of caching in RAM, and it actually reads from those caches first, before trying to read from disk. And then it caches whatever it has to read from disk in RAM. That's why you need lots of ECC RAM to run ZFS: errors which propagate from RAM to disk cannot be corrected, and they will destroy the whole file system.
You'll have a lot more success, I think, if you curate the things you keep and the things you throw out. All of the successful (relatively speaking) 'whole cloth' rethinks did this.
You are on to something, but I think this would mean throwing the baby out with the bathwater.
"From scratch" is a loose definition. And there will necessary be reinventing the wheel.
Making the new wheel incompatible on purpose with the old ones is wasteful. For one thing, you are correct: you don't build a car on a carriage framework. But the principles are the same, and there are some technologies that will be reused; bushings, for example. What should we do about them?
Isn't TCP a good example of these two strategies, i.e. where imperfect natural evolution won over its competitors? I would say that's an example of what you're describing: https://en.m.wikipedia.org/wiki/Protocol_Wars
I think that, in general, a grassroots, ground-up, all-encompassing design is inferior to a more battle-tested, evolution-based strategy where each generation builds on top of the last.
I might be wrong, but that's my gut feel. I'm sure there are some good examples of the two strategies and how they played out?
Interesting, I was thinking the exact same thing but in regard to real-life things.
Like, imagine cars and roads were invented today: do you think they would all have this same ubiquitous elongated design, with the grill in front and two headlights on the sides? Or maybe cars would be completely round, maybe they would generally be made with standup room, one big headlight across the bottom, etc.
While some of these design decisions are practical, I believe a lot would be different if we reimagined it from the bottom up. The reason, for instance, that cars cannot be wider is that they wouldn't fit on today's roads. Existing safety regulations also prevent certain design possibilities.
Holy shit, I have thought about the exact same thing. Like totally questioning everything about cars and why they look the way they look. Cheers, made my day.
Have you also thought about how we see three holes on a face (2 eyes and 1 mouth, ignoring the downward-facing holes of the nostrils) as normal? Just add one more and it's horror. We take these things for granted, and then slight changes to them are completely terrifying.
I also wonder: if we rewound evolutionary time and restarted with slightly different initial conditions, I'm not sure which species, humans included, would even exist in the same form. It would likely be like that game Spore.
Absolutely agree with you. The same, with even larger impact, would apply if we brought this thinking to how we build cities. Think about it for a moment: how much "crap" and "stupidity" is there just because of legacy?
Soviet computers are an interesting historical riff on this.
But I think it would be impossible to cut the cord completely, because a computer is basically a communication system (internally and networked), and by Conway's law groups zero in on comms systems that reflect their own dynamics. So, as a civilization, we've built devices and networks that have zeroed in on our own internal dynamics. Therefore, as a member of this civilization, you could not create something different. You'd need to go hang with some folks who are completely different until you absorbed that, then make it reflect their dynamics. Uncontacted Amazon tribes? Dolphins? Elephants? Plants?
I've always wondered if it's possible to write software that will run on computers with an arbitrary number of memory hierarchy levels, e.g. x amount of L1/L2 cache -> main memory (RAM) -> SSD -> HDD -> archive storage, where you can plug in a new type of memory as it becomes available (e.g. 3D XPoint) and it will adapt to run optimally on that hardware without rewrites.
Or designing software for a theoretical system, like an ideal reconfigurable computer with reconfiguration latency as fast as processing latency, and then having hardware engineers design towards that, as opposed to having software engineers target hardware like we do now. Feels like that would make for an interesting experiment.
> I've always wondered if it's possible to write software that will run on computers with an arbitrary number of memory hierarchy levels
I think that's already possible today - the Legion programming system claims to do exactly that (among other things). I have absolutely no experience with it though so I might be completely wrong.
> By making the Legion programming system aware of the structure of program data, it can automate many of the tedious tasks programmers currently face, including correctly extracting task- and data-level parallelism and moving data around complex memory hierarchies.
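Even without Legion, plain code can adapt crudely to whatever hierarchy it lands on by asking the OS about it. A rough sketch, assuming Linux/glibc, where sysconf() exposes some cache geometry (this is just the general idea, not Legion's API):
  /* tile.c - pick a blocking factor from whatever cache sizes the machine
   * reports.  _SC_LEVEL*_CACHE_SIZE are glibc extensions and may return -1. */
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      long l1 = sysconf(_SC_LEVEL1_DCACHE_SIZE);
      long l2 = sysconf(_SC_LEVEL2_CACHE_SIZE);
      if (l1 <= 0) l1 = 32 * 1024;              /* fall back to a common size */

      /* Grow the tile while three double-precision blocks still fit in L1. */
      size_t tile = 1;
      while (3 * (2 * tile) * (2 * tile) * sizeof(double) <= (size_t)l1)
          tile *= 2;

      printf("L1=%ld L2=%ld -> tile = %zux%zu doubles\n", l1, l2, tile, tile);
      return 0;
  }
The same idea scales upward: derive working-set sizes per level at runtime instead of hard-coding them, and the code keeps functioning when a new level (3D XPoint or whatever comes next) appears, even if it doesn't exploit it optimally.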
What came close was the smartphone (and a bit later, tablets); one big factor is that they did not support Flash, which (at least in my neck of the woods) was an important driver of the shift towards webapps / SPAs. At the same time, browsers / JS became faster, NodeJS became a thing, and because of apps, webapps and REST interfaces, the inefficiencies and insecurity of HTTP came to the forefront, leading to (more, easier, accessible) HTTPS and HTTP/2, and now a push to UDP-based protocols like QUIC / HTTP/3.
Anyway, smartphones / tablets have made us rethink hardware; their development has led to more compact laptops as well, and the push to more compact laptops has caused their manufacturers (mostly thinking of Apple throughout this comment, btw) to eschew legacy systems - disk drives, USB ports, even 3.5mm jacks all went out the window.
But that is exactly what you're proposing here; what did you think when Apple got rid of a lot of that legacy? Personally I have mixed feelings; I do like USB-C. I didn't like Lightning because it was nonstandard, but it was an improvement over their broad connectors and USB-C wasn't a thing yet. I can see why they're pushing to wireless connections for audio and input.
Anyway final point, despite the cut in legacy connectors and the like, Apple's hardware hasn't gotten cheaper.
But smartphones - both iOS and Android - take their core operating systems essentially from the 70s. The kernels themselves are a few decades newer than that. The concepts and interfaces implemented by them, however, are not. Similar things can be said about Windows Phone and NT.
Arguably, using an existing OS as a base was kind of the only way to get these products done in a reasonable timeframe. They run complex and powerful software and writing that stack alone is hugely demanding. Rewriting the OS would easily have lost the market to the competition because of cost and time overruns.
Why wonder? That's basically what IPv6 did for networking, or what Intel tried with Itanium, but with an even larger gap from existing usage. The cost of switching would be so astronomically high that your product would fail before it even started.
"Throw out the old" happens in local pieces of the stack all the time. AMD removed a lot of cruft from the Intel ISA when they designed AMD64, Wayland removes a lot of complexity from the display protocol stack, and every I/O port before USB was designed to be incompatible with existing ports.
I feel that what you are describing will look like the SBC situation in the ARM world: the processor instruction set may be identical, but every vendor uses a slightly different incompatible way of connecting and designing the surrounding chipset, which results in a myriad of almost-identical drivers and subtle bugs that are impossible to fix, because the cost/benefit ratio would be prohibitively high.
Standardization isn't just for backwards compatibility, nor does it mandate it. But it does allow for reuse of existing components in an interoperable way. Imagine if every application came as a unikernel microservice instead of an OS framework library: there would be no UI consistency, copy-pasting between different applications would be hard, and you would have no OS-level update facility -- every application would have its own set of vulnerabilities.
As far as I know, there are currently no "good" real-time operating systems available. There are some ancient, crufty ones... and some modern soft-real-time operating systems, but that's it. This would be a fun exercise for someone wanting to write something uniquely useful in Rust, for example.
It might make sense to write a pure web browser engine designed for embedded contexts. Think infotainment systems where the web content being served is fully controlled. This would be an opportunity to write something like a HTML5-only engine with UTF8 only, TLS 1.3 only, HTTP 3 only, etc...
Relational database engines need a massive rethink. All of the current popular ones are 20+ years old and based on 50+ year old architectures. Why do schema operations take time? Why do index rebuilds lock tables? Why are there column size limits? Why is SQL injection still a thing? Why, why, why... because legacy, that's why. Because hard-coding, that's why. Because 1980s C-style programming with fixed-size buffers, that's why. All of this could just be washed away with a rewrite using Rust, modern C++, and some fresh ideas.
I 100% agree with this and was just talking about it in another thread, on one of my earlier comments. I love what Plan 9 and Oberon were working on. I feel many people are feeling similarly at this point, given how often I'm running into this, and I myself want to solve for it too. Awesome!
I think the networking layer should be built off of http://PJON-technologies.com - it's perfect for standardizing the next layer of networking without depending on any ISP or centralized authority, and it supports multiple "packet/connection" protocols (LoRaWAN, light, Ethernet, and more).
I am going through a history lesson right now, studying older OSes and why they didn't take off, while also getting my feet wet with Forth and lower-level hardware experiments. I still have no idea how video truly works; that's something that's been bugging me from a first-principles perspective, and I want to understand it more intuitively. Something I keep asking myself is: "How do I poke at the video card and make it display stuff without having to use an OS-level API that wraps OpenGL or similar?" I know some will be shocked at that question, and some may go "woah, actually I don't know that" - we're at an interesting generational leap in where this knowledge sits in the zeitgeist of computation and networks.
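One way to get a taste of this without any GL in the way: on Linux, the legacy fbdev interface lets you mmap /dev/fb0 and write pixel bytes straight into the mapped video memory. A rough sketch, assuming a 32-bpp framebuffer and a text console (not X/Wayland); it still goes through a kernel driver, but there is no graphics API above it:
  /* fb_poke.c - paint a white square by writing directly into /dev/fb0. */
  #include <fcntl.h>
  #include <linux/fb.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/fb0", O_RDWR);
      if (fd < 0) return 1;

      struct fb_var_screeninfo var;   /* resolution, bpp, panning offsets */
      struct fb_fix_screeninfo fix;   /* line stride, mapping length      */
      if (ioctl(fd, FBIOGET_VSCREENINFO, &var) || ioctl(fd, FBIOGET_FSCREENINFO, &fix))
          return 1;

      uint8_t *fb = mmap(0, fix.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (fb == MAP_FAILED) return 1;

      /* 100x100 square in the top-left corner, assuming 32-bpp XRGB. */
      for (uint32_t y = 0; y < 100 && y < var.yres; y++)
          for (uint32_t x = 0; x < 100 && x < var.xres; x++) {
              uint8_t *px = fb + (y + var.yoffset) * fix.line_length
                               + (x + var.xoffset) * (var.bits_per_pixel / 8);
              *(uint32_t *)px = 0x00FFFFFF;
          }

      munmap(fb, fix.smem_len);
      close(fd);
      return 0;
  }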
The OGs are in a particularly precious bucket: they've done their part and are either not that active, or have been yelling that the current stack is wrong, but it's hard to find their voices. Then you have the hackers and hardware folk who learned a lot pre-internet, who may or may not be active on the typical nets, hanging out only in esoteric Plan 9 rooms saying how good the internet was before marketing took over. And then there's the dot-com-boom generation, which either knows the earlier internet or thinks the internet is only social media and search engines.
I want to make the internet feel adventurous and fun again like the 90's. I want it to be focused on individuals and not organizations. It was exciting because you connected to people, today you don't do that anymore - you interface with some centralized entity that exposes you to products. I have more to say here that is a lot more nuanced but you get my gist.
Urbit is doing something similar to what you're saying, however I personally don't like their networking model and a few other decisions that just feel contrived to me. Plus I want to get my feet wet and understand the layers myself from a first principles perspective so I can add to it as I am imagining it.
Gall’s law though. For that complex system to work, it would have to be based on something simpler that worked. Yes you might well come up with something radically different, but still you’d have gone through some of the same evolutionary process, with some of the same results.
Many of the accidental decisions of the 70’s and 80’s ended up becoming de facto standards, and then actual standards, and then requirements in purchases, and then table stakes for market entry.
Why must bytes have 8 bits, why do we count time as seconds since 1970, why must text be encoded as Unicode code points, etc...
There is now such strong external pressure on projects to conform to these table stakes that I think the only thing that will overturn these decisions at this point is the singularity.
Let's say you did. You launch it, people are excited, they start using it.
But stuff will evolve, and people will want to innovate. So they make new hardware, and they make new libraries, because let's face it, the chances of getting everything right the first time are slim to none.
And after a few years you're back to having backwards compatibility issues, and some years after that you're starting to feel the mismatch between hardware and software again etc.
As for cheaper, consider this: you can get an ATX motherboard, with lots of functionality most people won't use, for like $50 or so. In low quantities, the bare PCB alone - just empty tracks with no components - would cost much more than that.
I think an interesting compromise would be to have a collaborative project for the design of such a system. Even if it was never implemented, it could highlight many of the problems current architectures have, and provide some ideas going forwards. And if it became very obvious that the new system is so much better, then we could always try an actual implementation. By comparison, everything nowadays leaks so much.
Another partial approach would be this: imagine you had a completely new console which had a single programming language, all tools needed to develop integrated, dumb simple, and you could draw on the screen easily, manipulate the audio buffers easily, and had trivial access to button inputs. Even without networking, I bet we would see amazing things happen. A closed, small, understandable and yet practical and performant system? The dream of every programmer. I know some people already think we have libraries that do this, but they can't provide such a platform.
In both cases, for different reasons, and in different contexts, the projects set out to reinvent computing from the ground up. The Newton failed in ways that make it a punchline, but the punchline obscures useful things about how it succeeded and how it failed.
Both projects embodied some fairly radical rethinking of how computing should work. Both were fun and inspiring to work on--among the best things I've done in my career. Neither has conquered the world, though some of Newton's novel features have become commonplace, at least in some form.
On the one hand, the fate of these projects is most likely the fate of any similar attempt to reinvent computing. The scale of the existing computing ecosystem is planetary. No small group of visionaries can exert the force needed to move the planet, and no group big enough to move the planet can be cohesive enough to embody a coherent vision.
On the other hand, I think it's worthwhile to invest in pipe dreams like these. They sometimes invent compelling new things that go on to make positive contributions.
They're also instructive. They teach you what is possible when you ignore inconvenient realities. Smart people given a green field and an inspiring vision can invent some truly wonderful things.
They also teach you what is not possible. You aren't going to simply wipe away the existing infrastructure and replace it wholesale. Its gravitational pull is inexorable. At some stage you have to come to terms with what has gone before, unless you're content for your effort to become a footnote.
Both things are useful things to learn: yes, you can do something novel and inspiring, and that's well worth doing for its own sake. No, you can't, simply through the charisma of a beautiful idea, undo all that has gone before.
Nor, if you could, would that necessarily be a good thing. Finding workable compromises with your surroundings is, after all, a constructive endeavor.
The problem with starting from scratch and doing it completely differently is that inevitably you'll end up researching only to get to the same point (for all intents and purposes). Some things just converge towards the same solution given the set of conditions.
So between spending all that (partially duplicated) effort to rethink everything and the fact that the end result would have little buy in due to the high threshold of effort involved with adopting and using it, this is rarely a tempting proposition for many. Either to develop, or to use.
Great innovations don't seem to come from just iterated evolution or some big revolutionary leap. Instead it seems to me to come from a series of iterative evolutions followed by a revolutionary idea and a good execution in part of the problem.
Often it comes from a sequence of these evolution->revolution chains. When that happens the initial execution of the revolutionary idea is usually buggy and incomplete, but works well enough to demonstrate the idea. Other people then evolve that idea further.
I can be a little more forthcoming than this with the benefit of a night's sleep. :-)
The problem with this question is that the answers are fuzzy. Newton had a lot of features that are everywhere now, but that were not when it was being developed. Of the features that have become widespread, it's not perfectly clear which ones have Newton ancestry, which share common ancestry with Newton, and which are mostly independent but possibly influenced to some degree by Newton.
There were a lot of ideas swirling around in the late 80s and the early 90s, and a lot of smart people trying them out in different ways. Some of Apple's best people worked on Newton, and afterward went on to work in other places, taking with them their intelligence and creativity and things they learned working on Newton. It's hard to be sure what they did later, or who they influenced with those ideas.
Take Javascript, for example. It's pretty similar to Newtonscript in several ways. Is that because Brendan Eich knew about Newtonscript and was influenced? Or is it because Brendan Eich was influenced by some of the same things that influenced Newtonscript's creator, Walter Smith? Things like Scheme and Self, for example, and like the idea of using a syntax chosen to seem friendly to C (or Java) programmers. Someone who knows Brendan Eich could ask him, or Walter might know something about it; he often comments here.
Newton had a bunch of features that were wild in 1992, and aren't as wild now. Features like transparent wireless network roaming, seamless migration of data between ephemeral and persistent storage, a purely gestural interface built on machine learning, and a simple knowledge-based help system. It was almost small enough to fit in a pocket--but not quite. I think that's one of the major things that worked against it in the market: it wasn't small enough for a pocket, and it wasn't big enough to have a nice screen. It was exactly in the awkward middle.
Its handwriting recognition instantly became a joke, but it was actually pretty good. Once it was trained, it was very good indeed, but one of the things we learned from Newton was that very good is not good enough in a consumer product. Every small step away from perfection loses you customers.
(Newton's handwriting recognition was unusually good for me, and for the other Newton engineers, because our handwriting training data was built into the firmware. Any of us could make any Newton look nearly flawless because it already knew us.)
Its recognition system was actually more sophisticated than I've allowed: it was in fact several different kinds of recognizers working in parallel, coordinated by a blackboard system. Its ability to recognize drawn shapes and convert them into vector graphics that you could immediately edit in-place was very cool. The fact that you could have all these different recognizers all working together on what appeared to be a single sheet of "paper", doing the right thing, whether it was converting handwriting, recognizing drawings, or responding to command gestures, all interleaved, was even cooler, and I'm not aware of another system that achieves that seamless kind of feeling even now.
The first time I ever saw a freely-roaming networked device, it was in Newton's Bubb Road building in Cupertino in about 1992. It was a Newton prototype PC board ribbon-cabled to a Macintosh SE that was being wheeled around the building on a cart so that the engineers could watch as it negotiated connections with different networks. We all imagined a world where you could have a computer in your pocket and, wherever you went, it would just always have network connectivity. That was crazy talk in 1992.
Newton probably isn't why we have that now; rather, more likely, Newton was one of the earliest manifestations of a dream that lots and lots of people were dreaming, and that someone was eventually bound to get working. Still; 1992. It was science fiction then.
Okay, our present mobile devices are not as seamless as the Newton team imagined, but we do have network connectivity pretty much everywhere. It's so ubiquitous now that people probably don't think it's a big deal, unless they're old enough to remember when we didn't have it.
At the heart of this idea is that we can avoid the compromises imposed by the systems we use. But all system designs involve making compromises. The best you can do is design a new system that makes different compromises, but it's not clear that necessarily makes it better. The existing systems we have were often not the first of their kind, often there were many competing systems that could have won. These ones we have did win for reasons, many of them really solid ones and often because they made the right compromises.
Take JPEG, for example. Six different companies helped develop it, there are six different patented ideas in the arithmetic encoding system, and a slew of papers were influential in its development over decades. You're really going to come up with better ways to do all of that, over again, from scratch?
OK maybe you come up with a better file system or kernel architecture, but why does that mean you need a new graphics compression format? You're also piling up all the risks together. Coupling together a huge swath of unrelated concerns arbitrarily means if some of them sink they'll bring all the other stuff down with them.
I often wonder why, in open source, we don't see the logical divide between architects and developers.
Why aren't there architects writing down the requirements of operating systems and defining their architecture, and, separated, developers working from these designs?
I also wonder why we see so few new ideas in the OS space. From a user's viewpoint, it seems little has changed since the first Unixes were born.
Because software development is closer to developing a prototype than a production object. There is so much knowledge to be gained from implementing the problem that needs to flow back into the conception of a (hopefully possible) second implementation.
Also, who was it who said that architecture is the wrong metaphor for software architects, and that the better metaphor is city planners? They put up some frames and infrastructure but mostly enable creation.
IMO there won't be any ground-breaking results. Computing is pretty efficient as it is. Maybe you'll run modern workloads using 256 MB of RAM and a 10 GB HDD. So what? RAM and disks are cheap.
Where it matters, we use hardware acceleration and very thin layers over that acceleration. For example games.
JPEG has good compression, and it takes a few hundred kilobytes of code to support it (and those kilobytes are battle-tested and probably have little to no exploitable bugs). I just don't understand why you would want to drop it. The modern web adopts new image formats; it's not like you must use JPEG today.
@systemvoltage: “I often wonder what if we decide to build a completely new, well thought out computing system (including network) that is totally separate from any legacy system and 100% backwards incompatible.”
Good idea, and then run legacy apps under an emulator. Each instance created and destroyed after use. That way there will be virtually no chance of malware propagating through the system.
This is a lot of PDFs. Is there a basic summary of what hardware/software is involved available in HTML form?
Sorry if I don't really understand what this is, but it seems like there's a production-ready hardware and software description (written in 2013?) that is applicable today in 2020?
Edit:
Is this meant for academic study? And if so, is a basic newbie description available?
The documents outline how to build a computer, from designing the hardware in an FPGA, to the entire software layer including UI, bootstrapping itself etc.
One person can understand the whole thing.
What's more, this is all done with a compiled language using GC at runtime, and with a low memory overhead, i.e. Oberon can be used as a systems language.
There are several projects moving on from this software-wise, A2:
Also, following on from A2, Composita introduces a component architecture to allow managed memory without GC, and contextless switches (IMHO better than Rust)
This has been done with hardware. The OLPC was a clean sheet implementation of a computer. It created ideas such as only having a few sizes of screws, shipping extra screws in the case, separating the light bar from the LED screen, innovative use of polarization, new mesh networking, and a low price tag.
About half of the innovations were picked up by other manufacturers within a couple years.
This was done with a programming language. Ada was designed through an iterative sequence of trials for a clean sheet implementation of a development language. It created, or popularized, ideas such as explicit module exports, tying directory and file names to classes, separate "to end of line" comments, and more.
About half of the innovations were picked up by other language vendors within a couple years.
Oberon gets mentioned on Hacker News every couple of years. It is hard to say it's really a clean sheet.
Many of its ideas, and those of the Xerox PARC workstations (which is where Niklaus Wirth got his inspiration), could be replicated via a COM/XPC/D-Bus/gRPC/AIDL-based desktop. However, these never go to the full extent that Xerox/Oberon went, because most developers lack an understanding of what it could be like and never bother to learn from history.
I've been mildly interested in Oberon (the language) as of late, after reading about Wirth's languages. I remember reading about Pascal and the Modula family some years ago, but I didn't know Oberon was his creation as well and thought it sounded like some weird esolang. I almost wrote a compiler for Oberon when I learned how simple the language is and I needed a compiler for an old architecture.
What were the benefits or breakthroughs of the OS when it was created? I wish the 2013 book were available in a physical copy; I'm not a huge fan of PDFs.
... in the 80's, with a headcount of 2 (who had other academic duties).
> "The primary goal, to personally obtain first-hand experience, and to reach full understanding of every detail, inherently determined our manpower: two part-time programmers. We tentatively set our time-limit for completion to three years. As it later turned out, this had been a good estimate; programming was begun in early 1986, and a first version of the system was released in the fall of 1988."
I think of Wirth and Gutknecht as demonstrating that one need not be as far out as Terry A. Davis to do small-team end-to-end work. (compare Carver Mead's tall thin person)
Although the original Project Oberon was great for its day, one should have a look at Oberon System 3 and Blue Bottle (AOS), which evolved Oberon (language) into Active Oberon (language), while offering a much more modern L&F.
Unfortunately, www.ocp.inf.ethz.ch seems to have been moved off the Internet. It had quite a bit of information about Bluebottle, although there are some GitHub mirrors like this one:
Is Active Oberon considered a canon (for lack of a better term) successor to Oberon-07, or is it more like Component "Pascal" and just a spin-off from an Oberon version?
=> Active Oberon
=> Oberon.NET => Zonnon
=> Oberon-07 (multiple iterations until 2016)
Feature-wise, think Go (Oberon) and D/C# (Active Oberon).
Active Oberon provides features for manual memory management (untraced references), async/await (active objects hence the name), lightweight generics, interfaces, exceptions.
I don't believe in throwing out the old completely. As others have noted, the cost of completely replacing the current infrastructure is astronomical.
What could be done instead would be a parallel virtual platform inside the current technology that abstracts the current technology in such a degree that many things become trivially doable.
And, of course, in the same vein: https://www.nand2tetris.org/