ok but let's distinguish smart things that were done to workaround past blunders
from smart things that you would have done anyways.
Both are "craft", but only the latter is "necessary" and would thus survive the "total rewrite" posited in this "thought experiment".
For example, the x86 is full of crap; layer upon layer of smarts has been added to deal with design flaws that were impossible to fix without breaking backward compatibility.
Backward compatibility was practically important because of network effects (and indeed competitors lost the battle because they were not compatible with x86, or even with their past selves). This practical constraint caused smart people to build smart tricks on top of that foundation.
Sure, it required a lot of smarts to do, etc. etc., but what do you call it when you end up tunnelling a SCSI command inside an ATA packet wrapped in another SCSI command?
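To make the nesting concrete, here is a deliberately schematic sketch of that layering: a SCSI READ(10)-style CDB, zero-padded into the fixed 12-byte packet an ATAPI device expects, carried by an ATA PACKET command, itself wrapped in a SCSI ATA PASS-THROUGH CDB the way a SAT/USB bridge might do it. The byte layouts below are simplified illustrations, not a real driver's encoding.

```python
import struct

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    # 10-byte SCSI CDB: opcode 0x28 = READ(10), flags, 4-byte LBA,
    # group, 2-byte transfer length, control
    return struct.pack('>BBIBHB', 0x28, 0, lba, 0, blocks, 0)

def atapi_packet(cdb: bytes) -> bytes:
    # ATAPI devices take a fixed 12-byte packet; shorter CDBs are zero-padded
    return cdb.ljust(12, b'\x00')

def ata_packet_command(packet: bytes) -> bytes:
    # Schematic ATA taskfile: command register 0xA0 = PACKET,
    # followed by the tunnelled packet as the data-out payload
    return bytes([0xA0]) + packet

def sat_passthrough12(ata_payload: bytes) -> bytes:
    # SCSI ATA PASS-THROUGH(12) CDB, opcode 0xA1, wrapping the ATA command.
    # Real SAT encodes the taskfile registers in specific CDB fields;
    # simplified here to show the SCSI-in-ATA-in-SCSI nesting.
    return bytes([0xA1]) + ata_payload[:11].ljust(11, b'\x00')
```

So a single "read one sector" request ends up as a SCSI opcode (0x28) inside an ATA command (0xA0) inside another SCSI opcode (0xA1) - three protocol layers for one disk access.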
Anyone who wanted to push the limits of hardware at that time had to do tricks like this. There just wasn't any other choice. Sure, there were plenty of programmers back then who weren't capable. But it's not magic, it's careful thinking and hard work.
If a programmer back then doing crazy assembler stuff to get barely acceptable performance were given the option to work on today's hardware and write code that is portable across chips and operating systems, provided they never do any clever hacks - do you really think they'd refuse?
Yes, given that back then people were typically working in assembler (with frequently changing platforms, to boot) and writing relatively simple data processing applications, the claim is somewhat understandable.
But I think the main thing it failed to take into account is how hard it is to translate business requirements into complete and unambiguous instructions.
I did basically everything you mention, back in the day, although on a C64.
In many ways, things were much easier back then: Direct access to most of the hardware, flat memory layout, smaller and vastly simpler ISAs, smaller programs (meaning shorter disassemblies to wade through), no protected mode so you could overwrite anything in RAM and so on. And you wouldn't even have to do it live in-memory, just disassemble the program piecewise from disk. People did extraordinary things back then, and you vastly underestimate their capabilities. Sure, you had to write a lot of tooling yourself, but it was simpler times.
I am not trying to detract from the copy protection mechanism, which truly is ingenious. I was just genuinely curious whether I was misunderstanding anything from the article.
Yeah but nobody used it - the task state segment was slow to write and read - so it was faster to push/pop registers. Pity - because this forward-compatibility argument might have convinced folks back then.
You're right, targeting the x86 architecture added a lot of complexity without any tangible benefit. Poring over the subpar Intel documentation consumed an inordinate amount of time.
That said, it only took a little over a month to build the OS itself, while the other two months I spent working on the train control program. All the equipment is notoriously flaky so coping with real-world brokenness was probably the biggest time sink.
Sure, I'm not saying it's not possible. I did stuff in the 90s too. But usually you had to put a lot of effort into arena allocators and reusing pointers. Also, lots of software then was pretty janky and would just segfault, and errnos were often not handled.
A lot of regular developers were working on Y2K modifications to enterprise programs. That was a person-intensive effort and was very successful. In a lot of cases it wasn't very easy. I get the feeling that 32-bit dates in embedded control devices will not be taken as seriously.
Very elegant and very unimplementable at the time. They seem to avoid the undersized address ranges which time and again plague computing.
eg.
- IBM mainframe OSs with below/above-the-line addressing.
- The 1 MB 8086 & DOS limit and the hacks that followed (EMS, XMS, etc.) for more than a decade. The disk access size limitations in the same era.
- IPv4 address depletion.
The authors in 1980 seem to have been aware the issue would be there, but I don't see how the thrashing could have been avoided even if 1980s developers/designers had read the paper when it was published.
> +orc and fravia +hcu stuff ate up loads of my free time in the late 90s
Ha ha, yeah, me too. It was interesting to see how cracking affected software development too. Paintshop Pro 2(?) was the easiest "Hello, world" crack, but the next version was really difficult. I never got to the bottom of it. Their registration verification code seemed to be littered throughout a load of their initialisation functions instead of being the simple `if isValid(userCode) unlock()` it once was.
That said, it would no doubt have been easier to reverse engineer if I could have forward engineered at the time... QBasic wasn't really a good gateway to assembler :-D
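The shift described above - from one patchable branch to checks scattered through initialisation - can be sketched roughly like this. All names and the toy checksum are invented for illustration; the point is only the structure:

```python
# Shared state that the scattered checks quietly degrade.
STATE = {"licensed": True}

def checksum(code: str) -> int:
    # Toy key checksum, standing in for the real validation math.
    return sum(ord(c) for c in code) % 97

# Old scheme: one obvious branch. NOP out the jump after is_valid()
# and the whole protection is gone.
def is_valid(code: str) -> bool:
    return code.startswith("PSP-") and len(code) == 10 and checksum(code) == 42

# New scheme: each init routine re-verifies one fragment of the key and
# silently flips shared state, so there is no single patch point and a
# crack only shows up later, somewhere deep in normal operation.
def init_graphics(code: str) -> None:
    if checksum(code) != 42:
        STATE["licensed"] = False

def init_filters(code: str) -> None:
    if not code.startswith("PSP-"):
        STATE["licensed"] = False

def init_palette(code: str) -> None:
    if len(code) != 10:
        STATE["licensed"] = False
```

A cracker now has to find and patch every fragment, and the failure mode (a silently unlicensed session) gives no breakpoint-friendly signal of which check tripped.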
This. Our goal was to build a toy OS, from scratch, in x86 assembly, that could multitask execution of DOS COM files. God help you if you didn't already know ASM.
Fortunately we were graded at milestones, and our sins (bugs) were forgiven by replacing our individual buggy implementations with known working implementations that covered the concepts up to that point.
On my first job, I wrote a SHA-1 implementation in 8088 assembly. The platform was a smart card, but there were a few additional proprietary instructions. So, a real chance to beat the compiler, which didn't know those instructions.
Ten years later, when I met someone from my old job, it turned out the assembly was still used. Probably the only consequential thing I did in all the years I worked there.
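For readers who haven't written one: the SHA-1 core is compact enough that the whole algorithm fits in a page, which is what made a hand-tuned assembly version tractable on an 8088-class part. A straightforward reference version (standard algorithm, not the poster's code) looks like this:

```python
import struct

def sha1(message: bytes) -> str:
    """Reference SHA-1: pad to 512-bit blocks, then 80 rounds per block."""
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]

    def rol(x, n):  # 32-bit left rotate
        return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

    # Padding: 0x80, zeros to 56 mod 64, then 64-bit big-endian bit length.
    ml = len(message) * 8
    message += b'\x80'
    message += b'\x00' * ((56 - len(message) % 64) % 64)
    message += struct.pack('>Q', ml)

    for i in range(0, len(message), 64):
        w = list(struct.unpack('>16I', message[i:i + 64]))
        for t in range(16, 80):  # message schedule expansion
            w.append(rol(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1))
        a, b, c, d, e = h
        for t in range(80):
            if t < 20:
                f, k = (b & c) | (~b & d), 0x5A827999
            elif t < 40:
                f, k = b ^ c ^ d, 0x6ED9EBA1
            elif t < 60:
                f, k = (b & c) | (b & d) | (c & d), 0x8F1BBCDC
            else:
                f, k = b ^ c ^ d, 0xCA62C1D6
            a, b, c, d, e = (rol(a, 5) + f + e + k + w[t]) & 0xFFFFFFFF, \
                            a, rol(b, 30), c, d
        h = [(x + y) & 0xFFFFFFFF for x, y in zip(h, [a, b, c, d, e])]
    return ''.join(f'{x:08x}' for x in h)
```

The inner loop is all 32-bit rotates, adds, and boolean ops, which is exactly where extra proprietary instructions (say, a native rotate-through wider registers) would let hand-written assembly beat a compiler that can't emit them.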
Ken and I ported Plan 9 to the SPARC in 6 days over one fun Christmas break.
I wrote the disassembler (for the debugger) and Ken wrote the assembler, so we could cross-check each other's work. The hardest problem other than fighting register windows occurred when we both misread the definition of an instruction the same incorrect way.
That tells me why it sucks from the perspective of a compiler writer in 2017. But it doesn't tell me if it sucked in 1979, in an era when asm was much more frequently hand-written and there were many more physical constraints and trade-offs. x86 often seems to be judged against later innovations like RISC. But RISC wasn't a thing in 1979, and it sucks for hand-writing anyway - perhaps you want more complex and non-orthogonal instructions in 1979.