
> but I wouldn’t have the slightest idea where to begin

https://wiki.osdev.org/Expanded_Main_Page

It's in some ways easier today than in the past: VMs really make testing much easier. On the other hand, the hardware has become more complex than it was some years ago, when you could assume that all you needed was a little bit of assembly code to switch the x86 CPU into 32-bit mode and talk to the IDE interface.
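To make that concrete: instead of reflashing real hardware, you can boot a work-in-progress image headless in QEMU and attach gdb to its built-in gdbstub. A minimal sketch, where myos.img, kernel.elf and the kmain breakpoint are all hypothetical names:

    # boot headless, freeze the CPU at startup, expose a gdbstub on localhost:1234
    qemu-system-x86_64 -drive file=myos.img,format=raw -display none -serial stdio -s -S

    # in another terminal, attach gdb using the kernel's debug symbols
    gdb kernel.elf -ex 'target remote :1234' -ex 'break kmain' -ex 'continue'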




> Still, I’d say that trying out Arch has immeasurably improved my knowledge, not just of Linux but of the underlying concepts behind modern computing.

I love hearing that, because it was a goal of Arch from the very beginning: to stop fearing the commandline.

And I was the first alpha tester, in that I wanted to learn more about how the sausage was actually made, so to speak. I was comfortable using things like Linuxconf at the time, but its beginner-friendly veneer meant that I didn't really know what to do if it _wasn't_ there.

After tinkering with Crux and PLD for a bit, I wanted to go deeper and start from nothing. So I loaded up the LFS[1] docs and just started typing in the shell stanzas to start building my compilation toolchain. In an effort to DRY as much as possible, the work also got placed into shell scripts, which eventually became PKGBUILD modules.
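For anyone who hasn't seen one, this is roughly the shape those scripts grew into. A minimal sketch of a present-day PKGBUILD, using GNU Hello purely as an illustration (the checksum is deliberately left as SKIP rather than invented):

    pkgname=hello
    pkgver=2.12.1
    pkgrel=1
    pkgdesc="GNU Hello, used here only as an example"
    arch=('x86_64')
    url="https://www.gnu.org/software/hello/"
    license=('GPL3')
    source=("https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz")
    sha256sums=('SKIP')

    build() {
      cd "$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
    }

    package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
    }

Running makepkg -si next to that file builds and installs the package.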

I started having way too much fun with it, so I put up the world's ugliest webpage[2] to share my triumphs, and a couple of people found it, somehow. That begat an immediate need for documentation, which eventually brought Arch to the forefront. I can't recall who spearheaded the Arch wiki, but we owe them a great debt, because it has become a valuable resource for Linux users, and not only Arch users.

Arch is my happiest accident.

ps: btw, I run Arch (is this still a meme?)

[1] https://www.linuxfromscratch.org/

[2] https://web.archive.org/web/20020328043401/http://www.archli...


> Operating system kernels would be an example of that: The best test for a *nix OS kernel is, if it can run a shell. You need all the essential syscalls to do something sensible and if any of the required parts doesn't work the whole thing fails.

So start with something simpler. Start by making a kernel that can run /bin/true, that never reclaims memory, that only boots on whichever VM you're using for testing. You absolutely can start with a kernel that's simple enough to write in a week, maybe even a day or hour, and work up from there. See http://www.hokstad.com/compiler for a good example of doing something in small pieces that you might think had to be written all at once before you could test any of it.
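As a concrete sketch of what the edit-build-test loop can look like at that stage (all the names are hypothetical; the point is just that a VM gives you a fast, scriptable pass/fail signal):

    # rebuild the toy kernel and boot it headless in QEMU
    make toykernel.elf
    qemu-system-i386 -kernel toykernel.elf -initrd initrd.img \
        -serial stdio -display none -no-reboot

    # "passing" at this stage just means the serial output shows that
    # /bin/true from the initrd was exec'd and exited with status 0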

> I spent the past 3 weeks doing exactly that, refactoring a code base. I knew exactly where I wanted to go, but it eventually meant working for about a week on code without being able to compile it, let alone test it, because everything was moving around and getting reorganized. However, now I'm enjoying the fruits of that week: a much cleaner codebase, easier to work with, and I even managed to eliminate some voodoo code that nobody knew why it was there, except that it made things work and things broke if you touched it.

Which is great until you put it back together and it doesn't work. Then what do you do? I've literally watched this happen at a previous job, and been called in to help fix it. It was a painful and terrifying experience that I never want to go through again.

In my experience, with a little more thought you can do these things while keeping everything working at the intermediate stages. It might mean writing a bit more code, writing shims and adapters and scaffolding that you know you're going to delete in a couple of weeks. But it's absolutely worth it.


> I don’t think compiling kernels would really gain anything from being outside a terminal?

I remember compiling, flashing and debugging Windows CE images (10? maybe 15 years ago?) from an IDE (Visual Studio). It was super comfortable. Setting breakpoints, jumping to any kernel thread, navigating the call stack, watching memory and variables. All from the IDE. The development process was super fast.

Now, between dmesg and logcat, I have to debug by reading and grepping thousands of lines of logs, plus adding printk, ALOG and all sorts of logging calls to the code, recompiling, and reflashing from a terminal with ADB, and so on.
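For contrast, this is roughly what that terminal loop looks like (the paths, driver and tag names are made up; the commands are standard adb/fastboot usage):

    # rebuild, reflash and pull logs over ADB/fastboot
    make -j$(nproc)
    adb reboot bootloader
    fastboot flash boot out/boot.img
    fastboot reboot
    adb wait-for-device
    adb shell dmesg | grep -i mydriver   # kernel-side printk output
    adb logcat -s MyTag                  # userspace ALOG output for one tag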

> Ultimately it’s just an interface to programs that output mostly text

For example, double-clicking on a compilation error and being taken straight to it in the IDE is priceless. I know there is probably some Vim plugin that does that, but it works out of the box in every decent modern IDE out there (VS Code + SSH, which I use for AOSP, for example). Even better if the IDE shows only the errors/warnings the compiler emitted.

Also, try finding the error line among tens of thousands of lines of build log when you compile a kernel/AOSP in parallel with -j20.
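One partial terminal-side workaround, for what it's worth (a sketch, not a replacement for IDE integration): capture the log once and filter it instead of scrolling.

    # keep the whole -j20 build log, then search it
    make -j20 2>&1 | tee build.log
    grep -nE 'error:|undefined reference|FAILED' build.log

    # or keep stderr separate so errors aren't interleaved with parallel output
    make -j20 > build.out 2> build.err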


> What would you expect to see instead of a bunch of configuration files and cryptic commands?

The fact that you even need to ask...

Like, it's so _obvious_ that any computer system can't possibly be made to work properly without a bunch of cryptic configuration files and cryptic commands.


> The one thing I like about using a VM is it runs full Ubuntu with an init system

You can get that functionality without the overhead of a full VM by using systemd-nspawn (links below, plus a minimal sketch after them):

- https://wiki.archlinux.org/index.php/systemd-nspawn

- https://www.freedesktop.org/software/systemd/man/systemd-nsp...
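A minimal sketch of that approach, assuming a Debian/Ubuntu-style root filesystem built with debootstrap (suite, path and machine name are illustrative):

    # create a minimal root filesystem for the container
    sudo debootstrap stable /var/lib/machines/testbox

    # boot it with its own init (systemd) running as PID 1 inside the container
    sudo systemd-nspawn -b -D /var/lib/machines/testbox

    # or treat it like a lightweight VM via machinectl
    sudo machinectl start testbox
    sudo machinectl shell testbox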


> Disclaimer: building this type of thing (on bare metal) is a chunk of my day job. I see it as unbelievably trivial.

You do, I'm sure, because you've invested a lot of time learning it and building tooling around it. For anyone who isn't a full-time sysadmin, debugging all of the many and various quirks in management hardware is a major time sink versus scripting a VM server's API.
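To make the comparison concrete: on the VM side the whole lifecycle is a handful of uniform commands against one API. A sketch using libvirt's virsh as one example (the domain name is hypothetical); each vendor's management hardware, by contrast, tends to need its own tooling and workarounds.

    # the same interface whether the KVM host is local or reached remotely
    virsh list --all
    virsh start web01
    virsh console web01
    virsh reboot web01
    virsh destroy web01      # hard power-off, roughly "pulling the plug"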


> When I run something on my machine I want it to do exactly what it is supposed to, nothing more and nothing less.

So you're running on seL4 right now? How's that going? Did you program the userspace yourself?

I'm impressed.


>I loved developing all kinds of command line and UI tools for Windows, so I might be a little biased, but I found the Linux equivalents..well..not as easy and straightforward to say the least.

Well, I think commands very often have unintuitive parameter names, often just seemingly random letters.

As for me:

I use Linux for server-related stuff, hosting my own things, and I also like raw terminal-only Linux because nothing else is happening, so there are no distractions,

but day to day I use Windows.


> No mention of how it's better

No embarrassing buffer overflow CVEs is a very good start.

To me that's an actual selling point and I've migrated from almost all UNIX coreutils to Rust alternatives for that reason alone.

> Am I supposed to use a tool just because of what it's made of, or because it solves a problem for me?

No; as an adult, you are supposed not to frame the discussion unfairly, and to ask the right questions.


>Not sure what can be done. Though I am tempted to go see what a 9,000 line OS looks like.

I think you are going to like STEPS:

>The overall goal of STEPS is to make a working model of as much personal computing phenomena and user experience as possible in a very small number of lines of code (and using only our code). Our total lines of code target for the entire system -- from user down to the metal is 20,000, which we think will be a very useful model and substantiate one part of our thesis: that systems which use millions to hundreds of millions of lines of code to do comparable things are much larger than they need to be.

http://www.vpri.org/pdf/tr2012001_steps.pdf


> In fact, if you asked them to make something that builds from the command line, and runs, they would probably be scratching their head for a while.

As someone who learnt to program on Linux/Mac OS, when I got a job somewhere that used Windows and VS exclusively, trying to get my head around the (to me) needlessly confusing and convoluted way in which Windows .NET devs seemed to do things was a painful experience.

Watching some of them try to debug Dockerfiles was amusing, because that was a total role reversal.


> How hard it is to actually use your own code.

What do you mean by this? It's easy enough with test-signing. Just use SignTool sign to sign your binary. Then sc create to create the service for the driver, and sc start to start it as usual.
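For reference, that whole workflow is only a few commands end to end. A sketch, with placeholder certificate and paths, assuming test-signing mode has been enabled and the machine rebooted:

    :: enable test-signing mode (takes effect after a reboot)
    bcdedit /set testsigning on

    :: sign the driver binary with a test certificate
    signtool sign /fd SHA256 /f mytestcert.pfx /p mypassword mydriver.sys

    :: create the kernel service and start it (sc needs the space after '=')
    sc create mydriver type= kernel binPath= C:\drivers\mydriver.sys
    sc start mydriver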

> How much it looks like 'nix.

Also not sure what you mean by this either... to me there's a world of difference between Windows and 'nix kernel development.


> Personally, I'm still an XP fiend at heart. Give me: a single list of prioritized work; a test suite I can trust …

For me the essence of XP was always programming in pairs in front of a computer. That notion was somehow lost in the article and you also don't mention it.


> Can anyone explain the deep nostalgia and longing for old DOS era software, and in particular VGA text mode interfaces?

I’m not using any of those other old things you mention, but I love how I can run my full Emacs-configuration locally in a TTY, remotely over SSH or whatever with no loss of functionality.

Having that capability is IMO a strength, not a weakness, and I wish more software was like that.
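Concretely, the same configuration drives all of these (the hostname is illustrative):

    emacs -nw                  # local, inside the terminal, no GUI needed
    ssh -t somehost emacs -nw  # the same setup on a remote box, over SSH
    emacsclient -t             # attach a terminal frame to an already-running daemon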


> And then there's the documentation which is like infinitely better than Linux's...

I am 100% not an expert in systems programming, and I am not challenging that sentence.

When I look through Linux's source code, I read documentation files like Documentation/x86/exception-tables.txt and filesystems/ntfs.txt. I can also read the source code and its notes - for instance kernel/cpu.c or kernel/panic.c.

Where do I find analogous documentation or source code for NT 10? Lack of systems documentation pushed me away from Windows, and I would love to learn I was wrong.


>>Some of the big learnings were sysctl settings and other OS-level tweaks.

I would love to know more about this - care to write up an article or something?


>Wow, what functionality did it have that took all those pages to document? Can you offer any examples of things it could do that Unix couldn't?

Here's an interesting comparison:

http://www3.sympatico.ca/n.rieck/docs/vms_vs_unix.html

Although it's light on internals.

Much more detail can be had from here:

http://www.hoffmanlabs.org/vmsfaq/vmsfaq_contents.html

If you want to check out OpenVMS for yourself, you can get a "community license" and the code once it's released:

https://vmssoftware.com/about/news/2020-07-28-community-lice...


> When I have a problem the documentation is... IDK, consistent?

Yeah, it's almost like developing the OS as a singular entity instead of a kernel with random bits tacked-on has some advantages...


> I spent several hours fixing a problem and I learned next to nothing in the process.

This is probably true for this specific case, but my experience with fixing stuff in Linux is actually the opposite. I learnt a lot doing so, and some of it turned out to be useful later in very unexpected places.

Back when I was a teen using Windows, I spent countless hours fiddling with stuff in regedit and other atrocities, and it feels like I never learnt anything useful from it in the long run.
