Native Linux GPU Driver for Apple M1 (twitter.com)
14 points by yewenjie | 2022-09-29 06:53:42 | 305 comments




This presentation... Humans are truly amazing.

It's so comical to see a literal VTuber developer doing absolutely awesome work at an extremely high technical level, and better yet, landing on HN's front page almost every week.

God bless anime.


The voice is very annoying, though; amazing work nonetheless.

OMG I just saw it for the first time. That's a nightmare. I didn't expect it to be like this when I read your comment.

Yeah, I tried to watch her livestreams, but I can't deal with the voice. Still, very cool stuff.

Yeah, the voice changer is crazy. I like the work, and sometimes I peek at the stream. I wish I could watch it with sound, but it's unbearable after a couple of minutes.

What we need is a highly technical VTuber that's good with audio to work on this problem for us!

He's actually very competent at audio; he just really clings to this hardly intelligible voice filter for some reason.

I've seen this channel before. I don't care what the content is, I can't listen to it.

I am also 98% sure it is literally @marcan42, especially if you read around the time "she" first appeared on the scene. She first appeared by "taking over" @marcan42's twitter page for April Fools day, but remained afterwards. I personally think it's a weird joke in poor taste, probably meant to torture Apple's lawyers in a potential lawsuit (as unlikely as that is). ("She" also deletes my YouTube comments suggesting this.)

Attempting to doxx VTubers is in even worse taste, just FYI.

Ah, what? @marcan42 is the one who started the project, has worked on it for over a year, and (to my knowledge) switched into "Asahi Lina" mode in April, being conspicuously "unproductive" since then. Doxxing the cofounder for an odd prank? Meh.

If you don't believe me, look at @marcan42's YouTube output since Asahi Lina entered the room. It plummeted. Only a few non-GPU things once in a while. And you'll never catch them streaming simultaneously.

Furthermore, before April, @marcan42 was talking about doing literally everything Asahi Lina is now doing, from writing the GPU driver in Rust on down. Asahi Lina's desktop is even KDE with the exact same screen resolution and scaling settings, and it even appears to use similar keyboard shortcuts, for goodness' sake.


When someone asks you to please not speculate on their personal life, it's the polite thing to not do it: https://twitter.com/LinaAsahi/status/1575450309907795968

@Aissen A request posted less than 2 hours ago. Also, it only calls out asking personal questions of them, not speculation.

Also, it is not unfair to ask questions considering people are donating money for this. Anyone who is donating money to both, to discover they may be the same entity, should be rightly upset.


> Also, it is not unfair to ask questions considering people are donating money for this. Anyone who is donating money to both, to discover they may be the same entity, should be rightly upset.

Exactly, that's the point of an AMA (Ask Me Anything): you can literally ask them anything. And since, as I presume, you or others have paid or donated money to them, you need to know what you are paying for and who is doing it. That isn't doxxing, that's transparency (especially if you paid or donated for this).

Now, for 'transparency', name please?


Nevermind. After looking at the VTuber videos and corroborating them with the public evidence and the likely names brought up, I can say with 99.999% certainty that the VTuber is indeed @marcan42 [0].

[0] https://twitter.com/marcan42


Maybe it would be good to keep bullshit conspiracy theories and doxxing behavior to other sites. We don’t need it here too.


Says who? I don't watch vtubers, I know nothing about how fans of the format think it's supposed to work.

But while you can decide something is in "bad taste" within your clique, you don't get to decide that for the rest of us.

Doxxing is a serious accusation which is not supported here. The account you're replying to was speculating based on public information, there is no revealing private info here at all.

I genuinely don't understand why you think wearing Anime-face makes it beyond the pale to say "oh that's probably so-and-so, here's why", when it's just two Twitter accounts.

I see it as no different than speculating about the owner of a gimmick Twitter account, as distinct from, say, publishing their home address. That latter is doxxing, the former isn't.


Not cool man.


If that's true, so what?

And you were tasked with investigating this very important case of wire fraud, I see.

And you have one piece of vague, circumstantial "evidence" supporting this. Maybe don't throw around accusations like this without something more solid?


We detached this subthread from https://news.ycombinator.com/item?id=33019853 and redacted a personal name.

Please don't do this here.


I would ask that you look at this thread. There are multiple people saying the exact same thing now, as fact and not theory. Either they are wrong and you must silence them as well, or I am actually correct and this is public information or an open secret.

Edit: See the following.

https://news.ycombinator.com/item?id=33022406

https://news.ycombinator.com/item?id=33022494

https://news.ycombinator.com/item?id=33021900

https://news.ycombinator.com/item?id=33022481


I don't know anything about any of this, but I know that your comments stand out as being driven by some sort of strange pre-existing agenda, which is no doubt partly why users flagged them and pushed back against them so vociferously. Would you please stop posting about this now? It's just adding offtopic drama.

It’s weird to me that she’s using such a high-pitched voice, because there are many successful VTubers with lower-pitched voices.

For me, it isn't that it's annoying; I basically cannot understand it most of the time, which of course kills the whole thing for me.

Really needs subtitles/closed captions to be enabled!

I really hope these folks get some AI / video masking / voice masking tech that isn't horrible soon.

Dang that video...


It sounds like something from Interdimensional Cable. Like it's Justin Roiland trying to do an anime girl voice but it comes out as a sort of Mr. Meeseeks screech.

This is the future of our profession. We've gone from long-term, long-form documentation, to blog posts, to talks... in the next ten years it'll just be anime pfps squeaking at each other.


That's because "her" natural voice is male.

You can clearly hear the deeper voice behind high-pitched screeching.


> That's because "her" natural voice is male.

I half-expected this. From what I observe, streamers who are both biologically and sexually female (it's a sign of the times that I have to clarify this much) tend to be OK with showing their real selves, but streamers who have some form of gender dysphoria (usually male-to-female) generally have virtual avatars.


He streams normally on his main channel, https://www.youtube.com/c/marcan42/videos; the GPU series just seems to be an April Fools' joke taken too far.

Careful. I just made that suggestion on another account that they were the same person here and I was immediately called a BS Conspiracy Theorist, my suggestion was redacted by moderators, and my post was flagged to oblivion. For making the suggestion I was also accused of doxxing using public information.

This happened less than an hour ago. @dang is it OK to say now?


> the GPU series seems to be April Fools joke taken too far.

I see; interesting.


Haha, this all is a weird reveal for me. In particular because now I know there are fewer people in Asahi Linux than I thought - since two of these personalities are one and the same.

What? No, this vtuber thing is just very trendy in the anime community right now. The "her" is in quotes because the avatar is female, which is not related to whether or not they're trans.

Based on my own experience, you need to reevaluate your biases.

I’ve had many avatars online, some male, others female, many not human. While it can be for others, my decisions were never about my feelings towards my gender or sexuality.

I say this just to let you know these things are orthogonal. Just as you shouldn’t assume anything about a female child playing Peter Pan in the school production, you shouldn’t assume anything about a person’s gender or sexuality based on their “current” avatar (of possibly many current avatars).


> these things are orthogonal

You're right. It was a poor judgement call to associate virtual avatars with anyone's sexuality; thanks for pointing it out.


Just a heads up, scare quoting someone's preferred pronouns is not a good thing to do and comes off as very disrespectful if not outright transphobic.

While you are not wrong (simply because many people would agree with you), systematic bullying of outgroups is divisive and also disrespectful.

It would kind of make sense for people who just don't want to show their face even if they don't want to be particularly vtuber-y

I was wondering if this would become more common as the technology developed, but it seems like it hasn't; instead, specific ideas about what VTubers are supposed to be have become entrenched.

There are some people, like Hikalium, who use VTuber avatars for live programming videos without particularly playing a character, though.


With deep learning you'll eventually be able to just have a very realistic avatar that looks and sounds like a real person. Might eventually be hard to tell if people are using something like this or not.

It's perfectly normal to stream and not show your face at all. Preferring to present as an anime character will always be niche.

Is it? I feel there's a much bigger tendency for westerners to show their mug and speak while Japanese prefer to hide (both mug and voice) if at all possible.

Yeah, definitely. There's a self-selection bias: a person who wants to stream tends to be a person who wants to show their face. But it is not weird or frowned upon if there is no face and the content itself (gaming, coding, whatever) is the entire screen. I think most normies would much prefer no face to an anime character.

I do think it would be weird to hide your voice, though.


For once, I'd like to see a VTuber presented as classic Warner Brothers characters. Let's see Porky Pig present this stuff.

(Anyway, the work itself is great!)


I would love to do a tech vtube as Slappy Squirrel.

More likely it'll be a character like Dot. Or Bugs Bunny in a dress.

However pretending to be a woman seems relatively common nowadays.

I've read it might be related to a particular type of "erotic target location error". Perhaps related to the influence of the internet - the rise of virtualised disembodiment and the increased availability of pornography.

I'd agree, but the voice chosen annoys the heck out of me.

I thought it was Japanese to begin with. I had to listen carefully to parse the English. It seems to be a stylistic choice and I’m sure you get used to it after a while.

It's worth noting that Asahi Lina and Alyssa Rosenzweig are two separate people though their work does overlap [1].

[1]: https://twitter.com/alyssarzg/status/1533624133929553922


Asahi Lina is very clearly being performed by Asahi Linux lead developer Hector Martin. Why on Earth he didn't just use a male vtuber model so that the obnoxious voice changer thing wouldn't be necessary, who knows. Maybe he's dipping his toe into coming out as trans or something.

10 minutes of Googling would show you that Hector is not the GPU driver developer in the Asahi project.

And the person who is denies being Asahi Lina. Meanwhile, Lina sometimes shows paths with "/home/marcan42" in shells during their streams. Martin's not admitting it but he's doing a poor job of hiding it too.

The November 2020 M1 machines are getting close to daily usability on Linux. Screen brightness adjustment, the webcam, speakers, and mic support are the remaining things for day-to-day use for me. All look like they should be working within six months to a year.

It's a bit of a shame that the RAM is so limited on all those platforms - I can't imagine it being enough to load many electron apps in a few years time.


I am waiting to try this on the 16GB RAM M1. M2s have 24GB or more available. I think this will be my go-to machine for Linux.

> I can't imagine it being enough to load many electron apps in a few years time.

I honestly hope Electron improves its performance or companies start moving off of it. It's honestly a bit of a joke at this point.


Desktop PWA is becoming a pretty reasonable alternative.

I don't want to run a web browser to run a desktop app. I don't care how useful it is to run JS and HTML from a dev standpoint; it's an absolute absurdity that my desktop app has a CORS vulnerability.

Then you probably won't get Linux desktop support at all. The Linux camp should be cheering for Electron.

Honestly, if the other OSes had decent native apps for some of these things, I would probably switch. There is just something wrong with Spotify using 1GB when it was just opened and never used.

Ah yes, let's just take those table scraps and be thankful for them...

I could be wrong, but I feel like running some kind of little virtualized instance with a desktop app meant for OSX or Windows could end up being more efficient than running a whole browser...

Or running a more optimized app meant for Windows using WINE.

Not necessarily. If you're running the browser anyway, and the app is a PWA that can share that same instance, the "browser" overhead is not really there.

Take a native app, let's say one that wants to play video and is using a cross-platform GUI. You've got to load and run that code and have all the relevant code in the native app's address space. That's a bunch of code that needs to load, on top of the PWA, which is leveraging something you have loaded in RAM anyway. In this case the native app is actually worse than the PWA for resource utilization.

Launch CPU cycles can be similar: running a huge swath of GUI code for the native app vs. a pre-loaded browser that only needs to run the JS and render the page.

Having your app's runtime already loaded on the system is a huge advantage.

Electron does not benefit from these advantages though. These are PWA exclusive advantages.


> my desktop app has a CORS vulnerability.

Yes, because CORS, a way that HTTP requests are restricted, doesn’t even exist in native. Of course a native app can reach out to any URL it wants. That is the default, and also how CORS functions when disabled or bypassed.


Somehow a security feature in browsers called CORS, that isn't present at all in native apps, is both a vulnerability and a downside? I don't understand this line of thinking.

PWAs _can_ get better and more platform-friendly though. It's two bad choices right now.

Are you all using some rare Electron apps that are unknown to me? I keep hearing this meme repeated, but when I look at my RAM usage [0], it never corroborates it.

I just don't see how 220MB for Discord is unreasonable in any way, when Firefox with 8 tabs takes 1.2GB. Telegram written in Qt meanwhile takes 200MB, literally no difference to Discord, an electron app.

[0] https://i.imgur.com/cfCRWDS.jpg


Because weechat uses <18 megs?

Does weechat support rich embedded media like images and videos? Can you live-stream games to your friends on it, talk with people, have profile avatars, use emotes, or build bots that stream music to you?

Even something completely simple, such as snipping a portion of your screen and pasting it in the chatbox so others can see it immediately, without having to mess around with dodgy image upload sites. This is basic functionality for 2022.

It's like comparing notepad.exe to Microsoft Word


I've sent plenty of pictures over DCC in my day. OBS lets me stream whatever to wherever at unlimited quality without paying.

The point is most of what Discord provides is low value for the resources it demands.


EDIT: ignore me, I thought this was about WeChat, the Chinese messaging app, not weechat, the IRC client :(

> rich embedded media like images and videos

Yes

> live stream games to your friends on it

no (there's WeChat livestream (idk what it's called in English), but it's not discord style stream-your-desktop)

> talk with people

yes

> have profile avatars

yes

> use emotes

yes

> build bots that stream music to you

yes

> Even something completely simple such as snip a portion of your screen and paste it in the chatbox so others can see it immediately, without having to mess around with dodgy image upload sites

yes? you can paste stuff into wechat just fine. Think WhatsApp, not IRC

Idk where GP came up with 18MB tho, it's eating up 100 on my laptop right now.


How much of that needs to relate to the memory usage though? Embedded media doesn't need to be loaded besides what's visible in the current channel and perhaps a bit of scrollback, perhaps some text and scrollback for other frequently viewed channels. Livestreams also shouldn't be taking memory unless you're watching, same with voice. Avatars and emotes would certainly take memory, but certainly not hundreds of megabytes of it.

I could do that in 2007 with 256 MB of RAM.

Part of the issue is that 64-bit programs end up taking close to twice as much memory, because pointers are twice as big, instructions are longer, etc.

Higher screen resolution uses more memory too.

I'm sure dependencies / libraries have grown in size too, and yeah, using web for UI takes memory.


Kopete, less than 100MB under KDE3 in ~2007. Video calls, emojis, inline LaTeX, Youtube videos...

Spotify uses 1GB of RAM for me while doing nothing; it was only started by mistake and didn't play a song.

Slack was using 1.2GB. It is in a few workspaces, but that seems like a lot for an app that does nothing.

Postman, which I think is also Electron, often uses 1-2GB just for sending API requests.

Atom, though, was only using 200MB, which is fair.


Signal is using 335MB for me right now, which feels ridiculous for a messaging app. I have Firefox running with over a thousand tabs (hard to tell how much RAM it's using with all the processes, many of which share memory, but my total system usage is around 8GB right now, so it's at least less than that), so I guess it's not scaling linearly from 8 tabs = 1.2GB. At any rate, Firefox does at least a couple orders of magnitude more things than Signal does; I think it's reasonable for it to use more RAM.

I think it's a bit weird to compare a chat app with a web browser, though, when it comes to memory usage. Maybe a better example: hexchat, a native GTK IRC client, is currently using 60MB of RAM. Telegram's native app using over 200MB is also ridiculous.


A chrome process normally takes up about 100-200MB RAM. Are you running 16GB/200MB=80+ different electron apps at once?

I assume you mean when nothing is loaded in it, or it's showing a plain HTML page. 1GB+ is completely common for a lot of Electron apps in real use. If you run a dev environment and are using 2-3 Electron apps, 16GB will be a struggle.

Currently sitting at 29GB (on M1 Pro), just for regular web dev environment on MacOS.


> I assume you mean when nothing is loaded in it

Yes, but that's normally what people mean when they talk in broad strokes about "electron apps". Anything beyond that is application data, which is going to be roughly the same regardless of stack, and in nontrivial apps quickly comes to dominate.

> Currently sitting at 29GB (on M1 Pro), just for regular web dev environment on MacOS

I mean, I assume you're using more than just some regular electron apps. macOS itself is currently using 8GB of RAM on my machine, my docker VM is using 4GB (down from a default of 8GB), etc. And that doesn't include my IDE and its code-scanning and type-checking background processes.


Does that include docker (a VM?)

Does that include file system cache?

Having a system with lots of RAM, the OS does its best to use it. Just because you see 29GB used doesn't mean you'd see a noticeable performance dip on a 16GB machine. You might, but it really depends what that RAM is being used for.


It's technically Chromium Embedded Framework instead of Electron, but last time I used the Spotify desktop client it was possible to push its RAM usage past 512MB up to the 1GB mark just by clicking through albums and artists.

For something extremely functional like VS Code, which is a mini-IDE of sorts, that might be excusable, but it's beyond silly for a streaming music player.


But it has nothing whatsoever to do with it being a web-based app. It pretty clearly has to be the result of caching lots of album art and/or metadata and/or audio data, which is just as possible to do on a native app.

Web technologies in and of themselves aren't bad, but their usage is strongly correlated with cost-saving measures and the questionable technical decisions that result since choosing a web app itself is often a cost-saving measure.

VS Code is an example of things gone right, where Microsoft has clearly hired an AAA-class team and funded them well. Within the same company, Teams is an example of the exact opposite and much more representative of the typical web app.

In Spotify's case, if they're aggressively caching album art, metadata, and/or audio I would say that's of questionable value to the user. Art and information on songs/albums/artists/etc are in nearly all cases only going to be seen once or twice by the user per session, and so keeping them sitting in memory doesn't make a whole lot of sense. Caching audio that's not been played is very questionable (to the point that I don't think they're doing this) because many, many people are still on metered connections and Spotify would quickly be blowing past bandwidth limits if it were pre-caching albums left and right.

Disk caching makes a ton of sense for Spotify, given that it's being done in a standardized directory that the OS can clear to free up space when necessary. But on machines with 8GB or especially 4GB of RAM, there's a very good chance that by aggressively caching to memory they're evicting other things that would better serve the user by sitting in memory.

Using a lot of memory is fine when there's clear user benefit but it should still be done intelligently.


> their usage is strongly correlated with cost-saving measures and the questionable technical decisions that result since choosing a web app itself is often a cost-saving measure

Correlation is not causation

We can debate the specific design choices that Spotify or any other company has made, but my original point was to push back against the tired trope that using web technologies for an app automatically means runaway resource consumption and (apparently) the downfall of civilization


Except in this case, correlation is so strongly correlated with causation that it becomes causation.

Look, the plural of anecdotes is not data, but you can't argue against the general trend of "applications using web frameworks like Electron tend to consume more RAM" by saying "well in this specific case they made poor design decisions that would bite anyone in the ass"; while that may be true it misses the point, which is the much larger (and well-correlated) & overarching tendency of such apps to absolutely chew RAM.

That it's a "tired trope" doesn't make it untrue; it just means that enough people have accepted what is functionally the new state of the software application world that it's considered passé to call such things out.


It is probably just caching images in memory.

Ahaha, this is a tangent, but this stupid situation is why I'm doing my current web project as God intended: in a single .html file, shared with collaborators on Dropbox, editing in any dumb text editor.

There are no dependencies.


It's enough memory for most applications that are not Electron.

Not to say that it should not have more, far from it. But the Electron framework is so wasteful, compared with basically all the alternatives.

I wish the framework developed for Sublime Text were not secret.


Yeah it's funny that every cross-platform UI kit seems to suck, but both Unity and Sublime have in-house custom things that are good.

What's up, dude.


> It's a bit of a shame that the RAM is so limited on all those platforms

As long as you didn't get one of the single-channel SSD models, it would make plenty of sense to give yourself a 16 gig swapfile (or something of the sort).


MacOS creates swapfiles automatically as needed (in /var/vm). It will allocate a lot more space to swap than the size of RAM, if it determines that's useful.

It can actually be a problem: if you're low on SSD space, the filesystem can fill up with swapfiles while the system is still reasonably functional, because the SSDs are fast enough. Then, because of an APFS design fault, sometimes it isn't possible to delete any files to free up space; it says "out of disk space" when you try to delete a file.


Yes; I have to restart my computer whenever that happens.

I use one as a daily driver and run Chrome, Slack, Spotify, and VSCode simultaneously and it’s been fine.

Incredibly fun hack to make it work (start of the video): TLB flushing is a bit hard, so let's reset the GPU after each frame(!!!). And at the rate this is advancing, this comment will probably be obsolete in less than a week!

The standard solution: turn it off, and turn it back on.

Microreboot – A Technique for Cheap Recovery

https://csis.pace.edu/~marchese/CS865/Papers/candea_microreb...



An omission of both mine and the Candea-Fox papers.

I was disappointed to not see it, but I’m not sure how widely-available information was by then.

That's how Apache prefork + mod_php works.

Some added context: many may be reading this for the first time and wondering, what gives with the third-party NVIDIA drivers (Nouveau)?

To put it simply, NVIDIA screwed third-party drivers by requiring an ever-changing encrypted and signed blob, hidden deep within the proprietary driver package, to be sent to the GPU every time it boots. Otherwise, you can't change the clock speed of your NVIDIA GPU from the boot clock speed, which is almost unusably slow.

The message from that is screw NVIDIA, not that third-party GPU drivers are necessarily that horrible to implement.


nVidia only requires signed firmware starting from 2nd-gen Maxwell cards (released late 2014). There's a lot of existing nVidia hardware that could be fully supported.

8 years is pretty old in GPU years. Certainly that old hardware is still useful, but not if you're doing anything where modern GPU performance is helpful, or even necessary.

> There's a lot of existing nVidia hardware that could be fully supported.

But almost no users to justify the huge development effort. Many laptops from that era, for example, are completely unusable by now - only high end models had 16GB of RAM, most were limited to 4 or 8GB - and desktop builds of that age simply consume far too much electricity for the performance they offer.

In contrast, investing work into current NVIDIA/AMD drivers or the Apple Mx architecture makes more sense - the crypto boom and bust led to a lot of cheap but current cards which means many will simply stick it out with a used RTX 3090 for a couple years once NVIDIA releases their new generation, and Apple usually keeps their own hardware stacks very similar in design which means work done now will still be a foundation for Apple's chipsets five or ten years in the future.


Not everyone can afford to buy hardware. I'm stuck on a 2012 desktop with Intel graphics I found on the side of the road, plus some spare RAM an open source dev mailed to me. I expect there are a lot of folks with dumpstered computers (like those who go to FreeGeek) that could benefit from nouveau support for older GPUs.


Hasn't NVIDIA been making pretty big steps towards a compromise on that front with the open source linux drivers they released a few months ago?

Yes, but let's not let the real world intrude on this completely offtopic rant about nVidia in an Apple-themed post :P

They didn't release open-source Linux drivers, just GPU kernel modules (https://news.ycombinator.com/item?id=31344981).

Why is this relevant?

The signed blob (39 MB of RISC-V code) for newer cards is now publicly released and redistributable, so nouveau will be able to use it too.

https://blogs.gnome.org/uraeus/2022/05/11/why-is-the-open-so... https://lwn.net/Articles/894861/


This whole Linux on Apple M1/M2 thing is the best proof that Linux is always a very good fit for next-gen hardware/software/developers.

New hardware: ARM-based Apple Silicon is now challenging the monopoly of x86. After such a long wait, we finally have a mature alternative platform to choose from, and its performance is pretty good.

New language: Rust is making its way into the Linux kernel, and this new GPU driver is written in Rust!

The work is based on the GPU hardware analysis of Alyssa Rosenzweig, a very talented young lady who started when she was in high school, I think. The author of the driver, Asahi Lina, also seems to be very young.

Just amazing!


Asahi Lina is a VTuber persona for Hector Martin, (@marcan42 on Twitter).

Again, careful. I said this elsewhere to another person saying the same thing: I just made the suggestion on another account that they were the same person here, and I was immediately called a BS conspiracy theorist, my suggestion was redacted by moderators, and my post was flagged to oblivion. For making the suggestion I was also accused of doxxing using public information.

This happened about an hour ago. Again, @dang, is it OK to say now? I'm not crazy here.


Wait, so this is marcan? Big shame; someone should use this info, though.

And do what with it exactly? Nobody really cares except people who already thought marcan was weird in the first place. Most normies already think all vtubers are men with voice changers…

I was referring to him being a great engineer but not so great as a person.

IMO the vtuber thing has nothing to do with him not being a good person. It's just weird.

If you have the desire to do so, spread the word about the actually bad things he is doing (I'm sure we're thinking about the same things). People might actually care about that, some day.


Could you expand on that a bit? I haven't heard anything about that.

Both the people you're replying to are, unsurprisingly, Kiwi Farms apologists.

https://news.ycombinator.com/item?id=32728315 https://news.ycombinator.com/item?id=32727645

marcan is a common target of harassment from KF users who insist that he's helping a late friend of his fake their death. He has regularly spoken against that community of abusers.

FUD like you're seeing here is part of their modus operandi and it's fairly obvious when you know what to expect.


The other person and this account are not involved and are just concerned bystanders. Your account is that of a Dolphin emulator dev, a community that is directly involved in that drama and is still actively engaging in targeted harassment, to the point of trying to prevent people and companies from getting legal representation, so that they cannot be represented in court, which is a fundamental right of citizens in democratic societies governed by law.


I agree, I find it amazing. I don't understand any of it, but it's like deep sea exploration to me. Poking around in the dark depths of a very complicated black box.

From memory, the EU used to have very strict laws against reverse engineering. Is that still the case today? I ask because this whole thing is pretty much based on reverse engineering the GPU.

That's the US. EU is fine.

The US largely has laws against reverse engineering hardware for the purpose of circumventing copyright protection. But that's not happening here. What other laws were you thinking of?

AFAIU, in the US, to be on the safe side you need to apply clean-room reverse engineering no matter the goal. In the EU, as long as it's in the name of interoperability, reverse engineering is fine:

> The unauthorised reproduction, translation, adaptation or transformation of the form of the code in which a copy of a computer program has been made available constitutes an infringement of the exclusive rights of the author. Nevertheless, circumstances may exist when such a reproduction of the code and translation of its form are indispensable to obtain the necessary information to achieve the interoperability of an independently created program with other programs. It has therefore to be considered that, in these limited circumstances only, performance of the acts of reproduction and translation by or on behalf of a person having a right to use a copy of the program is legitimate and compatible with fair practice and must therefore be deemed not to require the authorisation of the rightholder. An objective of this exception is to make it possible to connect all components of a computer system, including those of different manufacturers, so that they can work together. Such an exception to the author's exclusive rights may not be used in a way which prejudices the legitimate interests of the rightholder or which conflicts with a normal exploitation of the program.

Directive 2009/24/EC

This would allow you to disassemble and modify any parts of OSX and its drivers in order to help write a Linux driver. Does the same apply in the US?


The USA has fairly lax laws regarding reverse engineering. It's just that proprietary software EULAs typically forbid it -- and you have to agree to a binding EULA in order to use the software.

There were a number of court cases, many involving game consoles (incl. Sega v. Accolade, Galoob v. Nintendo, Sony v. Connectix), that established that as long as you're not distributing verbatim or otherwise infringing copies of another company's work, such as software or chip designs (and you haven't signed any contractual agreement otherwise), you're in the clear. To establish that you are in the clear, clean-room reverse engineering is the recommended approach: one party does the reverse engineering, yielding a spec; the other implements the software based on the spec.


> one party does the reverse engineering yielding a spec; the other implements the software based on the spec.

Good to know. Asahi Linux should be fine: the reverse engineering and the driver development are done by different people.


It helps that Apple is aware of this work and, while they aren't contributing, they're not actively hindering it either.

They are based in Japan anyway

What makes you think that?

Lina is the vtuber persona of Hector Martin @marcan42, who has been working on the rest of the Asahi Linux project. He's based in Japan.


I get and support not wanting to spread information about someone who doesn't want it spread, and if we keep that in a box then sure: people can flag it, mods can redact it, the persona can ask (as they have now) that people not ask, and it's fine for the person not to acknowledge it. Keep it in that box and I'm in full support.

When it comes to going outside that box, though, by plainly denying the identity instead of de-emphasizing or sidestepping it, I can't shake the uneasiness that "gaslighting for a good cause" isn't right. Especially because this gets amplified by every third party who may just be taking the identity claims at face value (which may or may not be what's happened here?).

It's not really important that everyone know Lina's identity, especially if they don't want people to know it. At the same time, it's not really important to try to convince people that they don't actually know something. Show them understanding about the matter, sure, and if they still try to publicize it, don't let them doxx someone; but I don't think it's good to go further than that.

To that end I've tried to write this in a way that it has no relation to any person/identity in particular - just the topic at hand.


The location on Asahi's GitHub profile: https://github.com/asahilina

Not really. It's more of a US thing, where a license can try to forbid reverse engineering, and you end up with things like clean-room RE based on court precedent.

Meanwhile, for example in Poland (an EU member), it's illegal to forbid reverse engineering: any clause in a contract or license that forbids it is null and void, because the copyright law has a paragraph saying that reverse engineering is a right of everyone. The only caveat is that you can't just recompile the reversed code and claim it's yours (that would be a copyright violation).


Would love to see Solvespace running on that. The Mac version already runs natively on M1, and the OpenGL requirement isn't too high.

Does it need proprietary blobs?

Yes, the Apple laptops need a whole host of proprietary blobs for bringup and firmware.

"Please temper your expectations: even with hardware documentation, an optimized Vulkan driver stack (with enough features to layer OpenGL 4.6 with Zink) requires many years of full time work. At least for now, nobody is working on this driver full time. Reverse-engineering slows the process considerably. We won’t be playing AAA games any time soon."

From https://rosenzweig.io/blog/asahi-gpu-part-6.html


Yeah, it's because of this statement from an extremely trustworthy source that I'm looking for context here. I suspect most of the big problems haven't suddenly and magically been overcome? E.g. given this, how close are we to supporting an AAA game, say one from a few years ago?

The key words are "optimized" and "OpenGL 4.6 with Zink". "Functional" and "OpenGL 2.1" is a different story, and the same trustworthy source said in https://rosenzweig.io/blog/asahi-gpu-part-6.html that:

> thanks to the tremendous shared code in Mesa, a basic OpenGL driver is doable by a single person. I’m optimistic that we’ll have native OpenGL 2.1 in Asahi Linux by the end of the year.

Even a bare-bones OpenGL driver will likely run better than llvmpipe, which is especially important in a laptop context because of the resulting power savings.


Ah, so it's a "90% of the iceberg" situation. Great info, thanks!

What AAA games run not only on Linux, but ARM Linux? This is more for going on YouTube with hardware acceleration than gaming, which is niche upon niche in its current state anyway.

Box64, FEX-Emu, and other x86-64-on-ARMv8 emulation projects cover that gap and explicitly target gaming as a core use case. And of course there's Apple's Rosetta, which we know is good enough for gaming on ARM macOS. Apple has released a Linux version, which should technically be able to run on Asahi, but I'm unsure of the legal situation around it.

Do Rosetta apps utilize the GPU on M1 (on macOS)?

Yes. The calls to the platform graphics APIs and underlying drivers are identical (and literally hit the same code), whether from x86/Rosetta or native/ARM64.

You can always do this: https://wiki.debian.org/QemuUserEmulation to run amd64 binaries on an arm64 machine, or vice-versa. Docker desktop sets this up so you can pull amd64-only Docker containers on your M1 and not notice. I did some very minimal testing and it's not even insanely slow or anything (but obviously for many games, you can't leave this much performance on the table).

I do this on my workstation and can run anything, it's quite nice:

  $ lscpu | grep Architecture
  Architecture:                    x86_64
  $ GOARCH=arm64 go build main.go
  $ file ./main
  ./main: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, ...
  $ ./main
  Hello, world.

Let's set up a jira and hold sprint planning

Totally non-sarcastic question: then why bother?

It's an honest question. Even if someone (or a team) could somehow be paid for this work, by the time the results are usable, the hardware will be more or less functionally obsolete.

And that is on top of the fact that ARM64 on macOS will always be a small slice of the gaming pie, and games and GPU applications on ARM64 Linux are virtually nonexistent.


Gaming is not the only thing that uses a GPU. A driver that's complete/performant enough to run a Linux desktop and web browser is already useful.

To me the whole point of using open source tech is the freedom. I want to have options, hardware or software. When Apple tries hard to remove itself from that list I probably won't fight to keep it there. I just cross it out of my list as if it doesn't exist. So I simply don't use Apple products.

What do I miss?


Machine learning

Note that it's probably a lot less effort to add support for the M2's GPU than to start from scratch and reverse engineer everything.

So eventually the gap between hardware release and fairly complete driver support could close quite a lot.


There's a much lower standard than "AAA gaming" which still delivers massive value to most users in running a desktop environment.

Also, the Apple GPU has evolved from PowerVR roots dating back to the 1990s. It is fairly safe to assume that the next generation of Apple GPU will share enough with the current generation that in 3 years, supporting whatever new hardware exists will be incremental rather than transformative ground-up work.

This was already the case for M1 into M2.


I don't really understand the difference between an AAA game and Google Chrome. I thought modern applications used GPU acceleration heavily. Is there some subset of GPU commands required for the desktop, compared to an AAA game? Is it possible that Google Chrome will crash the OS with some tricky CSS animation (maybe in a future version)?

>I don't really understand the difference between an AAA game and Google Chrome.

Hundreds of GPU features that the latter doesn't use in normal hw-accelerated rendering of webpages... except maybe for WebGL content (and even then, far fewer and older features than AAA games want).


Also, it doesn't need as much performance, so it can be easier.

Right, because WebGL 2.0 is basically 2010 GPU hardware, while the upcoming WebGPU is targeting 2014 hardware.

Games can trigger a lot more GPU paths due to their creative usage of shaders.

> Is it some subset of GPU commands that's required for desktop, compared to AAA game?

Yes. Compute shaders and geometry shaders are big features. Vulkan, DX12 and Metal also brought a jump in performance at the API level by allowing developers to batch commands sent to the GPU.

Desktop environments can and do leverage those newer APIs to get marginally better battery life. Asahi will miss out on those benefits, but the performance will be insanely better than “no GPU at all”.


Yes, but WebGPU is basically the version 1.0 of the common subset of those APIs, meaning 2014 hardware.

Why bother? Because the next generation of hardware will be able to use most of the software written for the current generation, and after a few generations the hardware will be well supported very quickly.

Additionally there's a lot of shared code between the different linux graphics drivers through Mesa, the linux userspace graphics driver framework.


Box86 translates plenty of x86(-64) games to be playable on ARM.

https://youtu.be/Of93GBCEbug Shows some native games, but also Skyrim and Metro Last Light working on ARM.


You can't tell people who are doing the work what to work on..

but considering there is a mountain of Linux development work to be done on more open platforms by people with the expertise, this work pretty much only helps one of the richest companies, and Apple shows very little inclination to help by providing documentation or support (letting an alternative OS boot seems to be the extent of it).


Maybe it's the reverse engineering aspect that makes it interesting. Those of us in that line of work already spend our time at work turning docs into device drivers. It would be like working at Apple without getting paid.

And yet Linus used a MacBook Air as his daily driver for many many years.

The XPS series might have caught up for a while, but until they flip over to ARM, Apple laptops simply blow anything else out of the water. What’s wrong with trying to run Linux on that?


It sets the wrong incentives. Companies that are more developer-friendly, publishing specs and so on, should be rewarded for it.

In principle I would agree, but the world isn't black and white. First of all, in the PC world there are few machines which are completely and well documented. That Linux runs on so much PC hardware is due more to its popularity than to great documentation. NVidia only just recently open-sourced their drivers.

Could Apple improve the documentation a lot? For sure! But on the other hand, the ARM Macs are a very nice platform, so I can understand the desire to use them. There is no competition at the moment, so it would also be wrong not to support Linux there.


I don’t like your implication that ARM is the only way to do this. The Apple chips are fast and low power because they are good designs built on a very modern fabrication process.

It’s perfectly possible to build such chips with other designs and instruction sets, for example x86_64 or risc-v, in the same way it’s pretty common to build cheaper slower ARM processors. Plenty of folks at Intel and AMD are doing that right now.


There are fundamental issues with x86 that make it impossible to match the efficiency of ARM. Variable-length instruction encoding, for instance, which means a surprising amount of power is dedicated to circuitry just to find where the instruction boundaries are for speculative execution. It made sense in the 80s when memory was scarce and execution was straightforward, but now it's a barrier to efficiency baked right into the ISA.

Thank you very much for this interesting comment! It would be great if you could provide a URL with a detailed analysis of this issue.

And sadly this partly applies to RISC-V too. It only achieves competitive density with the (optional) instruction compression, which makes instructions vary in length. Not as big a problem as on x86, but still a fundamental limitation.

>sadly this partly applies to RISC-V too.

Not in any way that has any relevance.

>Not as big of a problem as on x86, but still a fundamental limitation.

Huge understatement. Instructions can be anywhere from 1 to 16 bytes long (x86), versus either 16 or 32 bits long (RISC-V).

As with everything else in RISC-V, the architects weighed the tradeoffs and found that the advantage in code size overwhelms the (negligible by design) added decoding cost for anything but the tiniest of implementations (no on-die cache and no built-in ROM).

As it turns out, it would be difficult to even find a use for such a core, but in any event it is still possible to make one such very specialized chip, and simply not use the C extension.

Such a use would be deeply embedded, and the vendor would be designing the full stack so there would be no concerns of compatibility with existing binary-only software. They would still get ecosystem benefits; they'd be able to use the open source toolchains, as they support even naked RV32E with no extensions.


That sounds false to me. If that were solely the case, then the CPU could pad the instructions coming in and pretend they are all the same length.

>Variable length instruction coding for instance, which means a surprising amount of power is dedicated to circuitry which is just to find where the instruction boundaries are for speculative execution.

This does apply to x86 and m68k, where "variable" means 1-16 bytes, and dealing with that means brute-forcing decode at every possible starting point. Intel and AMD have both thus found 4-wide decode to be a practical limit.

It does not apply to RISC-V, where you get either one 32-bit or two 16-bit instructions. The added complexity of the C extension is negligible, to the point where if a chip has any cache or ROM in it, using C becomes a net benefit in area and power.

Therefore, ARMv8 AArch64 made a critical mistake in adopting a fixed 32-bit opcode size, a mistake we can see in practice in the L1 cache size the Apple M1 needed to compensate for poor code density.

L1 is never free. It is always *very* costly: its size dictates the area the cache takes, the clock speeds the cache itself can achieve (which in turn cap the speed of the CPU), and the power the cache draws.
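To make the contrast concrete: on RISC-V the length of an instruction is determined entirely by its lowest two bits, so a decoder knows every boundary from the first halfword alone (unlike x86, where length depends on prefixes, opcode, ModRM, SIB, and displacement bytes). A minimal sketch of the rule from the RISC-V spec, ignoring the rare 48-bit-and-longer encodings:

```python
def insn_length(low_halfword: int) -> int:
    """Length in bytes of a RISC-V instruction, from its first 16 bits.

    Per the RISC-V spec: if the two lowest bits are anything other than
    0b11, the instruction is a 16-bit compressed (C extension)
    instruction; if they are 0b11 it is (at least) 32 bits. Longer
    encodings (48-bit and up) are omitted here for simplicity.
    """
    return 2 if (low_halfword & 0b11) != 0b11 else 4

print(insn_length(0x1141))  # low bits 0b01 -> compressed, prints 2
print(insn_length(0x0113))  # low bits 0b11 -> full-size,  prints 4
```

This is why the added decode cost is so small: a wide decoder only needs this two-bit check at each halfword to find all instruction starts, instead of speculatively decoding at every byte offset as on x86.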


Maybe. If I remember correctly, the Apple M1 can decode up to 8 instructions at the same time. Is there any RISC-V CPU with the C extension that is able to decode 8 instructions?

>Maybe. If I remember correctly, the Apple M1 can decode up to 8 instructions at the same time. Is there any RISC-V CPU with the C extension that is able to decode 8 instructions?

Sure, there's Ascalon[0], 8-decode 10-issue, by Jim Keller's team at Tenstorrent. It isn't in the market yet, but is bound to be among the first RISC-V chips targeting very high performance.

Note that, at that size (8-decode implies lots of execution units, a relatively large design), the negligible overhead of C extension is invisible.

C extension decode overhead would only matter in the comically impractical scenario of a core that has neither L1 nor any ROM on the chip. Otherwise, it is a net win.

And such a specialized chip would simply not implement C.

0. https://youtu.be/yHrdEcsr9V0?t=346


GPUs tend to evolve over time rather than change radically. So even if the hardware is obsolete before the work is finished, the driver for the next generation will start from a much better point than the first one did.

Also, it will be usable for at least some use cases, even if not AAA games, before the hardware is obsolete.


Because AAA games aren’t everything.

> a basic OpenGL driver is doable by a single person. I’m optimistic that we’ll have native OpenGL 2.1 in Asahi Linux by the end of the year.

That should be enough to be very useful to a lot of users.


Getting this off the ground has a chance to snowball. Imagine it gets to rough-but-usable and it's in the kernel. Then more and more folks can iterate on it, and its development rate will increase.

The answer could be: because they like it.

People do great stuff with computers and programming and I think this is a good example. Passion is what it's all about.


This is not just about doing it for passion or hacking fun despite it being useless. There's a very pragmatic reason that makes it very useful!

The "why bother" misses the crucial point that the most important use of the GPU driver for Asahi would be a HW-accelerated desktop (and driving external monitors, etc.). A GPU driver for the M1 is not useless just because AAA games aren't supported...


I totally concur, AAA games aren’t really the main driver for desktop Linux anyway.

Why bother building a contemporary computer kit on MOS 6502 chips[1]?

Because people want to. Who is anyone to be the arbiter of those desires, no matter the practical applications (of which this has a ton, as the other repliers allude to)?

1 - https://eater.net/6502


Your desktop that you're using to post this is likely using GPU acceleration to composite your windows. Not having GPU acceleration for graphics is a killer with today's HiDPI displays. The CPU generally can't keep up, and even when it technically can it is extremely power inefficient.

By the way, I am reading and replying not using desktop, but using Android phone :-)

But you are right -- modern desktops use the GPU heavily. For better or worse (10-15 years ago everything worked without GPU help and compositors; it just wasn't as fancy looking).

P.S. I like how HN works on mobile. And how lightweight it is. Made by real hacker wizards, of long lost arcane knowledge of the webcraft!


>Totally non-sarcastic question: then why bother?

Because most of us looking into Asahi don't care for playing AAA games with it. We want Linux on our Mac laptop, with hw acceleration for the desktop and apps.


What AAA games would even run on an M1?

Huh, that's a good question, since you'd need a AAA game that supports ARM Linux.

With the upcoming updates to Metal, in principle quite a few.

Basically all AAA games will run relatively smoothly (30-60 fps) on low-to-midrange dedicated GPUs, which is effectively what the M1 performs like on paper (I think it's in the ballpark of an NVidia 1050/1050 Ti/1650).

You just have to lower the settings and resolution to get that smooth performance - think base-model last-gen console / Nintendo Switch levels of visual fidelity rather than up-to-date gaming PC / current-gen console levels.

It's kinda fun squeezing performance out of woefully underpowered hardware - LowSpecGamer on YouTube is a channel dedicated to this kind of stuff.


>Basically all AAA games will run relatively smoothly

I think the parent's point is "what AAA games would run on the M1, even if they finish GPU support, given that we're talking about ARM Linux?"


Actually Apple itself already released Rosetta for Linux VMs. I hope it might also be usable with Asahi to run Steam.

Box64 actually comes quite close to Rosetta in terms of translation speed, and that's apparently before optimizations which have yet to be implemented[0]. It would make sense to just use that instead.

[0] https://box86.org/2022/03/box86-box64-vs-qemu-vs-fex-vs-rose...


>We want Linux on our Mac laptop, with hw acceleration for the desktop and apps.

Again, why bother?

Why continue to choose user-hostile hardware made by a manufacturer who seeks to frustrate your precise desire to do this? Wouldn't it be much more logical to purchase from a manufacturer who doesn't care what you run on the hardware, or better yet, actively supports getting the hardware to perform as well as possible with Linux? What is the point of chasing hardware which doesn't fully work with your chosen operating system and preferred applications? Why pay full price for hardware that will never be fully supported or work to its full potential?


I'm a bit disappointed that this is getting downvoted without a response.

I had a MacBook Pro 8,1 that I ordered specifically for the Intel video, because at the time Nvidia support was lacking, and I got that model with the express purpose that if OS X changed to where I no longer liked it, or if the hardware was no longer supported by Apple, I could just install Linux and keep going. I had nothing but nightmares trying to get Linux to work on that thing. Things would consistently be fine, and then suddenly GRUB would randomly eat itself. The idea of buying hardware and installing an unsupported operating system seems like insanity to me after that experience.

I'm genuinely trying to understand why people would do this, because it seems to make no sense. I doubt any of the people hitting the downvote button will see this response, but I'd really like to hear why anyone would do this. This is supposed to be a commenting and discussion board; I'd like some discussion. Thanks.


My two cents (I'm not one of those who downvoted).

Reasons to do it: 1. Enthusiasts are having fun. 2. They increase their skill in Linux driver development. 3. It can generate patches usable by ARM Linux in general, not just Apple hardware.

If she manages to succeed, perhaps with the help of others, then we have a Linux desktop there. Linux gaming and 3D acceleration weren't always a thing, and people still used Linux.

But I agree that the Apple hardware would still be hostile, and I'm not sure it's worth it. A VM can be used with the same success. People who buy new Macs should be ready to be vendor locked.

But to extend this topic.

Many parts of the x86 Linux drivers are community developed, either by individuals or by companies like Red Hat. Yes, major hardware vendors like Intel and AMD make sure that chipsets and CPUs are supported, but many motherboard and laptop vendors don't give a damn. As far as I know, the Realtek gigabit Ethernet driver was community driven.

And most printer drivers are not from vendors.

Now I am curious to find statistics: how many drivers are made by vendors?

But generally, there is still the problem that a person decides to try Linux and their hardware is unsupported, even on x86. Laptops especially.

I tend to check Linux compatibility first, even though I don't use Linux regularly (actually, I'm going to switch soon). I used Linux for a while on my Dell laptop, and the experience was great. But my daily driver is a Windows desktop. Going to try it out. I hope my Wacom tablet will work in Linux; from my research, Wacom is the best shot for Linux.


But you are partially correct. If people could concentrate on other, more mainstream and easier-to-fix hardware, it would be more beneficial for the community in general.

But people are doing it for free and having fun, so that's their choice.


Thank you for the response. I was busy so I didn't see it at first.

You helped me see what was happening too, so thank you for that too.

I'm not for a second saying that if people want to hack away on the hardware to have fun, to scratch their own itch, etc., they shouldn't be allowed to do so. They want to do it, they should have as much fun as they can while doing so. I wish them well. No one is served by petty administrator types demanding the whole world act as an offshoot of their corporate needs and abandon what is fun in order to work on whatever any one individual needs. Linux grows like it does because so many people are having fun with it, and I'd never want to stop that.

The people I don't understand are the ones who buy Apple hardware with the intention of running Linux. They're taking a gamble, buying something which was never intended to work under Linux from a vendor who is actively hostile to them doing so. At these prices, why gamble? Why fight? Do you want a toy for geek cred, or are you buying hardware to use?

It's different if they're planning to develop on the hardware alongside the hackers getting Linux to run or are trying to target their code on the ARM type that Apple is using with hopes of future portability. That makes sense and is their business, whether for work or for pleasure. I just don't understand why the average Linux user would want to do this.

Sure Apple's ARM processor is pretty much leading the pack when it comes to ARM hardware set up for daily use. The hardware is very cool and I dearly hope that someone will release something intended for Linux that can compete and give them a run for their money. I just don't think it's a gamble worth pursuing if your goal is to simply use the hardware as opposed to developing or hacking on it.

Thanks again for your response, it helped me understand how I was coming off and I apologize to everyone for the misunderstanding my wording caused.


By the way, I don't know anything about ARM laptops. I know that there are some models, but I have no idea how that's going.

I guess having another option won't hurt, and it would increase application adoption.

To be honest, supporting hardware as uniform as Apple's is way easier than supporting a bunch of different vendors. So this is doable, if it doesn't stay a one-man project.


> The idea of buying hardware and installing an unsupported operating system seems like insanity to me after that experience.

But if you're a Linux user, that's the normal state of things. There are options like System76, but they often carry a price premium (which may well be a worthwhile investment), and the Linux ecosystem would be a lot weaker if that was the only non-server hardware you could buy.


> why bother?

I wonder the same thing, but from an ideological perspective.

Why should the free software community promote Apple hardware by making it more accessible to OSS enthusiasts, when Apple only cares about OSS when it directly benefits them? Apple makes great hardware, but they're actively hostile to everything free software stands for. If Apple cared about this user base, they would work on this themselves.

That said, from a technical standpoint, this is nothing short of impressive, so kudos to the team. I can't even imagine the dedication and patience required to work on this project.


Right now, Apple makes arguably the best laptop hardware on the market thanks to its chipset.

Why should you be forced to choose between non-free software and subpar hardware?

Why not liberate the hardware?


Because it will always be an uphill battle, where the biggest beneficiary will be Apple. I'm not saying the OSS community should boycott Apple, but maybe let's not help them sell more machines.

I think there are plenty of "good enough" laptops that run Linux just fine, so I'd rather use those than have the relatively "best" hardware, but have to fight it to run the software I want, deal with any incompatibilities, and fear that it might break at any point. Standard Linux on supported hardware has plenty of issues; I can't imagine what the experience is like on completely opaque hardware, as much progress as the Asahi team has made. It's like a reverse Hackintosh project, which has always been a painful experience.


I think that is all personal preference. I'd rather hack around incompatibilities than settle for "good enough" while macOS gets best of class hardware.

I agree. If you know that you want to use Linux on it, you should support an honest vendor with your money, not one who is actively working against you.

Just a hobby, won't be big and professional like GNU

Currently 60% of the CPU power is being used to do software rendering! If we can offload that to the GPU then it both frees up a lot of CPU capacity and ought to greatly improve battery life.

Same reason people bothered with Linux back in the 90s. It's about the journey not the destination. It's fun, enlightening, and there's a chance it becomes something big and impactful in the future.

Why doesn't Apple release the documentation?

Isn't there anyone inside Apple who wants to have Linux?

Isn't there anyone inside Apple who wants to help?


Trying to take a non-cynical view: at a business level, Apple's hardware business was growing while other makers were having a hard time, and I think that with their current advances in processor tech and sheer production quality, they have captured most of the market they were expecting to capture.

At this point, trying to expand the user base through sheer hardware improvement or by including fringe groups (Windows dual-boot users, Linux users) probably has diminishing returns.

In contrast, service revenue, app revenue and accessories like air pods, the Apple Watch have a much better ROI.

What I'm getting at: they have little incentive to work hard to expand the user base, and a better ROI on expanding revenue from users of their software platform. So I'm not holding my breath for Apple actively helping with Windows or Linux support anytime soon.


So what you are saying is that engineers at Apple have no say in this and are basically servants?

They do, demonstrably, have some say. The M1 Macs have had deliberate support for third-party OSes since more or less day one (just not exposed or advertised), and Apple made some early changes to enable Asahi. Some recent comments by marcan on the topic:

https://twitter.com/marcan42/status/1554395184473190400


I don't see engineers having a say here. Instead, I see a lot of praying, begging and hoping in that thread.

What do you want, engineers explicitly coming out and saying they designed the system to make this possible? Oh wait, they did.

They are one of the worst lock-in minded companies. Making their hardware as closed as possible fits that.

The documentation they have may not be complete or in a form they are willing to release (e.g. because it mentions future products).

They also may be reluctant to release it because of fear of litigation. They may do something that’s patented, similar to a patent, or deemed patented by someone else, and releasing documentation makes it easier for others to find such cases.

Also, in the end, there’s dollars. Even the simplest process to release documentation costs money (and theirs wouldn’t be the simplest, for the reasons mentioned above). They may think there’s not enough money or reputation gain in it for them to do this (and for reputation gain, they would have to keep doing it).


Apple barely releases documentation for their SDKs they expect developers to use.


what’s the app he used for the overlay avatar?

It's mentioned in the description for the video: "Animated using Inochi2D by Luna the Foxgirl (https://twitter.com/LunaFoxgirlVT https://twitter.com/Inochi2D)"

And going through the twitter link you can find the homepage for that software is at https://inochi2d.com/ (the other twitter link gets you the homepage for the developer of that software at https://github.com/LunaTheFoxgirl).


i don’t understand things like that avatar. maybe i’m too old.

[ unpopular opinion incoming!! ]

yes, you are special and unique, sure. we all are. you like anime, and stuff from Japan, awesome. lots of people do. why is one third to one quarter of the video real estate consumed by the avatar? why is the voice so high pitched and hard to understand?

i mean, fine. i am not about to tell someone how to present themselves, especially unprompted. i will say that i have zero interest in this if this is what the community around it is like. it feels like i’m being talked to like i am an infant, and there is fear that my attention will wane if i am not overstimulated visually and aurally. it is insulting, to me.

i wish this effort all the best. treat me like an adult, please.


(Am I also old, I ask myself?) I don't like the distorted voice but the thought of making an avatar and hiding behind it, instead of showing one's face in a video, sounds good. I wouldn't mind using the image of an anime girl either.

just don't show yourself at all? that seems very preferable to some avatar, to me. I don't need people to look at me, and I definitely don't want to pretend I'm someone I'm not.

but maybe you're pretending you're someone you're not every day, and your avatar could be the you that you want to be? if so, absolutely go forth and avatar up! find ways to be yourself, always. lots of folks do that stuff, and that's awesome, but that kind of thing doesn't add anything that I look for in the things I spend my limited time on, is all.


> why is one third to one quarter of the video real estate consumed by the avatar?

From what I've seen, it's not unusual for streamers which do not use an avatar to consume a fraction of the video real estate with a camera showing their face; the avatar merely replaces that.


yeah, I know, but the ratio of avatar to screen share area is abnormally large in this case. at least to me.

Amazing, now the only missing thing is a proper trackpad driver.

How is a gpu driver ported? Isn't it proprietary?

From a past article, reversing a printer protocol was derided for its complexity and there is a wire to snoop.


The main asahi dev is a reverse engineering legend who previously worked on both Nintendo Wii and PS3/4. I think the apple hardware is probably easier to reverse than those platforms.

The driver is not ported, it's reverse engineered.

> reversing a printer protocol was derided for its complexity and there is a wire to snoop.

Anything can be reverse engineered with enough time, effort, and domain knowledge. A printer may not be worth it.


There's also a "wire" to snoop here. Marcan wrote m1n1, which allows you to run macOS virtualized and observe the hardware communication.

Great work, the weird VTuber thing doesn't affect the value of this work.

Speak for yourself, I could barely understand the content due to the high pitch and speed of the vocoder. I hate video tutorials in general though, I really wish this was a blog post.

why would they speak for anyone but themselves? and why would you think they were doing so?

It is an expression, another way of saying "You might feel that way, but I don't"

See https://idioms.thefreedictionary.com/speak+for+yourself


meh expressions like that don't make sense to me. all these people everywhere saying things that don't make sense just because someone else said it before them. it's all just peer pressure from books and dead people.

meanwhile I'm over here trying to understand and participate in conversations and no one cares about people who take words at face value. I hate this goddamned planet


Speak for yourself, I enjoy using idiomatic phrases because I like conveying nuance and expressing myself using a common phrases. :)

WHY WOULD I SPEAK FOR YOU AND WHY WOULD YOU ASSUME THAT I WAS?

It took me a few years to figure out when strangers said “How you doing?” they didn’t actually want to know. To the best I can recall that wasn’t a common greeting when I was young.

I still don't understand why people do this. if you don't want to know how I'm doing, DON'T ASK HOW I AM DOING. ask something else.

smalltalk is just lies upon lies so you don't notice larger lies later in the conversation. smalltalk is manipulation and is dishonest and it's awful.

I want off this planet.


I think everybody would agree that a blog post or even a nicely edited video is better than this.

But you have to remember that this was an impromptu live stream. It took much less effort on her part to create, literally just 30 mins (half of which was her trying to compile OBS).

And I think that's fine. I think the end result is the goal, blog posts are nice but not the focus here.


I agree, I didn't even watch the video.

What I meant, is that the result is still great (a GPU driver for M1) regardless of how bad the video is and how weird the entire thing is in general.


I'm just getting old, I do not understand VTubers or the appeal.

Complete speculation on my part, but I think the biggest appeal is supply side, that people can be Internet personalities with an avatar without sharing their real identity. That helps a lot of people who would like to start sharing content but are otherwise too shy, or for various reasons think they can’t/shouldn’t use their real identity.

On the consumption side, I find the VTuber method to be more compelling than pure voiceovers, even if it’s silly. An avatar helps create a sense of engagement. It’s also interesting to see what sort of characters people come up with.


Maybe I'm old too, but I don't have a problem with avatars, even virtual ones; it's the extremely high-pitched and cartoony voices VTubers tend to have. And why does it have to be anime or furry aesthetic every time?

Give me Optimus Prime doing kernel Rust coding. Even better, gimme the sarcastic high pitch of Skeletor. I'd also be fine with, I dunno, Lara Croft or other virtual female character.

Everybody has their own preferences, fine, but outside of that overrepresented niche of sexually ambiguous anime and furry there's just... nothing.


People probably don't do it because of copyright. It's hard to know what constitutes enough change to the original character to be considered original work.

Inventing a new cartoon or 3D character can't be harder than designing an anime avatar.

It is actually harder. The anime aesthetic was literally designed to facilitate easier 2D animation. Notwithstanding the aesthetic itself being very popular (and thus more familiar to designers), many "non-anime" v-tubers like Layna Lazar or Juniper Actias will tend to incorporate anime elements because it's what they know and it's easier to implement, even if they're not trying to follow the "virtual idol" model set by Hololive and the like.

Also, one has to consider that this is an entertainment market almost exclusively bound to Twitch and Youtube, and it's counterproductive having an avatar that isn't going to appeal visually to the masses, generate clips and be supported by the algorithm.


> Give me Optimus Prime doing kernel Rust coding.

Oh my god I need this now


Personally I can't stop thinking about Skeletor doing code review lol

There's a lot of interesting and creative work going on in the v-tuber space, and a lot of tech involved adjacent to VR and AR. It's unfortunate that Hacker News isn't more interested in it, because people here eschew anything popular with the masses.

You're getting old and you don't know about the Muppets?

body dysmorphia

Have you seen the kind of disgusting comments the average female gets on YouTube? I think this is a good way to sidestep that.

I'm young and the high pitches hurt my ears, but impressive work regardless

I'm in awe of anyone who can reverse engineer and build a driver for something like this that isn't documented at all (AFAIK). Impressive stuff. And to use that knowledge to contribute to Linux is a great thing. Kudos to everyone involved.

Can anyone explain why it's hard to get GPU-accelerated ML libraries such as Torch running on a Mac (Intel or M1/M2)?

In each library, operations need to be implemented for each backend separately (CPU, GPU, TPU, etc.). All of these libraries support CUDA as their default GPU implementation because it's by far the largest in terms of market share. But since Apple GPUs do not implement CUDA or a translation layer (they use Metal, Apple's graphics and compute acceleration library), all those mathematical operations need to be rewritten targeting Metal before Torch can even communicate usefully with an M1/M2 GPU. That doesn't even touch on the fact that different backends need to have work scheduled on them slightly differently. Some expect a graph of operations to be submitted, some are just submitted as a series of asynchronous operations, etc.

Also, just wanted to point out that Torch does now support Apple GPUs.
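The per-backend kernel problem described above can be sketched in plain Python. This is a hypothetical illustration, not PyTorch's actual dispatch machinery: the registry, the backend names, and the naive `matmul` kernel are all invented for the example. The point is that each (operation, backend) pair needs its own implementation, and a backend like Metal ("mps") simply has no kernels until someone writes them.

```python
# Hypothetical sketch of per-backend operator dispatch, loosely modelled on
# how ML frameworks route each op to a backend-specific kernel.
KERNELS = {}

def register(op, backend):
    """Register a kernel implementation for an (op, backend) pair."""
    def wrap(fn):
        KERNELS[(op, backend)] = fn
        return fn
    return wrap

@register("matmul", "cpu")
def matmul_cpu(a, b):
    # Naive pure-Python matmul standing in for an optimized CPU kernel.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def dispatch(op, backend, *args):
    try:
        return KERNELS[(op, backend)](*args)
    except KeyError:
        # This is the gap the comment describes: no "mps" (Metal) kernel
        # exists until someone writes one targeting that backend.
        raise NotImplementedError(
            f"{op!r} has no kernel for backend {backend!r}")
```

With only a CPU kernel registered, `dispatch("matmul", "cpu", ...)` works, while `dispatch("matmul", "mps", ...)` raises `NotImplementedError`, mirroring the situation before Metal support was written.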


The nightly of PyTorch does run on the M1 GPU.

wow this is so impressive! absolutely stunning work!

This work on Asahi Linux is some of the more impressive work I’ve been following. To basically reverse engineer undocumented hardware features and get Linux running on them in such a low amount of time is nothing short of inspiring.

Wow, this is impressive. 2022 may finally be the year of Linux on the desktop! ;)

But running on undocumented hardware, so without any guarantees.

You say that like Linux, or really any other consumer software, comes with guarantees on any other hardware.

Of course, guarantees in the non-legal sense of the word.

If you work towards correctness by design this gives a certain assurance that you don't get if you do a lot of guesswork and hope for the best.


I'd be more sympathetic to this argument, by a lot, if I wasn't constantly encountering bugs in systems where there wasn't (or shouldn't have been) any guesswork needed.

To (substantially) rephrase the point, you're saying "there's this additional source of bugs" without an argument that it's a significant enough source that it will stand out against the background of all the other sources of bugs.

There's also a strange flip side where abnormally competent people are doing the work, so I might even believe the bug-rate is likely lower than average.


This! In fact, my experience generally is that things supported by "hobbyists" work better than official stuff.

> I'd be more sympathetic to this argument, by a lot, if I wasn't constantly encountering bugs in systems where there wasn't (or shouldn't have been) any guesswork needed.

In my experience, you can take any consumer PC (laptop or desktop), install Linux on it, and find a firmware or hardware bug or unsupported hardware feature in the time it takes to read through the kernel log. Often the broken functionality becomes obvious just from the process of trying to boot, log in, and type dmesg. Unless the broken feature in question stems from a component that is wholly unsupported, it is very likely that the flaw is a result of the hardware or firmware's behavior differing from what the documentation or relevant industry standard specifies.

So the status quo is that everybody is always deciding to live without some features that their computer "should" be providing based on its hardware capabilities. Of course, it's perfectly valid for some users to not care about a feature like having a working fingerprint sensor or sensible fan control or automatic brightness, and even more understandable for features that are less readily apparent to the end user (eg. idle power management). But when those gaps between theory/design and reality do get closed, it's usually a result of reverse engineering leading to an acceptable workaround, not the OEM coming back to finish the job.


on the Laptop!

I like when they write kernel drivers, just keep them away from my daughter.

This is so technically cool and an amazing achievement but wow what an odd video.

Although I'm thoroughly impressed and applaud people for doing this in their spare time, I sort of wish they wouldn't help Apple in their developer-hostile actions.

If Apple don't want to release documentation on their chips, then leave them to rot in their walled garden.


I mean, I guess there are various degrees of hostility. When Apple ported their iOS bootloader to the new Macs, they purposefully added the ability to boot third-party operating systems.

They just didn't tell anyone how to actually build a third-party operating system.


Exactly the middle ground, but it's a shame so much has to be reverse-engineered due to lack of documentation. It's quite a contrast from the Apple II, which came complete with schematics and a ROM listing. Obviously, today's computers are nowhere near as simple, but I hate seeing more and more about our systems becoming jealously guarded secrets.

Nonetheless, a half-step like this is vastly better than the locked bootloaders, jailbreaking, and outright hostility of the smartphone/tablet/console world.


What's the relevance of the anime theme in the linked video? It also doesn't show any of the points discussed in the tweet.

I don't get all the griping about "Wasted development effort" and "Apple doesn't have documentation".

Linux was about making it work on different platforms from the very beginning. There was never any vendor support.


People take so much for granted. It was decades until the graphics card manufacturers ever saw Linux support as a serious issue. Linux made itself a serious platform through this kind of "wasted effort", and if you go even earlier in the history of Linux, it itself was maintained by enthusiasts whom many considered "wasted effort" because Linux was an amateur operating system that was never going to have vendor support or even run production-grade software.

Cynics are gonna cynic. But it's people like Asahi that get shit done and move the world forward.


This is incredibly amazing, code wizardry.

Everybody knows that marcan = AsahiLina. I guess he just does V-Tubing for fun? I don't know and I don't understand; I find the voice and the background music a bit annoying, but it's fine... his/their work is great!!

Marcan directly said at one point that AsahiLina wasn't him. But, I can't tell if it's really not him or if it's "not him" as in "it's not me, it's my character" and he wants it to be ambiguous.
