Tour of new custom M1 macOS runners racks with Christina Warren [video] (www.youtube.com)
99 points by keepamovin | 2024-01-20 04:51:38 | 185 comments




Seems completely nuts that you have to "shuck" entire assembled computers for this. Reminds me of the story about the Soviet factory ordering tractors just to remove some component like a single bearing for manufacturing something else.

I agree it's nuts and wasteful. It seems like Microsoft & Apple could come up with an arrangement to reduce their waste.

Even AWS tucks mac minis into a rack https://aws.amazon.com/blogs/aws/new-amazon-ec2-m2-pro-mac-i...

Perhaps a policy holdover from the late 90s https://www.wired.com/1997/09/apple-kills-clone-market/


If you take the innards out of any model Mac and 3D print your own mounts, you can get some crazy density. There's basically nothing in them.

Apple did once sell a 1U rack-mount server [1]. I assume the reason they no longer do so is because the market has spoken.

[1] https://en.wikipedia.org/wiki/Xserve


I somehow expected more density for a custom arrangement like this. One eviscerated mini per sled, ten sleds per rackmount chassis, six chassis per rack: sixty minis in 42-48U.

Sonnet's RackMac mini[1] puts two unmolested minis in a short 1U enclosure: sixty minis in 30U. A web search turns up the Apple Mac Mini Relay Rack Shelf[2] from RackSolutions, which has four minis in a full 1U enclosure.
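Rough math, as a sanity check (a quick Python sketch; the U counts are just the figures quoted above):

    # minis per U for each mounting approach mentioned above
    options = {
        "GitHub custom sleds (60 minis in ~48U)": 60 / 48,
        "Sonnet RackMac mini (2 minis per 1U)": 2 / 1,
        "RackSolutions shelf (4 minis per 1U)": 4 / 1,
    }
    for name, density in options.items():
        print(f"{name}: {density:.2f} minis per U")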

I'm willing to believe that there are cooling and airflow advantages to the sleds, although those two small fans at the back of the sled don't look too exciting. The biggest advantage seems to be cable management and the ease of sliding out a single mini at a time.

One comment on the video points to another video[3] about MacStadium, which mounts minis sideways on what look like library shelves.

[1] https://www.sonnettech.com/product/rackmacmini.html

[2] https://www.racksolutions.com/mac-mini-shelf.html

[3] https://youtu.be/0b46E4mp_V8


> I somehow expected more density for a custom arrangement like this. One eviscerated mini per sled, ten sleds per rackmount chassis, six chassis per rack: sixty minis in 42-48U.

If Apple isn't willing to manufacture an off-the-shelf offering, what did you expect? That's... pretty dense...

As far as off-the-shelf desktop things turned in-house rack mount sled chassis builds go, 60 Minis in 48U (with _all_ the supporting bits – multiple network switches etc) is pretty damn good?

> I'm willing to believe that there are cooling and airflow advantages to the sleds

Nah, the benefits are all management, maintenance, and isolation. Not to mention the sled chassis providing A/B power-feed support, IPMI-esque management from the host chassis for each sled, etc. The benefits are massive and have nothing to do with cooling/airflow.

> although those two small fans at the back of the sled don't look too exciting.

I can assure you those 40mm fans _scream_.

> The biggest advantage seems to be ~~cable management~~ and the ease of sliding out a single mini at a time.

Bingo. Single sled failure and the ability to swap to a spare/repair the broken one outside of production by pressing your thumb on a trigger and pulling towards you. Huge win vs the off-the-shelf 'rack your factory Mini in this metal cage on rack rails' scenario.

Not to mention you now have the ability to assign jobs/builds intelligently at a sled/host chassis level. Isolation & proper distribution of jobs at that granularity (and cabinet, power phase feed consumption, A/B feeds, cabinet row(s), etc) is very valuable. This shit is really, really dope.


[flagged]

Smart girl giggles is the best giggles tho.

I wonder how bulletproof Apple's macOS license is. Perhaps one could find a country where the "you can't run it on non-Apple hardware" clause is not valid, and get some good lawyers. Then just run a standard data center with normal multi-socket virtualization servers, and for each one buy a dead Mac to have the rights to the software. Perhaps one could hot-glue a powered-off Apple motherboard inside the server and claim that it is Apple hardware now.

Maybe it's risky, but you could easily compete on the per hour price with these shops, that have to buy actual macs, disassemble them and run all this custom infrastructure to support this.


The days of x86 macOS are numbered. I don't think there are good ways to run ARM macOS on non-Apple ARM CPUs, and I'm not even sure there are ARM servers comparable to the M1 in single-core performance (maybe the latest AWS Graviton?).

With ARM Macs, Apple has killed the Hackintoshes, which I never found appealing anyway.

It was like those folks that put Ferrari logos on red Fiat sport models.

Either get the real deal or something else.


Yeah, I never understood the appeal. macOS but you have to mess around with hardware and drivers. No thanks. The appeal of macOS is that it just works with Mac hardware.

Messing around with hardware and the software that runs on it is how you really learn how computers and software work. The original Mac SE and Mac IIs came with a hardware debugger you could enter anytime with a literal button on the front.

If I wanted a buttoned-up single-use OS, there's iOS. But almost no one wants that as their primary OS, because they want a customizable, extensible OS.


I did this with Linux and PCs 25-30 years ago when I was in high school and college. I don’t have time or interest in that anymore. Besides, the small amount of embedded work I do scratches my hardware itch better than any PC could, with their layer upon layer of crap (management engines, etc.).

The appeal was simple: at the time of peak Hackintosh (Intel CPUs and Nvidia GPUs), you could get much better performance at both the CPU and GPU level for a fraction of the price while still having the macOS experience (which brings together the best of a good GUI and Unix terminal functionality). The overhead of having to mess around with updates was well worth it at that time.

Long time Mac user here. The Hackintosh is what allowed me to continue using macOS after Apple ended the PowerPC era. Because they also dramatically raised prices on the Pro workstations, it was suddenly out of my budget even though PC prices were falling. I used a Hackintosh for years at half the price of the equivalent Nehalem-based Mac Pros. You may not like them, but they serve a purpose and they have a fan base.

"Because they also dramatically raised prices on the Pro workstations."

The Power Mac G5 Quad cost $3300. The base model Mac Pro that was released less than a year later cost $2200, and it greatly outperformed the G5.

Base model Power Mac in 2005: $2,000
Base model Mac Pro in 2012: $2,500

Adjusting for inflation from 2005 to 2012: $2,000 becomes $2,359.67

After adjusting for inflation, Apple raised the price by $140 between 2005 and 2012, but they also 6x'd the base RAM, and WiFi became standard. Not to mention the significant performance and efficiency improvements.

I don't know how much the WiFi upgrade cost on a G5, but if it cost more than $140 Apple actually reduced the price of the Pro workstations during the switch to intel.
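To double-check the arithmetic above (a quick Python sketch; the CPI factor is simply the one implied by the $2,359.67 figure, nothing beyond what's already quoted):

    base_2005 = 2000              # base Power Mac price, 2005
    base_2012 = 2500              # base Mac Pro price, 2012
    cpi = 2359.67 / 2000          # cumulative CPI factor implied above, 2005 -> 2012

    adjusted = base_2005 * cpi
    print(f"2005 price in 2012 dollars: ${adjusted:,.2f}")        # $2,359.67
    print(f"real price increase: ${base_2012 - adjusted:,.2f}")   # ~$140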


By good fortune the PC I built from eBay parts was 100% compatible with a hackintosh macOS 10.6, even down to wireless and bluetooth.

This allowed me to experience using macOS as a daily driver (at home) while saving up for an eBay MacBook (a 13" MacBook Pro, mid-2010).

The Hackintosh community is a vital step of "try before you buy" for many like myself.


On the other side of this, the IBM x3550 M2 and M3 in any configuration ran OS X Snow Leopard right out of the box without any customization besides the bootloader, a happy accident I discovered.

> Perhaps one could find a country where the "you can't run it on non-Apple hardware" clause is not valid

Some of us recall the era of the officially authorised Apple clones.

In my case, I was a level one tech at the time, and so I ended up fixing far too many of the damn things.

As is the way with IT, the non-Apple manufacturers treated it as a race to the bottom. They put out cheap-ass products that were built in an equivalent cheap-ass way. Sure, you could buy a clone 30% (or whatever) cheaper than the Apple equivalent, but you got what you paid for.

I for one celebrated the day Apple killed off the Apple clone license regime.

The whole point of the Apple platform is the tight integration between hardware and software and, statistically speaking, the generally high quality and reliability of Apple hardware. As above, I have witnessed first hand what happens when you legally decouple the two ... it's not pretty.

To pre-empt the people who will point me at some blog-rant where someone's Mac "broke", well sure, when you build millions of machines, there will inevitably be some that break ... but I doubt anyone can seriously argue against Apple reliability overall.


After the keyboard fiasco I'm not convinced Apple's first-party control is really all that great. Non-removable batteries, RAM, and storage strike me as repair-hostile too. It's a luxury brand now. The hacker spirit is long gone.

Apple seems to be held up to much higher standards, perhaps due to volume or a simple product lineup.

But most hardware makers have fuck-ups that are equal or worse, yet they seem much less talked about.

Ask your IT person about recent XPS models desoldering themselves or cooking themselves in people's backpacks when they should be asleep, for a very recent example.


Other brands get plenty of criticism. And often without folks rushing in to defend them using whataboutism.

Not defending them at all, I don't really care, but it seems like Apple problems are all anyone talks about for an entire news cycle and continue to be talked about for half a decade, yet people very quickly forget about issues with Dell, HP, Lenovo, and Samsung, if they make news at all.

Exceptions only for really serious issues like Samsung's phones spontaneously combusting and Lenovo's Superfish.


Apple only produces a few device models at a time. Where Lenovo and HP have a laptop configuration for every use case or every price point under the sun, Apple has three tiers at most.

That makes developing for Apple products relatively painless, but it also means that if they mess up, a significant portion of their customers are affected. Combine that with the sales numbers they're pulling off, and the end result is that Apple messing up is a bigger deal in practice.

There's plenty of news about other manufacturers, too. It doesn't seem to get upvoted to the front page of Reddit and HN as often, but tech news sites report about all kinds of failures. I don't know why people keep upvoting Apple, whether it's positive news or negative, but for some reason Apple just attracts a lot of attention. If you look outside vote based news, there are plenty of "Lenovo publishes UEFI firmware patches" news articles too, they just don't get as many comments.


For GitHub and most other companies running racks of macOS, the use case is testing/building macOS/iOS software. Even if you could build the stack you describe, I wouldn’t want my tests/builds running on a hackintosh build with unsupported processor and virtualization layers. I doubt there would be a huge market.

If it were 1/10th the price it could be worth moving most of the build process there. If you need the extra validation you could do some final testing on official hardware.

Maybe but it shouldn’t be that big of a difference. M series processors are very energy efficient. Minis aren’t that expensive.

Note: 10x is the multiplier for GitHub Actions. So maybe I'm wrong, but I'm guessing it's more capacity than cost, and that gap should narrow.


That has already been done, but it's not really the point of this; the point is to put M1 chips (or M2, M3 etc) in a blade-like enclosure for non-standard workloads (like GitHub runners).

The fact that it also has a legit serial number and macOS software license is a nice bonus, but if it was realistic to run ARM builds for iOS and macOS on other hardware, it would be done. AWS has the same thing, you can get bare metal Macs as much as you want.

As for running macOS itself; for server services that really doesn't make any sense, you'd be running Linux (like Apple does themselves for their services). That just leaves exactly why GitHub is doing what they are doing: you'll want a proven toolchain that can keep up with ecosystem changes, so your customers can run Xcode workloads and the likes.

There were some services that used (slightly) older Xcode builds on random x86 hardware, but neither the price nor performance was what the market wanted (I think) so they all disappeared without much of a trace. The only one left that does stuff like this is Corellium but that's not as much a build or device farm as it is a debug/reverse-engineering facility.


Corellium runs on ARM hardware, of course.

Not that I have found... My team runs our virtualization infra for end user computing (VDI), and our dev division runs almost entirely on macOS. One of the frequent requests we get as we scale out more globally is for virtual Mac desktops. There are a few providers that we have found stateside, but they don't scale well. There are of course the EC2 offerings from AWS, which is basically what they are doing at GitHub with Mac minis.

We're looking at something similar as we migrate from VMWare. Any recommendations for VDI for Mac?

No, not really. We have found a few smaller players, but their ability to scale concerned me. We talked to one and I asked "if we do a new acquisition and I need 2000 Mac desktops in a month, what would you do?". I was surprised that they were as honest as they were; they simply said they could not procure that many in that period of time. We are a big AWS shop. I think a solution we are toying with is using Mac EC2 instances and building an orchestration layer that essentially sets one up for a user, loads the base tooling, enrolls in MDM and emails the customer with login details. I'm not sure what protocol we would use yet though... anyway... tl;dr NO.

The idea of using Thunderbolt to extend a motherboard is pretty smart and could be used for many laptops. I wish they would sell this to people who want to reuse a broken laptop and turn it into a server.

Still wondering if it makes financial sense, since you could just build a custom rack to pack in as many Mac minis as possible, even at the cost of buying storage from Apple.


You can buy PCIe extensions easily on Amazon.

Sad to see this much e-waste, especially when Apple highlighted their environmental initiatives so much at their most recent WWDC.

You don’t know it’s e-waste. Most likely they sell those part in bulk to a 3rd party.

It's still wasteful. Shipping around unneeded parts that maybe will be reused?

Possibly, but is there even a market for the parts they are not using? How many people are replacing M1 mini parts?

If Apple actually cared even one iota about the environment then every machine and headphone would have a replaceable battery. Everything else is greenwashing.

Almost no detail of any kind except 60 instances in a rack, and a very brief look at the chassis.

I got the impression that the guide didn't know anything about what they were talking about w.r.t internals or how it's used, but that could be sexism or racism I guess. At no point did they mention anything about why there was an external thunderbolt network controller (or why it was so big).

No information about orchestration or storage or... any detail at all really.

"here's a fan, there's a thunderbolt ethernet", "it fits like this".

Very surface level, which is annoying because it's very clearly marketing for developers or technical people who actually care about GitHub.


It was a weirdly undetailed video.

They are both in high-level senior management roles and are pretty young. Most people I have encountered like that are not very technically skilled, regardless of race or gender. Either way it’s a marketing video, probably best not to read too much into it.

They both have at least a decade experience in the industry and Iccha Sethi seemed like a knowledgeable engineer. She also seemed pretty genuine and enthusiastic about her team's work on this.

The video is just pretty short and very surface level and it would've been better if it was an 8 - 10 minute video. Then they could have included the people taking the things apart and put some shots of actual work being done in.


They didn't give details but it might come across as rude to assume they didn't know anything. The guide is introduced as a director of engineering, IIRC. One rational and charitable interpretation based on that is that they know way more than you do.

Also, even without that introduction, I got the opposite impression. The lady seemed very knowledgeable and had that melding of professionalism, persona and super-smarts that is common in very high performers, in my experience.

Sure, it didn't cater to your exact technical expectations of what the video should have been, but such an expectation could be considered somewhat narrow, and it may be difficult to argue that the video is therefore without merit. Your point about it seeming to miss its intended audience is well-taken, but may result from a super narrow understanding of the audience segment you mention, which is likely more diverse than suspected. Also, the context suggests it was designed as a mix of PR, marketing, and technical sneak peek / curio, which it seemed to achieve well.

Your self-awareness of how this could be the result of sexism or racism is commendable but by surfacing your awareness of it, it comes across as less likely to be the cause, so I'm going to assume it's some other reason for now! hahaha! :)

Overall your comment could come across as somewhat strident and narrowly focused dismissal of something, and someone, with lots of value! Perhaps you just missed that for whatever reason? That's ok! :)


I watched this, and the commentary from the 2nd tour guide seems non-serious. I think the "valley girl" accent is what threw me off, but I stopped watching at "super cool sled". I just don't understand who their audience is with this type of commentary.

I hope they're just frying vocals and not M1s.

I have an impeccable valley girl accent and just shy of thirty years of production sysadmin experience. Debiasing is hard work, but it’s worth it.

If you sound disinterested and don't articulate your speech for the technical audience you have -> you deserve the perception you get.

Biases exist for a reason. If I'm not hearing the detail I'm looking for or the sincerity I can trust, I know I'm not your audience.


If, in a technical setting, someone consciously interprets the valley girl sound itself as an indication of being inarticulate, insincere, and disinterested, then they're perpetuating biases against a regional accent, which is completely unacceptable. The harder problem is getting people to notice when they're unconsciously doing so, and then to compensate for their own bias when listening to – and judging – others.

(Your comment makes sense otherwise, no argument here — I’m only here for the valley girl subthread.)


Hmm. I will say from past experience I've only found the 'valley girl' accent in certain parts of California, and it's not representative of Californians. Growing up, I remember when my sister and her friends adopted this style of speaking for a time. Usually at school, this was (from my perception) to project an air of being disinterested and bored in conversations with authority figures. I also found it when someone wasn't capable of demonstrating deep knowledge of something (which is where the bias comes into effect). I know it's incorrect, but I have many friends who think Californians don't have an innate accent, and I've often thought this about how I sound when going elsewhere.

I think your comment makes sense, and I appreciate the way you detailed this.


I think it was just overly stage-managed and produced. I found the upbeat, ‘super-excited‘ kinetic delivery (from both presenters in that part) fake and off-putting; especially for the engineer, I’m almost certain that this was ‘trained’ by whichever agency they were using for the video production.

[dead]

Just because something isn’t particularly technical doesn’t mean the person who made it doesn’t know what they’re talking about. I’d love to learn a lot more about these racks too, but I don’t look at the video and go “this is the extent of the knowledge these two women have on the thing they’re talking about”. And, for what it’s worth, at least one of the guides posts here quite often, so she might even stop by to answer your questions if she’s allowed to do so and actually wants to interact with you after what you said about her.

By your tone I get the impression that you're offended by my statements.

I can understand that if you read my comment as if I intended to be insulting.

I'd like to point out that: I did not intend it to be insulting to the authors, for 2 reasons:

1) Nobody knows everything; I'm responsible for work which I do not go into detail with. For example, my reports include: Render Architects, Gameplay Programmers, UI/UX Design, and things I conventionally understood such as Backend Development/SRE. -- If I spoke on a rendering topic it would be high level; this does not mean I am bad at my job.

2) The way in which this was presented is either: Highly media trained (IE: DO NOT GIVE ANY SECRETS AND WE WILL NOT TELL YOU WHAT IS SECRET WE WILL ONLY EDIT IN POST), which is a situation I've been in before actually, or the person presenting is, like me, managing a team of specialists that she does not go deep technically with.

I want to be clear: I do not think less of any of the people who presented this, I do however think less of GitHub the organisation for presenting this marketing without very much substance.

The crux of my comment was precisely this, certainly not an attack on any individual.

I even tried to acknowledge my potential bias towards sexism, racism (and perhaps also the valley accent); however, on reflection, I genuinely believe that I would have scolded a Caucasian male significantly worse for presenting in such a way. I definitely pulled punches based on ethnicity and gender, which is also a form of racism/sexism - just in this case it actually benefits the video and its presenters.

The reason I even consider that they do not know the subject matter in depth is that the conversation centered almost entirely on what the presenter was seeing, as if they were finding things they recognised. Like when a person who has built computers opens up a server and recognises RAM, or USB ports (something I have seen often in my career when non-IT folks get curious while people are working on servers). The second reason is that engineers typically love to explain reasons. The why of a thing is usually much more thrilling to discuss, and that wasn't talked about at all. IE: why is it clever to have an external Thunderbolt ethernet controller, why is it clever to have these machines housed in sleds instead of racking the whole device.

The kinesthetics of how they're just pointing at fans, power and sliding it in and out of the chassis just reminded me of how I was when I got my first server hardware and started playing with it. It lacks the familiarity I would expect.

I hope you understand my position; it's not intended to insult or offend. We all have value, of course, and I was just uncertain whether GitHub had put her in front of the camera because she holds a high position and doesn't work with this daily, which in my opinion reflects poorly on GitHub, not on her.


There was no substance to this video at all, especially when you think about who the intended audience might be: tech professionals or tech-interested people.

I'm not offended by your comment. For one, it's not even talking about me or any work that is closely related to mine. I'm just pointing out that someone who was in that situation may perceive the comment negatively, and explain why that might be the case.

It is (genuinely!) good that you did not intend malice with your post. And being aware of sexism, or any other kind of stereotyping/discrimination, is definitely a good start. That said, just because you did not mean to be insulting does not automatically preclude you from coming across that way. While we all make mistakes, and I assume from your comment that you wish to avoid them yourself, I think what your reply here really shows is that you did a poor job of expressing what you really wanted to say. This is unfortunate, because it's actually pretty easy to rephrase it to not come across poorly.

You–and this is a fact–noted that the video does not have much technical content. It's kind of a marketing piece, really. A lot of people here (myself included!) would have probably wanted a deeper dive into the technologies involved. Here are some examples of how you might rephrase to convey that:

"It's interesting to see that they got this working, but I have to admit I'm kind of disappointed by the video. There's not a lot of detail as to how this works. I was hoping they would explain the Thunderbolt adapter, at least." (It's rarely a bad idea to be courteous and polite, especially when you start a comment. This conveys your feelings and where you feel the video falls short. The work itself, rather than any people involved, are brought up.)

"What's with the big Thunderbolt network controller? Anyone know what's up with that?" (Invites others to respond if they know the answer. Perhaps even the author or someone closely related! They often browse the internet just like you.)

"This is really just a marketing video :/ It's kind of stupid that they don't actually explain anything that's going on. I hate companies that keep their cards close to their chest." (This has a much stronger opinion in it, and your idea of what they should do to fix it, but it directs it at GitHub, not the people in the video. GitHub is a lot of people but not one person. It's hard to offend a company.)

"Why did GitHub make this video? It has no technical content and makes it seem like these women are not involved in the engineering process. Why bring out the director of this team if you won't even let her say anything about the actual work she did?" (This is a very strong opinion. If you're careful you might be able to play it, but do note that the women in the video may not appreciate you speaking for them about how the video portrays them.)

On the other hand, you said, and I quote, "I got the impression that the guide didn't know anything about what they were talking about w.r.t internals or how it's used". Do you see how this directs your criticism at the people in the video? There are plenty of reasons why they might not go into detail, many of which are out of their control or not relevant. You might have the "impression" that the guide doesn't know what she's talking about as opposed to, say, her actually not knowing what she's talking about, but if you go up to someone and say "I get the impression you're stupid" that's basically saying "I think you're stupid" or "you did something stupid". That's not nice, regardless of how you feel about it. From your second comment, your annoyance has nothing to do with the people, so why bring them up? Direct your criticism at the thing you care about.

I think a big problem is that people often realize that what they're about to say is difficult to express and can be a sensitive topic (women in tech, here). Unfortunately, saying it anyways and then ducking under "this is hard to describe! I am not being discriminatory! I would say it to white cishet men too!" doesn't actually really help. First off, why would you say it to a white cishet man? Your comment, in a vacuum, doesn't have any sexism in it. It just sucks in general. You probably shouldn't be going around telling men that they give you the impression of not knowing what they're talking about, either! But when you do bring into the picture that women are the target of these comments more often than not, I think you should demonstrate a little more tact. Read it a bit more closely to see who the criticism might be aimed at, and soften or clarify as appropriate. Or ask someone else you trust to help you out!


>I get the impression you're stupid

That's a different claim altogether. You're right about how their phrasing directed criticism at the presenter rather than entirely at the company, so you should steer clear of strawmen while you're ahead.


If you come up to me and say "you don't know what you're talking about" I interpret this as you telling me that you think I am stupid. My feeling is that this would not be an uncommon reaction, either, but if you have other thoughts I'd be happy to hear them.

> I got the impression that the guide didn't know anything about what they were talking about w.r.t internals or how it's used

I got the same impression, and I think it's a fair critique of the video and the presenter. OP simply acknowledged that unconscious bias may be affecting their judgment, for better or for worse.


I wonder when Apple themselves will just give in and create something like an M-series Xserve. They stopped selling them because they weren't popular, but at that time Macs simply weren't popular.

Apple themselves would make good use of the Xserve for their own internal needs rather than buying Dell/HP servers like they currently do. Also, things like Xcode Cloud are currently running on x86 commodity servers; in the future Apple will likely stop supporting x86 altogether, and it makes sense to run it on Apple Silicon. They could reuse the high-end Mac Studio or similar chips and only have a custom board and chassis suitable for server use, with additional IO and redundancy.

And for those saying that Apple isn't interested in large deployments, they literally make the Mac Pro in rackmount form for render farms and so forth. The Mac mini has 10G ethernet, and the form factor purposely hasn't changed. The chips are efficient enough that you don't need screaming-loud 1U fans, and they could definitely create a quiet 1U Studio suitable for both dense datacenter deployments and a TV studio technician's rack.


> rather than buying Dell/HP servers like they currently do

I wonder what kind of software/OS they run on those machines.


AFAIK it's simply virtualized x86 macOS. Their EULA doesn't apply to them, of course, so they can run it on whatever they like.

EULA for thee, but not for me!

My understanding was that Xserve, and the corresponding OS X Server, were discontinued because performance for Apple-rolled server software sucked ass.

There were at least a few threads on Apple’s forums where people were redirected to community-supported software.

There is a fairly strong use-case now with robust software, but I feel like Apple wants to maintain an illusion that their developers are small mom-and-pop shops with a Mac Mini in corner of their farmhouse.


macOS also lacks some fairly basic server features. Last time I looked the kernel didn't do SYN cookies for example.

Apple just needs to make it easier to install Linux on these machines. Or, make their own server software that's "good enough". Like OS X server but actually good.

But I doubt they would do this well. It would be way, way too expensive and probably not have upgradeable RAM.


> they literally make the Mac Pro in rackmount form for render farms and so forth

The rackmounted Mac Pro is actually intended to be used amongst other rackmounted audio equipment in Studios. FWIW.

That it can be used for both is a nice accident, but it's not well optimised for a render farm. Not that this really matters, since it can be used this way, so they still get access to this market.


Tim Cook's Apple is a consumer electronics company, so forget it.

Apple doesn't want to be in the server market and I don't think that will change any time soon. They would have to either build a whole new organisation for that or use resources that go into their consumer products like the iPhone to build servers.

> They could reuse the high-end Mac Studio or similar chips and only have a custom board and chassis suitable for server use, with additional IO and redundancy.

Also, what would be the benefit of running Xcode Cloud on Apple Silicon? Today cross-compilation is the norm, so the architecture doesn't matter. They would have to make a new datacenter version of their chips without the GPU and accelerators that aren't used for CPU-heavy tasks.

> they literally make the Mac Pro in rackmount form for render farms and so forth.

The Mac Pro's rack mount version is not for the server room. People use racks in audio and video production too, and that's where you will probably find most of the rack mounted Mac Pros. You can probably fit 2 CPUs and 4-8 GPUs in the space a Mac Pro takes in your datacenter. So why would you choose a Mac Pro over that?


If you need macOS on the server for whatever reason, your only option is the Mac.

I have four Mac minis racked up for this, with two use cases: 1) iOS CI/CD and 2) some computer vision stuff using Apple’s Vision framework.

No complaints, but I obviously wouldn’t run anything that didn’t need to be on a Mac on these systems.

BTW, you can rack mount twenty Mac minis in the footprint of a single rack mounted Mac Pro (there’s a 1U mount that takes two and they’re small enough to mount on both the front and the back of the rack). So 20 M2 Pros per 5U.
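The arithmetic behind that, as a quick Python sketch (the mount counts are just the ones described above):

    minis_per_mount = 2   # each 1U mount holds two minis
    sides = 2             # mounted on both the front and the back of the rack
    footprint_u = 5       # a rack-mounted Mac Pro occupies 5U

    print(minis_per_mount * sides * footprint_u)   # 20 minis in the same 5U footprint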

They’re of course not a sysadmin’s dream but they do tend to stay up and Ansible works fine.


> Apple doesn't want to be in the server market and I don't think that will change any time soon.

It might. Mac sales have been declining for several years now.


> Mac sales have been declining for several years now.

I’m not so sure about that. I think the Apple Silicon thing added a big boost.

But I’m often wrong. I’m sure there’s numbers, somewhere.



Fair ‘nuff. Time will tell.

But I never found Mac servers particularly compelling. Where I worked, we had a couple of Xserves (and the overpriced RAID disks) in our rack for years. We never bothered using distributed builds, which was probably the only unique service they offered.


After the pandemic purchasing spike for both home schooling and work from home, every computer manufacturer's sales declined.

In Apple's case you also have the people who upgraded early for the M1 performance boost, and they aren't likely to toss out a new computer unless they are doing cutting edge tasks.


Yeah, Covid will probably result in waves of refresh.

Anecdote-wise, my corp forced a refresh in Q4, so I expect a crest is coming in 2024.


I think you're completely wrong. My reasoning:

- Highly energy efficient non-mobile computers (mini, pro, etc).

- They already run a huuuuuuuge server park on AWS for themselves

- They've been (successfully) transitioning their revenue towards "services" for almost a decade

- I suspect they want a piece of the LLM market as well.

- They acquired Fleetsmith for device management in the enterprise

- They implemented nested virtualization in the M2+

- The cloud is highly profitable, and I suspect they want a piece of that cake

- They spent a lot of resources creating a boot policy that allows booting other/unsigned operating systems on Apple Silicon (Asahi). This is not present on iOS.

- They already offer "cloud runners" for Xcode (Xcode Cloud). This is a very contained service, a way to start offering cloud services.

- The timeline for iCloud platform:

- 2016: Azure -> Google. 2018/2019: Google -> AWS. This $1.5B contract will end this year. It seems like a perfect moment to switch from AWS to their own platform and later on offer more generic services to companies.

- They need to get to a 4T or 5T market cap somehow. The options are LLMs, cloud servers, the enterprise, (Vision), or something completely new, but the Vision Pro is the last major category under Tim Cook, so I don't think anything new will happen for consumers.

All in all, it seems like Apple will go back in the (cloud) server business.


They're wrong, for a simple reason: my build jobs that can run on a Mac finish faster on a Mac.

As far as the complex list of reasons:

- Correct.

- Do they run the server park, or AWS?

- "Services" is a word Apple uses as shorthand for Wall Street types to mean "monthly subscriptions to consumers", it doesn't mean "things we charge for", because at that point it has 0 meaning

- Why would anyone run an LLM on an M-series server?

- Why would Fleetsmith help manage servers? It's MDM. You don't need or want provisioning profiles on servers.

- Nested virtualization makes sense.

- Cloud isn't highly profitable, it's highly revenue-able. Profits are declining as the titans bash it out.

- They didn't spend a "lot of resources"

- Xcode Cloud is in the cloud - but they're already offering it.

- The timeline for iCloud elides a whole lot of time.

- They don't need a 4T or 5T marketcap and those aren't the only options.

You know too much about Apple and it's blinding you because you can connect all the info you know about it to any other arbitrary trend. Been there, that's why I'm comfortable being overly concise about it.


AWS for "iCloud", not sure what else they run. I meant a "cloud server park"

Services is anything software as far as I know.

If they run their own servers, they'll have a large number of GPU cores which could be used for LLMs. From what I understand, the M* chips are pretty good at them. That way they're not helping NVIDIA grow more as a competitor.

My point with Fleetsmith is that they're looking at corporate use.

Maybe not a lot of resources for them (boot loader), but they deliberately allow booting other operating systems.

I firmly disagree about cloud profitability. I read numbers between 30% and 60%, although I also found 2.6% for GCP and -8% for Azure. Not sure why they're so far apart. Amortization shouldn't be the issue. Perhaps "R&D"

Last part, you might be right ;-)


Counterpoint; Apple is still just as greedy as they were when Xserve failed the first time.

Cloud customers want good prices and well-supported hardware. On the server, Mac is neither of those really. Apple's hardware pricing would need a complete revision to stay competitive, and their software policy would need to open up even more to enable better API coverage, proper virtualization and compelling hardware compatibility. Even if Apple did that, it would merely be enough to keep MacOS a second-class citizen in the world of server OSes. The existence of OpenBSD and Linux almost obsoletes any real-world application of a licensed server OS.

Apple can go back into the server business if they want, but the server business isn't begging for more overhead or grudge matches between vendors. Apple cannot turn the server industry into a trillion-dollar opportunity without reversing policy entirely, on both the hardware and software side of things.


Apple could always license out their A chips and OSX to some preferred vendors if they don’t want to be in the business themselves.

Apple tried licensing once before, and it did not work out very well for them (e.g. I owned a Power Computing PowerTower back in that era). I understand how it would benefit a number of people, but Apple is a consumer company. How would this benefit Apple in a meaningful way?

> Also what would be the benefit of running Xcode cloud on Apple Silicon? Today cross compilation is the norm so the architecture doesn't matter. They would have to make a new datacenter version of their chips without the GPU and accelerators that aren't used for CPU heavy tasks.

ARM builds can be done on x86, but what about the automated tests afterwards? Those have to run on ARM when you target Apple Silicon, and everyone, including Apple themselves, needs something to run them on.


I wonder if the system-on-a-chip design of Apple Silicon would work for servers. Would it just be special packaging that crammed several Mac mini boards into a rack-mounted chassis? If so, then they would just be repeating what others are already doing.

The RAM being part of the chip package seems like a limiter to scaling up to a server model with a lot of cores and RAM in a single blade.

Not saying this is not all doable, just that it probably requires design and investment they otherwise have no desire to make.


Even if they just replicate what these companies are doing, I think there are two main benefits:

  1. less waste. The entire case is being thrown away immediately by these volume consumers. All the packaging and what not is also wasted.
  2. easier/cheaper deployment. Companies are already paying their employees to design replacement cases, control boards, etc. and getting them built at relatively small scales. Then they're paying them to shuck and transplant from the Mac Minis. Apple could probably charge a >50% premium for their solution vs the equivalent number of Mac Minis and companies would jump on it.
Additionally, at least in my experience, the headaches and fixed costs of deploying these machines means companies refresh them far less often than other hardware. It's totally possible that companies would refresh more frequently if Apple just sold something easier to deploy.

Apple already books all of TSMC's bleeding edge nodes for the consumer market. How can they possibly start selling equivalent server hardware?

A slightly tweaked “server” version (more ram in the package) of their chip from the previous TSMC node could be an interesting way to use up some non-bleeding-edge capacity.

> in the future Apple likely will stop supporting x86 altogether and it makes sense to run it on AS

Nearly all of that backend Apple infra runs on Linux and (formerly?) Solaris.


As of a few years ago, an Apple employee told me they were primarily running on OEL (Oracle's Red Hat-based Linux).

As for why Oracle: I think they said something along the lines of Oracle indemnifying them -- or being best equipped to mount a defense -- in a software licensing lawsuit.


They don't have the bandwidth to do it. They can't keep up in hardware design, CPU manufacturing (TSMC throughput), software, anything. They're firing on all cylinders just trying to release a new phone and new OSes every year, and in addition they're chasing the dream of creating some kind of new category (TV, watch, cars, VR headset), so they can't afford to assign engineers to something mundane that they can silently outsource.

Unlike companies like Google, Meta and Microsoft they are institutionally afraid of over-hiring and losing their culture. This is a conservative tradeoff where they may lose the ability to grow seemingly "small" markets (small=anything smaller than the iPhone) in favor of not losing their culture (becoming a Google or Meta which just squanders every opportunity)


A trillion dollars makes many things possible.

I wonder how much power they would save in their own data centres by switching to Apple Silicon. Surely it would be astronomical

I'm wondering when Apple is going to publicly release its cloud offering to compete with Azure, AWS, etc! :)

That is what they kind of do with XClound and iCloud based services.

What is xcloud?

A typo, I should have written Xcode Cloud,

https://developer.apple.com/xcode-cloud/


I am so happy that I don't run Mac farms for building a team's iOS apps anymore.

Everything about dealing with Apple's developer ecosystem was an absolutely miserable experience, from not being able to virtualize for CI builds to having half of our apps (we made 250-300 apps a year for events that were based on the same template, so near-clones with different assets) rejected and then allowed on appeal.

Android was the polar opposite.

Absolutely ridiculous that I would need an entire Mac mini/Studio to run ONE CI job at a time. Always felt like Apple didn't WANT us to make apps.

This was 2014ish though. Is it a lot different now?


Same. I can't wait until I get out of the Apple dev ecosystem. And I've been in it since the iPhone 3GS.

Every time I work on web or backend it's like a breath of fresh air.

The amount of opaque, undocumented, dysfunctional proprietary garbage one has to put up with is just depressing.


Last setup I worked on, we rented M1s from Hetzner, and each machine could run 2 builds simultaneously. Xcode makes pretty good use of all cores, so there isn't much benefit to parallel runs on the same machine vs a queue.

We also used it for JS builds as it was way faster than the Intel runners, and 3-4x faster than GitHub’s own M1 hosts.


I'm currently setting up iOS CI and got excited when you mentioned Hetzner, but it turns out this offering has been abandoned. Scaleway seems like a good alternative, but 4 months of that is roughly equal to buying a refurbed machine. Since self-hosted runners are easy to set up, we're really thinking about letting them run onsite.

Check out warpbuild.com - I'm the founder and we have iOS runners that are cheaper than the GitHub equivalent.

I don't know if something changed in their offering, but you can still find M1s available via the 'Dedicated -> Server Finder' page.

> Always felt like Apple didn't WANT us to make apps.

No, they want you to buy each developer a Mac. Then they can be sure to hold their market dominance for developers, because a Windows or Linux machine can't be used for iOS dev, but a Mac can be used for all development. This artificial restriction ensures that unless you want to buy each developer 2 laptops, or have different laptops with different OSes for different teams, you just go Mac.


Very nice and clean design - however, it seems to me like there is easily enough room for two M1 Mac minis per sled without affecting the cooling, achieving a density of 20 per rackmount enclosure and 120 per rack.

I'm also curious as to whether GitHub considered (or is considering) running Asahi Linux natively on them - the latter would make more sense to me as it could easily blend into their existing infrastructure.


Why would you run Linux on these? There are much better server-grade systems that Linux works on.

The benefit of these devices is running macOS, and virtualizing it is not going to give many benefits that aren't natively available in macOS already.


Asahi Linux performs far better on Apple Silicon hardware compared to macOS for server-related tasks (https://jasoneckert.github.io/myblog/ultimate-linux-arm64-wo...)

Any luck with virtualising macOS itself on that? :)

But the point of buying the Mac hardware is so that you can run macOS.

With one machine per sled you can easily isolate it and remove it for repairs if it breaks. If you packed two or three on to a sled you'd have to take all of them out of service. The cost difference probably isn't too big.

There is no point in running Linux on these. If you were looking for servers to run Linux on... you would buy Linux-based servers. Those would be much more space efficient and cost effective.

The whole point of doing this is to be able to run macOS.


I actually think there is a use case. IIUC you can only legally run macOS on Apple hardware. However you are allowed to run it in a VM. So it could make sense to run Linux as the base OS then run macOS VMs on top. This would probably make it easier to manage than only running macOS as generally Linux is more suitable for headless usage and people are more familiar with running it in a server setting.

Of course there are still limitations here because IIUC Apple also has restrictions about renting, so I don't think you can run 2 VMs and rent them to different customers. But you could possibly run multiple VMs for the same customer on the same hardware.

Honestly I assume that they are running the "Runner" OS in a VM anyways. So really it comes down to what makes it easier to manage the host? macOS running macOS VMs or Linux running macOS VMs.


Nothing will ever compare to the inefficient absurdity of server racks of 2013 “trash can” Mac Pros.

https://9to5mac.com/2018/11/01/mac-server-room/


It looks like a bombe from WWII -- one of the machines they used to crack Enigma codes.

https://en.wikipedia.org/wiki/Bombe


Ha! Great pull.

Running trash cans in horizontal orientation can't have been good for cooling, right?

The audio on this is really strange, like the Indian woman's voice is AI-generated. Does anyone else hear some kind of warbling in their voices?

It sounds like she's trying to talk over something to me. I think there was some loud AC running they edited out.

The video is also a little too color graded?


The title of this really needs "GitHub" in it somewhere.

It's more a Christina Warren promo video than a GitHub one.

Edit: her linkedin handle is "filmgirl"..

I wouldn't be surprised if she's leaving the company within a year to become a 'creator'. She has a good track record in tech.


I'd assume the big "innovation" here is that the networking card in the sled offers lights-out management.

I first got the impression it was by CNBC or something.

For a marketing video by GitHub itself, presented by a senior developer advocate, this is not the way to excite developers at all.

I'd rather look at anything by Backblaze, the datacenter cooling tech by Meta, or even any marketing video of the old Xserve for that matter.


[flagged]

Is anyone here using GitHub Actions for running iOS app tests and pushing to the App Store?

What's your experience?


I wonder why they go to the trouble of gutting them to put into a new case. Other hosting providers like MacStadium simply slot the whole machine into racks: https://www.macstadium.com/blog/first-look-mac-mini-with-m2-...

> I wonder why go the trouble of gutting them to put into a new case.

two words -- power supply.


Gutting Mac minis to retrofit into blades. Producing tons of e-waste in the process. All because Apple won't enter the server space. Insane.

How come they're not including the impact of what they decide NOT to do in their climate impact calculations?


> All because Apple wont enter the server space.

Because they won't allow virtualization.


They do, and the standard macOS SLA specifies the rights you have as an individual customer. I'd assume that custom volume licenses are negotiable.

FWIW, Apple entered and then exited the server space a while ago.

Apple has no moral obligation to enter the server space.

Mac Minis aren’t designed for data center use. This is a hack.

Apple was in the server space and they were terrible at it[1]. Further, macOS is a terrible, inflexible server OS. Linux is king, for better or… well, for the better.

1. I worked for an organization that was part of a pilot into Apple’s enterprise push and I felt like it was more of a marketing effort than an honest attempt.


I think it does. Apple wants people to use and support the Mac with software. That means people need development hardware, here for running automated tests. Even if Apple isn't selling server hardware to end users (though I think it would be great if they did), they should have some offerings for commercial vendors like Microsoft here. At minimum, you should be able to order just the motherboards of a mini; ideally, they would make slightly optimized versions of these.

No, but I think they have some moral obligation to allow people to develop software for their platform. They license their compilers and software so that it isn't legal to run them on other hardware.

If they just changed the license to make it legal, I'm sure someone would figure out how to make it work. Then you could run CI on real servers, and build and test iOS apps on your Windows desktop.


It looks like Apple has a Mac Pro with a rack mount chassis. I’m slightly confused by this video, it came out a month ago, but is about M1 Macs… is this specifically a plan by GitHub to recycle old Mac minis?

Anyway, I don’t think Apple should include the impact of another large company not buying the right skus in their climate calculations.


The Mac Pro does have a rackmount, but that's mostly a box for PCIe devices. If you just want Mac compute and storage, it's not nearly as suitable, and the pricing/unit economics are a mess.

Still though—we have a picture of what their entry into the server space looks like. The picture is… the pricing/unit economics are a mess. Maybe Apple is just bad at servers, haha. I’m not sure why people are clamoring for them.

Try building iOS CI/CD pipelines without a Mac.

The Mac Pro is enormous (5U) and wildly overpriced. It is absolutely not designed for a large-scale datacenter. You don't even get ECC RAM like back in the Xeon days.

I can understand reasons they won't (re-)enter the server space themselves.

However it is a shame they don't sell bare boards for specialist integrators to build these sorts of products with.


Apple entering the server space feels like the kind of distraction I’m critical of businesses falling for.

I’m not sure I’m a fan of most Apple stuff these days, but I think they do very well staying focused on premium consumer gadgets.


Does Apple run its cloud on FreeBSD or Linux? Or is it special Mac servers? What of their own build farms and CI?

Metal is pretty recyclable.

Not all of the electronics are used in the approach GitHub is taking.

So much effort, just to run Xcode remotely.

For those of you who want to ship code to macOS from CI (e.g. Electron apps), you should check out my company's product at https://hydraulic.dev/ ... it lets you package, sign, notarize and upload self-updating Mac apps from any OS including Linux. Amongst other things it bundles Sparkle on the fly, so you don't have to deal with Squirrel, and it can do the same trick for Windows. This lets you use standardized Linux CI runners to automate the entire build pipeline.

Obviously if you're writing a native app, you will still need Mac CI machines to compile, but if you're shipping Electron or JVM apps then it isn't necessary. The necessary binaries will be downloaded by Conveyor and bundled with the app automatically. You just provide the platform-neutral artifacts as inputs, so a Mac CI runner can be avoided entirely. Which is useful, because all that effort GitHub has to go to makes Mac CI expensive.


I am sure reassembling a Mac mini into a custom sled makes sense at the scale of GitHub, but for smaller companies https://www.racksolutions.com/mac-mini-shelf.html usually suffices.

The virtualization stuff and extreme energy efficiency are reason enough for Apple to get back into servers. There's also the rack mount Mac Pro. The thing is, many server use cases would benefit from removing a lot of the macOS end-user userspace fluff. Apple would want to reintroduce something like standalone Darwin images, but with the Apple Virtualization framework added back in. I don't think Apple will do any of this soon, but a business deploying thousands of Macs may want Mac servers.

Apple doesn't even have to sell server hardware. They could build an IaaS service relatively easily and undercut every existing cloud Mac company on price. An easy win for next-quarter profits too. Maybe they don't want to be a cloud provider, but they're missing out on a lot of money.

At least when I used to work at Apple ~4 years ago, our internal clouds were nowhere near the robustness needed to open up to the public. Poor security/isolation controls, outages, clunky usage, etc.

Apple's cloud software engineering muscle is fairly weak and providing an IaaS might be "easy" for the hardware side of the company, but very difficult for the software side.


But why couldn't they build out that capability? They could go on a hiring spree with all their cash and the state of the tech job market. Apple often goes into new market areas. Surely Apple is farther along in the cloud services game than they were in the VR game when they decided to go all-in into VR, or same for wearables, phones, music...

Does Apple want to spend a lot of money to be #5 in the cloud after AWS, GCP, Azure, and Oracle? Who is going to switch? I feel like all of the big spenders on those platforms are there because they got the best sales pitch (best basketball seats, best steak dinners, etc.)

My last job was at a tiny startup for 4 software engineers and we picked AWS because of that. As the people using the platform, we hated AWS after they consistently messed up our account in every possible way despite paying $700/month for support... but they routinely took our sales team (???) out for dinner, so we never switched to GCP. (I used GCP at my next/current job and they are ... weird. But we used their managed K8s service in a very weird way and they were very supportive when we escalated things.)


GCP is weird cause you're weird?

OK I'm definitely weird and that's not Google's fault. Well, maybe a little.

No, the support situation is weird. You have to have a contract with some third-party company, in our case, SADA. You then send all your support requests to them, and if they deem it necessary, they escalate it to Google. I feel like we always stumped them and then a person with a google.com email would be cc'd and instantly give us a good answer. SADA was very chill, though. Pretty competent in general except for their weird account enrollment system (which I had to deal with any time someone joined my team).

The weird thing that we were doing was creating 100s of managed Kubernetes clusters a day. We make an open-source app that runs on Kubernetes and tried to have a cloud offering, and this provided the best isolation between customers. People would sign up, and then we built all the infrastructure for them from scratch. (The AWS way is to have the customer give you their AWS account ID, and then they would pay for all the infrastructure themselves. But for whatever reason, GCP didn't seem to have that option, and honestly our best customers were all too "new" to the cloud to have a routine AWS bill.) So it worked for us, and whenever cluster creation was glitchy, GCP was happy to help us. What we were doing was unusual, but since we always paid on time, I guess, they were not mad about it or anything. I did offer to CC them on our cluster-creation monitoring alerts and they declined ;)
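
To make the per-customer isolation concrete, cluster creation through the GKE API looks roughly like the sketch below (the project path, naming scheme, and node count are all made up for illustration; real code would poll the long-running operation and handle quota errors, which is where the glitchiness tends to show up):

    # Sketch: one managed cluster per customer signup (hypothetical names).
    from google.cloud import container_v1

    def create_customer_cluster(customer_id: str) -> None:
        client = container_v1.ClusterManagerClient()
        cluster = container_v1.Cluster(
            name=f"customer-{customer_id}",  # the isolation boundary
            initial_node_count=3,            # illustrative size
        )
        # Returns a long-running Operation; production code would poll
        # it and fire the monitoring alerts mentioned above on failure.
        client.create_cluster(request={
            "parent": "projects/my-project/locations/us-central1-a",
            "cluster": cluster,
        })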

Anyway, what I thought was weird was that with AWS, AWS is your support contract. With Google, it's some other company entirely. That's kind of weird, right? You buy something from Vendor X and then when the product doesn't work, you talk to Vendor X. But with Google, we were too small potatoes to be worth them talking to directly. But if something got escalated to them, they were very nice about it.

(What's extra crazy to me is that I used to work on Google Fiber, and we had great support for people paying them much less than we were paying. Want to call someone and talk about how you're scared about viruses for 45 minutes? Google Fiber support would listen and assuage your fears, every single time. They were great people consistently doing a great job. Not sure why GCP didn't steal them!)


When I call Visible (Verizon's own MVNO) for support, I don't get Verizon, I get someone in the Philippines that works for some company there, pretending to be Verizon. Same with Amazon or really any other major company these days.

The weird thing here is that you dealt directly with SADA and knew that it was a third party. I actually see that as a good thing as it is more transparent.


A random aside... I have a friend who works as support for a major credit card issuer and they did the same outsourcing. She says that pretty much every call gets escalated and the customers are just madder than they would be if someone had handled the situation competently in the first place. But it's like business school 101 to outsource, and so they do. (It's pretty new for them, as they have a hard time retaining people. The low pay and interacting with the general public pretty much guarantees that. My solution would be... just pay them more? You're getting 30% interest for years every time they buy gas. If someone has a question and then pays their minimum payment, you're just creating free money out of thin air. Send them a Christmas card too!)

Yeah, now that you mention it, the whole SADA thing is pretty transparent. I don't have any major complaints; it just feels weird to me is all. Despite what HN thinks, Google can offer good support. I've seen it done! But it is apparently not their favorite thing.


I've been using GCP for many years and have always had fantastic experiences with all aspects of it. I don't get the HN hate for it, but it is what it is. Their loss.

To expand on the other (excellent) comment: cloud profits are hard. Google Cloud only recently started turning a profit, and Microsoft's Azure purportedly doesn't make money at all. You'd be building up to compete with Amazon's AWS and Oracle Cloud, both of whom are fierce competitors.

Apple has already proven that they can build a server product, they just haven't figured out the "why" yet. Cloud customers are trying to buy low-margin compute with good support for their software. Apple is trying to sell high-margin compute with good support for their software. The indifference towards either party's needs damned Xserve from the start, and Apple would have to take a radically different approach if they want a shot at succeeding nowadays. Their hardware would be competing with Nvidia's enormous Grace servers and Ampere's dirt-cheap ARM vCPUs. Their software would probably be ignored unless it was Open Source.

And all that begs the question - why bother? If Apple won't be making iPhone-sized hardware margins or getting Mac-level brand loyalty, they'd be spending money to undermine their core products. Comparatively, a VR headset that retreads the last decade of HMD tech a-la iPhone is a pretty safe bet.


Was it based on Macs, or all Linux boxes?

I wonder if Apple might want to do it because of this weakness. Build a new public cloud from scratch, feed it with money that comes from a new source, migrate internal services onto it once the public has tested it enough to fix most of the bugs.

When I did a contract to deliver a Jenkins CI/CD server for Apple Retail Engineering, their entire internal cloud was basically just VMware running on the last full batch of Apple rack-mount Xserve machines.

So, not exactly something that I would consider competitive against the likes of Azure or AWS. In fact, AWS now sells services where you can bring your VMware internal clouds over to be officially supported by VMware on AWS.


Would it be at all possible for another business to do this, instead of Apple?

If for no other reason than that being perhaps the most likely way to actually compel Apple to do it themselves, hahaha! :)


Apple doesn’t license macOS to run on anything but genuine Apple hardware. Which is why these racks of minis always keep popping up.

OK, but would it be possible to reverse-engineer it to run? I guess that would be illegal, but... just like with software piracy... if it's possible, it will be done... what's stopping someone setting up a data center in a more permissive part of the world and running images on whatever hardware they like?

I’m sure it’s happening - Hackintoshes have existed forever - but ARM makes that harder, and I doubt paying customers want to save the relatively smallish amount.

I’ve always been suspicious that some of the cheaper “rent a Mac mini VM” services may be doing trickery with a Linux virtual machine host.


Wouldn't this be relatively easy to test with a benchmark? Apple's ARM processors are pretty fast, especially for single-core stuff. Most data-center processors are designed for lots of cores at the cost of single-core performance. Plus, if you're using an x86 processor, you'll need to be translating ARM to x86 which is going to come at a big cost.

To get plausible Mac mini benchmarks on the fastest x86 processor, you'd need overhead to be less than 25%. Plus, you'd be buying a $650 x86 processor instead of spending $600 on a Mac mini. You'd also need to think about power. That x86 processor draws 150W base and 253W peak compared to an M1 Mac mini which hits a peak of 39W. Even if you're getting pretty cheap power (5-cents per kWh), that's still $44/year in extra costs.
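
The power math checks out as a back-of-the-envelope (the ~100W figure below is my assumed average draw gap under sustained load, sitting between the base and peak numbers above):

    # Rough yearly cost of the extra power draw, assuming an average
    # gap of ~100 W between the x86 box and the M1 mini under CI load.
    extra_watts = 100
    extra_kwh = extra_watts * 24 * 365 / 1000   # ~876 kWh/year
    print(f"~${extra_kwh * 0.05:.0f}/year")     # at $0.05/kWh -> ~$44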

I've used QEMU on Github's x86 runners to run Linux-ARM and it's painfully slow. It seems like it would be hard to make the economics work out well while still offering a plausible experience speed-wise.
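
If anyone wanted to test the suspicion, a crude probe is to time the same CPU-bound loop on a known-genuine M1 and on the suspect rental. This is only a sketch: it assumes matching Python versions on both machines, and a determined spoofer could still special-case benchmarks.

    import time

    def spin(n: int = 20_000_000) -> float:
        """Single-core busy loop; emulation overhead shows up
        directly in the wall-clock time."""
        start = time.perf_counter()
        acc = 0
        for i in range(n):
            acc = (acc * 31 + i) & 0xFFFFFFFF  # keep the ALU busy
        return time.perf_counter() - start

    # Native M1 vs QEMU-on-x86 should differ by a large factor,
    # far more than ordinary machine-to-machine noise.
    print(f"{spin():.2f}s")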


I like how you've seemingly disproved this just with economic arguments. That's a cool way to do it! Hahaha! :)

Yeah, my suspicions were back in the x86 days when it was much simpler.

The issue is that it's hard to turn that into a business. In the US, clearly a company like Github/Microsoft can't do that. You suggest finding a country that might be more permissive. The issue there is that there are a lot of treaties that most countries have signed up to that protect IP like WTO membership or the WIPO Copyright Treaty. Maybe those wouldn't prevent what you're suggesting, but they probably do. There are some countries outside that, but not a lot and some of them are probably non-starters like North Korea or Iran.

And who would your customers be? You won't get US or EU companies wanting to use your Hackintosh Cloud. They know the jeopardy they'd be in if they were building Hackintoshes. Renting them probably isn't much better. What happens when Apple figures out that a company has been building its iOS apps on the Hackintosh Cloud?

Plus, if you were to create a Hackintosh Cloud that got any traction, it would be a constant moving target as Apple tried to foil your plans. What happens when the next security update also has something that makes your machines not boot? Sure, you find a workaround over the next month and in the meantime your customers find that your service isn't worth it. Companies pay a fortune for things to be reliable.

Finally, how much could you really undercut genuine M1 pricing? Scaleway will rent me an M1 Mac mini for €80/mo (€0.11/hour) or an M2 Pro for €173. You're going to need to pay really smart people to keep defeating Apple's attempts to stop you; Scaleway just buys some Mac minis. Plus, the ARM processors you're buying won't have the performance of Apple's chips and might even have slightly different behavior for things like memory ordering. Maybe that won't matter most of the time, but if I'm paying for a build cloud, I probably don't want to be second-guessing it.

When it comes to cloud services, I think buying genuine Macs isn't really the issue. I mean, Github is charging $350/mo for mediocre 2-core x86 runners. They're charging nearly $7,000/mo for an M1 runner. If you wanted to compete with Github's runners, you could just rent Scaleway servers at €80/mo and still hugely undercut Github's pricing as long as you got a bit of usage. At half of Github's price, you'd need about 17 hours of usage per month to cover Scaleway's price - less than 2.5% utilization.
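
The break-even arithmetic, using the prices quoted above (EUR and USD treated as roughly at parity to keep it simple):

    # Undercut GitHub's hosted M1 runner at half its effective rate;
    # how many billed hours cover one Scaleway M1 mini?
    github_per_month = 7000                        # ~$7,000/mo, quoted above
    my_hourly = github_per_month / (30 * 24) / 2   # half GitHub's rate
    scaleway_per_month = 80                        # EUR 80/mo
    hours = scaleway_per_month / my_hourly
    print(f"{hours:.1f} h/mo ({hours / (30 * 24):.1%} utilization)")
    # -> ~16.5 h/mo, ~2.3% utilization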


These are great points too! :) Not sure about the downvotes, hahaha, internet! :)

There’s a reason Mac minis (1) still exist and (2) have a 10Gb Ethernet option, for a computer that is supposedly a desktop ideal for Grandma’s emails.

Apple knows that people use them as servers.


Why did Apple create the Xserve line to start with?

The target market for Xserve and Xsan included creatives (video servers, render farms, etc.), education, and science/biotech.

Note that the Xserve was just the last of an interesting history of Apple servers. I'm especially fond of this 1996 behemoth: https://oldvcr.blogspot.com/2023/11/the-apple-network-server...


Fun story: thanks to Xsan you can buy Thunderbolt SAN devices from various vendors. Now, the Areca ARC-8050T3 SAN with its 9 Thunderbolt inputs has become insanely popular on movie / commercial sets. Drop it into a 4U rolling case and it becomes the motherlode where anyone - as many as nine people at the same time - can just plug a MacBook in without any problems, configuration, maintenance or anything. Just works. In fact, it pushed Areca to make optical TB3 cables, so now people don't need to stay so close to the motherlode.

The prime reason for the Xserve line was to have a Mac server that could go into schools' server closets to help manage all of the Macs in a school. It got sold in lots of other places for various things (e.g. that supercomputer at the time), but that was the overwhelming reason at Apple. For the whole life of the project, the things that got done on it were things that schools needed.

There are a lot of schools, but once you had one in each school (that had a large Mac presence), there was not a lot of growth room. I was working at the University of Wisconsin (Madison) at the time, and we had a grand total of 6 in our data center (yes, there were more on campus, but in ones and twos in departments), and no big reason to have more.

And as much as I love working on macOS, it (like Windows) is not well set up for use as a non-interactive node (e.g. a server or data-processing node): the GUI gets too much priority, the filesystem is really slow to sync, and there are unexpected edges in setup.


I worked for AppleCare at the time (not directly for Apple), and there were some in the office. I was interested and did the self-paced training, but the business case never made sense and it was abandoned too early because they didn't have a good plan.

My personal observation was that it was very focused on the Edu market and Apple internal use, but it lacked the resources to succeed in the enterprise market.


"Runners?" I've been working in software development for a long time, and currently write mobile apps... but I've never heard this term.

Now hurry and flag this totally innocuous observation again.

