I went through that website before posting. That's why I thought it was a Gitlab-like service. I like that it works without JavaScript, and it appears to have a wiki-like integration too.
It's designed so that most (all?) of the tasks you might do are "web optional" so e.g. old fogies like me can submit PRs via e-mail, and have the young whippersnappers review those requests on the web.
It can do the same tasks as GitLab (Git hosting, issue tracking, CI, and more), but it does them in a different way, one closer to the workflow of the Linux kernel and other big open source projects.
For example, instead of pull requests, you have mailing lists to which you send patches. You can see more on Sourcehut's website: https://sourcehut.org
> However, I refuse to give any company credit for waking up their support team only when a scathing article about them frontpages on Hacker News. I told them I wouldn’t publish a positive follow-up unless they also convinced me that the support experience had been fixed for the typical user as well.
You are a gem and I wish more people were like you.
A very old but still key lesson here for product makers: when you know you're doing new, early-generation work with a reasonably high probability of something faulting, you need to provide stellar, high-touch support, whatever it costs. And you need to plan to be ready for that from the point when you ship your first thing.
(I think this applies both to hardware and software products.)
Of course, this is easier said than done, but ultimately this is the core function - are your customers happy or not?
As a CTO you need to prepare the company and your investors for this failure mode. You need to be able to explain to them that sometimes making sure that the product actually works and the existing customers are happy is more important than that sexy new feature of the month. It's not always a very easy balance.
In my experience, even semi-technical people with a more sales/marketing kind of focus in the company often get blindsided by this. It's on us to prep them for this.
It seems like this company learned it the hard way. The key thing is that they did learn.
For what it's worth, we do have a focus on making sure the hardware works properly etc.; I've always been pushing for that and haven't had any real problems there.
This particular board was a test escape. A unique, thankfully singular (so far as I know) test escape, but there's always a chance of that -- the larger failure in my mind was the breakdown in the support process. With our new controls in place (including rapid escalation / RMA of "odd" faults observed by customers) I am confident this won't happen again.
And yes, we're updating tests to catch the newly discovered failure mode. It's an interesting one: the board is a sort of "zombie" in that if you boot it up by looking at it just right, it'll run stably and pass our stress tests, but it should never have left our factory in that condition due to the other faults. Full stop. :)
That particular horror story led me to not equip our development team with a dozen POWER9 machines, which is a pity, because the combination of almost-total trustworthiness and enormous firepower was extremely attractive for our particular niche (plus, I used to hand-code PPC64 AltiVec “velocity engine” code for HFT funds back when Apple sold G5 Xserves).
I’m glad to hear that it was a one-off and that you have revised your support strategy, but... really, it made for a pretty frustrating read. It’s not chump change, and going out on that kind of a limb takes... trust... which was clearly unwarranted, at least at the time of that initial incident.
There’s nothing more frustrating than paying top dollar for something exotic and then being stonewalled by oblivious “tech support” when things don’t work as advertised, particularly when something is sufficiently novel or unusual that relatively few other users are out there to proffer help and the potential points of failure are plentiful and “unknown unknowns”.
As far as I’m concerned, that incident was extremely bad publicity. That board could’ve been shipped to me. I’d have been left to dangle from my solitary rope and squirm when management asked me what was going on.
Sorry to hear that. If you ever get tired of worrying about untrustworthy firmware phoning home, DRM, etc., please consider us for the replacement purchase? ;)
Nope, too late. The decision was made and orders have been placed. You don’t get do-overs, sorry.
I might still get a machine for my own amusement, but... I don’t know, reading of some poor guy left dangling for best part of a year just doesn’t build confidence and leaves me with a “there but for the Grace of God go I” feeling, you understand?
I understand. I also know this happened to no one else, and it was a pathological case in a support system that has since been completely overhauled. This is one reason why we started the self-serve RMA process -- to show that yes, we are committed to replacing defective hardware in a timely fashion. If the machine is defective, and we can't help via our online troubleshooting guide and email / phone support, send it right back for replacement.
That being said, the assembled systems are checked with a significant burn-in test etc. before they ship. We have never had an assembled system arrive defective anywhere, except when it was damaged in shipping (i.e. crushed box etc.).
I’m not trying to guilt-trip you or anything... I really like the idea of having non-monoculture machines, and I’ve always been a sucker for unusual hardware (NeXT, BeBox, IBM RS/6000, dual Pentium machines, to name but a few)... but if the one prominent story I find online is about a poor guy left to cry his eyes out regretting his courageous and expensive decision while the company that supplied him is apparently indifferent (this was before the happy if belated epilogue), it’s just an instant turn-off as far as I’m concerned.

You’re producing something unusual and new, and users take a risk and pay far above market prices for the privilege of being on the bleeding edge... the least one can expect is competent and timely technical support. This poor guy went through months of “please reinstall your OS”-level indifference. That’s not the way one builds confidence. And yeah, I’m glad you’ve revised your policies, but in my particular case it came too late.

This guy bought a Ferrari-analogue and got a Trabant with Lada-level tech support. He’s been pretty polite in writing up a follow-up detailing your company’s long-overdue resolution of his issues. I doubt he’s asked you to refund the months of lost productivity or the time he spent essentially serving as a free engineering consultant. That’s just not good enough. Somebody should’ve swooped in before this poor guy became so frustrated that he wrote it up and submitted it to Hacker News.
No DRM, full-ownership computing is great, and the performance is literally astounding (particularly for our use-case), but at the time, based on the evidence I had, the decision was clear. I don’t regret deciding not to buy your product: based on the data available to me at the time, as far as I knew I would’ve been alone, and/or I would have spent a fortune sending faulty items back and forth through customs, paying import duty each time.
Whoever decided not to be proactive in this guy’s case procured your firm a sizeable amount of bad publicity.
Not to mention that the team I am part of would’ve probably, as a side-effect of the proprietary software we are developing, also contributed various patches and so forth to various parts of the underlying open-source stack.
No problem, I understand. In your shoes I may have even done the same thing.
Just in case you didn't see them (they're not exactly easy to find with Google etc. for some reason), there are a number of positive reviews and comments online from various people. I know one thing we haven't (yet) done is to start assembling all the success stories around our hardware; this is always challenging due to the number of people who use the hardware without posting online, for privacy reasons etc.
Without further ado:
Slightly out of date (the software now available for POWER is significantly more robust than it was at the time -- we even have Fedora desktop ISOs showing up now for POWER):
> I am quite impressed with it so far. Installation was a breeze, it compiles the kernel on 32 cores from spinning rust in 4m15s
Does anyone know how this compares to a recent x86 machine? This seems quite fast for the entire kernel, but I have no idea what typical build times are like.
2m3.673s; default config with loadable modules disabled; make -j 32. This is on a second-to-last-generation 24C/48T Xeon (albeit a slower one) with all of the known speculative execution vulnerabilities mitigated except spec_store_bypass, and SMT still enabled (because htop is prettier with more pretend CPUs).
What's the I/O situation? I did my build from an HDD, WD Blue. Also, kernel config matters a lot - I used yesterday's master branch from Linus, with Alpine Linux 3.10's ppc64le config, plus olddefconfig to catch it up.
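If you want to reproduce a comparable timing yourself, the recipe is roughly this -- a minimal sketch, where the distro config path is a placeholder you'd substitute with whatever config you're seeding from:

    # Shallow-clone Linus' master and seed it with a distro kernel config
    git clone --depth 1 https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux
    cp /path/to/distro-ppc64le.config .config   # placeholder path
    make olddefconfig     # fill any new config symbols with their defaults
    time make -j 32       # parallel build; match -j to your thread count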
I agree, but I’m really curious whether this is just because the commonly used compilers & libraries aren’t particularly optimised for ppc64le (which is a guess) or whether the hardware itself can’t match the equivalent Xeon and Ryzen offerings.
Not that this matters to the end user, but it is still interesting IMO.
I might take that bet: core for core, matched node size (i.e. 14nm), run out of tmpfs or NVMe storage, and with both systems protected against Spectre, Meltdown, etc. (i.e. no cheating by ignoring the ISA specifications)...
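(On Linux you can verify the "no cheating" condition on both boxes before racing them, by the way -- kernels from 4.15 on report per-vulnerability mitigation status in sysfs:)

    # One line per known issue: "Not affected", "Vulnerable", or "Mitigation: ..."
    grep . /sys/devices/system/cpu/vulnerabilities/*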
Slightly OT, but if you limit the playing field to open ISA / fully owner controlled systems, POWER9 is so far out front in terms of performance that it looks like an outlier. ;)
Well, to be fair I was thinking of the SiFive RISC-V board as one of the competitors, since my understanding was they did finally open the rest of their firmware a while back.
However, yes, the vast majority of the remaining owner-controlled systems are even smaller -- some of the smaller (older) ARM Chromebooks (sans GPU, of course), the BeagleBone Black, those kinds of systems.
Yeah, I also have a HiFive Unleashed, which is totally open, but it doesn't really aim to compete with Raptor, iiuc. Until recently it also won out on open ISA, and the Blackbird has to be bootstrapped by the ARM BMC -- which is a nonfree ISA. Power consumption is another story as well. Raptor's systems are definitely the only realistic competitor to x86_64, though, for desktop and server use-cases.
Yep, definitely agreed. FWIW we're not happy with the ARM baseboard controller either and are making progress toward fixing that particular concern. All I'll say on that is that the POWER ISA opening had to happen before we could start work in earnest on getting rid of the ASpeed... ;)
> both systems protected against Spectre, Meltdown etc. (i.e. no cheating by ignoring the ISA specifications)
Does the x86 specification say "speculation will have no observable side effects on the memory subsystem"? I wasn't aware of that. In other words, Spectre/Meltdown are bad security flaws, but I don't think they're the result of cheating.
Meltdown is absolutely cheating, as it speculates through permission boundaries. I don't see how anyone could argue otherwise in good faith, especially considering that only Intel and IBM were affected. AMD and SPARC were immune, and the extent of ARM's vulnerability was a single register, which seems like a bug, not reflective of an architectural design decision.
You can't design your software to protect against Meltdown, which makes the ISA guarantees useless. Whereas Spectre-type side channels can be mitigated through software, which is what cryptographic algorithm implementations have been doing for years.
There's far more room for debate regarding culpability for Spectre class attacks, though it's pretty clear that Intel deliberately pushed the envelope in ways that AMD and ARM weren't prepared to do.
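To make the "mitigated through software" point concrete, here's a rough sketch of what that looks like in practice; the flags shown are gcc's x86 retpoline options (assuming gcc 8 or later -- other compilers spell these differently):

    # Spectre v2: retpolines prevent indirect branches from being steered
    gcc -O2 -mindirect-branch=thunk -mfunction-return=thunk -c sensitive.c
    # Spectre v1 is handled in source instead: the Linux kernel, for example,
    # masks array indices after a bounds check (array_index_nospec) rather
    # than trusting speculation to respect the branch.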
Again, I don't see that as cheating or violating a spec (or at least any guarantee made by Intel about x86(-64) behavior that I know of). I would assume that speculation can do just about anything (access protected memory, run illegal/protected instructions, etc), as long as any effects aren't committed to architectural state until the speculated path retires. The existence of side channels (timing information of subsequent memory accesses) for retrieving information from speculative execution paths is a different issue. Perhaps it was naive of the architects to assume this wouldn't have any consequences beyond improving performance, but I don't know what spec it is supposed to violate.
What's the point of protected memory, especially when using VM extensions, and particularly with regards to SGX, if the architecture is implemented in such a way that unprivileged software can read the entire contents of memory and there's no way for software, either the kernel or the processing software, to prevent it? You can make a tortured, pedantic argument defending Intel if we disregard VT-x and SGX -- that memory protection was originally intended only to prevent data corruption, not to ensure confidentiality -- but at the end of the day all such an argument does is emphasize the deliberate choices Intel made to sacrifice confidentiality for performance. And those choices are all the more unforgivable considering that Intel's primary motivation for taking these performance shortcuts was to expand into and secure their dominance of the VM and cloud hosting market; a market predicated on their architecture having the nominal capability to ensure data confidentiality.
It'll depend on the kernel configuration. I don't know about the Raptor, however I have some experience with IBM's POWER9 servers and they do have tremendous single thread performance and many cores, and are quite a bit faster than Intel Xeons (although at a considerably higher price, but the customers usually need the performance and aren't too sensitive about price).
Incidentally, I'm pretty sure that Sir_Cmpwn meant 32 threads, not 32 cores. Raptor sells an 8-core CPU bundle, and those cores are SMT4 capable, meaning 4 threads per core. So it will appear as 32 CPUs if you run `htop` or `cat /proc/cpuinfo` in Linux.
Raptor does sell a 22-core SMT4 capable CPU, which, Yikes! that's a lot of computing power.
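If you want to see how your own box breaks down, `lscpu` reports the topology directly -- a quick sketch, with illustrative output for the hypothetical 8-core SMT4 case:

    lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'
    #   CPU(s):              32
    #   Thread(s) per core:  4
    #   Core(s) per socket:  8
    #   Socket(s):           1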
That Blackbird bundle seems like a beast. The Phoronix benchmarks show it being outperformed by high-end x86 chips, though. I wonder if some of that comes down to gcc optimizations, because x86 has been around longer.
I really like these machines. Unfortunately, I'm in Europe, where they seem to be harder (or riskier) to get hold of. I suppose I should also try to think of something reasonable to do with them, given their cost...
Fellow EU resident here: I was curious too, but the high cost + the insane import duty + the risk of a return put me off. I'm 99% sure that if I RMA'd it like OP did, the local customs would somehow not understand that it's a replacement/repair and sting me for the import duty once more :(
Worth noting that I actually feel quite bad about this; I really wanted to try one out, vote with my wallet in favour of more open hardware and an alternative architecture, and support a smaller company like Raptor. I eagerly followed all the news around Raptor (here on HN, on lobste.rs, and elsewhere) and read all of "ClassicHasClass"'s blog posts @ talospace.com -- but in the end I unfortunately can't justify the expense to myself (when I spec out a modest Blackbird system it consumes the better part of a month's salary, I've recently bought my first flat, and I sense a recession around the corner...). That actually makes me feel a bit hypocritical and part of the "problem", but maybe I'm not actually the target market.
Anyone wanting control of a powerful modern computer is part of the target market. We just haven't been able to drive the price down further at this time -- doing hardware design around fully open source firmware is hard and a lot of the typical shortcuts to lower costs (including the always-concerning "post-sale monetization" concepts) simply aren't something we find acceptable on any level.
Basically, it doesn't do anyone any good for us to lower cost by giving up the full owner-control experience that is central to our product lines. :)
> I have christened it “flandre”, which I think is fitting.
Youmu Konpaku is best character. Just gotta love the sword characters in a bullet hell game. Although Flandre was an "extra" endgame boss character, never actually playable IIRC. Really good music for her stage, though.
----------
Glad to hear everything is working out for your machine. One thing I'm curious about:
> Installation was a breeze, it compiles the kernel on 32 cores from spinning rust in 4m15s
Is that an 8-core / 32-thread CPU? I don't think there's actually a 32-real-core CPU available from them. If so, I think that's an impressive speed for 8 real cores.
I hope this means some future work is coming on Wayland performance on ppc64le, or at least on 2D-only rendering. My GPU-less 4-core Blackbird was unusable in Wayland with the basic BMC graphics (whereas it is quite sprightly in X.org on the same version of Fedora and with the exact same hardware loadout). I'll be honest and say I'm a Wayland skeptic generally, for reasons I won't derail this post with, but I'm resigned to it probably being the future, and I would like to see it do better.
From a hardware perspective, nothing. I have one in my Talos II and it runs rather better in Wayland than my Blackbird, though I'm usually in X.org. My only real question here is why performance would be so terrible without one. It was also part of an experiment to see how cheap I could make the loadout and still have an adequate system.
However, from a libre computing perspective, the WX7100 in my Talos II requires firmware which is non-open, even though it is included in Linux. I imagine that this market would include people interested in reducing the amount of firmware they don't control as much as possible, so for this niche, that is relevant.
Which compositor are you using? If you don't have a GPU you'll want one with software rendering support. Using a software OpenGL implementation like llvmpipe will work but will be very slow.
We don't support this yet in wlroots, but it's definitely something we want to add, and not a limitation of the Wayland protocol at all.
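If it helps narrow things down, you can usually confirm from inside the session whether you've fallen back to llvmpipe -- a rough sketch, assuming glxinfo (from mesa-utils / mesa-demos) is installed and runs via XWayland:

    # Print which OpenGL renderer the session is actually using
    glxinfo -B | grep 'OpenGL renderer'
    # Force software GL explicitly for a comparison run
    LIBGL_ALWAYS_SOFTWARE=1 glxinfo -B | grep 'OpenGL renderer'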
It was whatever was out of the box with Fedora 30 and GNOME (I guess gnome-shell itself?), and indeed was very slow with llvmpipe. I look forward to the future work on this.