Go chose to rely heavily on nil pointers, which is a design mistake (see Tony Hoare's apology for inventing it). The resultant tension between interfaces and nils is, in my opinion, an absurd side effect that cannot be explained away as anything except an ugly wart. We should have something better than this in 2016.
I say this as someone who uses Go daily for my work and mostly likes it (despite the many warts).
Precisely, and this is one area where go fails completely. The features don't interact well at all!
Tuple returns are everywhere, but there are no tools to operate on them without manually splitting the halves, checking conditionally whether one of them exists, and returning something different in each case. Cue the noise of subtly-different variants of `if res, err := f(); err != nil` in every function.
Imports were just paths to repositories. Everything was assumed to just pull from the tip of the branch, and this was considered to be just fine because nobody should ever break backwards compatibility. They've spent years trying to dig themselves out from under this one.
Everything should have a default zero value. Including pointers. So now we're back to doing manual `nil` checking for anything that might receive a nil. But thanks to the magic of interfaces, if a function returns a nil concrete pointer through an interface type, the resulting interface value will fail a direct `nil` comparison! This is completely bonkers.
Go has implicit implementation of interfaces which makes exhaustive checking of case statements impossible. So you type-switch and hope nobody adds a new interface implementation. So you helpfully get strong typing everywhere except for the places you're most likely to actually mess something up.
Go genuinely feels like a language where multiple people each had their pet idea of some feature to add, but nobody ever came together to work on how to actually make those features work in concert with one-another. That anyone could feel the opposite is absolutely incomprehensible to me.
I agree completely. Go is opinionated about the wrong things, in the wrong way. Which in the end make it a worse language than it could have been.
One thing you did not mention is the very varied quality of the standard library. I'm still baffled by the "standard date" craziness, and there are huge warts in the SQL interface, etc.
My colleagues used to joke that Go is a great solution to Google's problems (for everyone else, it's not as good as it arguably could have been, to put it mildly).
> I just can't go back to Go with nil pointers and lack of decent enums/ADTs/pattern matching either.
Go is simply a badly designed language where the idea of "simplicity" has been misapplied, and proven bad ideas like nil/null, exceptions, and such have been introduced into a seemingly modern language. One would think that decades of Java, JavaScript, etc. code blowing up because of these issues would have taught someone something, but apparently that is not always the case.
The simple existence of the Billion Dollar Mistake of nils would suggest that maybe Rob Pike et al are capable of getting it wrong.
> Much of the language design was motivated by learnings within Google.
And the main problem Google had at the time was a large pool of bright but green CS graduates who needed to be kept busy without breaking anything important until the company needed to tap into that pool for a bigger initiative.
> What other "modern" language that has more of a focus on software engineering, putting readability and maintainability and stability at the forefront?
This presupposes that golang was designed for readability, maintainability, and stability, and I assert it was not.
We are literally responding to a linked post highlighting how golang engineers are still spending resources trying to avoid runtime nil panics. This was widely known and recognized as a mistake. It was avoidable. And here we are. This is far from the only counterexample to golang being designed for reliability, it’s just the easiest one to hit you over the head with.
Having worked on multiple large, production code bases in go, they are not particularly reliable nor readable. They are somewhat more brittle than other languages I’ve worked with as a rule. The lack of any real ability to actually abstract common components of problems means that details of problems end up needing to be visible to every layer of a solution up and down the stack. I rarely see a PR that doesn’t touch dozens of functions even for small fixes.
Ignoring individual examples, the one thing we actually have data on in software engineering is that fewer lines of code correlate with fewer bugs, and that fewer lines of code are easier to read and reason about.
And go makes absolutely indefensible decisions around things like error handling, tuple returns as second class citizens, and limited abstraction ability that inarguably lead to integer multiples more code to solve problems than ought to be necessary. Even if you generally like the model of programming that go presents, even if you think this is the overall right level of abstraction, these flagrant mistakes are in direct contradiction of the few learnings we actually have hard data for in this industry.
Speaking of data, I would love to see convincing data that golang programs are measurably more reliable than their counterparts in other languages.
Instead of ever just actually acknowledging these things as flaws, we are told that Rob Pike designed the language so it must be correct. And we are told that writing three lines of identical error handling around every one line of code is just Being Explicit and that looping over anything not an array or map is Too Much Abstraction and that the plus sign for anything but numbers is Very Confusing but an `add` function is somehow not, as if these are unassailable truths about software engineering.
Instead of actually solving problems around reliability, we’re back to running a dozen linters on every save/commit. And this can’t be part of the language, because Go Doesn’t Have Warnings. Except it does, they’re just provided by a bunch of independent maybe-maintained tools.
> enable the creation of even more complex abstractions and concepts
We’re already working on top of ten thousand and eight layers of abstraction hidden by HTTP and DNS and TLS and IP networking over Ethernet frames processed on machines running garbage-collected runtimes that live-translate code into actual code for a processor that translates that code to actual code it understands, managed by a kernel that convincingly pretends to be able to run thousands of programs at once and pretends to each program that it has access to exabytes of memory, but yeah the ten thousand and ninth layer of abstraction is a problem.
Or maybe the real problem is that the average programmer is terrible at writing good abstractions so we spend eons fighting fires as a result of our collective inability to actually engineer anything. And then we argue that actually it’s abstraction that’s wrong and consign ourselves to never learning how to write good ones. The next day we find a library that cleanly solves some problem we’re dealing with and conveniently forget that Abstractions Are Bad because that’s only something we believe when it’s convenient.
Yes, this is a rant. I am tired of the constant gaslighting from the golang community. It certainly didn't start with "generics are complicated and unnecessary and the language doesn't need them". I don't know why I'm surprised it hasn't stopped since then.
> it does feel to me like a language designed to prevent poor programmers from making mistakes.
That's Rust. I would not call _any_ language containing some kind of nil pointers "designed to prevent poor programmers from making mistakes". Go is mainly designed to be "easy to read" (and it excels at that if you are used to C-style syntax).
Surprisingly, I think we're actually mostly in agreement here, so there's not much to reply to. I think the only real takeaway is that we don't agree on the conclusions to draw.
> On Go hate - it's simple. It has reached quite some time ago the critical adoption rate where it will be made to work in the domains it is applied to regardless of its merits (hello a post on HN describing the woes of a company investing millions in tooling to undo the damage done by bolting in NILs and their unsoundness so tightly). It has serious hype and marketing behind it, because other languages are either perceived as Java-kind-of-uncool, or are not noticed, or bundled, again, with Java, like C#. And developers, who have a rift where their knowledge of asynchronous and concurrent programming should be, stop reading at "async/await means no thread blocky" and never learn to appreciate the power and flexibility a task/future-based system gives (and how much less ceremony it needs compared to channels or manually scheduled threads, green or not).
I agree that bolting on nil checking to Go is pretty much an admission that the language design has issues. That said, of course it does. You can't eat your cake and have it too, and the Go designers choose to keep the cake more often than not. To properly avoid nil, the Go language would've needed to adopt probably something like sum types and pattern matching. To be honest, that may have been better if they did, but also, it doesn't come at literally no language complexity cost, and the way Go is incredibly careful about that is a major part of what makes it uniquely appealing to begin with.
Meanwhile while Go gets nil checkers, JavaScript gets TypeScript, which I think really puts into perspective how relatively minor the problems Go has actually are.
> Just look at https://madnight.github.io/githut/#/. Go has won, it pays well, it gets "interesting and novel projects" - it does not need your help. Hating it is correct, because it is both more popular and worse (sometimes catastrophically so) at what other languages do.
I gotta say, I basically despise this mentality. This basically reads somewhere along the lines of, "How come Go gets all of the success and attention when other programming languages deserve it more?" To me that just sounds immature. I never thought this way when Go was relatively niche. People certainly use Python, JavaScript, and C++ in cases where they are far from the best tool for the job, but despite all of those languages being vastly more popular than Go, none of them enjoy the reputation of being talked about as the only programming language in history with no redeeming qualities.
People generally use Go (or whatever their favorite programming language is) for things because they know it and feel productive in it, not to spite C# proponents by choosing Go in a use case that C# might do better, or anything like that.
But if you want to think this way, then I can't stop you. I can only hope that some day it is apparent how this is not a very rational or productive approach to programming language debates.
Unfortunately, even though I'm sure it definitely plays no small part, I can't really assume that Go's popularity plays into any person's hatred of it, because flat-out, that would feel like a bad-faith assumption to make...
Thanks for writing this up; it was something of a wake-up call from a recent bout of fanboyism, and I found myself agreeing with most of your examples. I'm still learning and I haven't found any of these things to be dealbreakers yet, but they're absolutely questionable design decisions when you consider that other languages already solved many of these issues (with good reason). I think there's a lot to like about Go but I'm worried that the language will die young because of religious inertia.
We've been using Go at work for 2.5 years now and it has become my primary language of use at this point in time. I hate it. I hate it like hell. It's like they decided to take all the things that make other languages good - such as safety, brevity, intuitiveness, versioning and proper project structuring - and threw them out the window. As far as I'm concerned the good things about Go (large community, fairly efficient generated executables) do not outweigh the bad.
We've spent so much time fixing obscure bugs and behaviors that betray everything we've learned in the languages we used previously in our careers that it was really not worth it. And we still do. Finding developers is also an issue, as most developers do not really want to learn a new language, and Go is still not that common in enterprises (I used to be like that myself, but I've learned long ago that this shouldn't be such a big factor when thinking about a new job).
You can make the case that we're simply not good enough developers or Go is just not a good fit for the type of developers we are. Maybe. But the choices made in designing this language, and its popularity, absolutely baffle me.
It's very interesting to me how bad design decisions in Go are accepted by the users with little protest. There are other things - the Go path, error handling, surprising conventions in a language with no "magic", etc.
Yes, it was created by behemoths of computer science (but more accurately in this case, dinosaurs), and they clearly have not been writing and debugging large production systems for decades now. That's why the whole thing is clearly missing the lessons learned from modern language design.
Doesn't forcing your self into a corner where you have to either depend on unstable system interfaces or suffer severe performance penalties indicate poor design?
Obviously any set of abstractions is going to wind up making tradeoffs, and choices that make one particular set of problems easy can cause negative consequences in other areas. But the more I used Go, the more I grew to question the logic behind these design decisions. Perhaps they make sense at Google, but to me they don't seem to make sense outside of its internally-controlled confines.
> Go would be decent for applications asymptotically approaching web browsers if the user interface options were better.
No, it wouldn't. Browsers are too performance critical. You need an industrial-strength optimization pipeline in the compiler, and even having a GC is risky.
(There are numerous other smaller-but-still-critical reasons why Go's design prevents it from working as the implementation language for a competitive browser, but these two are the biggest show-stoppers.)
I'm usually really willing to forgive a lot of stuff when justified by genuinely different design goals or priorities.
Unfortunately with Go I become less convinced with every passing year that they can keep getting away with this. They keep spinning obvious weaknesses as philosophical strengths, rather than admitting it is due to limited resources and backwards compatibility constraints. Their use cases (servers, UNIX tools) aren't actually unusual or different to other teams. It seems like every time I read about Go they've made what is simply a bad design decision that they later regret and explore fixing, but their rules cause them to keep compounding self-inflicted wounds. Compared to other language and runtime teams they just don't seem to know what they're doing.
Here are just some of the examples we've learned about so far.
Stack unwinding and hyper-inefficient calling conventions. Despite being designed for servers at Google, where throughput really matters, Go generates extremely bloated code and hardly uses registers, rather than generating unwind metadata that is consulted on demand and using a tuned calling convention. Tuned calling conventions are optimisations that date back literally decades in the C world, and yet Go doesn't have them!
This significantly reduces icache utilisation (hurting throughput) and means they can't use any existing tools, and yet the only benefit is it made their compiler easier to write initially. They increased the server costs of Go shops permanently by taking this shortcut which benefited only the compiler authors. Now they struggle to fix it because they don't have any de-optimisation engine either, so changing calling conventions makes it harder to get useful stack traces and would break user-authored inline assembly (which is rare for Go's use cases).
Compare to how the Java guys did it: the compiler generates highly optimised code plus tables of metadata that let the runtime map register/stack state back to the programmer's de-optimised view of the program. Methods can be inlined aggressively because the VM can always undo it, so it doesn't get in developers' way. That metadata is only consulted when a stack unwind is actually needed, which is rare. The rest of the time it sits cold in far-away RAM or swapped to disk. Calling conventions aren't exposed to the user and can be changed as needed, but if you need custom assembly you go via JNI, which uses the platform calling convention, and accept a slower function call.
Go's approach isn't some principled matter of design, as evidenced by their explorations of fixing it. They just didn't plan ahead.
Garbage collectors. The Go team originally tried to claim their GC was some sort of massive advance, a GC for the ages. A few years later they gave a presentation where they admitted they had explored replacing it several times because it's extremely inefficient, but are hamstrung by a (self imposed) rule that they're only allowed one knob and don't want to make their compiler slower. Once again, the constraints of their compiler causes massive cost bloat for projects in production (where cost really matters).
Compare to Java: the default GC tries to strike a balance between throughput and latency, but if you need super low latency or super high throughput you can flip a switch to get that. The runtime can't know whether your task is a batch job like a compiler or a latency-sensitive HTTP server, so you can tell it, and if you don't it'll take a middle path. Given the huge costs of large server farms, this is sensible!
Compile time. Whenever you read about the Go team's choices it's apparent they are willing to mangle basically anything to get themselves an easier to write or faster compiler, hence the fact that it hardly optimises and generates massive binaries. But this isn't the only way to get fast compile times.
Compare to Java: compilation is done in parallel with program execution and only where it matters. During development where you frequently start up and shut down programs, you're only waiting for the compiler frontend (javac) which is very simple and doesn't optimise at all, so it's fast like Go's is. When deployed to production the program automatically ends up optimised and running at peak performance and you don't even need to flip a "in prod" switch like with a C compiler: the fact that the program is long running is itself evidence that you're in prod and worth optimising.
This heuristic used to hurt a lot for small command line tools, which usually don't need to be very fast. But you can produce binaries with the GraalVM native-image tool that start as fast (or even faster) than C programs do now, so that's not a big deal any longer.
Generics. Well, this one has been thrashed out so much I won't cover it again here. Suffice it to say that other languages have all concluded this is worth having and managed to introduce it in either their first versions, or in a backwards compatible way later.
Debugging. The ability to easily debug binaries and get reasonable stack traces is known to make program optimisation hard, because optimisation means re-arranging the program behind the developer's back. It's hard to put a breakpoint on a function that was deleted by the optimiser, or inlined. That's why C compilers have debug vs non-debug modes. Debug binaries can be significantly slower than release binaries, hence the difference. In fact in the past I've seen cases where debug-mode C binaries were so slow you couldn't use them because getting the program to the point where it'd experience issues took so long. And of course forget about debugging production binaries.
Golang faces the same problem but in effect just always runs every program in debug mode.
Compare to Java: See the above description of the de-optimisation engine and tables. If you request a stack trace or probe a method with a breakpoint, the program is selectively de-optimised so the part being inspected by the developer looks normal whilst the rest of the program continues running at full speed. This means you can attach debuggers to any program at any time, without flags, and you can even attach debuggers to production JVMs at any time. This feature doesn't impose any throughput hit (it does consume memory, but it's cold memory).
So we can see that repeatedly the Go guys have made choices that seem to have wildly wrong cost/benefit tradeoffs, tradeoffs that literally nobody else made, and almost always the root cause is their duplication of effort vs other open source runtimes. They use a variety of fairly condescending justifications for this like "our employees are too young to handle more than one flag", but when you dig in you find they've usually explored changing things anyway. They just didn't succeed.
The go team's attempt at involving everyone in the priorities of the language has meant they lost focus on the wisdom of the original design. I spent 10 years writing go and I'm now expecting to have to maintain garbage go2 code as punishment for my experience. I wish they focused on making the language better at what it does, instead of making it look like other languages.
This insane perspective of “nothing is totally perfect so any improvements over what go currently does are pointless” whenever you confront a gopher with some annoying quirk of the language is one of the worst design flaws in the golang community hivemind.
> Go has the null pointer (nil). I consider it a shame whenever a new language, tabula rasa, chooses to re-implement this unnecessary bug-inducing feature.
I agree that all the points in the article are very debatable. But for this one I have yet to see a counter-argument.
Can you give an example of a feature where the designers of Go failed to consider other languages' mistakes? Every discussion of theirs I've read has been thoughtful, open, and well cited.