> You can test the interface. A type is just an interface around memory, albeit more constrained.
Wow, again, that's not the problem here. Errors in the standard library are defined as values. There is no point testing them as interfaces; that will not tell you the nature of the error, since they are all defined with fmt.Errorf. Do you understand the problem now? The problem is being consistent across codebases about errors as values versus errors as types.
> No, an error implements the error interface. It means that it can be a value of any type that implements the constraint of having an Error method.
It doesn't matter what a value implements if you don't test its content by type. Std lib errors are meant to be tested by value, not by type. It has nothing to do with interfaces. When you get a std lib error, in order to know what error it is you compare it to an exported value, not its type. I don't know why you keep insisting on interfaces; that's not the core of the issue here.
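To make that concrete, here's a minimal sketch (standard library only) of the mix you end up with: some errors are sentinel values, some are concrete types, and anything built with fmt.Errorf is neither unless it wraps one of the former.

package main

import (
    "errors"
    "fmt"
    "io/fs"
    "os"
)

func main() {
    _, err := os.Open("does-not-exist")

    // Style 1: compare against an exported sentinel value.
    if errors.Is(err, fs.ErrNotExist) {
        fmt.Println("matched by value")
    }

    // Style 2: extract a concrete error type.
    var pathErr *fs.PathError
    if errors.As(err, &pathErr) {
        fmt.Println("matched by type:", pathErr.Path)
    }

    // Style 3: an error made with fmt.Errorf exposes neither a useful
    // sentinel nor a useful type unless it wraps one with %w.
    opaque := fmt.Errorf("something went wrong")
    fmt.Println(errors.Is(opaque, fs.ErrNotExist)) // false
}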
They are strings. They are implementations of an interface with a single `to_string` method. Once you've got an `error`, you can't say more (you could switch on the type, but you'd have to guess which types it could be), so basically you have an opaque object with a single `to_string` method, which is equivalent to a string.
I can't imagine a more error-prone way than the one Go chose. How do you match the errors? Do you match strings? Do you match types? You don't even know what types there could be, since all the type system says is that the function returns error.
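For instance (function and message made up), this is the kind of matching that shows up in practice when the signature only says error:

package main

import (
    "errors"
    "fmt"
    "strings"
)

func connect() error {
    return errors.New("connection refused by peer")
}

func main() {
    err := connect()
    // The type system doesn't tell us which concrete error types
    // connect() can produce, so string matching creeps in:
    if err != nil && strings.Contains(err.Error(), "connection refused") {
        fmt.Println("retrying...")
    }
}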
> I don’t think I’ve ever seen a case where a Go function returned neither a value nor an error, or both a value and an error.
That's kind of the point. The type system should be powerful enough to disallow those cases then.
In practice, I've seen both, always accidentally. I've also (more commonly) seen a lot of confusion and annoyance around:
Okay, so this has to return a pointer for the error case; should the caller check that? If not, how do we square that with the general rule that checking for nil pointers is good practice? If we do check, our unit-test coverage has a blemish for every call, since nothing can hit that branch. If we skip making it a pointer, then it's a zombie object.
It's just a lot of cognitive load and bikeshedding around an issue that shouldn't exist.
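A sketch of that dilemma, with a made-up LoadConfig:

package main

import (
    "errors"
    "fmt"
)

type Config struct{ Addr string }

func LoadConfig(path string) (*Config, error) {
    if path == "" {
        return nil, errors.New("no path given")
    }
    return &Config{Addr: "localhost:8080"}, nil
}

func main() {
    cfg, err := LoadConfig("")
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    // Do we also check cfg != nil here? The type system can't say
    // whether (nil, nil) is possible, so codebases disagree.
    fmt.Println(cfg.Addr)
}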
> If you design your functions in a way that that function will never receive an array, you don't have to test for it. And if some idiot tries one day, it should error very loudly that something is wrong.
Then you've moved it from a test to a runtime assertion. You're still re-implementing the type-checker, poorly.
The difference is that the former scenario is much more likely to happen in golang due to its error handling. The former can be caught using unit tests, and sticks out more in languages like Java with scoped try-with-resources blocks that limit the scope of lifetimed variables (another bad design in golang, where defers are function-scoped rather than block-scoped, introducing unnecessary runtime costs while also being less useful in practice; sketched below).
At the same time, you don't want to be writing unit tests for every simple scenario. Some people who use dynamic languages argue that you just type assert everything in unit tests and you should be good. Obviously, that's not how things work out in practice. Similarly here, because of the way errors are handled in golang, the same pattern is everywhere in the program, and it is extremely tedious to write the same unit tests over and over everywhere, while making sure to reach a certain level of coverage.
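To make the defer aside concrete (the file handling is just an example, names made up):

package main

import "os"

func processAll(paths []string) error {
    for _, p := range paths {
        f, err := os.Open(p)
        if err != nil {
            return err
        }
        // This defer does NOT run at the end of the loop iteration;
        // every file stays open until processAll returns.
        defer f.Close()
    }
    return nil
}

func processAllScoped(paths []string) error {
    for _, p := range paths {
        // Workaround: wrap the body in an extra closure so the defer
        // fires per iteration, at the cost of more boilerplate.
        err := func() error {
            f, err := os.Open(p)
            if err != nil {
                return err
            }
            defer f.Close()
            // ... use f here ...
            return nil
        }()
        if err != nil {
            return err
        }
    }
    return nil
}

func main() {
    _ = processAll(nil)
    _ = processAllScoped(nil)
}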
> Not all Go functions, even in the standard library, return errors declared using the error interface.
Do they just return integers then like C?
> Why would you ever need a stack trace when the network is down, for example?
Because it's still important to know where you are in the program when the network broke, in order to make sure that recovery happened correctly for example. Seeing a "network is down" error in the logs is useful, but more so is knowing exactly what I was doing (what if I wasn't expecting to access the network in a particular path? etc.)
> Every problem you encounter with errors will also be encountered with other types sooner or later.
Practicality matters; otherwise C is all we'd ever need, and we wouldn't be seeing constant CVEs, etc.
> Mentioning Rust here is a bit strange seeing as how it contradicts the entire premise you're trying to push.
Not really. Rust errors compose nicely, and you're explicitly forced to handle them (unlike golang's), and are much harder to accidentally swallow or ignore compared to golang. This scenario is repeated many times when you compare golang to other better designed languages. The language constantly takes the easy way out so to speak, making the compiler implementation simpler, while pushing complexity onto the user. Furthermore, as I pointed out, there's nothing preventing us from using the same approach in Java, now that it has sealed types and pattern matching. This is not possible in golang.
Java's checked exceptions are not much different actually. They still require you to handle the error either by try/catch or by declaring that the method throws the same exception, or a superclass of it.
I also remembered another shortcoming of golang error handling that I've seen several times in real code bases, being forced to check both the error and the other return value to make sure that things are working. Yes, a properly written program shouldn't need to do that, but reality doesn't care. And what's ironic is that golang was supposed to be designed to support "programming in the large" (another unverified claim that is contradicted by reality). The fact that it opens these doors is indicative of the mentality that went into designing it.
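A small sketch of both failure modes (lookup is made up): nothing forces you to look at the error, and defensive code ends up checking both results anyway.

package main

import (
    "errors"
    "fmt"
)

func lookup(key string) (string, error) {
    if key == "" {
        return "", errors.New("empty key")
    }
    return "value-for-" + key, nil
}

func main() {
    // Nothing forces the caller to look at the error at all:
    v, _ := lookup("") // compiles fine, v is silently the zero value

    // And defensive code ends up checking both results anyway:
    v, err := lookup("user")
    if err != nil || v == "" {
        fmt.Println("lookup failed")
        return
    }
    fmt.Println(v)
}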
> If you're arguing that you could just look at the file-open code to see the errors it returns, couldn't the same apply for seeing which exceptions are thrown?
No, because then you have to look at every function inside there. And then every function inside those functions. And then...
With typed error handling they're only one level of abstraction deep. Actually zero levels, since your function won't typecheck unless you handle the errors.
> For example take a method that wants a string in a particular format, you pass a string in another format and it breaks.
This sentence betrays a confusion about what types are, and how to use them. In particular, you seem to be mixing up two possible scenarios.
In the first scenario, we have a method which "wants a string" (i.e. its input type is 'String'), and we can "pass a string" (i.e. call it with an argument of type 'String'). This code passes the type-checker, since it implements the specification we gave: accept 'String'. Perhaps there are higher-level concepts like "formats" involved, and the resulting application doesn't work as intended; but that's not the fault of the types. More specifically: typecheckers verify that code implements its specification; but they do not validate that our specification captures the behaviour we want.
In the second scenario, we have a method which "wants a string in a particular format" (i.e. its input type is 'Format[A]'), and we "pass a string in another format" (i.e. call it with an argument of type 'Format[B]'). That is a type error, and the typechecker will spot that for us; perhaps pointing us to the exact source of the problem, with a helpful message about what's wrong and possible solutions. We do not get a runnable executable, since there is no meaningful way to interpret the inconsistent statements we have written.
tl;dr if you want a typechecker to spot problems, don't tell it that everything's fine!
If you want the safety and machine-assistance of such 'Format' types, but also want the convenience of writing them as strings, there are several ways to do it, e.g. a 'smart constructor' which parses a plain string into a 'Format' value and rejects malformed input, or reading the format from an external specification (the "type provider" approach mentioned below).
> A method that wants an integer in a range. An array of maximum N elements. And we can go on forever.
You're making the same mix-up with all of these. Fundamentally: does your method accept a 'Foo' argument (in which case, why complain that it accepts 'Foo' arguments??!) or does it accept a 'FooWithRestriction[R]' argument (in which case, use that type; what's the problem?)
Again, there are multiple ways to actually write these sorts of types, depending on preferences and other constraints (e.g. the language we're using). Some examples:
Your "integer in a range" is usually called 'Fin n', an array with exactly N elements is usually called 'Vector n t', etc. I don't think there's an "array of maximum N elements" type in any stdlib I've come across, but it would be trivial enough to define, e.g.
data ArrayOfAtMost : Nat -> Type -> Type where
  Nil  : {n : Nat} -> ArrayOfAtMost n t
  Cons : {n : Nat} -> (x : t) -> (a : ArrayOfAtMost n t) -> ArrayOfAtMost (S n) t
This is exactly the same as the usual 'Vector' definition (which is a classic "hello world" example for dependent types), except a Nil vector has size zero, whilst a Nil 'ArrayOfAtMost' can have any 'maximum size' we like (specified by its argument 'n'; the {braces} tell the compiler to infer its value from the result type). We use 'Cons' to prepend elements as usual, e.g. to represent an array of maximum size 8, containing 5 elements [a,b,c,d,e] of type 'Foo', we could write:
myArray : ArrayOfAtMost 8 Foo
myArray = Cons a (Cons b (Cons c (Cons d (Cons e (Nil {n = 3})))))
Note that (a) we can write helper functions which make this nicer to write, and (b) just because the code looks like a linked-list, that doesn't actually dictate how we represent it in memory (we could use an array; or we could optimise it away completely https://en.wikipedia.org/wiki/Deforestation_(computer_scienc... )
Also note that we don't actually need to write such constraints directly, since we can often read them in from some external specification instead (known as a "type provider"), e.g. a database schema https://studylib.net/doc/18619182
> you end up doing things only to make the compiler happy, even if you know that a particular condition will never happen
You might know it, but the compiler doesn't; you can either prove it to the compiler, which is what types are for; or you can just tell the compiler to assume it, via an "escape hatch". For example, in any language which allows non-termination (including Idris, which lets us turn off the termination checker on a case-by-case basis), we can write an infinite loop which has a completely polymorphic type, e.g.
-- Haskell version
anything :: a
anything = anything
-- Dependent type version
anything : (t : Type) -> t
anything t = anything t
We can use this to write a value of any type the compiler might require. Even in Coq, which forbids infinite loops (except for co-induction, which is a whole other topic ;) ), we can tell the compiler to assume anything we like by adding it as an axiom.
> So to me using static types makes sense where types are used by the compiler to produce faster code (embedded contexts, basically, of course I don't use Python on a microcontroller) and as a linting of the code (Python type annotations or Typescript).
Static types certainly give a compiler more information to work with; although whether the resulting language is faster or not is largely orthogonal (e.g. V8 is a very fast JS interpreter; Typed Racket not so much).
As for merely using a type system for linting, that's ignoring most of the benefits. No wonder your code is full of inappropriate types like "string" and "int"! I highly recommend you expand your assumptions about what's possible, and take a look at something like Type Driven Development (e.g. https://www.idris-lang.org )
this is the sort of silliness that is common in the Go world. as noted elsewhere, errors are values in lots of languages - Rust, C, Scala, Haskell, etc etc, but Go explicitly has no way to handle them nicely, no specific syntax and no fancy type system stuff like sum types.
it is my very strong belief that this will eventually be fixed in Go and when it does, almost all the people currently saying "I like errors being values [and it's fine that Go makes it very annoying]" will quickly prefer having some actual language help for these values.
> error is an interface in Go which can be easily cast/checked for the underlying type.
Yeah, it's done at runtime and the compiler won't be able to help you if you fail to do exhaustive type checking. It's a problem any time you refactor your code. ADTs and pattern matching are pretty much the bare minimum language features I expect from any statically typed language.
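A rough sketch of that refactoring hazard, with hypothetical error types: a type switch over errors is never checked for exhaustiveness, so a variant added later just falls into default and the compiler says nothing.

package main

import "fmt"

type NotFoundError struct{ Key string }
type TimeoutError struct{ After string }

func (e *NotFoundError) Error() string { return "not found: " + e.Key }
func (e *TimeoutError) Error() string  { return "timed out after " + e.After }

func handle(err error) {
    switch e := err.(type) {
    case *NotFoundError:
        fmt.Println("missing:", e.Key)
    // If *TimeoutError gets added to the codebase later, nothing here
    // fails to compile; the new case is just silently unhandled.
    default:
        fmt.Println("unhandled error:", e)
    }
}

func main() {
    handle(&NotFoundError{Key: "user:42"})
    handle(&TimeoutError{After: "5s"})
}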
Illustrative, I don't know, but I'll try to give more context.
When writing a library, it is important that public items (like functions and enums) don't change between minor versions, so that client code doesn't need to update its calls to the library.
Sometimes when refactoring code you end up modifying how a library function is implemented. Maybe it will now depend on some file being present on the system, while previously it wouldn't, meaning that the absence of that file adds a new error variant to this function.
In today's Rust, since the Error type of a Result is an explicit part of a function's signature, such a change is very noisy to the library's maintainer: it entails either modifying the signature of the public function to return a different error type, or modifying the Error type itself, which is also public.
When this happens, the change needs to be reconsidered: either you defer it, provide an additional function with the new implementation and error variant, try to make it work with the error types you already have, or decide, in good conscience, that it actually warrants a major version bump.
By contrast, if the set of errors of a function is inferred rather than an explicit part of its signature, then while modifying the implementation you can add a new variant without even realising it (for instance, by mixing up the variant name with a variant returned by a sibling function that you thought this function already used) and break semver much more silently.
I guess it also makes life harder for tooling, since it has to parse the implementation of a function (and all its subfunctions) to rebuild the set of errors, as opposed to simply parsing the signature of the top-level function.
> Essentially it compiles to the same as C function that returns an `int` representing the error
That feels very limiting, I often use error types to e.g., attach data about the error. Is there a more general mechanism for sum types for when this shorthand doesn't apply?
> You might be expecting a linked list of integers, but at some point, some tired, coffee-fueled programmer with a deadline to meet accidentally put a string in there somewhere
How often does this really happen in practice? You know the software doesn't work when you try a wrong type, because it blows up on startup or on a request. In my experience my bugs aren't because of problems with types but with logic errors. It compiles but is not correct because you're doing the wrong thing. You know pretty quickly in Python or Javascript when you make a mistake because you pass in the wrong arguments and the right thing _doesn't_ happen.
Types don't save you from typing the wrong code or simply doing the wrong thing in a type safe way.
> It is usually better for a program to crash than for it to continue having corrupted its data
And it's even better not to let the program get corrupted data in the first place, which languages with a weak type system (such as Go) allow whenever you use `interface{}`.
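A small illustration (contrived, but representative): the compiler happily lets the wrong type into an []interface{}, and the mistake only surfaces as a runtime panic.

package main

import "fmt"

func store(values []interface{}, v interface{}) []interface{} {
    return append(values, v)
}

func main() {
    ints := store(nil, 1)
    ints = store(ints, "oops") // compiles fine: a string slips into "ints"

    sum := 0
    for _, v := range ints {
        sum += v.(int) // panics at runtime on the string element
    }
    fmt.Println(sum)
}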
> but that doesn't mean it's the only thing they test.
That was exactly my point. In checking values, they also check types, without any extra effort.
> creates a whole category of errors
It doesn't create them. It just doesn't prevent them (type errors) at the language/compiler level. But since you have to test the values anyway, that doesn't really matter.
> delay the moment
Not really. Dynamic languages make possible environments that compile+run before the static compiler is done compiling.
Special how? Surely they are not special in a way that can't be captured by a type system. You say they should be handled in a special/deliberate way, but that's what it means for something to have a distinct type: it can't be handled like everything else. You just define special error-handling functions and they will work on errors and nothing else, and nothing else will work on errors.
As far as syntax goes, I feel the big issue is deferring error handling. You don't always want to mix error-handling code with your algorithm logic, so you need to defer. We also talk about 'errors' so generally that we never bother to qualify which kinds of errors should be deferred or not in a way that is encapsulated in the type system. Or what kind of deferring should be available -- return errors, or just move them later in the same scope? (Java's checked exceptions are a good example of attempting this, but it doesn't work out in practice. Programmers don't think like programs do: deferred error handling isn't the same as deferred error-handler writing, and both are needed in different ways.)
From what I can tell about error handling in Go, it isn't really a step forward. (And we really want a step forward.) Go just kind of falls back on "this was the last thing that worked, and making significant progress is probably too hard, so we won't bother trying."
That's the impression I get, at least. I don't actually do any Go programming, but I'm not inspired by its approach, either.
>> Not only do you constantly have to wrap and unwrap potential error values to
>> unify different error types
>
> This is a legitimate issue, although in Haskell you can use one error type
> like Go does if you want to.
SomeException is pretty much equivalent to Go's error, but it's not commonly
returned by libraries and such so using it barely helps with the need to
convert often. In addition all the types used by libraries as error values now
need to have Exception implemented, which they don't. So, in practice, no, you
cannot. Or rather, it wouldn't solve the core problem, and it would create a new one.
>> And as if that wasn't bad enough, there are also actual throw/catch-style
>> exceptions, which are not checked in anyway
>
> This is also the case in Go (panics and recover).
True, panic/recover exist, but in practice I find them to be more akin to
setjmp/longjmp in C than throw/catch-style exceptions in other languages. Off
the top of my head I can't recall a case where I had to recover from a panic
that I didn't create in my own code (i.e. originated in a library call).
Exceptions in Haskell and other languages on the other hand are usually all
over the place (ok, usually not in pure code in Haskell) and I feel like I have
to constantly keep them in mind to avoid letting one slip through the seams and
then later having to track down where the hell that one came from.
>> I stopped writing Haskell maybe a year ago, but at least back then getting
>> GHC to print a stack trace when an exception wasn't caught was a major feat
>> that involved recompiling every single one of your hundreds of dependencies
>> with profiling flags.
>
> This is also the case in Go. Errors do not retain anything about their state.
> You need to use a package like https://github.com/go-errors/errors, which has
> the exact same issue you describe with dependencies.
That's fair and indeed it would often be handy to know where an error came
from. But since errors are just return values I don't really expect them to
have a stack trace attached to them, just like I don't expect any other value
to have that. It would still be neat for debugging sometimes, more so for
errors than other values. I think there are some proposals for improving on
this in Go 2.
What really frustrated me about Haskell though was debugging throw/catch-style
exceptions without a stack trace, because I basically had zero clues about
where it came from and there were no clear paths to trace in the code. In Go,
on the other hand, errors don't just pop up out of nowhere, and panics, which
can, print a nice stack trace when not recovered from.
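A tiny sketch of that contrast (names made up): a returned error only carries the context callers wrap in by hand, while an unrecovered panic prints a full stack trace on its own.

package main

import (
    "errors"
    "fmt"
)

var errNetworkDown = errors.New("network is down")

func fetch() error {
    // Context is attached by hand with %w wrapping at each call site.
    return fmt.Errorf("fetch user profile: %w", errNetworkDown)
}

func main() {
    if err := fetch(); err != nil {
        fmt.Println(err) // "fetch user profile: network is down", no trace
    }

    // An unrecovered panic, by contrast, dumps a goroutine stack trace
    // before the program exits.
    panic("unexpected state")
}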
> A language that uses sum types (without any further guarantees) for error handling cannot formally enforce that invariant.
Huh? The invariant is enforceable:
type result =
| Success of result_you_want
| Error of error_info
You just need to be consistent about what you want. First you ask for returning partially successful results alongside error information, then you ask for the exact opposite. You can make a typeful model of lots of things, but first you need to make up your mind.
> trade conceptual clarity for efficiency.
Could you describe where exactly the efficiency gain comes from? If indirection is the problem, well, if anything, the Go approach requires more indirection.