I don’t get it. Aren’t apple counts and orange counts both integers? Can you give an example of why you would want to subtype concrete types, and how this deficiency would prevent you from writing the kind of program that you want to write?
> sub f(Int $x) { $x + 1 }
sub f (Int $x) { #`(Sub|140221097113944) ... }
> sub g(Str $x) { f($x) }
===SORRY!=== Error while compiling:
Calling f(Str) will never work with declared signature (Int $x)
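To make the apples-and-oranges case from the question concrete: in a language with named types you can give each count its own type over the same underlying integer, and then mixing them up is rejected before the program runs. A minimal sketch in Go (Apples/Oranges are just illustrative names):

    package main

    import "fmt"

    // Two distinct named types with the same underlying representation (int).
    type Apples int
    type Oranges int

    func addApples(a, b Apples) Apples { return a + b }

    func main() {
        a := Apples(3)
        o := Oranges(4)
        fmt.Println(addApples(a, a)) // fine: 6
        // fmt.Println(addApples(a, o)) // compile error: Oranges is not Apples
        _ = o
    }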
#define defmax(name, type) type name(type a, type b) { return a > b ? a : b; }
This seems to be exactly how you would define a "generic" max function in Go with the current tooling, too, which is a shame, because C macros just aren't good enough.
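For concreteness, this is the shape of the hand-expanded code the macro spares you from writing, and what you end up writing by hand in pre-generics Go: one max per type. A rough sketch:

    package main

    import "fmt"

    // Without generics, the comparison logic is duplicated for each type,
    // which is exactly what the C macro above expands to.
    func maxInt(a, b int) int {
        if a > b {
            return a
        }
        return b
    }

    func maxFloat64(a, b float64) float64 {
        if a > b {
            return a
        }
        return b
    }

    func main() {
        fmt.Println(maxInt(2, 3), maxFloat64(1.5, 0.5))
    }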
That's not super interesting (to me personally, an acknowledged idiot) because working programmers are hardly ever confused about types. IntelliJ pretty much already does that kind of thing.
I have this idea that this paper does something super interesting that I'm too stupid to understand.
Is there a correctness angle I'm totally missing where you can constrain subtypes and ... not sure?
EDIT: I watched a thing earlier today where people felt they could reasonably subtype zero away and therefore never think about NaN as being in the range or domain of their mathematical functions.
I want to learn more. Probably my degree of ignorance is such that I'm unable to ask the right questions. Can you help? This is probably an unfortunately hilarious question.
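One correctness angle, sketched: a wrapper type whose only constructor checks an invariant (for example "not zero"), so any code that receives the wrapper never has to re-check it, and a zero divisor can't sneak in. This is a hedged sketch in Go; the NonZero name and API are made up for illustration:

    package main

    import (
        "errors"
        "fmt"
    )

    // NonZero values can only be built via NewNonZero, so holding one
    // is proof that the wrapped number is not zero.
    type NonZero struct{ v float64 }

    func NewNonZero(x float64) (NonZero, error) {
        if x == 0 {
            return NonZero{}, errors.New("zero is not allowed")
        }
        return NonZero{v: x}, nil
    }

    // The divisor is checked once, at construction, not at every call site.
    func divide(a float64, b NonZero) float64 { return a / b.v }

    func main() {
        d, err := NewNonZero(4)
        if err != nil {
            panic(err)
        }
        fmt.Println(divide(10, d)) // 2.5
    }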
In that case, you need a subclass of string that adds no behavior but carries the type FirstNameType. Wherever you create or return strings that are first names, you'll need to instantiate FirstNameType instead.
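A minimal sketch of that wrapper pattern in a statically typed setting (Go here, so it's a defined type rather than a subclass; FirstName is an illustrative name):

    package main

    import "fmt"

    // FirstName is just a string underneath, but the compiler keeps it
    // distinct from ordinary strings.
    type FirstName string

    func greet(name FirstName) { fmt.Println("Hello,", string(name)) }

    func main() {
        n := FirstName("Ada")
        greet(n) // ok
        var s string = "Ada"
        // greet(s) // compile error: string is not FirstName
        _ = s
    }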
The behavior is undefined if a value does not match its type declaration, and implementations (CLISP, for example) are free to ignore some declarations. So for a portable approach, just add a CHECK-TYPE:
(defun foo (s)
  ;; CHECK-TYPE signals a correctable TYPE-ERROR unless S is a STRING.
  (check-type s string)
  (print s))
The main point is that (TYPEP NIL 'STRING) is NIL.
You would declare the type of the returned value:
(i :: Int) = fromString s
Although it's occasionally useful to make a short synonym to disambiguate the type, x_of_y is certainly not a short synonym, so it's not commonly used.
> This creates a new type (SecretKey) that has a run-time representation that is just an Int. I.e. it has no additional information associated with it at run-time that you could use to distinguish it from an Int. At compile-time it is distinguished, however.
I can do exactly the same in Julia:
# A wrapper around Int: after the JIT compiler is done there is no
# representational difference between this and a plain Int.
struct SecretKey
    value::Int
end
After the JIT compiler is done, there is no difference between this and an `Int`. At runtime, the type associated with the value exists only conceptually. A `SecretKey` value in Julia will, in most methods, look no different from an `Int`.
> without degenerating to passing around uninformative primitive types like Int (which introduces the possibility for errors).
That would be the same in a strongly typed dynamic language, except the error will of course be caught at runtime. But it's not like you can use the types in an illegal way.
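For comparison, here is a minimal sketch of the statically checked version of the same idea (Go; the SecretKey name mirrors the Julia example above): the wrapper has the same machine representation as a plain int, and passing a bare int where a SecretKey is expected is rejected at compile time rather than at runtime.

    package main

    import (
        "fmt"
        "unsafe"
    )

    // A defined type over int: identical representation, distinct to the compiler.
    type SecretKey int

    func authenticate(k SecretKey) { fmt.Println("using key", int(k)) }

    func main() {
        k := SecretKey(42)
        var n int = 42

        authenticate(k) // ok
        // authenticate(n) // compile error: int is not SecretKey
        _ = n

        // Same size at runtime: the wrapper adds no representation overhead.
        fmt.Println(unsafe.Sizeof(k), unsafe.Sizeof(n))
    }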
No, they're duck-typed. Types are attached to values, not variables, and for example trying to eval (+ 1 "hi") will throw a proper type exception.
You usually don't have to declare types but for example in Common Lisp you can optionally declare types for better performance in speed-critical parts of the code.
It has union types. Right now if you do 1 || 1.5 it gives you Int32 | Float64. If you do Foo.new || Bar.new, and Foo and Bar are not related (except they both inherit from Reference), then you get Foo | Bar. If Foo and Bar are related by some superclass Super, then you get Super+.
If you do:
a = [1, 'a', 1.5, "hello"]
you get Array(Int32 | Char | Float64 | String)
In a way, the Super+ type is a union type of all the subtypes of Super, including itself, but just with a shorter name.