
Agree: having map, filter, and reduce with strong type guarantees will be a huge boon. Imagine mapping over a collection and having the code hints understand the new output types in the chain. This is something you cannot have when your higher-order ops are all typed as interface{}.
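
To make that concrete, a minimal sketch using the type-parameter syntax Go eventually shipped in 1.18 (Map here is a hypothetical helper, not stdlib):

    // Map is generic over both element types: the compiler infers B from
    // the callback, so the result type is known statically down the chain.
    func Map[A, B any](xs []A, f func(A) B) []B {
        ys := make([]B, 0, len(xs))
        for _, x := range xs {
            ys = append(ys, f(x))
        }
        return ys
    }

    // lengths := Map([]string{"a", "bb"}, func(s string) int { return len(s) })
    // lengths is []int: no interface{} boxing, no type assertions, and
    // tooling can show the inferred type at every step of the chain.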



The missing feature I want most is type narrowing across chained functions like:

    arr.filter(a => a.kind === "bar").map(a => /* a should now be of type Bar, just like in an if branch */)


Yeah, but you can't implement it generically.

That means for every conceivable combination of input and output types, you have to write a new map or reduce function to handle it.

All modern languages except Go and Elm require only one implementation of map that can handle all type combinations.

It's called parametric polymorphism.
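
For illustration, one parametric reduce covering every input/output combination, in the generics syntax Go later adopted (Reduce is a hypothetical helper, not stdlib):

    // One implementation handles any element type A and any accumulator
    // type B; no per-combination copies are needed.
    func Reduce[A, B any](xs []A, init B, f func(B, A) B) B {
        acc := init
        for _, x := range xs {
            acc = f(acc, x)
        }
        return acc
    }

    // Sum of string lengths:
    // total := Reduce(words, 0, func(n int, w string) int { return n + len(w) })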


Seems like this approach might also work for building type-specific collection implementations!

As I understand it, the type would be implicitly applied to every func/struct in scope (which I assume would be the package).

It would work quite well for a package like a Map, or any other data structure/algorithm package. I suspect it would be less ideal for filter/map/reduce.
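
For what it's worth, the design Go eventually shipped attaches type parameters to individual types and functions rather than to the whole package. A sketch of a type-specific collection in that style (Set is hypothetical; comparable is the real built-in constraint):

    // Set works for any key-able element type, with full type safety.
    type Set[T comparable] struct {
        items map[T]struct{}
    }

    func NewSet[T comparable]() *Set[T] {
        return &Set[T]{items: make(map[T]struct{})}
    }

    func (s *Set[T]) Add(v T) { s.items[v] = struct{}{} }

    func (s *Set[T]) Contains(v T) bool { _, ok := s.items[v]; return ok }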


Excited about “Higher order type inference from generic functions”. Great for libraries like RxJS, Ramda and axax.

The downside to that is if you want to write a new function that deals with different subclasses, you can't keep all the logic in one place; you have to spray its implementation across all the different types it has to handle. This is the #1 reason I prefer languages with sum types and pattern matching (or the ability to emulate it via unions and instanceof checks, e.g. TypeScript or Python + mypy).
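
Go's closest analogue is a closed ("sealed") interface plus a type switch, which at least keeps the per-case logic in one function, though unlike real sum types the compiler won't check exhaustiveness. A sketch (Shape, Circle, and Rect are hypothetical):

    import "math"

    // The unexported method keeps other packages from adding cases,
    // approximating a closed sum type.
    type Shape interface{ isShape() }

    type Circle struct{ R float64 }
    type Rect struct{ W, H float64 }

    func (Circle) isShape() {}
    func (Rect) isShape()   {}

    // All the per-case logic stays in one place, like a pattern match,
    // but the default branch is needed: exhaustiveness isn't checked.
    func Area(s Shape) float64 {
        switch s := s.(type) {
        case Circle:
            return math.Pi * s.R * s.R
        case Rect:
            return s.W * s.H
        default:
            panic("unhandled shape")
        }
    }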

One important aspect of programming with functors is the ability to quantify over them, i.e. higher-kinded types. This is crucial for building reusable components on the functor-level of abstraction.

In modern OOP this would amount to quantifying over interfaces, which is typically not even possible in those languages.


Well, I'll back off slightly from the assertion "fully implementable", because you have the f = s[i] syntax, which wouldn't be fully implementable unless you allow the user to define [], which I agree is crazytown.

However, the make() functions for slice and map are the only functions in the language that allow you to pass in a type and get that type back as a return value. Why can't users have that functionality? Generics seem to be the easiest way to get it.
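
That's exactly what generics provide: with type parameters, an ordinary function can accept a type and return a value of that type, just as make() does. A minimal sketch (NewBuffer is hypothetical):

    // A user-defined, make()-like constructor: the caller picks T and
    // gets a statically typed value back.
    func NewBuffer[T any](capacity int) []T {
        return make([]T, 0, capacity)
    }

    // ints := NewBuffer[int](16)  // typed as []int, no assertions needed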


I find that when working with maps and sequences of maps with higher-order functions, types are especially critical to ensure that e.g. you didn't get your arguments backwards in your fold, or forget that your map values are sequences, not single elements. So much can be accomplished with a chain of higher-order invocations, and so much can go wrong (especially if you have Perl-style type coercions), but types are great at catching the errors.

At the same time, I think that programming with complex generic structures is a functional anti-pattern anyway: if your type is "Map (Foo, Bar) (Map Int (Set String))" and it's not somehow encapsulated (as in a class or ADT), good luck. In Python you would hide this in a class, but what is the Clojure solution (genuine question)?
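
Not a Clojure answer, but for comparison, the encapsulation move in a language like Go would be a named type with methods, so the nesting never leaks to callers (names hypothetical):

    // Index hides "owner -> id -> set of tags" behind a small, typed API.
    type Index struct {
        data map[string]map[int]map[string]bool
    }

    // Reading through nil maps is safe in Go; missing levels read as empty.
    func (ix Index) Has(owner string, id int, tag string) bool {
        return ix.data[owner][id][tag]
    }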


Yes, they are. And actually they would play nice with generics! Imagine

    type Ord a interface {
      Compare(a) int
    }
Also it would be nice to do something like

    func Max[a <: Ord a](a, a) a
(Syntax stolen from Scala)

Right now, interfaces are too opaque. You can't take a value of interface type X and promise to return that same concrete type; you have to return an opaque X, which gives you no guarantees about its concrete implementation. Parametricity is an extremely well-motivated and solved language feature!

I think if you add parametricity to Go functions (not even parametric types, just type variables that play nice with all the builtins), you can write this and get guarantees about its implementation (assuming it follows the functor laws):

    func map[a,b](func(a) b, []a) []b
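
For comparison, here's how that reads in the generics syntax Go ultimately shipped in 1.18; the self-referential constraint T Ord[T] plays the role of the Scala-style bound above:

    // Ord[T] is satisfied by any type that can compare itself against T.
    type Ord[T any] interface {
        Compare(T) int
    }

    func Max[T Ord[T]](x, y T) T {
        if x.Compare(y) >= 0 {
            return x
        }
        return y
    }

    // One parametric map whose shape is guaranteed by its signature.
    func Map[A, B any](f func(A) B, xs []A) []B {
        ys := make([]B, len(xs))
        for i, x := range xs {
            ys[i] = f(x)
        }
        return ys
    }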

All valid points.

If you're going to do this sort of thing with much success, you really need to have a language with a fairly powerful type system. If function pointers are your only option for higher-order programming, I wouldn't even try. First class functions or interface polymorphism help, but I'd also want to have a language that makes it relatively easy to create (and enforce) types so that your extension points don't end up being overly generic.


Why not both? I want the language to provide an iterable/enumerable interface with the usual generic workhorses (map, reduce, etc.) and allow me to implement that interface as a free function for any type. If I'm authoring a type, I'll add specialized functions as needed.
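
A sketch of that shape using the iter.Seq function type Go later added in 1.23 (the Map free function itself is hypothetical, not stdlib):

    package seqx

    import "iter"

    // Map adapts any sequence lazily, independent of the backing
    // container: anything exposing an iter.Seq gets it for free.
    func Map[A, B any](src iter.Seq[A], f func(A) B) iter.Seq[B] {
        return func(yield func(B) bool) {
            for v := range src {
                if !yield(f(v)) {
                    return
                }
            }
        }
    }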

This is part of the reason why I still feel constrained when using systems where the type system is bolted on after the fact, like TypeScript, Flow, or Dialyzer. You can gain so much leverage from the type system being able to generate code like this for you.

ML can do this, but you have to mention the module explicitly. OCaml's upcoming modular implicits will allow this with less boilerplate though.


If you have a limited set of types, you can make extensive use of higher-level functions that compose frequently-needed operations on those types, and add your closures in the mix. If you have extensible types instead (OOP and such), you either have to program to these types, recreating the generic transformations for each type, or you'll need functions that can do those transformations on arbitrary types (presumably via internal mechanisms), thus circumventing strict type checking.

Not the person you are replying to, but:

* Typeclasses (over higher kinded types)

* Extensible typeclass instance derivation

* Comprehensions on par with SQL in expressiveness


Still. Generics. `extends` with ternaries. `infer` within conditional types. Type narrowing. Lots of language quirks.

Not sure what you're getting at? Even in Haskell you can easily pass data around as untyped maps of maps if you want, but usually you don't. Maybe you're talking more about how generics, and an obsession with never duplicating any code, can make that part of the type system needlessly complex.

Not just with generics, but also with dynamic dispatch on the runtime class of the K type, which implements the Ordered interface. ML traditionally doesn't have dynamic dispatch, so it uses the static mechanism of building a map type specialized to a given key type. The value type is completely generic, though.

I said 'traditionally' above because OCaml actually has dynamic dispatch now; it has a powerful and feature-complete OOP implementation, and a way to pass around modules at runtime as first-class objects. And in fact people take advantage of these capabilities to build more 'modern' APIs. But I would argue that the functor approach is one of the best, if not the best, overall.


Yeah, you're right, it's not a total order. I was playing fast and loose and conveying a subjective feeling.

Re: row polymorphism, TypeScript doesn't quite have what users of ML-like languages are asking for when they want row polymorphism (which is usually parametric polymorphism rather than subtyping). But it's close.

As for convenient... well, I'd argue that by the time you've got the whole cornucopia of GHC extensions at the top of your file, nothing is quite convenient anymore.

