New features coming in Julia 1.7 (lwn.net)
3 points by leephillips | 2021-10-04 17:39:59 | 89 comments




As someone who generally runs dev versions of Julia, I can very confidently say that 1.7 has a ton of really nice goodies. It's not a ground-breaking release in the way 1.5 and 1.6 were, but it does have a ton of small changes that just make the language smoother.

Lots of new syntactic changes. Would love to see Julia provide the ability to make user-defined prefix/postfix operators, e.g.

(f)' = x -> map(f, x)

(sqrt)'([1, 4, 9]) # [1.0, 2.0, 3.0]
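Incidentally, something close to this is already expressible today, because the postfix `'` lowers to `Base.adjoint`. Overloading it for `Function` is type piracy on Base, so take this as a sketch of the idea rather than a recommended pattern:

```julia
# The postfix ' operator lowers to Base.adjoint, so defining adjoint
# for functions gives a vectorizing postfix operator.
# Caveat: this is type piracy on Base types; a sketch only.
Base.adjoint(f::Function) = x -> map(f, x)

sqrt'([1, 4, 9])  # [1.0, 2.0, 3.0]
```

That said, what the comment asks for is more general: arbitrary user-defined prefix/postfix operators, which this trick does not provide.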


I haven’t used it much, but it seems like Julia is turning into a big language with lots of shortcuts?

Well, although it is sometimes described as a lisp, it’s not Lisp. It has a lot of syntax. Personally, I like the syntax, but some people prefer more minimalist languages.

It is a lisp in the same lineage as Dylan.

No matter how hard we try, people are scared when they see parentheses in the wrong place (according to their beliefs).


For me it is less about being scared, and more about s-expressions being frictionless in enough circumstances for it to matter.

One thing that - to me - seems like a missed opportunity is to use Lisp notation in mathematics. There are many fields, such as algebra and logic, that lend themselves to being expressed in terms of a Lisp. It isn't just about avoiding ambiguity, but also about having a uniform way to mechanically transform expressions.


Yeah, but in mathematics we end up in the classical Lisp vs. ML discussion. There is a well-known paper about both styles, arguing for ML, as it was written by someone on the ML side of the fence.

In any case, maybe "Computer Algebra with LISP and REDUCE: An Introduction to Computer-aided Pure Mathematics" is something to dive into,

https://www.amazon.com/Computer-Algebra-LISP-REDUCE-Computer...


Thank you for sharing!

You may have recognized that I'm not terribly aware of the history behind this discussion and its arguments. It's more of an intuition built from using and exploring Scheme and Clojure.

My intuition is that if you have a uniform, flexible notation, then you get to use the same set of mechanical transformation tools for any language, which leads to more cross pollination and common solutions. But I could see how the meaning of that notation can get overloaded if you jump from context to context, say declarative versus algorithmic.


It depends how you look at it and how strictly you define "Lisp". Which parts of that definition really matter to you.

https://docs.julialang.org/en/v1/devdocs/ast/


> it’s not Lisp

At the very least it includes a lisp:

$ julia --lisp


> (; x, y) = newS

> Now x has the value 4 and y has the value 5.

I believe this is supposed to be a and b here (matching the names of the fields of the struct). That's what the NEWS.md [1] implies, and that's what intuitively makes sense given the ; syntax.
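If so, the corrected destructuring would look like this (struct definition reconstructed here for illustration):

```julia
struct S
    a::Int
    b::Int
end

newS = S(4, 5)

# New in 1.7: destructure by property name; the names on the
# left-hand side must match the struct's field names.
(; a, b) = newS
# Now a == 4 and b == 5.
```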

> Install package? │ (@v1.7) pkg> add Example └ (y/n) [y]:

Surprised it defaults to [y] option, especially since packages can be pretty heavy with artifacts and lots of dependencies to precompile. One accidental extra Return and you might be sitting there for five minutes.

> keepat!(v, i)

Not a fan of the name. The ! indicates the mutation as the article points out, but keeponlyat! would have been much clearer and immediately obvious, enough to more than justify its length IMO.
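For reference, `keepat!` retains only the elements at the given indices and removes everything else:

```julia
v = [10, 20, 30, 40, 50]

# keepat! mutates v in place, keeping only the elements
# at indices 2 and 4 and discarding the rest.
keepat!(v, [2, 4])

v  # [20, 40]
```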

Lots of nice quality of life improvements in this version. One that the article doesn't mention is `julia --project=@myenv` [1] - being able to specify a shared environment as the starting environment for the REPL.

[1] https://github.com/JuliaLang/julia/blob/v1.7.0-rc1/NEWS.md


> I believe this is supposed to be a and b here

You’re right! It’s a typo. I changed the struct to use a and b instead of x and y, but didn't replace the following paragraph. Thanks for noticing that. It’ll get fixed.


Having used the automatic package install feature for a few months already, I'm pretty sure we made the right decision. If you type `using x`, chances are you actually want to use it. Also, since 1.6 added parallel precompilation, there aren't many packages that take that long. For reference, DifferentialEquations currently adds in about 2 minutes.

> If you type `using x`, chances are you actually want to use it.

In my case, at least half of the time, the issue is that I forgot to switch to the right environment. (So it's nice that the output also shows the current environment that we'll be adding to.) I'm also just used to the convention of 'n' being the default, e.g. with Linux package managers.


> Surprised it defaults to [y] option, especially since packages can be pretty heavy with artifacts and lots of dependencies to precompile. One accidental extra Return and you might be sitting there for five minutes.

You can interrupt it with Ctrl-C.


Is that guaranteed to not leave the package environment in some messed up state?

Julia's package management is stateless, so it won't mess it up badly at the very least.

A quick list of other nice things in the release.

Performance:

* faster hashing for a variety of types (Array, Symbol, and a few others)

* faster ldiv! for QR factorizations

* faster findall for AbstractArray

* faster expm, log2, and log10

* faster isless for floats

* faster @fastmath versions of exp, exp2, exp10

* faster cbrt, sinh, exp for Float16

* Improved array resizing on push!

Features:

* Exact determinants for BigInt matrices

* lpad/rpad now use text-width instead of bytes

* Make it significantly easier to use MKL (or other BLAS libraries)

* opaque closures (complicated mostly internal feature)

I'm sure I've missed a bunch in this list (I compiled it by scanning through the commit list).
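As a small illustration of one item from the list above, the `lpad`/`rpad` change means padding is now computed from text width rather than byte count, so multi-byte characters pad correctly (example values are my own):

```julia
# "αβ" is 4 bytes in UTF-8 but only 2 characters wide.
# In 1.7, lpad pads to a text width of 4, so two spaces are added;
# padding by byte count would have added none.
lpad("αβ", 4)  # "  αβ"
```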


Are these really the pain points of the language that need to be addressed? What kills me specifically is how sloooow the initial warm-ups are. I am forced to write a webserver in Julia. The startup time is ~5min and each initial request takes like 45s. The lengths I had to go to and the hacks I needed to reduce that are mesmerising.

Faster log10 is all nice but I would love to see a more aggressive compilation cache and improved tooling rather than these minor improvements.


I agree. More work on latency, precompilation, and PackageCompiler is sorely needed. Julia is also behind on the version of LLVM being used, and apparently there are regressions in performance with newer versions. The fact that performance is an implementation detail is concerning, and it would be nice to standardize some of the language.

However, I think it is unfair to compare the inclusion of log10 to the work required to get solutions to these harder problems. I guess why not both? ;)


> Julia is also behind on the version of LLVM that is being used,

Julia 1.6 (LLVM 11) and Julia 1.7 (LLVM 12) use the LLVM versions that were current when the release branch was cut. LLVM 13 was finalized a few days ago and will be released this week.

So I am not sure where the notion comes from that Julia is lagging behind.


> apparently they are regressions in performance with newer versions

There are? I mean, with every new LLVM release a few things get faster and a few things slower, but I am not aware of any big changes in the recent LLVM releases. Do you have any more information about that?


I like Julia a lot, but out of curiosity why are you writing a web server in it? I would 100% reach for Python for that use case.

I'm not OP, but I can easily see my team getting into the same situation.

Our use case involves dynamically displaying some analysis results on a status page derived from data from the field. The analysis was originally written in Python + scipy + matplotlib (Julia in OP's case). Why write the analysis code a second time just to display it on a demand-generated webpage?

You don't have to run the entire website on Julia, but maybe you do want to stand up a couple of RESTful endpoints that are integrated with the rest of the web application. If Julia can't do that, then your options will be limited.


That makes sense.

Exactly. We are training a very large neural model using [1]. We would not fix much if we decided to embed the Julia code inside a CPython wrapper or something...

[1] https://github.com/CTUAvastLab/JsonGrinder.jl


A lot of work is going into these areas, but most of it isn't work that happens in Base. Over the past 6 months or so, a fair number of the important big packages have gotten a bunch faster to load. Also, 5 minutes for webserver startup isn't my experience; I'm writing an admittedly small one and it only takes a few seconds to start.

That list is not a representative example of what the core devs are doing. It looks to me like it's mostly the work of Oscar Smith, he does a lot of these low-level numerical optimizations.

Here are a few more improvements in v1.7 off the top of my head:

* Further latency improvements (though much smaller than for 1.6)

* Big gains in inference precision for the compiler

* The above also unlocks better static analysis

* Improved speed for the package manager on Windows

* Begin work on adding atomics

* Thread-safe RNG

I'm sure there is more I've missed - I don't hack on Base Julia much.


I meant to add the atomics work to this. (I didn't mention the rng since it was already in the article). The others are important, but a bit harder to quantify (and harder to find from scanning the commits).

Wait, didn't we get a thread-safe RNG a few minor releases ago? At least for the default global RNG?

Some of those features seem wild to me. Aren't ldiv, exp, log2, isless, and the fastmath stuff basically just "map to what Intel has given us"? I'm amazed how low-level these functions are.

The Intel processor intrinsics are almost all completely broken. There are relatively good libm implementations, but packaging them is a nightmare, as is ensuring good performance across architectures. Also, in my experience, the Julia versions tend to have better performance.

We often can do better than what Intel gives us, so we do. It takes a lot of time, but slowly lots of low level functions that are taken to be intrinsics by most languages are getting written in pure, high performance julia.

Mini rant incoming.

I have a love/hate relationship with Julia. I’ve been using Julia professionally since 2018, and in my spare time since before that. And after writing a lot of code and reading and reviewing even more, my biggest complaint is that tooling suuuccckkssssss.

LanguageServer takes up to 10 minutes to start (5000-7000 line Julia project with lots of Julia dependencies). I usually open vim, go get coffee and then come back to start working on the project. And if the LanguageServer crashes or I have to restart vim for some reason, oh boy.

Even after 10 minutes, updates are SO slow. You can type something incorrectly and the LSP diagnostics won’t pick it up for at least 30-40 seconds. By then you may have moved to a different point in the codebase. Forget about switching branches when you are doing all this. I’ve tried PackageCompiler and weirdly it doesn’t really help here for some reason.

The worst part is there’s no way to run a subset of tests. You HAVE to run ALL the tests every time. This makes IDE-driven development and test-driven development absolute non-starters. REPL-driven development is the ONLY way to go, and even the REPL has its readline quirks. And god bless Revise.jl. I shudder to think of life if it weren’t for Revise, the giant hack that it is.

There’s no officially blessed language formatter. LanguageServer.jl uses a formatter, and JuliaFormatter.jl kind of exists on its own. And even these formatters don’t do enough, by the way (imo). Two examples that come to mind are converting explicit returns to implicit returns and changing using Foo -> import Foo: x, y. Not to mention the vim plugins for LSP and formatting are not great. Even the most popular Julia-vim plugin has some super odd, opinionated, un-vim-like choices, like hijacking TAB, the choice of commentstring syntax, etc.

All this in daily development is painful as it is, and even more so when I have to review someone’s code. I’m a software engineer by training, and when I have to read domain-specific code written by a scientist or a domain-specific engineer, or worse, when I have to refactor and clean it up, I cringe all the time. Try figuring out which method gets dispatched without using a REPL, fun times. Maybe few people port a Pluto notebook script into a production application as much as I do, but yikes is it painful. There is SO much downtime. Wait for tests, wait for LSP, wait for the formatter, wait for the static analyzer with JET, make a change, rinse and repeat.

Contrast this to using Python, Rust, Zig, or Go and feedback is immediate! And unlike Julia, vscode isn’t the only golden editor. You can use pretty much any editor and you get first class support.

All in all, Julia as a language is super interesting. I would probably only use it for small personal projects though. Professionally, I would still stick with python / pypy, or even writing python and rust or go. Multiple dispatch alone isn’t worth the other pain points when working with a large team (>50 people).


One thing I think the Julia core team should look to get professional help on is human interface design in programming languages. I know there have been a lot of strides in the right direction, but having used Rust I can say that Julia is waayyyyy far behind. Specifically wrt stacktraces, I think the Julia core team should just pay someone like Jonathan Turner to review their stacktraces and see how they can be improved. JT was involved in the Rust stacktraces, and when he was using Zig on stream he was immediately able to identify ways to improve Zig’s stacktraces. I think his kind of insight could go a long way in helping Julia be more friendly.

Another place is Pkg error messages. I’ve opened projects from a few years ago and things have just failed. I think it is because I’m not using the same Julia version that I did when I recorded the Manifest (weird that Julia doesn’t record the Julia version number in the Manifest). But I will never know I guess because I get almost cryptic error messages from Pkg. It is borderline infuriating.

I say all of this because I really like (or at least really want to like) Julia. And I know fellow coworkers who share this sentiment about some of these pain points. If these are fixed (and latency becomes better) I for one would pull fewer hairs out of my already balding head :)

I would ask that Julia advocates take all my criticism kindly. I know you are all passionate and have worked on a lot of these features yourselves. I’m expressing some opinionated frustrations.


I completely agree. Julia stack traces are abysmally bad. The only worse stacktraces I have to endure are Shiny's... essentially useless at best or deceptive at worst.

Somewhat helpful for Shiny stack traces:

    options(shiny.fullstacktrace = TRUE)
You should be prepared to scroll a bit, but you might get the information you're looking for more often.

Thank you. I will give that a try next time.

Julia stacktraces actually got a bunch better in 1.6 or 1.7 (I forget which). There's still a lot to be improved though. If you have specific error messages that you consider unhelpful, filing issues with the language (or package) tends to be a pretty good way of helping. Error message PRs are generally pretty good for new contributors, and are a low effort way of making the language better.

The Julia version is included in the Manifest. But I don’t know when they started doing that, perhaps old versions of the package manager did not.

This is new for 1.7

I don't think those namings are inconsistent. Primitive types and abstract types are not structs, and it would be misleading to label them as such. Structs and mutable structs, on the other hand, are.

So while using only 'struct' or only 'type' "consistently" would be more homogeneous, it would be, respectively, wrong and under-specific. I think it's good that the concepts of type and struct are separated like that.


I'm very interested in your friend's opinion. If you can't post a link, perhaps just a pointer to a web presence.

On the subject of language servers, my pet peeve is that I can never get method documentation or parameter completion for the method I intend to use. Instead I have to scroll through 27 different methods with that name to find the right one.

It's usually faster to alt-tab, look up the method, and alt-tab back. Surely the language server can guess which method I am using based on types or existing usage.


I agree, I have the same love/hate relationship with Julia. When the stars align, the language and its ecosystem are incredibly fast and pleasant to use.

But most of the time, it is waiting for the incredibly slow doc website; trying to reverse engineer the lib that everyone presents as the best there is, but which is woefully underdocumented – if it is documented at all; delving through bad stacktraces; struggling with the abysmal multithreading experience; hoping for small syntactic sugar such as a better syntax for one-line conditional execution than &&, do/while loops, or if-as-expressions while we get an nth Unicode operator; waiting for plots; the REPL being unable to redefine structs; getting enthusiastic about AD, until understanding that it is so slow that it basically only works on toy problems; etc.

All in all, the language and its ecosystem give the impression of being not much more than heaps upon heaps of promising PoCs stacked on each other, which would need thousands of CS engineering man-hours to go from promising academic project to actually usable environment.


I agree about the PoCs stacked on top of each other and the thousands of CS man hours required to transform it, I don’t think I could have said it better.

There’s some really really neat projects in the Julia community though, and some super smart and very hard working people. I’m wondering if some focused effort from the community guided by the stewards of the language may yield promising results? As opposed to letting people that are interested in a topic just naturally contribute?


> There’s some really really neat projects in the Julia community though, and some super smart and very hard working people

Absolutely. The people behind Turing, Flux, Zygote, and many others are obviously gifted. But they really need to invest in the practical CS side of things if they wish to further the general use of Julia – something that is unfortunately hard to sell in an academic context :/


I tried really hard to like Julia and it just isn't happening for me. Slow startup, bad documentation, fragmented ecosystem, poor error messages and stack traces, encouraging the use of unicode glyphs and operators... it's been a horrible beginner experience all while the Julia community keeps telling everyone (and each other) how amazing the language is. I'm not seeing it.

I agree about the slow startup and the poor error messages and stacktraces, and that alone makes Julia a very rocky beginner experience. The number of times junior engineers have asked me to help resolve method errors is comically large. Unfortunately, when you know what is going on with dispatch you can see through the stacktrace quite easily, and this makes it difficult for experienced Julia devs to see why the status quo is a problem. I don’t imagine it will be easy to get the ayes for a change here, let alone agree on what the change should be.

It is particularly frustrating when Julia advocates don’t see how much friction this causes and claim that multiple dispatch and performance make this friction worth it. Surely core devs have seen beginners struggle as part of tutorials. Maybe they are just hoping these people will learn and figure it out? I’ve seen way too many people give up and just use Python instead because they are not thinking about performance and just want things to work. I agree that I’m not seeing how Julia will take more of the general programming pie, especially given the work on getting a JIT into Python in the next couple of years plus the ease of writing Rust/Go/Zig, unless significantly more work is put into reducing latency or PackageCompiler.jl.

But I don’t find the ecosystem fragmented or documentation poor. In fact one of the examples I give for great documentation is the Julia Manual. Can you provide examples where you find documentation lacking? As for Unicode glyphs, I find it hard to read too but domain scientists find it really easy and since that is the target audience I think the encouragement is warranted.


> unless there’s significant more work put into reducing latency or PackageCompiler.jl.

What features do you want out of PackageCompiler.jl? Any specific workflows you have in mind?


The manual is fine.

It's the in-repl documentation that tends to be poor.

That's gotten a bit better lately too though. For core julia at least.


Yes, I share similar thoughts. It was exciting when Julia came in, and it looked like it's going to replace a lot of python software in science. The symbolic differentiation is still quite nice. But every time I tried to code simple projects in julia, I've encountered packages that are not supported anymore, incompatibilities between versions and difficulties in reaching good enough performance among other things. I don't think I'll use Julia myself anymore and stick to python with occasional c/c++ for really performance-critical bits.

I think having v1.0 as the LTS hurt quite a bit since some packages (DifferentialEquations.jl) never worked on the LTS, so we just kind of YOLO'd for a bit from version to version only testing the latest. The next LTS is v1.6 once v1.7 comes out. I think a lot more package ecosystems will start adopting tests on the LTS, which will make versioning issues a lot less prevalent.

I think very few people use the LTS (especially not people that would write here) so I don't think that matters very much.

Agreed on the stacktraces. I think a few small tweaks can make them a thousand times easier to read https://github.com/JuliaLang/julia/issues/36517 (and an ongoing PR https://github.com/JuliaLang/julia/pull/40537)

> it's been a horrible beginner experience

For me and others, this has not been a problem.


The gap between how fantastic the language is according to Julia fanboys and how annoying my experience has been is very hard on me.

Everything needs a package to be installed, even simple debugging. Graphics is very broken, especially for headless environments. You get incomprehensible, unfixable error messages since everything is statically linked. But you don't get speed either, since everything has to compile first. Unicode symbols are encouraged (at minimum it takes an order of magnitude longer to type/paste/convert some Greek symbol than to type an English letter). Throughout the experience you feel a smug attitude coming from the environment ("we make everything perfect, thoughtful, beautiful for you; if something breaks it is your fault"), but then something actually breaks...

The most beautiful thing about the language is the computational physics community's enthusiasm for it (probably in the hope of finally abandoning Fortran). So the JuliaCon conferences are actually good, and you find a lot of interesting computational problems solved in the language.

Only that the basic unfriendliness and unsmooth experience, coupled with a lot of immature features, really ruins it for me.


Absolutely agree about the disconnect between my own experience and the views coming from the community. It has made an already frustrating experience even more off putting.

I agree that the experience is extremely off putting. But having been in the community for almost 5 years now, I have to say there are super awesome lovely people that are mostly silently working in the background and very hard at that. The loud minority unfortunately is grating, and I do wish the stewards of the language established more strict guidelines on conduct or philosophy of the language. I don’t want to name names here, but when prominent members of the Julia community bash other programming languages while ignoring the painful friction in Julia, it comes across as very tone deaf and in poor taste, and does set a bad precedent.

I think having a weekly what’s great about Python/Rust/Go/Zig and how can we port this to Julia would be awesome, instead of the weekly gripes about Python/MATLAB/R.


> the weekly gripes about Python/MATLAB/R

At the cost of repeating the old Bjarne quote for the millionth time, you hear complaints because those are languages that people use in their day-to-day. If only Julia were more popular, you'd definitely see it get caught in the crossfire.

Also worth noting that in terms of intended audience Julia is more like Matlab and R than Python; most of the more abstract parts of the discussion would likely apply to Julia to a lesser degree.


As a moderator on the primary discussion board — discourse — please do flag problematic posts and/or DM me. We don't see everything and rely on flags.

I agree, it's like they looked at the typical C++ compilation experience and thought "I wish Matlab had those issues too!"

It might be because my background is CS and not hard science, but non-ASCII variable names are one of the most baffling decisions I have ever seen in any programming language.

Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

I hear it's gaining traction, though, so it's likely many of the problems you mention will be fixed or mitigated, eventually.


Supporting the LaTeX expansion is just a necessary feature of any Julia-compatible code editor. It is then not particularly more difficult to type `\sigma` than `sigma`.

Also note that other languages often support non-ASCII identifiers as well --- for example, `σ` is a completely valid identifier name in Python (though Python is more restrictive than Julia: `s²` is not valid in Python and yet is valid in Julia). It even works in C, but that might just be a GNU extension.
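A quick sketch of the identifier rules being compared (in Julia-aware editors, tab completions like `\sigma<TAB>` produce these characters):

```julia
# Unicode identifiers are ordinary names in Julia; superscript
# characters are also valid inside identifiers, unlike in Python.
σ = 2.5
s² = σ^2

s²  # 6.25
```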


It's not just identifiers though, it's also operators. To take an example from the OP article, I can't relate to anyone who claims that ∛x is better than cbrt(x). The fact that this is not only allowed but actively encouraged is absolutely baffling to me.

Eh, I have to disagree on that. A mathematician reading ∛x in code will immediately know what it means, but that isn't necessarily true when seeing cbrt(x). Julia was written for mathematicians, not programmers, and takes some getting used to.

The mathematician still has to type \cbrt + tab to type the glyph so they have to know the name of the function. Not only that, they have to know the name of the function from reading the Unicode glyph or be constantly pasting characters into the REPL. I don't get how that's more efficient in any way.

We are in the age of Unicode. The LaTeX entry methods are options, not required. I rarely use them, and don’t paste in characters. I hit a modifier key. Typing a Greek letter is no more inconvenient than typing an uppercase letter.

The language of science is mathematics, so code that more closely resembles mathematical notation can be more efficient for practicing scientists to understand. There's a lot of value in code that looks like the expression published in the paper, because when another scientist wants to build upon or modify that code, they'll read and understand the paper first, not the package code.

Also in physics, sometimes you get really large expressions with a lot of Greek letters and operators. In the paper, you make it a double-wide multiline equation with LaTeX. It makes a big difference if that corresponds to a few lines of Greek symbols in your code, and not twenty.


> I can't relate to anyone who claims that ∛x is better than cbrt(x)

I do. It's a simple \cbrt<tab> away in any editor worth their salt, and it greatly improves readability of computation-heavy code.


They can very easily be typed from a regular keyboard; I'm doing it every day: \alpha<TAB>. Also, it's much more readable than typing the names out, like alpha_ij.

> Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

I also am not really a fan in many cases, but in the case of Bayesian models it makes so much more sense to write "σ" than "sigma". In the case of these complex models, being able to stay closer to the math reduces complexity for the reader. Also, you don't have to use Unicode in your own Julia codebase. It's more of an extra feature which you can use if you want.


> Whatever reading comprehension advantage they may carry is completely negated by the fact that they can't be typed from a regular keyboard.

There's a curious blindspot among coders with regard to customizing their main interface with their machines. Keyboards are heavily customizable.

Some wins are extremely easy: if you want instant access to all Greek letters on a Mac, for example, you can have your caps lock key toggle between your standard layout and the Greek layout (no \alpha, just α). The rabbit hole runs deep from there if you are adventurous. Look at all the option-key symbols you don't use. Swap them out for nice things like real arrows and whatever you fancy.

Maybe you have restraints or biases about work code being all ASCII, but typeability is up to you.


Customization is a double edged sword though. A keyboard is simple and universal, and it's always the same, my skills transfer between machines, operating systems, languages, software, etc.

Yes, it's just muscle memory and you can always re-train it, but there's very little return for my investment.

> Maybe you have restraints or biases about work code being all ASCII

I'm heavily biased towards all-ASCII for everything except explicitly multilingual contexts (UIs, display formats, browsers, document editing, etc.). As far as I'm concerned, any byte set to any value above 127 in any source file should be a compile-time error. A few reasons why:

- It's basically guaranteed something somewhere will screw up the encoding. ASCII is the safest subset.

- Better ability to quickly, reliably input characters across machines and tech stacks.

- Easy to memorize. Characters are easily and immediately recognizable by everyone worldwide.

- Some fonts might lack support for some non-ASCII characters.

- Many non-ASCII characters are just plain unreadable. On my screen, lowercase alpha looks like a lowercase Latin "a".

I can see an argument for allowing non-ASCII characters inside string literals and comments, but with non-ASCII identifiers you're just looking for trouble.


> Yes, it's just muscle memory and you can always re-train it, but there's very little return for my investment.

As a data point of one, I've found the return to be enormous and the investment surprisingly small. You have a whole lifetime of typing ahead of you, so what's a small investment of time compared to that? Just something to consider.

> As far as I'm concerned, any byte set to any value above 127 in any source file should be a compile-time error.

People can vote by using the languages and tools they want, but I have been waiting for that limitation to die for quite some time. I also use function names longer than 6 characters. That said, I realize I'm lucky enough to be in charge of my own programming environments and don't have to worry about its adoption in unknown limited scenarios.

> I can see an argument for allowing non-ASCII characters inside string literals and comments, but with non-ASCII identifiers you're just looking for trouble.

Again, just as a data point, I've never found this to be an issue (if one is prudent and not obnoxious with it), though the language support needs to be in place. In my main language these days, Swift, it's fine. Julia and Kotlin too.


Absolutely. On Linux it's as simple as defining a "dead Greek" key, and you have access to the whole Greek alphabet. When I hear people complaining about the terrible burden of Julia allowing Greek letters, it sounds to me like someone complaining that it allows uppercase letters. Same thing: one modifier key.

I’ve been really enjoying it since 1.6 came out. A lot of the points you raise are perfectly valid, but the speed difference coming from R and python is real (so long as you’re writing computationally intensive programs and not scripts, for which it is a bad tool at the moment). There’s less friction to get performant code compared to writing supplemental functions in C++.

Also, I have to disagree on the Unicode issue. The target audience is heavily invested in LaTeX, so typing things like \theta is natural. It also results in code that is more terse than writing out all the letters, and at least for me it carries somewhat lower cognitive overhead when implementing algorithms etc.
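For what it's worth, a small illustrative sketch of this style (names are arbitrary; at the REPL you type \mu<TAB> and \sigma<TAB> to get the symbols):

```julia
# Unicode identifiers mirror the notation in the paper being implemented.
μ = 0.0          # mean
σ = 1.2          # standard deviation
gaussian(x) = exp(-(x - μ)^2 / (2σ^2)) / (σ * sqrt(2π))

gaussian(0.0)    # density at the mean
```

The one-to-one match with the written formula is the whole appeal: you read the code against the paper symbol by symbol.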


> Julia fanboys

> smug attitude

This is a bit over the top, don't you think? Julia is an open community with varied contributors. I've not seen any core contributors here claiming Julia is perfect; if anything, they are conscientious about the "time to first plot" problem and numerous other issues. Developers using Julia commenting here offer their unvarnished experience and critique.


Contrary to some folks here, I've had a pretty good experience learning Julia while adding support for it as a scripting language in an open-source data IDE I'm building [0].

It makes a lot more sense as a language to me than R (which is also a supported scripting language), but I'm slowly coming around to R too.

I do agree that it is pretty slow to start, most other scripting languages I support finish much quicker on simple scripts. But I imagine the reason you use Julia over Ruby/Node.js is the libraries not just the startup time.

But I really like that it has a built-in package manager, unlike almost any other language (except R), which makes it very easy to install missing dependencies in a try-catch block. (Useful in my situation, since it's an IDE that runs on your computer using your existing Python/Ruby/Julia install and only depends on a single JSON library.)
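For example, a hedged sketch of the pattern described above (`ensure_installed` is a hypothetical helper for illustration, not part of the IDE):

```julia
import Pkg

# Hypothetical helper: load a package, installing it first if `using` fails.
function ensure_installed(pkg::AbstractString)
    try
        @eval using $(Symbol(pkg))
    catch
        Pkg.add(pkg)            # Pkg ships with Julia, no bootstrap needed
        @eval using $(Symbol(pkg))
    end
end

ensure_installed("JSON")
```

Because Pkg is part of the standard library, this works on a bare Julia install with no extra tooling.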

It's also neat that all hosted Github Actions images come with Julia built in so it's pretty easy to be working with it in automated environments as well.

[0] https://github.com/multiprocessio/datastation


Is there a way to statically type check a Julia program? It would be really nice if the compiler threw an error on obviously wrong code.

Yes: the package JET.jl. It's still in its early phases, so it's pretty rough, but I use it daily at my job.

There are a lot of caveats, though. Julia is fundamentally a dynamic language, so you can easily write code that works perfectly well but where the compiler has no idea what is going on before you run it. That code will not be statically analyzable.
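A minimal sketch of what that looks like (the exact report text varies by JET.jl version):

```julia
using JET   # install with: ] add JET

# `sum` over a String eventually calls `+(::Char, ::Char)`, which has no
# method, so JET reports a possible MethodError without running the code:
@report_call sum("julia")
```

This only works where inference can follow the types; the dynamic code mentioned above falls outside what it can analyze.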


I recently started a relatively complex machine learning project in Julia 1.6 and it’s been great so far. Occasionally the youthful nature of the language creates minor issues (eg fewer answers on stackoverflow) but overall it really has delivered on the promise of being very usable and productive for scientific computing and being fast and highly optimizable.

What's your project about?

Implementing this paper < https://arxiv.org/abs/1903.03129 > for a computer vision application.

I want to validate the complaints about startup and error messages as a Julia user / fanboy ;)

They don't really bother me anymore personally, as I've gotten used to them and make frequent use of PackageCompiler.jl.

However, I'd just like to say that the core team is aware of these problems and there's active work being done to address them. The general path seems relatively clear; progress is currently just limited by engineering time (it's not as if we've hit a wall or run out of ideas). But I'm confident that with the recent Series A and SciML picking up steam, these things will get ironed out in the next year or two.
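For reference, a minimal PackageCompiler.jl sketch of the workaround mentioned above (the package choice and output file name are illustrative):

```julia
using PackageCompiler   # install with: ] add PackageCompiler

# Bake a heavy package into a custom system image to cut load time and
# first-call latency; start Julia with `julia -Jsys_plots.so` to use it.
create_sysimage(["Plots"]; sysimage_path="sys_plots.so")
```

Building the image takes a while, but subsequent sessions start with the package already compiled.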


It's good to see the package suggestions feature happen.

I believe I was the first to raise it as an issue/request on github, oh, 3-4 years ago? (based on how octave does it, which is truly excellent)

I remember it caused quite a vivid discussion, which very quickly (and frustratingly) veered off on a completely different tangent in the thread.

Good to see someone took it to heart after all :)

