This really depends on your view, but the moment you step outside of lists and use arrays, you are keeping modern CPU architecture in mind. And all of a sudden all those wonderful car/cdr/cons concepts, as well as the things built upon them, no longer hold, which, again depending on the view, actually harms the abstractions in Lisp, and we are back on the same page as all the other programming languages.
Also, when you are promoting something, it might not be a good idea to compare against Python when it comes to speed, but then take only C as the target when talking about maintainability.
I'm not too sure, but Python has always had great integration with C. It also follows the familiar statement-based, imperative, procedural model pretty closely. And it has been bundled on Unix-like systems for the last 25 years. In that sense, it's totally riding the same wave, leveraging the momentum behind those languages.
That's the crux of my argument. Not that Lisp is too slow today, it's quite fast considering today's programming language landscape. But that it was when it mattered, in the 70s and 80s.
Had Lisp been able to overcome that, we might be in a world where the most popular OS is implemented in Lisp with a Lisp API. Where the paradigms commonly taught in school would be expression-based lambda calculus, functional programming, dynamic runtime object systems, meta-programming and the almighty parenthesis syntax ;)
Lisp is dynamically typed and can reach C-like speeds.
Python is something like 40 times slower than C; Lisp gets much closer, at something like 4 times slower (all of this heavily depends on what you are doing, obviously, so these are just rough ballparks).
No. Their reputation for poor performance mostly arises because the default data structure is a singly linked list. Although the language makes these lists extremely convenient for the programmer to use, linked lists invariably result in terrible cache locality. High-performance systems that are written in Lisp (and they do exist) achieve that performance by avoiding lists and using vectors and arenas instead.
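To make that concrete, here's a minimal sketch (the function names are just illustrative): summing numbers held in a list means chasing a pointer for every element, while the same data in a specialized vector sits contiguously in memory and plays much nicer with the cache.

    ;; List version: each element lives in its own cons cell somewhere on the heap.
    (defun sum-list (xs)
      (loop for x in xs sum x))

    ;; Vector version: a specialized array of unboxed double-floats, laid out contiguously.
    (defun sum-vector (v)
      (declare (type (simple-array double-float (*)) v))
      (loop for x across v sum x))

    ;; (sum-list   (make-list 1000000 :initial-element 1.0d0))
    ;; (sum-vector (make-array 1000000 :element-type 'double-float
    ;;                                 :initial-element 1.0d0))

Same algorithm either way; only the container, and therefore the memory layout, changes.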
Of course, it must be said that this performance penalty rarely matters in practice. For most programs, the limiting factor is not performance but the cost of implementation and maintenance. Lisp has many powerful and convenient features that make implementation faster and surer than in any other language, and which usually greatly lower the cost of maintenance as well. Thus by writing a new program in Lisp you can get it off the ground and earning you money faster than you could in almost any other language. You can also add features more cheaply as time goes on, allowing you to keep up with your competitors. It is only when your system is nearing completion, and all necessary features have been invented and implemented, that you should think about rewriting select parts of it in a systems language where more efficiency is possible. Only once the system is written will you truly understand it, and at that point you can measure its actual performance to figure out which parts you need to rewrite, and which parts perform acceptably already and thus do not need to be rewritten.
I don't buy the "Lisp is not concerned with implementation details" line, because all I see in Lisp are implementation details: everything is a singly linked list, and you have to go somewhat out of your way to use anything else (switch from lists to actual arrays, and you have to change your code from using CAR to AREF, for example). Don't forget the numerous equality operators either (EQ, EQL, EQUAL, etc.), whose results most definitely depend on the underlying implementation details (EQ compares pointers, EQUAL compares visual representations).
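For what it's worth, the equality zoo looks roughly like this at the REPL (results as typically seen in SBCL; EQ on numbers and strings is explicitly implementation-dependent):

    (eq 'foo 'foo)                 ; => T, the same symbol object
    (eq (list 1 2) (list 1 2))     ; => NIL, two distinct cons chains
    (eql 3 3)                      ; => T, same type and same value
    (eq 3.0 3.0)                   ; unspecified: EQ compares object identity
    (equal (list 1 2) (list 1 2))  ; => T, structural comparison
    (equal "foo" "foo")            ; => T, strings compared character by character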
For speed, I use profilers, because I'm still surprised at where the code is spending its time, even after 20 years of programming in C.
I'm beginning to think that Lisp appeals to programmers who like programming in raw Abstract Syntax Trees, which is a small subset of all programmers.
I don't claim that "Lisp cannot run on modern hardware", just that it has performance-sapping features that lower its competitiveness vs. mainstream compiled languages.
If you don't care about performance and don't pretend to be in the same league as C, I have no problem with Lisp/Scheme. It's the same deal with people pretending that languages with GC such as Java are fit for systems programming and low-level tasks.
I get your point, but when a performance requirement is put on the table (hence a need for optimizations), Lisp wouldn't be my choice in the first place.
I wasn't talking about all high level languages, just LISP. My experience is that people who like a particular language try to rationalize and convince others that there is no downside.
I would love to see an example of LISP being as fast as C++ with multi-threading and cache coherency taken into account, using the same amount of memory, with no pauses from the GC that would affect interactivity. If it hasn't happened in the last half century, I don't think it's going to happen at all.
- In the past, it was better at abstraction, but slow and niche and you needed to shell out for hardware.
- In the present it is not better. Common Lisp is about on a par with Python in what it has built in, inferior in its ecosystem, and for some modern stuff (threads, sockets) you will have to go outside the language standard into the wild west of implementation extensions or third party libraries.
Lisp's one big selling point is macros. Macros are magic that lets you reprogram the language. Reprogrammed languages break the ability of a programmer to drop in and read the code. Languages that need reprogramming to get basic shit done are unfinished or academic (Scheme). Languages that can get stuff done, but tempt you to reprogram them are attractive nuisances (Common Lisp, Ruby). In use, they create incompatible tangles that don't mix well with other people's incompatible tangles. I have been bitten by this repeatedly in Ruby. But Ruby is still easier to get work done in than Common Lisp.
Basically it died of being meh with a side of footgun.
I think this is what is so cool about Common Lisp. You can literally get pretty darn close to C in performance and still be at the highest levels of development efficiency/prototyping speed. I don't think too many other languages can say so. Take Python. It has really fast development time, but very slow performance.
Common Lisp is not simply fast. Common Lisp enables you to write a lot of code in Lisp itself that is fast. If you write Python and just call functionality that the implementation has written in C, it can easily be just as fast. With Common Lisp it is possible to stay in Lisp for a long time, without needing to drop into C to improve speed. Plus you can write code with optimization declarations that get rid of much of the overhead (type checking, bounds checking, ...), but that should only be done locally. Generally the performance model of Common Lisp is not that simple. It takes some practice and some understanding of the compiler and underlying runtime to get code 'really' fast.
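As a rough illustration of what "locally" means here (a sketch, not a recipe; the function is made up): you declare types and dial safety down inside one hot function only, and a compiler like SBCL can then drop the generic dispatch, type checks and bounds checks for just that function.

    (defun dot-product (a b)
      ;; Type and optimization declarations confined to this one function.
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3) (safety 0) (debug 0)))
      (loop for i of-type fixnum below (length a)
            sum (* (aref a i) (aref b i)) of-type double-float))

With the declarations in place this should compile down to fairly tight float arithmetic; remove them and you get fully generic, checked code. That's the "not that simple" performance model in miniature.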
If you are disappointed with the speed of your code, you should really ask the experts in the Lisp community for some help. There are often trivial ways for speed up and then there are sometimes very complex ways to get better performance.
It is hard to learn this from books. You really need to interact with some people who have experience.
I think it's strange that anyone would even ask why common lisp isn't popular. There is no payoff to writing something in it. The binaries are huge, it is going to be a lot less clear than modern C++, it won't be as fast, you still have a garbage collector, the libraries are niche, the syntax is reversed, the tools are niche, the ecosystem is niche, and everything you do is the opposite of what is intuitive at first.
Then on top of all this, it is built on linked lists, which in their simplest form are essentially an obsolete data structure.
There is no reason to learn something with backwards syntax and ancient tools when there isn't even any payoff. Write something in C and the program is fast, small, native and can be compiled anywhere in a fraction of a second. There is still a payoff for all the very real pain. In lisp there is just no reason to use it from any angle other than how clever someone can be with macros and that is the exact opposite of good sustainable programming.
Of course. But OTOH, we shouldn't immediately discount LLs when writing code just because the perf isn't optimal.
>but hardly means that we should be against writing fast software.
On the contrary.
Lisp is slow. Python is slow. Ruby is slow. Smalltalk is slow. What do these languages have in common? Dynamism. They trade speed for other things that were, in the language designers' opinion, equally or more important than being fast.
The point is that when you make a decision about performance, the decision has consequences in either direction. LLs are slow, but they have a lot of advantages. With skip lists you can get decent access times on ordered LLs, and in some contexts even O(n) is acceptable. But what LLs do better than other list structures like arrays is insertion. With an array, you get best-case O(1) appends, but all other types of insertion are O(n), and even appends can be O(n) if you run out of space and have to reallocate.
By contrast, provided you have a pointer to the location in the LL that you want to insert at (which is why most LL uses in applications like to insert at the head of the list), LL insertion is guaranteed O(1). That's sometimes useful.
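A small sketch of that trade-off in Common Lisp (the vector-insert helper is just illustrative, not a library function):

    ;; Prepending to a list is one cons cell, no matter how long the list is:
    (defvar *items* (list 2 3 4))
    (push 1 *items*)                 ; O(1), => (1 2 3 4)

    ;; Inserting into the middle of a vector means shifting everything
    ;; after the insertion point:
    (defun vector-insert (vec index item)
      (vector-push-extend item vec)                        ; grow by one slot
      (replace vec vec :start1 (1+ index) :start2 index)   ; shift the tail right, O(n)
      (setf (aref vec index) item)
      vec)

    ;; (vector-insert (make-array 3 :adjustable t :fill-pointer t
    ;;                              :initial-contents '(2 3 4))
    ;;                0 1)
    ;; => #(1 2 3 4)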
Except Lisps that were intended for numerical work, like MACLISP on PDP-10s at the time, did have arrays. And as noted, MACLISP was back then faster than DEC's FORTRAN (an issue DEC fixed not too much later).
Lispers aren't stupid, which should be distinguished from how easy it is to make a simple Lisp. Making a performant one takes effort on the scale of making any similar language implementation good and fast.
Let me preface this by saying I used LISP professionally in the '80s for about ten years.
It's a great language. It is right up there at the top of my list with Assembler, APL and Forth as languages that taught me so much more than the typical C-like language path most people are exposed to today. And, yes, I used those languages professionally for years.
I have always said it is important to learn these non-C languages.
However...
> I've spent some time contemplating future-proof programming languages because I want to ensure that code I write will be usable in the future.
I think it is clear that it will not be long until you can use an AI-based tool to translate any program from language A to language B. And, in fact, likely improve, maintain and extend it.
For example, you might be able to have the AI tool write a function or module in assembler targeted at different processors and be able to accelerate critical code in a platform-specific manner that would be almost impossible for most developers to manage and maintain today.
I experimented with some of this using ChatGPT. We built a product using MicroPython that required hard real-time performance. Sure, MicroPython was not the right choice to begin with; this was one of those projects where we walked into something that morphed, and we were stuck. Being that I am perfectly comfortable in assembler, I replaced chunks of code with ARM assembly routines. The performance boost was massive, of course.
As an experiment, I wrote a specification for one of those modules and asked ChatGPT to write the code in ARM assembler. It took all of five seconds to get a listing. Let's just say it took me a lot longer. The code was not optimal, yet, it worked just fine. Someone like me, with experience in the domain, could easily take that as a starting point and improve from there. Just for kicks, I asked ChatGPT to write the same code in C, C++, JS, Python, 8080, 8085, 6502, 68K and x86 assembler. That probably took a minute or so. Did not test all of the generated code. All of it looked like it would run just fine.
In other words, I believe that, today, the only reason to pick a language is likely something like: It's what I know and it has the libraries, frameworks and support I need. In some cases, it's because it's the only way to achieve required performance (example: Python is 70+ times slower than C).
Code longevity is not likely to be an issue at all.
It seems that Python's development speed really comes down to its well-rounded libraries. And IIRC Lisp is still not on par with that (Quicklisp users, feel free to correct me). Other than that, I still think that linguistically, Lisps still have an edge (and a sharp one).
Yeah, as I said above, it's not mostly about the performance¹ but about the bugginess, which is to say, the comprehensibility. There are arguments both ways about whether C or Lisp is more bug-prone (less comprehensible) but I think the results are in. And there are new languages like TypeScript which combine and exceed the advantages of both.
______
¹ Though GCC still routinely generates much better code than SBCL, much less the new Elisp compiler, that's not the primary consideration. I'm not sure why you're mentioning it.
I forgot to mention that the main Lisp implementations (SBCL) are fast. Compared to languages like these, it's a done deal.
Good point about long-term maintenance. Ruby and Python aren't that great there either; people say static typing is the way to go. Lisps are weird because they let you hot-swap most things, which makes changing a system easy, but at the same time it leads to spaghetti images.