> Lisp macros operate on a representation of the AST.
In Lisp, the representation of the AST is the AST itself: the same nested lists you write as source (Scheme has syntax objects).
> By overhead I mean the Nim compiler can generate direct, specific instantiations at compile time instead of doing runtime-based duck typing as in Lisp.
SBCL has `deftransform`, which allows macro expansions to benefit from static type analysis.
Lisp macros operate on a representation of the AST.
By overhead I mean the Nim compiler can generate direct, specific instantiations at compile time instead of doing runtime-based duck typing as in Lisp.
Like, you can have a macro that generates a different expansion based on the type of arguments passed, if you structure the macro that way.
The macro can also take entire code blocks as arguments of course, so you can implement general purpose control structures that are first class citizens.
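The Nim-style idea of an expansion that depends on what the macro receives can be sketched in Python's `ast` module (a rough analogue, not Nim; `expand_square` is an invented name for illustration):

```python
import ast

def expand_square(arg):
    """Invented 'square' macro: the expansion depends on the kind of
    AST node it receives, loosely mimicking a type-directed Nim macro."""
    if isinstance(arg, ast.Constant) and isinstance(arg.value, (int, float)):
        # Literal argument: fold the computation at expansion time.
        return ast.Constant(arg.value * arg.value)
    # General expression: emit `arg * arg`. (A real macro would bind the
    # argument to a hygienic temporary to avoid evaluating it twice.)
    return ast.BinOp(left=arg, op=ast.Mult(), right=arg)

def expand_source(expr_src):
    tree = ast.parse(expr_src, mode="eval")
    tree.body = expand_square(tree.body)
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

print(expand_source("3"))  # 9
print(expand_source("x"))  # x * x
```

The point is only that the transformer can branch on properties of its argument and emit entirely different code for each case.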
First, can you talk a little about how Nim's macro system is more powerful than Lisp's? That seems like a fairly broad claim, and unless it allows you to influence the compiler at compile time (and maybe even then), I am curious how that works out.
Second, one of the biggest issues with lisp is that macros end up sort of slow, especially when you're using them + lists to replicate ADTs with pattern matching (versus, say, ML, which can do very fast dispatch based on types). Doesn't Nim fall into that same trap?
Lisp and Scheme macros are just code that happens to execute at runtime, too. They don't necessarily need to transform the AST, though that's what they're often used for.
You can indeed accomplish some similar things with it.
> Does it have performance disadvantages?
A Common Lisp macro is evaluated once at compile time. In Smalltalk this is evaluated at runtime, so that adds some overhead. Usually you only have to build the AST once though, and can use the result many times, so that overhead isn't really relevant.
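The build-once/use-many-times point can be sketched in Python (a hedged analogue: the AST is constructed and compiled a single time, and the resulting code object is reused):

```python
import ast

# Build a generated AST for `n * 2` once, compile it once, reuse it.
tree = ast.Expression(
    ast.BinOp(ast.Name("n", ast.Load()), ast.Mult(), ast.Constant(2))
)
ast.fix_missing_locations(tree)
code = compile(tree, "<generated>", "eval")  # the one-time cost

# Every subsequent use pays no construction or compilation cost.
results = [eval(code, {"n": n}) for n in range(5)]
print(results)  # [0, 2, 4, 6, 8]
```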
> There is another key difference between Lisp macros and many other macro systems: in Lisp, the macros operate on expressions: the data structure of the code itself. Because Lisp code is written explicitly as a data structure, a tree made out of lists, this transformation is natural.
The author seems to be consciously avoiding "homoiconicity", but this is still yelling "homoiconicity". And "homoiconicity" makes absolutely no sense. The source code of every programming language is an AST literal, and in principle every language can have macros by allowing compile-time AST-to-AST functions. This is precisely what has happened with Scala.
The only thing special about Lisp is that the AST is a list, whose operations are familiar to the programmer. With other languages the programmer has to learn some specialized API to write macros. The advantage of Lisp macros is that they are easier to write (than, say, Scala macros), but this extra expressive power is enabled by having macros at all, not by "homoiconicity".
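The "compile-time AST-to-AST function" idea is concrete even in a language with no macro system at all. A hedged Python sketch, using the standard `ast` module to fold constant additions before the code ever runs:

```python
import ast

class FoldAdd(ast.NodeTransformer):
    """An AST -> AST function: rewrite `<const> + <const>` into the
    folded constant, leaving everything else untouched."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # transform children first, bottom-up
        if (isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.Constant(node.left.value + node.right.value)
        return node

tree = ast.parse("y = 1 + 2 + x")
tree = ast.fix_missing_locations(FoldAdd().visit(tree))
print(ast.unparse(tree))  # y = 3 + x
```

The awkwardness relative to Lisp is exactly the point being made: the transformation requires a specialized node API (`BinOp`, `Constant`, `NodeTransformer`) rather than ordinary list operations.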
As someone who's written a bit of Common Lisp, I can say that the macros are a pretty awesome feature to use. Common Lisp has the advantage over Clojure and Scheme that the SBCL compiler is incredibly fast; it can come close to C#/F# on Mono for some tasks, in spite of Lisp being a dynamic language. It also compiles code and expands macros much faster. It's got more power than the other lisps, but it's also less elegant, so make of that what you will.
You're really very wrong on Nim's macros. It follows Lisp's defmacro tradition instead of Scheme's syntax-rules/syntax-case, but that doesn't make it any less powerful (many would argue it's demonstrably more powerful). You are also dead wrong on syntax-rules/syntax-case capabilities, or maybe on what the syntax/AST is, if you think that there's anything they can do that Nim can't. Both systems deal with the AST, which means they both are unable to introduce new ways of parsing the code, only transform already-parsed code. In (some) Scheme and Common Lisp you get access to the readtable, which is the parser, but that's really a different thing. And even in Lisps it's not that popular: Clojure and Emacs Lisp disallow this, for example.
Personally I favour pattern-based macros, like the ones implemented in Dylan, Elixir or Sweet.js (to show some non-sexp-based languages with such macros); but there is nothing "wrong" with procedural macros and they are not, in any way, less robust.
You don't have to be excited by Nim, but you should try to avoid spreading lies just because you aren't. Maybe a "lie" is too strong a word, but this statement: "Nim's macro system seems to be far less robust than that" is really very wrong and I wanted to stress this fact.
For sure. Additionally, the syntax of lisp also makes these AST macros ergonomic and, in many cases, beautiful. Rust has AST-transforming macros, but they're harder to use because the syntax is more complex. They're also quite ugly.
This is not done over regular lisp lists, but over scheme syntax objects that retain the original source info.
Those syntax objects are also the basis of the hygienic macro systems in many schemes (at least ones using syntax-case) so that macros also benefit from that information.
The lisp source representation is already an AST, so these kinds of transformations are trivial.
Hmm, part of the magic of Lisp comes from compiling the macros. Both Lispworks and SBCL compile Lisp code, and I think SBCL effectively only operates in compiled mode and is very fast. I would guess that most other Lisps compile as well. Being compiled is why Lisp clocks in at speeds roughly the same as C or faster in many cases.
Lisp macros have runtime overhead? I don't think reader macros or compile-time macros have any runtime overhead unless you add it yourself.
More flexible input syntax than a programmable reader? Data like sexps and edn is pretty flexible to begin with, but a lot of lisps have fully programmable readers for arbitrary syntax. Some lisps even support grammar dialects to switch out of sexps entirely (such as racket).
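A programmable reader is easy to picture with a toy sketch. Here is a minimal sexp reader in Python with a dispatch table loosely in the spirit of Lisp readtables (all names are invented for illustration, and real readtables are far richer):

```python
READER_MACROS = {}  # character -> reader function, consulted at parse time

def read(src, i=0):
    """Read one form starting at src[i]; return (form, next_index)."""
    while i < len(src) and src[i].isspace():
        i += 1
    ch = src[i]
    if ch in READER_MACROS:            # reader-macro hook: runs while parsing
        return READER_MACROS[ch](src, i + 1)
    if ch == "(":
        items, i = [], i + 1
        while True:
            while src[i].isspace():
                i += 1
            if src[i] == ")":
                return items, i + 1
            item, i = read(src, i)
            items.append(item)
    j = i                              # ordinary token
    while j < len(src) and not src[j].isspace() and src[j] not in "()":
        j += 1
    return src[i:j], j

def quote_reader(src, i):
    form, i = read(src, i)
    return ["quote", form], i

READER_MACROS["'"] = quote_reader      # install: 'x now reads as (quote x)

print(read("(a 'b c)")[0])  # ['a', ['quote', 'b'], 'c']
```

Installing an entry changes the *parser*, not a post-parse AST pass, which is exactly the distinction between reader macros and regular macros.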
Lisp has two big ideas. The first big idea is that a regular syntax allows the trivial implementation of macros. Just have a separate compilation pass where the AST is passed as a list to the different macros, then compile the result.
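That expansion pass fits in a few lines once code is just nested lists. A hedged sketch (the `when` macro shape follows Common Lisp, but the expander itself is a toy):

```python
def expand_when(form):
    # (when test body...) -> (if test (progn body...) nil)
    _, test, *body = form
    return ["if", test, ["progn", *body], "nil"]

MACROS = {"when": expand_when}  # the macro table consulted by the pass

def macroexpand(form):
    """Repeatedly expand the head if it names a macro, then recurse."""
    if (isinstance(form, list) and form
            and isinstance(form[0], str) and form[0] in MACROS):
        return macroexpand(MACROS[form[0]](form))
    if isinstance(form, list):
        return [macroexpand(f) for f in form]
    return form

print(macroexpand(["when", ["ready?"], ["launch"], ["log"]]))
# ['if', ['ready?'], ['progn', ['launch'], ['log']], 'nil']
```

The macro is an ordinary function from list to list; "compilation" would simply consume the expanded list.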
The second idea (not shared by some Lisps like Scheme) is that of a system image which is modified in real time. This allows on-the-fly debugging, adding of new features, etc., with no downtime.
Macros have worked their way into languages like Julia or Nim, while the system image idea is mostly constrained to Smalltalk and Common Lisp.
The best example of the power of macros is Racket, which has world class meta programming facilities and is probably the best language in existence for creating new languages, DSLs, and doing experimental PL research.
I've used both CL and Racket professionally, and they both shine in certain situations.
That being said, I've grown tired of the relative verbosity of both languages (minimal syntax has a high cost) and the performance and productivity cost of dynamic typing.
For most new engineering projects, I'd much rather use something like OCaml or Scala than CL or Racket. For scientific computing, I usually go with Julia or kdb+/q.
That being said, I think Racket especially shines in the development of internal business or research tools. The large number of high quality libraries (especially for GUIs) makes it a great choice for desktop apps.
> A lisp macro see a code block as a tree of symbols/primitives. It can do anything.
No, a regular macro still needs to be syntactically sensible. To handle arbitrary non-lisp syntax you need your lisp to support arbitrary reader macros (as in Common Lisp or — I believe — Racket) and that gets significantly more complex and involved and requires extensible/pluggable parsing.
Scheme does not have that for instance, SRFI-10 reader macros need to be wrapped in `#,()`, you can't just dump JSX or XML in Scheme source and expect it to work.
The beauty of Lisp's macros are that they're specifically _not_ expressed in a separate language: They're ordinary Lisp expressions that work directly on the AST.
Some Lisp advocates may even tell you that Lisp does not have hygienic macros; Lisp dialects like Scheme do. Lisp usually has procedural macros, which transform source code in the form of nested lists, not a dedicated AST type (as in Nim).
That Nim has 'powerful hygienic macros' is fine, many languages have powerful macros.
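Hygiene is about accidental variable capture. A hedged sketch of the non-hygienic failure mode and the classic gensym workaround, over code-as-lists (all names invented for illustration):

```python
import itertools

_counter = itertools.count()

def gensym(base="g"):
    """Return a fresh name no user program can collide with."""
    return f"{base}#{next(_counter)}"

def expand_or2(a, b):
    # Naive expansion ["let", ["tmp", a], ["if", "tmp", "tmp", b]] would
    # capture a user variable named tmp; a gensym'd name avoids that.
    t = gensym("tmp")
    return ["let", [t, a], ["if", t, t, b]]

# Even if the user's second argument is literally the variable `tmp`,
# the macro's own binding uses a fresh, uncapturable name.
print(expand_or2("x", "tmp"))
```

Hygienic systems (syntax-rules/syntax-case, or Nim's hygiene) do this renaming automatically; in defmacro-style Lisps the macro author reaches for gensym by hand.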