My point is only that unless you are using a hygienic macro system, the idea that you are manipulating code in your macro is an (often white) lie. Code has semantics, a meaning, and unless the object you manipulate carries those semantics with it (that is, the syntax objects of e.g. `syntax-case`) you're just manipulating some data which has a necessarily superficial relationship with the code itself. PicoLisp resolves this by simply "eliminating" lexical scope, which means that code really is trivially related to its denotation, since the semantics of variable binding really are just "whatever is currently bound to this variable." Scheme resolves this by having syntax transformations instead of macros: functions which genuinely manipulate syntax objects which carry along with them, among other things, information about their lexical context. Common Lisp accepts that most of the issues arising from the distinction between code itself and its naked denotation can be worked around and provides the tools to do that, but in Common Lisp one still transforms the denotation of the code, not the code itself. From my point of view, if one is purely interested in the aesthetics of the situation, the Scheme approach is much more satisfactory. From a practical point of view, it doesn't seem to be particularly onerous to program in, although the macros in Scheme seem to lack the immediate intelligibility of the Common Lisp ones.
I mean yes and no. In a CL macro you are manipulating lists of symbols and other atoms, and in a sense that is code. But code has some static properties (of which lexical binding is one) which are not reflected in that structure and which you can break pretty easily in a CL macro. A Scheme syntax object carries that lexical information which is so critical to the meaning of the code, and because it does, it is much harder to accidentally manipulate the code in such a way that the meaning of the code changes. It is exactly the static lexical binding semantics of Common Lisp which introduce the conceptual tension in macro programming that requires the programmer to manually worry about gensyms. Because PicoLisp lacks lexical binding, manipulating code lacks this complication (and, in fact, the complication of a macro system almost reduces to a trivial combination of quotation and evaluation).
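To make the gensym point concrete, here is a minimal Common Lisp sketch of the kind of accident being described (the macro names are mine, purely for illustration):

```lisp
;; A naive swap macro: TMP is spliced in as a bare symbol, so it can
;; collide with a variable of the same name at the call site.
(defmacro swap-bad (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))

;; (let ((tmp 1) (x 2)) (swap-bad tmp x)) ; broken: the caller's TMP is captured

;; The conventional fix: generate a fresh uninterned symbol with GENSYM,
;; so no call-site variable can collide with it.
(defmacro swap-good (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))
```

A hygienic system makes the second version the default; with plain `defmacro` the programmer has to remember to do it.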
> A lisp macro see a code block as a tree of symbols/primitives. It can do anything.
No, a regular macro still needs to be syntactically sensible. To handle arbitrary non-lisp syntax you need your lisp to support arbitrary reader macros (as in Common Lisp or — I believe — Racket) and that gets significantly more complex and involved and requires extensible/pluggable parsing.
Scheme does not have that, for instance: SRFI-10 reader macros need to be wrapped in `#,()`; you can't just dump JSX or XML in Scheme source and expect it to work.
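For what it's worth, here is a minimal Common Lisp sketch of the kind of reader-level extension being contrasted here; the bracket syntax is just an illustration, not anything standard:

```lisp
;; Teach the reader to parse [a b c] as (vector a b c).
(set-macro-character #\[
  (lambda (stream char)
    (declare (ignore char))
    ;; READ-DELIMITED-LIST consumes forms until the closing #\]
    `(vector ,@(read-delimited-list #\] stream t))))

;; Make #\] a terminating character, borrowing the behaviour of #\).
(set-macro-character #\] (get-macro-character #\)))

;; After this, '[1 2 3] reads as (VECTOR 1 2 3).
```

This runs at read time, before ordinary macro expansion ever sees the code, which is why it is a different (and more invasive) mechanism than a regular macro.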
What does 'better' even mean? For what? Which of the dozen Scheme macro systems?
Macros are a tool and over the past decades literally zillions of procedural macros have been written in Common Lisp. Somehow people managed to do that. Without 'hygienic' macros. Without rule-based macros.
Scheme has seen a lot of research in that area and that's great. Common Lisp is more concerned with core language stability.
Having a single macro system, which might have an older but careful design, still has some advantage: basically all code in Common Lisp uses the same macro system and all tools need to support only one macro system. Most of the time it gets the job done.
It's like asking whether a carbon bicycle frame is better than a steel one. Depends. If one is travelling, the steel one is possibly more robust and easier to repair. It might not include the latest research in lightweight materials, though.
These types of macros are very similar to Scheme's syntax-rules macros without the hygiene.
Viewing Lisp macros as an arbitrary function from syntax to syntax is fine, but such macros are not very Scheme-y. It's very difficult to have such macros and guarantee they're hygienic, which is why syntax-rules limits you to matching patterns and producing templates -- it effectively limits the kinds of functions which can be macros.
Other types of Scheme macros (which are non-standard), like explicit renaming and syntactic closures, require programmers to opt-in to hygiene. syntax-case (which is in R6RS) allows programmers to break hygiene by jumping through hoops, but is otherwise similar to TFA's system as well.
So if Scheme is a Lisp and these macros are like Scheme macros, I don't think it's inaccurate to call them Lisp-like macros. But it is imprecise.
As an aside, I've written a compiler for a Lisp-like language. Speaking from experience, getting the compiler to answer "yes" to 3 is non-trivial (even though it's dead-simple in an interpreter). I suppose that's why Racket has all the phase distinctions it has.
The Lisp language syntax is not motivated by 'ease of parsing'. It is motivated by the 'code as data' idea, which enables a lot of useful source transformations to be implemented AND used in a relatively easy way. One of the applications of that is macros.
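A small Common Lisp sketch of the 'code as data' point (the names here are mine, just for illustration):

```lisp
;; A form is an ordinary list, so ordinary list functions apply to it.
(defparameter *form* '(+ 1 (* 2 3)))

(first *form*) ;=> +
(eval *form*)  ;=> 7

;; A source transformation is then just a list transformation,
;; e.g. rewriting every + into -:
(defun flip-plus (form)
  (if (consp form)
      (mapcar #'flip-plus form)
      (if (eq form '+) '- form)))

(flip-plus *form*)        ;=> (- 1 (* 2 3))
(eval (flip-plus *form*)) ;=> -5
```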
Scheme got a lot wrong from a practical perspective. Use Lisp instead.
Type theories are over-hyped. 99.99% of all software is written in languages without advanced type systems.
Macros are nothing to get 'excited' about. They are a tool. They enable the user to write source transformations, which has several applications in Lisp: control structures, compile-time computation, etc.
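As one concrete example of the control-structure use (a sketch; WHILE is not part of standard Common Lisp, the name is just illustrative):

```lisp
;; A user-defined control structure: WHILE does not exist in the core
;; language, but a macro can add it by expanding into DO.
(defmacro while (test &body body)
  `(do ()
       ((not ,test))
     ,@body))

;; (let ((i 0))
;;   (while (< i 3)
;;     (print i)
;;     (incf i)))   ; prints 0, 1, 2
```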
Certainly having experience with Lisp/Scheme will make it easier to deal with macros.
As a "newbie" to Scheme, (well, actually what I played with was Arc, but I think it belongs to the Scheme family) I was able to write a few macros, and seeing them work in action was very nice indeed.
The tricky part is maintaining the macros or changing their behavior. It's like the saying goes: debugging is twice as hard as writing the code in the first place, so if you write code as cleverly as you can, you are not smart enough to debug it.
Yes, I've written a compiler in Common Lisp and various toys in Scheme. Spent some of yesterday porting functions from SRFI-1 to Shutt's Kernel, which is pretty close to a lisp. Interesting that you claimed first-class macros; they're usually second class in lisps.
I like Scheme because I'm far enough down the compilers rabbit hole that language syntax looks like obfuscation getting in the way of the SSA representation. Writing the syntax tree directly is attractive there.
I find macros pretty confusing. Common Lisp has straightforward behaviour but needs a lot of ceremony to make them reliable. Scheme's hygienic rewrites look simple but I don't understand the machinery behind them. Lexically scoped fexprs have obvious behaviour and implementation, hence the interest in Kernel.
Which is to say I'm not disputing the lisp ~= lua premise from a position of total ignorance. The semantics look pretty similar to me. Lua doesn't have control over syntax, in (reader) macro sense, so perhaps it's metalua I should be equating to scheme. As above I'm not very focused on syntax.
I also write a lot of lua so am keenly interested in the distinctions I'm missing in the above.
Lisp and Scheme macros are just code that happens to execute at runtime, too. They don't necessarily need to transform the AST, though that's what they're often used for.
Some portions of the Scheme community favor a much more dynamic semantics for macros, more in the Lisp tradition; others favor building macros on top of a more compilation-friendly semantics.
Forgive my ignorance, but aren't Lisp's semantics very compilation-friendly? Doesn't macro expansion happen at read-time?
EDIT: I think that's the case for Common Lisp. By "the Lisp tradition" did you mean prior dialects?
The issue is not so much that macros are not merely syntactic transformations, but rather that s-expressions alone aren't sufficiently expressive to represent the syntax of any Lisp with a package system. The Scheme macro system solves that by having macros return syntax objects, where identifiers used in a macro can carry around a reference to the original context where the macro was defined. So referencing a function in a macro definition will look up the function in the context where the macro is defined, and return a syntax object that refers to the correct function. To my knowledge, most modern macro systems work this way (Scheme, Dylan, etc.)
You're really very wrong about Nim's macros. Nim follows Lisp's defmacro tradition instead of Scheme's syntax-rules/syntax-case, but that doesn't make its system any less powerful (many would argue it's demonstrably more powerful). You are also dead wrong on syntax-rules/syntax-case capabilities, or maybe on what the syntax/AST is, if you think that there's anything they can do that Nim can't. Both systems deal with the AST, which means they both are unable to introduce new ways of parsing the code, only to transform already-parsed code. In (some) Scheme implementations and in Common Lisp you get access to the readtable, which is the parser, but that's really a different thing. And even in Lisps it's not that popular: Clojure and Emacs Lisp disallow this, for example.
Personally I favour pattern-based macros, like the ones implemented in Dylan, Elixir or Sweet.js (to show some non-sexp-based languages with such macros); but there is nothing "wrong" with procedural macros and they are not, in any way, less robust.
You don't have to be excited by Nim, but you should try to avoid spreading lies just because you aren't. Maybe a "lie" is too strong a word, but this statement: "Nim's macro system seems to be far less robust than that" is really very wrong and I wanted to stress this fact.
I disagree. The hardest part about teaching macros in Lisp is teaching the hoops we must jump through using gensym to avoid accidents. It's better that the compiler do that for us. And it's not as if it is impossible to make a deliberate capture in Scheme; you just have to opt in to it, rather than carefully avoiding it in 99% of the macros that you write.
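For reference, the classic Common Lisp example of such a deliberate capture is the anaphoric if (a sketch; AIF is a common convention, not part of the standard):

```lisp
;; Deliberate capture: AIF intentionally binds the symbol IT so the
;; THEN branch can refer to the value of the test expression.
(defmacro aif (test then &optional else)
  `(let ((it ,test))
     (if it ,then ,else)))

;; (aif (find 3 '(1 2 3))
;;      (print it))   ; prints 3
```

In `defmacro` this capture happens by default; in a hygienic system you would have to ask for it explicitly (e.g. with something like `datum->syntax` in `syntax-case`).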
I guess he doesn't really know the parts where Lisp macros are uncool. He even had the breaking example in his slide. The x++ part also breaks in Lisp and only works in Scheme, unless you introduce gensyms into your macro. Scheme macros are the real thing, but are not as simple and homomorphic as Lisp macros; they are more like constexpr matchers. If only we had structural matching in our languages, and could do that at compile time.
"Even Lisp ended up with macros with own sub language..." I assume you're talking about Scheme approaches like `syntax-case` and `syntax-rules`? In Common Lisp, macros are written in Lisp, the same way you would write runtime code. Unquoting and splicing are part of the core language (e.g. they can be applied to data as well as to code).