The Forum for Discussion about The Third Manifesto and Related Matters


Life after D -- M for metaprogramming


As promised, sample code.

The first part is a set of macro definitions based loosely on the syntax of Powerflex. Commands change state and return only a success/error indicator; functions return a value.

// macro definitions

#command open_database S for? @?
// args are $1 (symbol) $2 (optional symbol)
// java code
  Check_error // trappable error
#end

#command create_table @ [@]
// args are $1 (symbol) $2 (list of symbols)
// java code
  Check_error // trappable error
#end

#command insert_table @ [V]
// args are $1 (symbol) $2 (list of values)
// java code
  Check_error // trappable error
#end

#function join T T
// args are $1 (table value) $2 (table value)
// java code
  Check_error // trappable error
#end

#function rename T [@,@]
// args are $1 (table value) $2 (list of pairs of symbols)
// java code
  Check_error // trappable error
#end

#function select T [@]
// args are $1 (table value) $2 (list of symbols)
// java code
  Check_error // trappable error
#end

The second part is a program written using these macros.

// code using macros
Open_database Supplier for update
Create_table S S# SNAME STATUS CITY
If_error print "Assume already exists"
Insert_table S "S1" "Smith" 20 "London"
// etc

// sample queries using 3 alternative syntaxes (statement, function, fluent)
Var q1 = join S SP
Var q2 = rename(q1,CITY,SCITY)
Var q3 = q2.join(P)
Var q4 = q3.select S# P# CITY

// this is the full query as a one liner
Var q4 = S.rename(CITY,SCITY).join(SP).join(P).select(S#,P#,CITY)
pretty_print q4
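To make the semantics concrete, here is what the macro-generated host code might boil down to. This is a hedged sketch in Python rather than the Java target, purely for brevity; the Table class and the sample rows are invented for illustration, not part of M:

```python
# Minimal relational sketch: a Table is a list of rows (dicts).
# All names here (Table, S, SP, P) are illustrative, not real M output.

class Table:
    def __init__(self, rows):
        self.rows = [dict(r) for r in rows]

    def join(self, other):
        # Natural join on the attributes the two headings share.
        common = set(self.rows[0]) & set(other.rows[0]) if self.rows and other.rows else set()
        out = []
        for r in self.rows:
            for s in other.rows:
                if all(r[a] == s[a] for a in common):
                    out.append({**r, **s})
        return Table(out)

    def rename(self, **mapping):            # old_name=new_name pairs
        return Table([{mapping.get(k, k): v for k, v in r.items()} for r in self.rows])

    def select(self, *attrs):               # projection, in TTM terms
        seen, out = set(), []
        for r in self.rows:
            t = tuple(sorted((a, r[a]) for a in attrs))
            if t not in seen:
                seen.add(t)
                out.append({a: r[a] for a in attrs})
        return Table(out)

S  = Table([{"S#": "S1", "SNAME": "Smith", "STATUS": 20, "CITY": "London"}])
SP = Table([{"S#": "S1", "P#": "P1", "QTY": 300}])
P  = Table([{"P#": "P1", "PNAME": "Nut", "CITY": "London"}])

q4 = S.rename(CITY="SCITY").join(SP).join(P).select("S#", "P#", "CITY")
print(q4.rows)   # [{'S#': 'S1', 'P#': 'P1', 'CITY': 'London'}]
```

Here select is TTM-style projection; a real M expansion would of course emit fully type-checked Java rather than dynamic dicts.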

The final M code is easy to read and write, easy to debug and easy to extend. No '4GL cliff'.

The M implementation generates fully type-checked Java code (or some other language if you prefer).

I haven't found anything like M in my searches, and I know from experience it's a tough thing to build. But it would do the job and it just might be worth a try.

Andl - A New Database Language - andl.org

How do you do a loop?

How do you write a function/procedure?

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

This post was intended to convey a flavour, not present a complete solution. It's easy to ask questions that so far have no answers.

The Powerflex macro language has compile-time arithmetic, recursion and code blocks which allow you to write generative loops, functions and an infinite variety of syntactic structures. That's how I generated 1 million lines of code from just 10. I would expect the M language to have those, and more. But please be aware: macro pre-processing is an unfamiliar topic to most programmers and you need to be careful how you use the words: do you mean in the M macro language, or in the target host language?

  • loops, functions and procedures are written in Java (or other host language), as usual
  • the M language provides compile-time iteration, recursion and blocks to control code generation.
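The second bullet is the unfamiliar part, so a toy illustration may help. This Python sketch stands in for the M expander; the schema and the emitted createTable call are invented:

```python
# Build-time expansion: iterate over a declarative schema and emit
# host-language source text. Schema and templates are illustrative only.

schema = {"S": ["S#", "SNAME", "STATUS", "CITY"],
          "P": ["P#", "PNAME", "COLOR", "CITY"]}

def expand_create_tables(schema):
    lines = []
    for table, columns in schema.items():          # compile-time loop
        args = ", ".join(f'"{c}"' for c in columns)
        lines.append(f'db.createTable("{table}", {args});')
    return "\n".join(lines)

generated = expand_create_tables(schema)
print(generated)
# db.createTable("S", "S#", "SNAME", "STATUS", "CITY");
# db.createTable("P", "P#", "PNAME", "COLOR", "CITY");
```

The loop runs once, at expansion time; only the emitted text reaches the host compiler. Scale the schema up and a few input lines become thousands of generated ones.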

The problems with Powerflex are:

  1. macros are hard to write, understand, debug and maintain
  2. there is limited ability to check syntax, types and symbol usage
  3. there is only integer arithmetic and token-pasting, no strings or code execution (at compile time)
  4. it doesn't understand the host language, so it can generate gibberish.

But it can produce wonderfully readable, compact programs to replace verbose repetitive boilerplate. It's over 30 years old, but no language I know can do that.
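Point 4 is easy to demonstrate. A purely textual expander, sketched here in Python with an invented template, substitutes tokens happily with no idea whether the result is legal host code:

```python
import re

# A purely textual expander: replace $1, $2, ... with the raw argument text.
def expand(template, args):
    return re.sub(r"\$(\d+)", lambda m: args[int(m.group(1)) - 1], template)

template = "if ($1 > $2) log($1);"

print(expand(template, ["count", "10"]))     # if (count > 10) log(count);
print(expand(template, ["} else {", ";;"]))  # gibberish, and the expander can't tell
```

Because the expander never parses the host language, both expansions look equally fine to it; only the host compiler discovers the second one is nonsense.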

Quote from dandl on March 21, 2021, 1:01 pm


The problems you've noted for Powerflex are exactly the type of problems generally associated with macro systems -- which have a long history and which re-emerge every so often. Java's annotations are distantly the same kind of thing (internally, they do the same kind of thing, sort of) so they have the same sort of problems.

Macro languages are fine for limited text replacement/generation -- or in Java's case, for annotating methods that should be deprecated, which ones are unit tests, etc. -- but before long developers are stretching the bounds of the well-intended-but-never-intended-for-that facility to breaking point, and their utility is rapidly overshadowed by growing complexity, difficulty, and general abomination.

They also generally must work within the semantic bounds of the host language. If you want, for example, only structural typing and the host language only supports nominal typing, that generally exceeds reasonable expectations of a macro system. Then transpilation/compilation of a distinct language may be more reasonable.

There's an argument that the desire for a macro language is always an indication that your host language isn't good enough. Anything you want a macro language to do, you should be able to do without macros as a "native" language facility.

Quote from Dave Voorhis on March 21, 2021, 4:26 pm

The problems you've noted for Powerflex are exactly the type of problems generally associated with macro systems -- which have a long history and which re-emerge every so often. Java's annotations are distantly the same kind of thing (internally, they do the same kind of thing, sort of) so they have the same sort of problems.

Macro languages are fine for limited text replacement/generation -- or in Java's case, for annotating methods that should be deprecated, which ones are unit tests, etc. -- but before long developers are stretching the bounds of the well-intended-but-never-intended-for-that facility to breaking point, and their utility is rapidly overshadowed by growing complexity, difficulty, and general abomination.

The point here is that the problems relate to attempts that are ancient history. We tried to do X on our S/360 or VAX or PC/XT and it didn't work, so we don't need to try X again, ever. The number of new languages is vast, many of them on quite shaky grounds. The number with any kind of meta-programming is tiny, and there is no consensus on why that is so.

The list I found includes: Dlang, Elixir, Haxe, Nemerle, Nim, Terra. Why so few?

They also generally must work within the semantic bounds of the host language. If you want, for example, only structural typing and the host language only supports nominal typing, that generally exceeds reasonable expectations of a macro system. Then transpilation/compilation of a distinct language may be more reasonable.

Why? Typically macros use quite a different syntax and are often not limited by lexical constraints, reserved words, etc. There is absolutely no reason why you can't have macros creating types and doing the header inference implied by TTM. Piece of cake.
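The inference itself is mechanical. Here is a hedged Python sketch (all names invented) of heading inference for rename, join and projection, which is the part a macro processor would have to carry out at expansion time:

```python
# Compile-time heading inference for three relational operators.
# A heading is just a set of attribute names; all names are illustrative.

def join_heading(h1, h2):
    return h1 | h2                      # union of the two headings

def rename_heading(h, pairs):           # pairs: {old: new}
    assert set(pairs) <= h, "renaming an attribute not in the heading"
    return {pairs.get(a, a) for a in h}

def project_heading(h, attrs):
    assert set(attrs) <= h, "projecting on an attribute not in the heading"
    return set(attrs)

S  = {"S#", "SNAME", "STATUS", "CITY"}
SP = {"S#", "P#", "QTY"}
P  = {"P#", "PNAME", "CITY"}

h = rename_heading(S, {"CITY": "SCITY"})
h = join_heading(h, SP)
h = join_heading(h, P)
h = project_heading(h, ["S#", "P#", "CITY"])
print(sorted(h))   # ['CITY', 'P#', 'S#']
```

The asserts are where a macro processor would report a compile-time heading error instead of generating bad host code.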

There's an argument that the desire for a macro language is always an indication that your host language isn't good enough. Anything you want a macro language to do, you should be able to do without macros as a "native" language facility.

Not so. The more common reason is that the language is powerful but verbose. Macros (as the name implies) let you get more done with less writing. They're also great for cross-cutting concerns, like logging, asserts, test hooks and mocks, where you want what's included in the generated code to change according to settings.

Quote from dandl on March 22, 2021, 3:10 am

The point here is that the problems relate to attempts that are ancient history. We tried to do X on our S/360 or VAX or PC/XT and it didn't work, so we don't need to try X again, ever. The number of new languages is vast, many of them on quite shaky grounds. The number with any kind of meta-programming is tiny, and there is no consensus on why that is so.

The list I found includes: Dlang, Elixir, Haxe, Nemerle, Nim, Terra. Why so few?

It's difficult to reason about metaprograms.

It also tends to exhibit a diode effect -- it's much easier to go one way generating code than the other, which is treating code as data. Typically, you want to do both, so that you can both generate code and sensibly ingest -- to possibly affect code generation -- based on pre-existing code.

Treating code as data becomes much easier if your language is homoiconic, but that impacts human readability. Lisp is homoiconic. So is XML.

Notably, Lisp has the most sophisticated and well-integrated macro system, as the macro language is Lisp. Everything else, for reasons already noted, is poor.
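For a feel of the 'ingest' direction without homoiconicity, Python's ast module can parse source into a tree, transform it as data, and recompile it. A small sketch; the transformation chosen (doubling integer literals) is arbitrary:

```python
import ast

# Ingest source as data, rewrite it, and execute the rewritten form:
# here, every integer literal is doubled.

class DoubleInts(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, int) and not isinstance(node.value, bool):
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

tree = ast.parse("result = 3 + 4")
tree = ast.fix_missing_locations(DoubleInts().visit(tree))

ns = {}
exec(compile(tree, "<ast>", "exec"), ns)
print(ns["result"])   # 14: the literals became 6 and 8
```

This is code-as-data done against a non-homoiconic syntax: workable, but the tree types are nothing like the surface syntax, which is exactly the readability trade-off Lisp avoids.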

They also generally must work within the semantic bounds of the host language. If you want, for example, only structural typing and the host language only supports nominal typing, that generally exceeds reasonable expectations of a macro system. Then transpilation/compilation of a distinct language may be more reasonable.

Why? Typically macros use quite a different syntax and are often not limited by lexical constraints, reserved words, etc. There is absolutely no reason why you can't have macros creating types and doing the header inference implied by TTM. Piece of cake.

Macros easily create code. They do not easily ingest code, so having them sensibly integrate with existing code is difficult. That means you can have header inference implied by TTM as long as it's entirely within the macro space -- which is effectively the essence of having a separate transpiled language. Once you try to integrate macro-generated code with native language code, you have contradictions -- code that differs semantically in the macro space from the native space -- or code that is only notionally "safe" in the macro space but unsafe when accessed in the native space.

Of course, this is well-trodden territory in language design, to the point that macro processing is really no longer considered viable.

There's an argument that the desire for a macro language is always an indication that your host language isn't good enough. Anything you want a macro language to do, you should be able to do without macros as a "native" language facility.

Not so. The more common reason is that the language is powerful but verbose. Macros (as the name implies) let you get more done with less writing. They're also great for cross-cutting concerns, like logging, asserts, test hooks and mocks, where you want what's included in the generated code to change according to settings.

Again, that's really only viable via transpilation -- where you're working in an entirely separate language -- not via macro generation, where you inevitably mix macro and native language, resulting in contradictory, potentially confusing, and potentially unsafe results. Macros get more done with less writing only up to the point where they cause more complexity and gotchas than they solve, at which point modern languages solve the same problems better with in-language features like genericity, more sophisticated type systems, and general in-language expressivity.

In short, a good language shouldn't need macros to simplify code. Ordinary procedures/functions should be sufficient.

Quote from Dave Voorhis on March 22, 2021, 10:52 am

It's difficult to reason about metaprograms.

It is indeed. One reason is lack of familiarity; another is poor tools; another is poor language design. But unless you can think of a reason why it should be intrinsically harder than any other kind of programming, I'm not going to worry about this objection.

It also tends to exhibit a diode effect -- it's much easier to go one way generating code than the other, which is treating code as data. Typically, you want to do both, so that you can both generate code and sensibly ingest -- to possibly affect code generation -- based on pre-existing code.

Treating code as data becomes much easier if your language is homoiconic, but that impacts human readability. Lisp is homoiconic. So is XML.

Notably, Lisp has the most sophisticated and well-integrated macro system, as the macro language is Lisp. Everything else, for reasons already noted, is poor.

Noted. What do you need in the 'ingest' department? At least a couple of those languages I quoted let you manipulate the AST. Is that enough? Do you have a use case?


Macros easily create code. They do not easily ingest code, so having them sensibly integrate with existing code is difficult. That means you can have header inference implied by TTM as long as it's entirely within the macro space -- which is effectively the essence of having a separate transpiled language. Once you try to integrate macro-generated code with native language code, you have contradictions -- code that differs semantically in the macro space from the native space -- or code that is only notionally "safe" in the macro space but unsafe when accessed in the native space.

Not at all. If you want a compiler to place a new interpretation on existing source code, or do something new while compiling, you need to modify the compiler. But it's easy to see how a macro facility can ingest previously compiled code: just use reflection. The basic idea is to extend the macro processor to execute arbitrary (compiled) code in libraries. So write some reflection wrappers to extract record definitions out of existing code, and generate new code based on old definitions.
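Sketched in Python standing in for Java reflection (the record class and the generated snippet are invented), the idea is: reflect over a previously compiled definition and emit new code from what you find:

```python
import dataclasses

# A previously 'compiled' record definition, ingested by reflection.
@dataclasses.dataclass
class Supplier:
    sno: str
    sname: str
    status: int
    city: str

def generate_insert(cls):
    # Reflect over the fields and emit a host-language snippet.
    cols = [f.name for f in dataclasses.fields(cls)]
    placeholders = ", ".join("?" for _ in cols)
    return f"INSERT INTO {cls.__name__} ({', '.join(cols)}) VALUES ({placeholders})"

sql = generate_insert(Supplier)
print(sql)  # INSERT INTO Supplier (sno, sname, status, city) VALUES (?, ?, ?, ?)
```

The macro processor never parses the old source; it interrogates the compiled artifact, which is exactly the reflection-wrapper approach described above.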


Again, that's really only viable via transpilation -- where you're working in an entirely separate language -- not via macro generation, where you inevitably mix macro and native language, resulting in contradictory, potentially confusing, and potentially unsafe results. Macros get more done with less writing only up to the point where they cause more complexity and gotchas than they solve, at which point modern languages solve the same problems better with in-language features like genericity, more sophisticated type systems, and general in-language expressivity.

In short, a good language shouldn't need macros to simplify code. Ordinary procedures/functions should be sufficient.

IMO it's trivially easy to come up with examples where that just isn't so. I've given a few already.

 

Quote from dandl on March 23, 2021, 3:02 am

IMO it's trivially easy to come up with examples where that just isn't so. I've given a few already.

It's virtually true by definition. Every requirement for which a macro facility is a solution, is inherently a requirement for a (better) non-macro language feature.



It's virtually true by definition. Every requirement for which a macro facility is a solution is inherently a requirement for a (better) non-macro language feature.

For some (wrong/old) definition of 'macro'.

I'm using macro as a shorthand to refer to meta-programming, nothing more. The only language in widespread use with simple text substitution macros is C (and inherited unchanged by C++). Macros were the only solution to a range of problems in the early 1970s. Some (but not all) of those problems have been solved in later languages by other means. Many have not.

My assertion is that there are problems for which the only reasonable solution is meta-programming (aka macros on steroids). Java has failed totally to realise this, C# does a little better, but neither of them offers any kind of user-written compile-time logic. Generics are just alternative implementations of the same code, and Java annotations/C# attributes are just metadata, not metaprogramming.

I quoted Powerflex to give a flavour of the power of compile-time logic, not because I want to go back to the 1980s, but because I want to bring macros into the 2020s.

Andl - A New Database Language - andl.org
Quote from dandl on March 23, 2021, 11:07 pm

There's an argument that the desire for a macro language is always an indication that your host language isn't good enough. Anything you want a macro language to do, you should be able to do without macros as a "native" language facility.

Not so. The more common reason is that the language is powerful but verbose. Macros (as the name implies) let you get more done with less writing. They're also great for cross-cutting concerns such as logging, asserts, test hooks and mocks, where you want what's included in the generated code to change according to build settings.

Again, that's really only viable via transpilation -- where you're working in an entirely separate language -- not via macro generation, where you inevitably mix macro and native language, resulting in contradictory, potentially confusing, and potentially unsafe results. Macros get more done with less writing only up to the point where they cause more complexity and gotchas than they solve, at which point modern languages solve the same problems better with in-language features like genericity, more sophisticated type systems, and general in-language expressivity.

In short, a good language shouldn't need macros to simplify code. Ordinary procedures/functions should be sufficient.

IMO it's trivially easy to come up with examples where that just isn't so. I've given a few already.

It's virtually true by definition. Every requirement for which a macro facility is a solution is inherently a requirement for a (better) non-macro language feature.

For some (wrong/old) definition of 'macro'.

I'm using macro as a shorthand to refer to meta-programming, nothing more.

(Like Erwin, I'm barely bothering to follow these threads, but on a general point ...)

'Macro' vs 'meta-programming' vs 'generic programming' are not the same thing. (Though I'd take issue with some of each of those wikis.) Do not use abbreviations or 'shorthands' round here. If you're talking about meta-programming then use 'meta-programming'.


The only language in widespread use with simple text substitution macros is C (and inherited unchanged by C++). Macros were the only solution to a range of problems in the early 1970s. Some (but not all) of those problems have been solved in later languages by other means. Many have not.

My assertion is that there are problems for which the only reasonable solution is meta-programming (aka macros on steroids). Java has failed totally to realise this, C# does a little better, but neither of them offers any kind of user-written compile-time logic. Generics are just alternative implementations of the same code, and Java annotations/C# attributes are just metadata, not metaprogramming.

I quoted Powerflex to give a flavour of the power of compile-time logic, not because I want to go back to the 1980s, but because I want to bring macros into the 2020s.

If you want to demonstrate compile-time logic, I suggest you look at 'dependent typing' and its commonalities with theorem proving and programs that are correct by construction.
