The Forum for Discussion about The Third Manifesto and Related Matters


Life after D with Safe Java

So I tried to find some languages to test that idea. See https://builtin.com/software-engineering-perspectives/new-programming-languages and https://www.rankred.com/new-programming-languages-to-learn/. If there are other people out there who see things as I do, they will make themselves visible by creating new languages.

So I pruned the list by excluding languages

  • that have a history of more than 20 years (any direct derivative of ML/OCaml, Python, Elixir?),
  • those that simply try to make JS development saner (Elm, Typescript and probably Dart),
  • two that focus rather strongly on the numerical niche (R, Julia)
  • and I'm dubious about Go (boring, low level), Pony (not stable).

Of those that are left, I tried to find the five principles: safer-higher-shorter, meta and fits in (meaning interoperable with existing code/libraries).

  • words for safer are very common, nominating the areas of type inference, memory, concurrency
  • words for shorter appear often, e.g. 'concise'
  • words suggesting higher do not (except in the OCaml family)
  • Crystal, Elixir, Groovy, Nim, Rust have meta-programming/macros
  • There is a mix of native and VM, with words like 'interoperable' and 'performance'. Only Kotlin and Groovy target the JVM, but most offer some way of interacting with native code.

So this research tells me that most (4 of 5) of the principles I've been plugging are out there driving language development. Of this list, the only ones suitable for general app development right now are probably Groovy and Nim, with Crystal as an outsider. The principles of safe and meta are well-served, but any attempt to find higher is doomed to failure.

That's perhaps because "higher" isn't a generally-used term. Indeed, general-purpose increases in abstraction level are relatively rare, and sometimes controversial as to whether they're really higher-order abstractions or low-level foundations, etc. Better, perhaps, to use terms like expressivity, compose-ability, etc. It's the same terminological focus as using "concise" instead of "shorter", because the former implies readability and understandability whereas the latter suggests overly-terse APL or Z notation.

Those are just my tags. The point is I found very little to indicate that higher-order abstractions were a driver, in comparison to the heavy emphasis on safer, along with meta and performance (which I wasn't looking for). I was also surprised not to find more emphasis on fits in: many languages seem to take the view that native code and C libraries are enough, which to my view is plain wrong.

You've missed Haskell, Eiffel, Erlang, Mercury, Kotlin, Scala...

I didn't 'miss' anything. The first 4 are too old for this purpose, I have Kotlin, and Scala wasn't on those lists. But I've now included it. I could add F# too, but these are not particularly new.

Remember: the purpose is to find things that are causing enough pain to get people to create new languages. Completeness of the list is not really a requirement.

Go is "boring"?

What does that mean?

It's a better C, but I don't see that it adds any new ideas.

But perhaps more relevant to where it looks like you're going are specialist languages like SAS, SPSS, K, and -- this is a biggie -- numerous commercial in-house languages that are used to solve a category of problems, or target a specific industrial domain, and essentially don't show up in the usual lists at all. They're not general-purpose languages, they're not generally available (in some cases they're not "available" at all; they're strictly used in-house to deliver solutions), and (from what I've seen) they do deliver significant productivity increases in their niches.

I'll add them if I can find them. But remember: it's the pain they respond to that I care about, not the language itself.

They're usually transpilers, emitting compile-able C, Java, Julia, and others.

Do you have any examples to support that view? It's not true of most of the languages on that list.

...At least, until dramatically more effective ways to develop software emerge from the functional/goal-directed/logic/advanced-type-systems world.

But of course I might be completely wrong. If there's anything we can say for certain about the IT industry, it's that in the long term it's almost completely unpredictable.

If that's who we're waiting for, it's even more depressing, but I think it's wrong. The FP tank is empty, Prolog is niche and apart from unions, there isn't much more to get out of type theory. [Please enlighten me if I missed something.]

Computational type theory is no more or less than the study of mechanisms to reduce possible bugs in programs, so there's probably still a lot to get out of that.

That really is the target for my safer: not adding features and complexity but languages that let the compiler avoid whole classes of unsafe coding. Crystal and a couple of the others specifically nominate that.
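Rust's newtype pattern is one small, concrete instance of letting the compiler remove a whole class of unsafe coding: distinct wrapper types for distinct units mean the mix-up can't compile. (A minimal sketch; the `Metres`/`Feet` types are invented for illustration.)

```rust
// Newtype wrappers: distinct types for distinct units, so the compiler
// rejects mixing them up -- a whole class of bugs removed at compile time.
#[derive(Debug, Clone, Copy)]
struct Metres(f64);
#[derive(Debug, Clone, Copy)]
struct Feet(f64);

fn to_metres(f: Feet) -> Metres {
    Metres(f.0 * 0.3048)
}

fn main() {
    let runway = Feet(1000.0);
    let m = to_metres(runway);
    println!("{:?}", m);
    // to_metres(m); // would not compile: expected `Feet`, found `Metres`
}
```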

Virtually any functional or logical programming language implies it, too, along with the inevitable reduction in complexity and increase in predictability that comes from minimising or isolating mutable state.

But that's not new, so it's already factored into choices people make. If it was a solved problem, there wouldn't be the demand for languages to include it.

There are companies using functional and logical programming to build software tools that other companies buy and use. As long as that appears set to continue and grow -- and it does; this is cutting-edge stuff, these are absolutely not legacy products -- and as long as there isn't anything in general programming that precludes effective use of functional and logical programming (in general, there isn't) then the fact these paradigms currently largely thrive only in some niches represents opportunity, even if only (at least initially) to fill other niches.

The FP languages are indeed survivors, but the ideas are well-established and do not attract followers. New languages (revolution!) that incorporate these ideas as well as others stand a better chance.

Indeed, new languages that incorporate ideas from functional and logical programming stand a better chance, as I was suggesting before. Merely offering safety plus unspecified notions of "shorter" and "higher" is essentially what Rust and Digital Mars's D provide now. The general movement in C#/.NET and Java/JVM is toward safer, definitely -- note the option types in C#, and Kotlin for the JVM -- but also facilities drawn from functional programming like pattern matching and, previously, LINQ and Streams.
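A minimal sketch of that option-types-plus-pattern-matching combination, in Rust (the `find_user`/`greeting` functions are invented for illustration): absence is a value in the type system, and `match` must be exhaustive, so forgetting the "no value" case is a compile error rather than a null-pointer crash.

```rust
// Option instead of null: the absent case is part of the type,
// and pattern matching forces it to be handled.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None,
    }
}

fn greeting(id: u32) -> String {
    // Omitting the None arm here would be a compile error.
    match find_user(id) {
        Some(name) => format!("hello, {}", name),
        None => String::from("no such user"),
    }
}

fn main() {
    println!("{}", greeting(1));
    println!("{}", greeting(9));
}
```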

But my thesis is specifically that choices made 30 years ago when Java was but a pup are now constraints on how they can evolve. You can't remove unsafe null pointers and exceptions and casts from Java and still call it Java. We have to pay attention to 'languages that are not Java' in the last 20 years if we want to find where the pain lies.

 

Andl - A New Database Language - andl.org
Quote from dandl on April 19, 2021, 10:36 am


What kind of higher-order abstractions do we need? Do we even know?

Perhaps there is a slow evolution going on already? Some years ago few people had heard of hash maps and red-black trees; now we just use them. Or take messaging: it was complicated, but now we just plug in a JMS queue and get things like guaranteed ordered delivery.

Paul Graham writes that Lisp macros were a major advantage at ViaWeb, but when Yahoo acquired it they rewrote it in C++, so presumably maintaining the macros was perceived as a greater pain than using and maintaining the higher-order entities they created.

I recently read that in the field of dense linear algebra, where some algorithms originated for pencil and paper 100 years ago, you can specify the transform you want and, using nD-grammars (or graph grammars) plus all the well-known variants of the algorithms, generate the most efficient combined algorithm for your specific use case and hardware. But that domain doesn't really help us in general, I don't think.

SAP and similar would presumably consist of a lot of higher-order constructs for creating business systems, yet trying to install such gives rise to speculations such as "How many SAP consultants can you fit on the head of a pin?". Does that really help the specific case? If so, does it help us in general?

Your complaint about the details getting you down reminds me of the fundamental theorem of software engineering, that any problem can be solved by adding another layer of indirection. Yet no matter how many layers you add, you still need to handle all the specific details of the business.

So what kind of higher-order abstractions would be "revolutionary"?

As I write this, I realize we are still struggling with capturing the intended design of software. We can test specific runs, and we can write code that handles all possible input, but we still cannot easily write design specifications that all future refactorings of the code must adhere to. Types capture some things, contracts some. Would that be the kind of revolution you're after, to come up with a way to write the design and then generate valid code?
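One partial answer that already exists is encoding design rules in types, so that a refactoring which violates the intended design simply fails to compile. A minimal Rust sketch (the `Unpaid`/`Paid` order types are invented for illustration): the rule "an order must be paid before it ships" lives in the type signatures, not in tests.

```rust
// Design captured in types: only a Paid order has a ship() method,
// and pay() consumes the Unpaid order so it cannot be paid twice.
struct Unpaid { total: u32 }
struct Paid { total: u32 }

impl Unpaid {
    fn new(total: u32) -> Unpaid { Unpaid { total } }
    fn pay(self) -> Paid { Paid { total: self.total } }
}

impl Paid {
    fn ship(&self) -> String {
        format!("shipped order worth {}", self.total)
    }
}

fn main() {
    let order = Unpaid::new(42);
    // order.ship();         // does not compile: Unpaid has no ship()
    let paid = order.pay();  // consuming `order` makes re-paying impossible
    println!("{}", paid.ship());
}
```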

Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am


Paul Graham writes that Lisp macros were a major advantage at ViaWeb, but when Yahoo acquired it they rewrote it in C++, so presumably maintaining the macros was perceived as a greater pain than using and maintaining the higher-order entities they created.

That, and hiring competent Lisp developers was probably more costly than Yahoo was willing to spend.


As I write this, I realize we are still struggling with capturing the intended design of software. We can test specific runs, and we can write code that handles all possible input, but we still cannot easily write design specifications that all future refactorings of the code must adhere to. Types capture some things, contracts some. Would that be the kind of revolution you're after, to come up with a way to write the design and then generate valid code?

Some years ago, I did some consulting work for a fellow who was quite convinced he'd solved that problem -- at least for the domain he worked in -- using a surprising amount of XML and fill-in UI forms to populate XML tags.

Then again, I note he's doing something completely different now, so maybe not.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am


What kind of higher-order abstractions do we need? Do we even know?

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors
  • exceptions
  • different number types
  • type declarations
  • move semantics vs reference semantics

Actually, some of those languages do some of those things. Go is a move in the opposite direction.
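Several of those bullets can be sketched concretely. A minimal Rust illustration (the `double_all` function is an invented example): locals carry no type declarations, there is no pointer syntax in ordinary code, and heap values are reclaimed without any free() or destructor call.

```rust
// No type declarations on locals, no manual memory management:
// the compiler infers types and reclaims heap values automatically.
fn double_all(v: &[i32]) -> Vec<i32> {
    v.iter().map(|x| x * 2).collect() // element and result types inferred
}

fn main() {
    let n = 90;                       // inferred: i32
    let half = 0.5;                   // inferred: f64
    let name = String::from("fritz"); // heap allocation, no manual free
    println!("{} {} {} {:?}", n, half, name, double_all(&[1, 2, 3]));
}   // `name` is reclaimed here without any explicit destructor call
```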

Perhaps there is a slow evolution going on already? Some years ago few people had heard of hash maps and red-black trees; now we just use them. Or take messaging: it was complicated, but now we just plug in a JMS queue and get things like guaranteed ordered delivery.

Those are generic collections (C++ templates) and conventional libraries, little different from the Unix/Windows system APIs. Old (language) tech.


Your complaint about the details getting you down reminds me of the fundamental theorem of software engineering, that any problem can be solved by adding another layer of indirection. Yet no matter how many layers you add, you still need to handle all the specific details of the business.

But that is the point: to handle the necessary complexity and avoid the accidental complexity.

So what kind of higher-order abstractions would be "revolutionary"?

As I write this, I realize we are still struggling with capturing the intended design of software. We can test specific runs, and we can write code that handles all possible input, but we still cannot easily write design specifications that all future refactorings of the code must adhere to. Types capture some things, contracts some. Would that be the kind of revolution you're after, to come up with a way to write the design and then generate valid code?

An enormous amount of effort has gone into optimising compilers for C. It's hard, because the C compiler has little idea what you're trying to do. It's all pointers and machine data types down there. The compiler needs more information to work from.

My thesis is that moving stuff from run time to compile time is a good start. You design a data model and you capture it in SQL data types and FK links, or a JSON or XML schema, etc. If the compiler could see that information it could check your code against it. The critical thing is to identify the 'single source of truth', and as far as possible that should be data. If code is the truth, you don't know what it means until you run it, and that's too late.
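A rough illustration of that thesis, in Rust terms (the `Order`/`Status` types are invented, not drawn from any real schema tool): once the data model is captured as types, the compiler checks every use site against it, and a change to the model breaks the build rather than the running program.

```rust
// The data model as single source of truth: adding a new Status
// variant makes every non-exhaustive match a compile error until
// each use site handles it.
enum Status { Open, Shipped, Cancelled }

struct Order { id: u32, status: Status }

fn describe(o: &Order) -> String {
    match o.status {
        Status::Open => format!("order {} is open", o.id),
        Status::Shipped => format!("order {} has shipped", o.id),
        Status::Cancelled => format!("order {} was cancelled", o.id),
    }
}

fn main() {
    let o = Order { id: 7, status: Status::Shipped };
    println!("{}", describe(&o));
}
```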

 

Quote from dandl on April 20, 2021, 12:03 am
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am

 


What kind of higher-order abstractions do we need? Do we even know?

I'm not seeing why this discussion is taking place on TTM. Is there anything Relational-specific? Wouldn't you get more feedback on a programming language forum?

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors

You mean constructors that run code to build a value? The ones TTM rejects in favour of 'Selectors'? So do you allow Selectors? (Haskell/most Functional Programming languages have what they call 'Constructors' that are a lot closer to 'Selectors'.) If you do, how do you access the values inside the data structure without 'destructors'?

  • exceptions

Divide-by-zero and numeric-overflow exceptions? IO/comms exceptions? File-not-found exceptions? Those events are going to happen. Are programmers not to be allowed to code responses to them?

  • different number types

Why try to overturn centuries of number theory? I want integral types for some purposes, Floats for different purposes; I want to control precision vs performance of arithmetic vs memory footprint. That is, I want to know what I'm asking for when I declare a variable at a type; I don't want to worry about the bytes at machine level directly.

  • type declarations

Are you crazy?

  • move semantics vs reference semantics

'move' means destructive overwriting in-situ? Which interacts badly with flow-of-control accessing that location. Haskell and most more abstract FP languages already adhere to 'value semantics'. I believe half-FP-ish languages (F#, Scala) don't.

Actually, some of those languages do some of those things. Go is a move in the opposite direction.

SAP and similar would presumably consist of a lot of higher-order constructs for creating business systems, yet trying to install such gives rise to speculations such as "How many SAP-consultants can you  fit on the head of a pin?". Does that really help the specific case? If so, does it help us in general?

Mostly what SAP consultants do is business modelling/analysis. Then translate to mostly configuration plus some ABAP (or whatever it's called these days). It's not a programming language 'problem', it's a vaguely-specified/relying on common sense business rules problem.

Your complaint about the details getting you down reminds me of the fundamental theorem of software engineering, that any problem can be solved by adding another layer of indirection. Yet no matter how many layers you add, you still need to handle all the specific details of the business.

But that is the point: to handle the necessary complexity and avoid the accidental complexity.

Anybody can write lists of bullet points off the top of their head. And anybody can disagree with specific points. You have to show why some point is necessary vs accidental complexity. And I suspect you can do that only in context of some specific feature proposal for some specific language.

An enormous amount of effort has gone into optimising compilers for C. It's hard, because the C compiler has little idea what you're trying to do. It's all pointers and machine data types down there. The compiler needs more information to work from.

Optimisers for LLVM can see both more of "what you're trying to do" (particularly if they know from which HLL the code was generated); and have knowledge of the target hardware.

My thesis is that moving stuff from run time to compile time is a good start. You design a data model and  you capture it in SQL data types ...

emm I thought you were doing away with numeric types and declarations?

and FK links, or a Json or XML schema etc. If the compiler could see that information it could check your code against it. The critical thing is to identify the 'single source of truth', and as far as possible that should be data.

Then we need much richer ways to declare the properties of data. Like arbitrary Boolean expressions required to hold over data structures. Ah, we seem to have re-invented TTM (RM Pre 20, 23). See also 'Liquid Haskell'.
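Short of full refinement types, the mainstream approximation is a "smart constructor" that checks the invariant once, so that no value violating it can ever exist. It's a weaker, runtime-checked cousin of TTM constraints or Liquid Haskell's refinements. A minimal Rust sketch (the `Percentage` type is invented for illustration):

```rust
// An invariant attached to a type: the only way to obtain a
// Percentage is through new(), which enforces 0..=100.
#[derive(Debug)]
struct Percentage(u8);

impl Percentage {
    fn new(n: u8) -> Result<Percentage, String> {
        if n <= 100 {
            Ok(Percentage(n))
        } else {
            Err(format!("{} is not a percentage", n))
        }
    }
}

fn main() {
    assert!(Percentage::new(42).is_ok());
    assert!(Percentage::new(200).is_err());
}
```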

 

Quote from AntC on April 20, 2021, 1:53 am
Quote from dandl on April 20, 2021, 12:03 am
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am

 


I'm not seeing why this discussion is taking place on TTM. Is there anything Relational-specific? Wouldn't you get more feedback on a programming language forum?

The initial question was about how to merge the ideas of TTM/D into an existing programming language, which leads into what kind of language could host D concepts. But I agree, much of this belongs elsewhere.

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors

You mean constructors that run code to build a value? The ones TTM rejects in favour of 'Selectors'? So do you allow Selectors? (Haskell/most Functional Programming languages have what they call 'Constructors' that are a lot closer to 'Selectors'.) If you do allow, how to access the values inside the data structure without 'destructors'?

I mean the accidental complexity of the difference in syntax and semantics from ordinary functions. Yes, there must be ways to construct values, but why not just:

type Point(x,y)
let p = Point(90, 0.5)

Destructors are not accessors, they are a bizarre requirement to tell the compiler how to take out the garbage. C# has them too (Dispose()).
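Rust is one data point here: the "tell the compiler how to take out the garbage" code still exists (the `Drop` trait), but no caller ever invokes it; cleanup runs automatically and deterministically when the value goes out of scope. A minimal sketch with an invented `Connection` type that records its own cleanup in a log:

```rust
// The cleanup code exists (Drop), but nothing in the program calls it:
// it runs automatically at end of scope.
struct Connection<'a> {
    log: &'a mut Vec<String>,
}

impl<'a> Drop for Connection<'a> {
    fn drop(&mut self) {
        self.log.push(String::from("closed")); // runs with no explicit call
    }
}

fn use_connection(log: &mut Vec<String>) {
    let _c = Connection { log };
    // ... work with the connection; note there is no close() anywhere ...
}

fn main() {
    let mut log = Vec::new();
    use_connection(&mut log); // `_c` was dropped inside, logging "closed"
    println!("{:?}", log);
}
```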

  • exceptions

Divide-by-zero and numeric-overflow exceptions? IO/comms exceptions? File-not-found exceptions? Those events are going to happen. Are programmers not to be allowed to code responses to them?

First off, these should almost never happen. The compiler should detect the possibility, and resolve it at compile time (by explicit code, implicit conversion, default, etc).

Second: many I/O operations can succeed or fail, but failure is normal, not exceptional. Code for it (something like Maybe), or allow it to panic.

Third: runtime failure that prevents the program from continuing is a panic, by which I mean it should return to a proven safe state (like when you die in a game).
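As a hedged sketch, Rust's standard library happens to map onto those three tiers: checked operations returning Option for cases the compiler can surface, Result for normal failure that callers must handle, and panic! for unrecoverable states. (The `safe_div`/`read_config` function names are invented for illustration.)

```rust
// Tier 1: the failing case is a value, not an exception.
fn safe_div(a: i32, b: i32) -> Option<i32> {
    a.checked_div(b) // None on divide-by-zero instead of a runtime exception
}

// Tier 2: file-not-found is normal, not exceptional; callers must handle Err.
fn read_config(path: &str) -> Result<String, String> {
    std::fs::read_to_string(path).map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(safe_div(10, 2), Some(5));
    assert_eq!(safe_div(10, 0), None);
    if let Err(e) = read_config("no-such-file.cfg") {
        println!("recoverable: {}", e);
    }
    // Tier 3 would be: panic!("unrecoverable state") -- abandon and
    // return to a known-safe state rather than limp on.
}
```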

  • different number types

Why try to overturn centuries of number theory? I want integral types for some purposes, Floats for different purposes; I want to control precision vs performance of arithmetic vs memory footprint. That is, I want to know what I'm asking for when I declare a variable at a type; I don't want to worry about the bytes at machine level directly.

Of course, but (mostly) you don't need to tell the compiler. Your choice of literals and library functions are enough for the compiler to infer the most suitable type, or tell you when it needs more info.
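Java's var is a limited version of exactly this; a small sketch of literals carrying the type information:

```java
public class InferDemo {
    public static void main(String[] args) {
        // The literal alone tells the compiler which numeric type fits.
        var count = 42;             // inferred int
        var ratio = 0.5;            // inferred double
        var huge  = 4_000_000_000L; // the L suffix is the only hint needed
        var total = count + ratio;  // arithmetic promotes to double
        System.out.println(total);  // prints 42.5
    }
}
```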

  • type declarations

Are you crazy?

Not at all. Obviously you need a way to define new types, but you do not (usually) need to tell the compiler the type of a value. It already knows, by inference.

  • move semantics vs reference semantics

'move' means destructive overwriting in situ? That interacts badly with other flow of control accessing that location. Haskell and most more abstract FP languages already adhere to 'value semantics'. I believe the half-FP-ish languages (F#, Scala) don't.

Not exactly. One of the strengths of OO is the ability to create little parcels of state and encapsulate them with the operations that change that state. It's a kind of Smalltalk-ish thing: tell the fritz object to turn itself on and the schnitz object to turn itself off. This is useful, but at the other end Java fails badly in having just a few primitive value types and no way to create new ones. C# fails differently: there is a way, but the syntax and semantics are different.

But I don't buy the full FP thing, and it's not a feature of the new languages I surveyed. I want both, so I talk in terms of mutable objects (that maintain internal state) and immutable (that don't).
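The mutable-object/immutable-value split can be sketched in a few lines of Java (fritz and schnitz borrowed from the paragraph above; the rest is made up):

```java
// An immutable value: its state is fixed at construction.
record Schnitz(boolean on) {}

// A mutable object: a little parcel of state encapsulated with
// the operations that change it, Smalltalk-style.
class Fritz {
    private boolean on;
    void turnOn()  { on = true; }
    boolean isOn() { return on; }
}

public class StateDemo {
    public static void main(String[] args) {
        var fritz = new Fritz();
        fritz.turnOn();                    // tell the object to change itself
        var schnitz = new Schnitz(false);  // a value; never changes in place
        System.out.println(fritz.isOn() + " " + schnitz.on());  // true false
    }
}
```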

Actually, some of those languages do some of those things. Go is a move in the opposite direction.

SAP and similar would presumably consist of a lot of higher-order constructs for creating business systems, yet trying to install such gives rise to speculations such as "How many SAP consultants can you fit on the head of a pin?". Does that really help the specific case? If so, does it help us in general?

Mostly what SAP consultants do is business modelling/analysis, then translating that into configuration plus some ABAP (or whatever it's called these days). It's not a programming-language problem; it's a problem of vaguely-specified business rules that rely on common sense.

Your complaint about the details getting you down reminds me of the fundamental theorem of software engineering, that any problem can be solved by adding another layer of indirection. Yet no matter how many layers you add, you still need to handle all the specific details of the business.

But that is the point: to handle the necessary complexity and avoid the accidental complexity.

Anybody can write lists of bullet points off the top of their head. And anybody can disagree with specific points. You have to show why some point is necessary vs accidental complexity. And I suspect you can do that only in context of some specific feature proposal for some specific language.

I agree, but for now I'll do that in the context of what other people have already done and found useful. This is still a research phase.

An enormous amount of effort has gone into optimising compilers for C. It's hard, because the C compiler has little idea what you're trying to do. It's all pointers and machine data types down there. The compiler needs more information to work from.

Optimisers for LLVM can see both more of "what you're trying to do" (particularly if they know from which HLL the code was generated); and have knowledge of the target hardware.

A C pointer can point at literally anything, and a C array is just a pointer in drag. Any fool can write C code that can fool any optimiser, and even take pride in it.

My thesis is that moving stuff from run time to compile time is a good start. You design a data model and you capture it in SQL data types ...

emm I thought you were doing away with numeric types and declarations?

A data model needs to capture design intent, which means identifying attribute types in the schema. The idea is that no program ever has to repeat that information, because the compiler already knows, by reading the schema.

and FK links, or a JSON or XML schema, etc. If the compiler could see that information it could check your code against it. The critical thing is to identify the 'single source of truth', and as far as possible that should be data.

Then we need much richer ways to declare the properties of data. Like arbitrary Boolean expressions required to hold over data structures. Ah, we seem to have re-invented TTM (RM Pre 20, 23). See also 'Liquid Haskell'.

By all means. TTM gets this right: a type is a named set of values. But if the compiler is to enforce constraints expressed in code, you need it executing at compile time, which is meta-programming.
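To make the gap concrete: here is a type-as-a-named-set-of-values with its constraint written as ordinary code, in Java. Java can only run the membership test when the selector runs; enforcing it at compile time is exactly where the meta-programming comes in. (Even and the demo are invented for illustration.)

```java
// A "named set of values": the even integers, RM Pre 23 in miniature.
record Even(int n) {
    Even {
        if (n % 2 != 0)
            throw new IllegalArgumentException(n + " is not even");
    }
}

public class ConstraintDemo {
    public static void main(String[] args) {
        System.out.println(new Even(4));  // prints Even[n=4]
        try {
            new Even(3);                  // rejected, but only at run time
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```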

 

 

Andl - A New Database Language - andl.org
Quote from dandl on April 20, 2021, 12:03 am
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am

So I tried to find some languages to test that idea. See https://builtin.com/software-engineering-perspectives/new-programming-languages and https://www.rankred.com/new-programming-languages-to-learn/. If there are other people out there who see things as I do, they will make themselves visible by creating new languages.

So I pruned the list by excluding languages

  • that have a history of more than 20 years (any direct derivative of ML/OCaml, Python, Elixir?),
  • those that simply try to make JS development saner (Elm, Typescript and probably Dart),
  • two that focus rather strongly on the numerical niche (R, Julia)
  • and I'm dubious about Go (boring, low level), Pony (not stable).

Of those that are left, I tried to find the five principles: safer-higher-shorter, meta and fits in (meaning interoperable with existing code/libraries).

  • words for safer are very common, nominating the areas of type inference, memory, concurrency
  • words for shorter appear often eg 'concise'
  • words suggesting higher do not (except in the OCaml family)
  • Crystal, Elixir, Groovy, Nim, Rust have meta-programming/macros
  • There is a mix of native and VM, with words like 'interoperable' and 'performance'. Only Kotlin and Groovy target the JVM, but most offer some way of interacting with native code.

So this research tells me that most (4 of 5) of the principles I've been plugging are out there driving language development. Of this list, the only ones suitable for general app development right now are probably Groovy and Nim, with Crystal as an outsider. The principles of safe and meta are well-served, but any attempt to find higher is doomed to failure.

That's perhaps because "higher" isn't a generally-used term. Indeed, general-purpose increases in abstraction level are relatively rare, and sometimes controversial as to whether they're really higher-order abstractions or low-level foundations, etc. Better, perhaps, to use terms like expressivity, compose-ability, etc. It's the same terminological focus as using "concise" instead of "shorter", because the former implies readability and understandability whereas the latter suggests overly-terse APL or Z notation.

Those are just my tags. The point is I found very little to indicate that higher-order abstractions were a driver, in comparison to the heavy emphasis on safer, along with meta and performance (which I wasn't looking for). I was also surprised not to find more emphasis on fits in: many languages seem to take the view that native code and C libraries are enough, which to my view is plain wrong.

What kind of higher-order abstractions do we need? Do we even know?

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors
  • exceptions
  • different number types
  • type declarations
  • move semantics vs reference semantics

Sounds like a Business Basic circa late 1970's or VSI BASIC, circa today. See https://en.wikipedia.org/wiki/VSI_BASIC_for_OpenVMS

Or see http://neilrieck.net/demo_vms_html/openvms_demo_index.html. (P.S. Are you a qualified OpenVMS programmer? If not, don't look. Apparently.)

Often the problem here isn't so much your ideas but your tendency to write descriptions which leave out more than they include, and thus raise more questions than they answer. In particular, your use of terminology often implies long discarded (for good reason) approaches, though that doesn't appear to be the intent.

For example, if your bullet list actually means sophisticated (e.g., algebraic types / dependent typing) type systems, type inference (with optional annotations or obligatory ones for, say, function parameters), value semantics (with references where needed), pattern-matching dispatch, then it's fine. But as written, not so much. E.g., do you really want to get rid of all type declarations? Do you really mean one numeric type, or do you mean the specific numeric type is inferred? Etc.

If so, does that mean pure inference?  That can be hard to read.

Do you mean a mix of type annotations (manifest typing) and type inference?  That's good -- it's what modern Java and C# allow: Inference where it helps, with annotations to improve readability.

Do you mean something else?

More questions than answers...

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on April 19, 2021, 5:08 am


Did you look at the Raku language yet?  I recommend it.  May very well outdo some of the other newer and niche languages on your final list.

Quote from Dave Voorhis on April 20, 2021, 7:51 am
Quote from dandl on April 20, 2021, 12:03 am
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am


That's perhaps because "higher" isn't a generally-used term. Indeed, general-purpose increases in abstraction level are relatively rare, and sometimes controversial as to whether they're really higher-order abstractions or low-level foundations, etc. Better, perhaps, to use terms like expressivity, compose-ability, etc. It's the same terminological focus as using "concise" instead of "shorter", because the former implies readability and understandability whereas the latter suggests overly-terse APL or Z notation.

Those are just my tags. The point is I found very little to indicate that higher-order abstractions were a driver, in comparison to the heavy emphasis on safer, along with meta and performance (which I wasn't looking for). I was also surprised not to find more emphasis on fits in: many languages seem to take the view that native code and C libraries are enough, which to my view is plain wrong.

What kind of higher-order abstractions do we need? Do we even know?

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors
  • exceptions
  • different number types
  • type declarations
  • move semantics vs reference semantics

Sounds like a Business Basic circa late 1970's or VSI BASIC, circa today. See https://en.wikipedia.org/wiki/VSI_BASIC_for_OpenVMS

Or see http://neilrieck.net/demo_vms_html/openvms_demo_index.html. (P.S. Are you a qualified OpenVMS programmer? If not, don't look. Apparently.)

Really? I haven't used that since around 1984 so my memory is hazy, but VB6 was roughly comparable. It's hard to say exactly what we can do now that we couldn't do then, but I guess the defining differences are the type system and modules/libraries (which were pretty basic in BASIC). And it had heaps of cruft, as I recall. No lessons there, I fear.

Often the problem here isn't so much your ideas but your tendency to write descriptions which leave out more than they include, and thus raise more questions than they answer. In particular, your use of terminology often implies long discarded (for good reason) approaches, though that doesn't appear to be the intent.

For example, if your bullet list actually means sophisticated (e.g., algebraic types / dependent typing) type systems, type inference (with optional annotations or obligatory ones for, say, function parameters), value semantics (with references where needed), pattern-matching dispatch, then it's fine. But as written, not so much. E.g., do you really want to get rid of all type declarations? Do you really mean one numeric type, or do you mean the specific numeric type is inferred? Etc.

I was giving a list of stuff to take out, so we can think higher and leave more of the details to the compiler. Yes (obviously) a type system as capable as TTM requires, which means (obviously) the ability to define new types. By all means add union types, I was only doing the subtractions.

Why do you need type declarations? Why do you need to declare which type of number to use? If the compiler can track every value from the point where it was created, why does it need our help?

Yes, there are some issues with value ambiguity aka overloading, but that's all I can think of and that looks solvable.
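For the record, mainstream languages already resolve one face of that ambiguity with fixed preference rules; a small Java sketch (names invented):

```java
public class OverloadDemo {
    static String f(int x)    { return "int"; }
    static String f(double x) { return "double"; }

    public static void main(String[] args) {
        // The literal 1 fits both candidates; Java resolves it by a
        // fixed preference order (exact int match before widening).
        System.out.println(f(1));    // prints int
        System.out.println(f(1.0));  // prints double
        // With user-defined types the preference order is less obvious,
        // which is where those resolution rules have to earn their keep.
    }
}
```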

If so, does that mean pure inference?  That can be hard to read.

Do you have examples?

Do you mean a mix of type annotations (manifest typing) and type inference?  That's good -- it's what modern Java and C# allow: Inference where it helps, with annotations to improve readability.

Not really. C# allows var in many places, but not all. But here is an example of what I have to write and this is dumb!

Dictionary<ResultKinds, string> resultlookup = new Dictionary<ResultKinds, string> {
  { ResultKinds.None, "No Result" },
  { ResultKinds.Win, "{0} has Won" },
  { ResultKinds.Lose, "{0} has Lost" },
  { ResultKinds.Draw, "Draw" },
};

It could so easily be:

resultlookup = {
  None: "No Result",
  Win: "{0} has Won",
  Lose: "{0} has Lost",
  Draw: "Draw",
};

 

Quote from dandl on April 20, 2021, 9:19 am
Quote from Dave Voorhis on April 20, 2021, 7:51 am
Quote from dandl on April 20, 2021, 12:03 am
Quote from tobega on April 19, 2021, 1:27 pm
Quote from dandl on April 19, 2021, 10:36 am


That's perhaps because "higher" isn't a generally-used term. Indeed, general-purpose increases in abstraction level are relatively rare, and sometimes controversial as to whether they're really higher-order abstractions or low-level foundations, etc. Better, perhaps, to use terms like expressivity, compose-ability, etc. It's the same terminological focus as using "concise" instead of "shorter", because the former implies readability and understandability whereas the latter suggests overly-terse APL or Z notation.

Those are just my tags. The point is I found very little to indicate that higher-order abstractions were a driver, in comparison to the heavy emphasis on safer, along with meta and performance (which I wasn't looking for). I was also surprised not to find more emphasis on fits in: many languages seem to take the view that native code and C libraries are enough, which to my view is plain wrong.

What kind of higher-order abstractions do we need? Do we even know?

I think we do. Higher means getting rid of lower level concerns and either substituting something more abstract or leaving it to the compiler. I would want to get rid of:

  • memory, pointers, constructors/destructors
  • exceptions
  • different number types
  • type declarations
  • move semantics vs reference semantics

Sounds like a Business Basic circa late 1970's or VSI BASIC, circa today. See https://en.wikipedia.org/wiki/VSI_BASIC_for_OpenVMS

Or see http://neilrieck.net/demo_vms_html/openvms_demo_index.html. (P.S. Are you a qualified OpenVMS programmer? If not, don't look. Apparently.)

Really? I haven't used that since around 1984 so my memory is hazy, but VB6 was roughly comparable. It's hard to say exactly what we can do now that we couldn't do then, but I guess the defining differences are the type system and modules/libraries (which were pretty basic in BASIC). And it had heaps of cruft, as I recall. No lessons there, I fear.

It wasn't meant to be an analysis. Just pointing out that your bullet points are vague enough to mean anything from cutting-edge future advancements to crude legacy languages. E.g., "getting rid of ... type declarations" could mean type inference per modern languages, or literally no type declarations and "type inference" of a handful of primitive types per vintage BASIC.

Often the problem here isn't so much your ideas but your tendency to write descriptions which leave out more than they include, and thus raise more questions than they answer. In particular, your use of terminology often implies long discarded (for good reason) approaches, though that doesn't appear to be the intent.

For example, if your bullet list actually means sophisticated (e.g., algebraic types / dependent typing) type systems, type inference (with optional annotations or obligatory ones for, say, function parameters), value semantics (with references where needed), pattern-matching dispatch, then it's fine. But as written, not so much. E.g., do you really want to get rid of all type declarations? Do you really mean one numeric type, or do you mean the specific numeric type is inferred? Etc.

I was giving a list of stuff to take out, so we can think higher and leave more of the details to the compiler. Yes (obviously) a type system as capable as TTM requires, which means (obviously) the ability to define new types. By all means add union types, I was only doing the subtractions.

Why do you need type declarations? Why do you need to declare which type of number to use? If the compiler can track every value from the point where it was created, why does it need our help?

It doesn't need our help, we need its help, so to speak.

What we sometimes want is the explicit readability provided by annotations, particularly on function/method/procedure declarations. This is both to improve readability (if dispatch is disambiguated by name) and to drive dispatch (if the name is overloaded.)

It's also a nice option for variable declarations. It's not just for readability; annotating with a supertype or generic lets the variable accept any value from a set of substitutable types.
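That supertype-annotation point in a few lines of Java (the variable names are invented):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class SupertypeDemo {
    public static void main(String[] args) {
        // Inference would pin xs to ArrayList<String>; annotating with
        // the supertype deliberately keeps any List implementation
        // assignable later.
        List<String> xs = new ArrayList<>();
        xs.add("a");
        xs = new LinkedList<>(xs);  // fine: still a List
        System.out.println(xs);     // prints [a]
    }
}
```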

Yes, there are some issues with value ambiguity aka overloading, but that's all I can think of and that looks solvable.

If so, does that mean pure inference?  That can be hard to read.

Do you have examples?

Sure. This (made-up code) is hard to read:

var criteria = document.getApplicationCriteria();
var fuzzFactor = document.getInsertionFuzzFactor();
var range = document.performSelection(fuzzFactor, criteria);
var success = document.activateSelection(range, criteria);

This is more verbose, but easier to read:

Filter criteria = document.getApplicationCriteria(); 
double fuzzFactor = document.getInsertionFuzzFactor(); 
Pair<int, int> range = document.performSelection(fuzzFactor, criteria); 
boolean success = document.activateSelection(range, criteria);

This function declaration is hard to read, and difficult to visually verify against specifications/contracts:

fn DeepSearch(struct, criteria) { ... }

This is easy to read, and easy to visually verify against specifications/contracts:

fn DeepSearch(Tree<TreeNode<CustomNode>> struct, Filter criteria) -> TreeNode<CustomNode> { ... }

My emphasis is on readability, rather than terseness or being shorter above all else.

Do you mean a mix of type annotations (manifest typing) and type inference?  That's good -- it's what modern Java and C# allow: Inference where it helps, with annotations to improve readability.

Not really. C# allows var in many places, but not all. But here is an example of what I have to write and this is dumb!

Dictionary<ResultKinds, string> resultlookup = new Dictionary<ResultKinds, string> {
  { ResultKinds.None, "No Result" },
  { ResultKinds.Win, "{0} has Won" },
  { ResultKinds.Lose, "{0} has Lost" },
  { ResultKinds.Draw, "Draw" },
};

It could so easily be:

resultlookup = {
  None: "No Result",
  Win: "{0} has Won",
  Lose: "{0} has Lost",
  Draw: "Draw",
};

 

At least you can say this:

var resultlookup = new Dictionary<ResultKinds, string> {
  { ResultKinds.None, "No Result" },
  { ResultKinds.Win, "{0} has Won" },
  { ResultKinds.Lose, "{0} has Lost" },
  { ResultKinds.Draw, "Draw" },
};

I think Kotlin will allow you to elide ResultKinds. in the equivalent context, but maybe I'm thinking of something else.
