The Forum for Discussion about The Third Manifesto and Related Matters


Life after J

Quote from dandl on April 5, 2021, 11:58 pm

So in all seriousness, what would make you switch languages? Is slow and steady tweaking of tiny features in Java at a glacial pace enough, always more or less tracking equivalents in C++? Surely there must be something frustrating about Java that some Big New Thing would fix? And if so what?

Yes, Java and C# were meant to be a C++ with softer, less dangerous edges. That was the point.

But one imperative, has-a-few-functional-addons C/C++-derived (whether syntax, semantics, or both) language is pretty much like every other. In terms of overall developer productivity and essential problem-solving approach, C, C++, C#, Java, JavaScript, TypeScript, Ruby, Python, PHP, Swift, Rust (and for that matter, FORTRAN, Pascal, PL/I, COBOL, etc.) are equivalent. When we need to do fine-grained, build-it-out-of-toothpicks software construction, any of them are fine; simply choose the one most suited (or least un-suited) to the job at hand.

No, that won't do.

That won't do what?

ASM => Fortran/Cobol/Algol => C => C++ is a continuous upward path of expressive power. C => Pascal => Java/C# is an upward path of safety. They're not equivalent.

They are equivalent for business applications, which I took to be the implicit context here. If you need to write a device driver, Java/C# (or Pascal) would not be the right choice -- but only because of lack/presence of certain low-level facilities -- but if you need to write a Website backend, they are equivalent. If you're talking about overall language semantics and general abstract expressivity, they are equivalent.

No, they really aren't,

No, they really are, but I am talking about their general expressivity and essential semantics rather than an ad hoc developer impression of one language vs another. Java/C#/Pascal/COBOL/Ada/C++/C/Python/TypeScript/Ruby/Rust/PHP/etc are of equivalent expressivity, sharing the same core semantics and differing (rather negligibly) in peripheral functionality. LISP and Haskell are different, though they (and the former especially) share a lot with the big lot of imperative languages. Prolog is different (but, again, sharing aspects too.)

There is both considerable overlap and distinct outliers (usually simultaneously) among these rather amorphous groups, but the imperative group are essentially the same, which is why switching from one language to another within the group doesn't significantly change the shape or nature of problem solving. Whether or not a given language lets you (say) reference explicit memory addresses might be very important to whether you can implement a device driver or not, but doesn't make a conceptual difference to how you write algorithms.

Thus, they are equivalent.

and in any case, business applications are a tiny subset of the interesting problems to be solved by programming.

Add their capabilities, safety, expressivity and power -- but don't take away what we can do now with Java/C#/Python/whatever, whether through clean integration or absorption -- and there may be a compelling reason to switch.

Bingo. What we all really want is a safer simpler C++.

We've got it, and it's called Rust.

I offered Rust as an example of meta-programming, and you rejected it. The 'hello world' program in Rust uses a macro.
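For reference, that hello world really does go through a macro; the `!` marks a macro invocation rather than a function call (a minimal sketch, with `greeting` added here just for illustration):

```rust
// `greeting` uses `format!`, the same compile-time-checked macro
// machinery as `println!`, but returning a String.
fn greeting() -> String {
    format!("Hello, world!")
}

fn main() {
    // `println!` is a macro: the format string is checked against its
    // arguments at compile time, which a plain function couldn't do.
    println!("{}", greeting());
}
```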

I reject macros, in Rust and outside of it.

What I really don't want is yet another safer, simpler C++. We've got enough of those already.

I want to step up in capability and absorb 3GL functionality, and definitely not stay at a maximum of 3GL level.

I think we do need to look closely at functional, declarative, logical, and goal-directed languages for inspiration and influence, and step up a level without losing 3GL capability. We've got enough languages that take a step over without stepping up, and adding yet another one is not going to make a compelling difference.

You can want but you're not going to get until 3GL is fixed.

Goal: a new language S that can completely replace C++ (and Java/C#), but is both safe and "as simple as possible but no simpler".

Rust.

Maybe. I haven't yet had the opportunity to do a project in Rust so I can't say, but I would make a small bet it scores no more than about 75% on my list.

Which implies:

  • No end-runs around the compiler. We tell the compiler what we're doing, it checks up on us, and the runtime has no way to cheat. [Caveat: highly restricted access to 'unsafe' code?]
  • Support for multiple memory models: stack, heap, ref-counted, GC; we don't care, it just works, safely. [Note: C# value types are not GC. People care.]
  • No null references: the compiler tracks all usage.
  • No casts; all conversions checked at compile time, e.g. Maybe, conversions with defaults, pattern matching.
  • No exception handling, except for recovery from catastrophic failure to a provably safe point.
  • No reflection/RTTI: whatever would be done at runtime using reflection must be done at compile time so it can be made safe.
  • No ifdefs/macros/annotations/code generation, but a meta-programming extension to replace existing use cases with code the compiler can check.
  • Provably thread safe (async, promises, whatever).
  • Interop: compiler-checked access to Java, C# and extern "C" libraries. [Caveat: do we trust the libraries?]
  • Compiles to executable/DLL/SO etc.
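For what it's worth, several of those items map directly onto stable Rust features; a minimal sketch (the function names here are illustrative, not from the list):

```rust
use std::convert::TryFrom;

// No null references: absence is an explicit Option, and pattern
// matching forces the caller to handle both cases.
fn first_char(s: &str) -> Option<char> {
    s.chars().next()
}

// No unchecked casts: narrowing conversions go through try_from and
// yield a checked result instead of silently truncating.
fn to_byte(n: i32) -> Option<u8> {
    u8::try_from(n).ok()
}

fn main() {
    match first_char("abc") {
        Some(c) => println!("first char: {c}"),
        None => println!("empty string"),
    }
    assert_eq!(to_byte(200), Some(200));
    assert_eq!(to_byte(300), None); // 300 doesn't fit in a u8
}
```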

It's Rust. There's perhaps work to be done to better integrate with Java/JVM (though there's https://github.com/jni-rs/jni-rs) and C#/.NET (e.g., see https://medium.com/@chyyran/calling-c-natively-from-rust-1f92c506289d), but it's there.

Not sure I agree with this: "No exception handling, except for recovery from catastrophic failure to a provably safe point," particularly as "except for recovery from catastrophic failure to a provably safe point" is exception handling. Perhaps you mean "no use of exception handling as an unwieldy alternative to return values."

That too, but no, it's the provable safety that is missing. Java exception handling is horribly intrusive, C# is not much better, C++ is just unsafe. The language should deal with all the small things (Rust has Option<T>) and just provide one means of disaster recovery. In Rust that's 'panic', which is a better name for what I mean.
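In Rust terms the split described here looks roughly like this (a sketch with illustrative names): `Option` for the small things, `panic!` only for disaster, with recovery at one known safe point:

```rust
// The "small things": division by zero is an ordinary None, handled
// in the type system rather than by throwing.
fn safe_div(a: i32, b: i32) -> Option<i32> {
    if b == 0 { None } else { Some(a / b) }
}

fn main() {
    assert_eq!(safe_div(10, 2), Some(5));
    assert_eq!(safe_div(1, 0), None);

    // Disaster recovery: catch_unwind rewinds a panic to one known
    // safe point, instead of try/catch woven through ordinary logic.
    let outcome = std::panic::catch_unwind(|| {
        panic!("catastrophic failure");
    });
    assert!(outcome.is_err());
}
```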

A friend of mine once said: I write it, I compile it. If it compiles, it works; then as long as I got the logic right, it's done. He was talking about Burroughs Algol, at a time when most operating systems were written in assembler. We're not there yet with Java/C# and not even close with C++. You can write an O/S in S, safely.

So tell me if you think any of the shopping list is beyond us. Would you use it? I know I would.

I don't know anyone actually using Rust. Plenty of folks are playing with it. I plan to do so at some point.

I haven't seen anything yet that would compel me to switch from C# and Java, because I find them perfectly adequate for most fine-grained, 3GL-style programming, with occasional forays into C++ when I need to do something explicitly with memory (mainly on Arduinos for personal projects) or consistently high performance and low latency (paid work.)

There is nothing on this planet that would get you to switch for the kind of work you do and get paid for. I haven't done any paid programming for over 30 years, and when I did it wasn't what you call 'business programming', so my perspective is rather different. My customers do 'business programming' and I hang out with people who do 'non-business programming', and I write tools for them. I've written code in over 50 languages, and compilers for several of them. I'm seriously into language features. I would switch, but right now safety is not enough on its own. It is a guiding principle worth following.

Yes, Rust was the best I found on my quest for M and then S. I'm increasingly certain that until we solve the safety problem for all kinds of programming, we can't move to the next level.

Write a language and scratch that itch. That's what all of us implementers have done. We're a very diverse group, so trying to sell us on a vapourware bullet-point list is only going to spur debate. That's fine such as it is (this is a discussion forum, after all) but if you're looking for some broad consensus or buy-in, it's almost certainly not going to happen. Criticism, though -- you'll get a lot of that.

So now I need a project to try it out, and perhaps implementing the Extended RA is one worth trying. One thing Andl taught me was: I really miss programming with relations!

Is it the relations and relational algebra in particular -- no duplicate tuples, the particular relational operators and such -- that are appealing?

Or is it programming by writing expressions to transform input to output via immutable arguments and return values, and a set of composable operators?

Whilst I appreciate the relational model (of course), for me it's the latter. The relational model is just one example of the benefit of expressing (certain) programs using stateless transformations, but there are others: C# LINQ; Java Streams; various vector/matrix libraries; even bash scripting using find, sed, awk, cut, grep as operators with pipe to pass output from one to the next.
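The same stateless-transformation style can be sketched with Rust iterator adapters, the direct analogue of LINQ and Java Streams (the data and function name here are made up for illustration):

```rust
// Total the quantities of non-empty order lines with a pipeline of
// side-effect-free operators: restrict, project, aggregate.
fn total_units(orders: &[(&str, i32)]) -> i32 {
    orders
        .iter()
        .filter(|&&(_, qty)| qty > 0) // restrict: drop empty lines
        .map(|&(_, qty)| qty)         // project: keep only the quantity
        .sum()                        // aggregate to a single value
}

fn main() {
    let orders = [("widget", 3), ("gadget", 0), ("sprocket", 7)];
    assert_eq!(total_units(&orders), 10);
    println!("total units: {}", total_units(&orders));
}
```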

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org


Thus, they are equivalent.

Then we differ. But you may have missed the point that I was talking about a progression: assembler of the 1960s, Fortran of the 1970s, Pascal of the 1980s. To some extent they have now converged, but those languages in their times were less expressive and presented different semantics from the languages we use today. The mental models are different even now building embedded code in assembler or Tiny C as against C++ or Ada. If you don't see that, then I won't try to persuade you.

Yes, Rust was the best I found on my quest for M and then S. I'm increasingly certain that until we solve the safety problem for all kinds of programming, we can't move to the next level.

Write a language and scratch that itch. That's what all of us implementers have done. We're a very diverse group, so trying to sell us on a vapourware bullet-point list is only going to spur debate. That's fine such as it is (this is a discussion forum, after all) but if you're looking for some broad consensus or buy-in, it's almost certainly not going to happen. Criticism, though -- you'll get a lot of that.

It's too hard, the hurdle is too high. Andl showed me that, if I didn't know already.

So now I need a project to try it out, and perhaps implementing the Extended RA is one worth trying. One thing Andl taught me was: I really miss programming with relations!

Is it the relations and relational algebra in particular -- no duplicate tuples, the particular relational operators and such -- that are appealing?

Or is it programming by writing expressions to transform input to output via immutable arguments and return values, and a set of composable operators?

Whilst I appreciate the relational model (of course), for me it's the latter. The relational model is just one example of the benefit of expressing (certain) programs using stateless transformations, but there are others: C# LINQ; Java Streams; various vector/matrix libraries; even bash scripting using find, sed, awk, cut, grep as operators with pipe to pass output from one to the next.

It's the ability to use a higher mental model. It's the same movement as from spaghetti code to structured programming, from explicit for loops to foreach, from loops to streams. It's being able to think of a relation as a single entity, not a list of rows or a stream of tuples. It's being able to think: if I joined this to that and projected it onto the other, then it would have this shape and it would fit into that need. I had that feeling with arrays in APL, and it's a rare feeling. It's not a pipe so much as a production line of whole assemblies built out of components.

 

Andl - A New Database Language - andl.org
Quote from dandl on April 6, 2021, 11:32 am


Yes, Rust was the best I found on my quest for M and then S. I'm increasingly certain that until we solve the safety problem for all kinds of programming, we can't move to the next level.

Write a language and scratch that itch. That's what all of us implementers have done. We're a very diverse group, so trying to sell us on a vapourware bullet-point list is only going to spur debate. That's fine such as it is (this is a discussion forum, after all) but if you're looking for some broad consensus or buy-in, it's almost certainly not going to happen. Criticism, though -- you'll get a lot of that.

It's too hard, the hurdle is too high. Andl showed me that, if I didn't know already.

Then I'm not clear where you were/are going with this thread. Are you not planning to build this post-Java/C#/C++ language?

Or is this a general "we can but dream" discussion about what it might look like?


It's the ability to use a higher mental model. It's the same movement as from spaghetti code to structured programming, from explicit for loops to foreach, from loops to streams. It's being able to think of a relation as a single entity, not a list of rows or a stream of tuples. It's being able to think: if I joined this to that and projected it onto the other, then it would have this shape and it would fit into that need. I had that feeling with arrays in APL, and it's a rare feeling. It's not a pipe so much as a production line of whole assemblies built out of components.

Exactly. I get that feeling from using Rel. I also get that feeling from C# LINQ; Java Streams; various vector/matrix libraries; bash scripting using find, sed, awk, cut, grep; and strings and string operators in various languages.

Only I don't think of a list of rows or a stream of tuples. It's a container, a collection, a matrix, an array, a text stream, a string, or a relation.

In short, it's values and operators that take values as arguments and return a value.



Then I'm not clear where you were/are going with this thread. Are you not planning to build this post-Java/C#/C++ language?

Or is this a general "we can but dream" discussion about what it might look like?

I thought the point of the discussion was 'life after D'. Given that there are some really good ideas in TTM but the language as specified has failed to gain traction, what kind of language should we strive for?

I started on the theme of M, because C# (and perhaps other GP languages) is nearly good enough to do TTM-alike, but needs compile-time extensions to implement a genuine extended RA. My theme was: shorter, safer, higher. As I worked through the arguments I came to realise that safer comes first: you can't do higher if you're worried about null pointers and exceptions. Higher automatically leads to shorter. You can't use text macros to do shorter because you lose safer. It has to be safer -> higher -> shorter. M is part of it, but not the driver.

On my current understanding Rust is the closest, but on current indications its focus is more an Ada/C++ replacement than a Java/C#/Python replacement, and it is perhaps not well suited as a D. Finding out is a project.


Exactly. I get that feeling from using Rel. I also get that feeling from C# LINQ; Java Streams; various vector/matrix libraries; bash scripting using find, sed, awk, cut, grep; and strings and string operators in various languages.

Only I don't think of a list of rows or a stream of tuples. It's a container, a collection, a matrix, an array, a text stream, a string, or a relation.

In short, it's values and operators that take values as arguments and return a value.

That's the aspiration, but again, you can't think higher while you're still concerned with safety, and as long as you think container, it's still about the rows. I've used all those tools, and almost all of them require code written at the row level: a regex, or a tuple expression or similar. You aspire to think higher, but the code you write is row level. Matrix libraries and APL, set operations, the pure RA are the exception in that they work on the whole thing ('closed over relation'), but most of those things work on rows.
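The contrast can be made concrete in Rust (illustrative data, std only): the filter pipeline still writes a per-row closure, while set intersection takes whole values as its operands:

```rust
use std::collections::BTreeSet;

fn main() {
    let employees = BTreeSet::from(["alice", "bob", "carol"]);
    let managers = BTreeSet::from(["carol", "dan"]);

    // Row level: the closure is written per element, even though the
    // pipeline reads like a whole-collection expression.
    let per_row: BTreeSet<&str> = employees
        .iter()
        .copied()
        .filter(|e| managers.contains(e))
        .collect();

    // Whole value: intersection names no rows at all; its operands are
    // the entire sets, closer to the pure-RA style.
    let whole: BTreeSet<&str> = employees.intersection(&managers).copied().collect();

    assert_eq!(per_row, whole);
    assert_eq!(whole, BTreeSet::from(["carol"]));
}
```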

The Sudoku solver I wrote in Andl showed me what might be possible, but it's very different from LINQ and pipelines. I'm currently working with a data model of 7 relations, but you wouldn't know that from the code.

 

Quote from dandl on April 7, 2021, 10:23 am


That's the aspiration, but again, you can't think higher while you're still concerned with safety, and as long as you think container, it's still about the rows.

No more or less than "as long as you think relation, it's still about the tuples."

Dealing with possible null values or exceptions tends to be what complicates most value/operator systems like Java Streams and C# LINQ, and for that matter, strings and string operators, matrix libraries in the usual popular languages, etc. There are usually mechanisms for making these somewhat manageable, though had there never been null it would perhaps have generally been easier.

I've used all those tools, and almost all of them require code written at the row level: a regex, or a tuple expression or similar. You aspire to think higher, but the code you write is row level. Matrix libraries and APL, set operations, the pure RA are the exception in that they work on the whole thing ('closed over relation'), but most of those things work on rows.

The relational model is also "work on rows" (or "work on tuples"), in a simplistic sense. String operators "work on characters", and so on. For all such systems, you can either view them as values and operators on values, or view them as complex structures and operators on components of complex structures. To effectively use them, we generally understand their semantics as being both. That's the case whether we're considering the relational model, C# LINQ, Java Streams, strings and string operators, linear algebra systems, and numerous other implementations of the essential values/operators idea.

The Sudoku solver I wrote in Andl showed me what might be possible, but it's very different from Linq and pipelines. I'm currently working with a data model of 7 relations, but you wouldn't know that from the code.

The next step is Prolog, of course. See https://www.swi-prolog.org/pldoc/man?section=clpfd-sudoku

Quote from Darren Duncan on April 5, 2021, 11:30 pm
Quote from dandl on April 5, 2021, 11:16 pm
Quote from Darren Duncan on April 4, 2021, 8:26 pm

I have the impression that C# is seeing a resurgence now. It is indeed the most comparable to Java, but I have found it had a better standard library and some other better language design aspects. I think that part of what held C# back historically was that it was officially Windows-only. But its new version, with the open-source .NET Core lineage, seems to be rapidly gaining in popularity among developers and in features, and it's also being touted that the current version has great runtime performance, with only close-to-the-metal languages doing better.

It's an opportunity to get rid of the cruft from 2.0 and earlier. Java should do the same, before it's too late.

While I know that C# only really got good with .NET 4.0, or at least I consider that the minimum usable version for what I want, can you give a quick summary of what you mean by "cruft from 2.0 and earlier"?  I actually only started using .NET with 4.5 so that is my life experience, and anything I know about earlier versions is just from reading documentation.

I don't have a list to hand. The bad things I recall from the early days are mostly left-over parts of the framework library, pre-generics. Lots of the early collections are still there and don't work nicely: things like ArrayList come to mind, but also IEnumerable vs IEnumerable<>. The new 'core' library is far better in that respect. Please avoid the old ones.

The bits of the language they got wrong are all still there: enums, arrays: [] [,] and [][], switch, no typedefs, C-style loops. But it's too late to fix those.

 

 

Quote from Dave Voorhis on April 7, 2021, 12:03 pm
Quote from dandl on April 7, 2021, 10:23 am

Yes, Rust was the best I found on my quest for M and then S. I'm increasingly certain that until we solve the safety problem for all kinds of programming, we can't move to the next level.

Write a language and scratch that itch. That's what all of us implementers have done. We're a very diverse group, so trying to sell us on a vapourware bullet-point list is only going to spur debate. That's fine such as it is (this is a discussion forum, after all) but if you're looking for some broad consensus or buy-in, it's almost certainly not going to happen. Criticism, though -- you'll get a lot of that.

It's too hard, the hurdle is too high. Andl showed me that, if I didn't know already.

Then I'm not clear where you were/are going with this thread.  Are you not planning to build this post-Java/C#/C++ language?

Or is this a general "we can but dream" discussion about Rust and what it might look like?

I thought the point of the discussion was 'life after D'. Given that there are some really good ideas in TTM but the language as specified has failed to gain traction, what kind of language should we strive for?

I started on the theme of M, because C# (and perhaps other GP languages) is nearly good enough to do TTM-alike, but needs compile-time extensions to implement a genuine extended RA. My theme was: shorter, safer, higher. As I worked through the arguments I came to realise that safer comes first: you can't do higher if you're worried about null pointers and exceptions. Higher automatically leads to shorter. You can't use text macros to do shorter, because you lose safer. It has to be safer -> higher -> shorter. M is part of it, but not the driver.

On my current understanding Rust is the closest, but on current indications their focus is more on an Ada/C++ replacement than a Java/C#/Python replacement, and it is perhaps not well suited as a D. Finding out is a project.

So now I need a project to try it out, and perhaps implementing the Extended RA is one worth trying. One thing Andl taught me was: I really miss programming with relations!

Is it the relations and relational algebra in particular -- no duplicate tuples, the particular relational operators and such -- that are appealing?

Or is it programming by writing expressions to transform input to output via immutable arguments and return values, and a set of composable operators?

Whilst I appreciate the relational model (of course), for me it's the latter. The relational model is just one example of the benefit of expressing (certain) programs using stateless transformations, but there are others: C# LINQ; Java Streams; various vector/matrix libraries; even bash scripting using find, sed, awk, cut, grep as operators with pipe to pass output from one to the next.
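To make that concrete, the Java Streams flavour of the idea (immutable inputs, a chain of composable operators, a value out) might look like this sketch. The record and field names are invented for illustration:

```java
import java.util.List;

public class Pipeline {
    // A record standing in for a tuple/row; names are illustrative only.
    record Employee(String name, String dept, int salary) {}

    public static void main(String[] args) {
        List<Employee> emps = List.of(
            new Employee("Amy", "Sales", 50000),
            new Employee("Bob", "Sales", 40000),
            new Employee("Cal", "IT", 60000));

        // Restrict, project, and aggregate as stateless transformations:
        // the Streams analogue of a WHERE / project / SUMMARIZE chain.
        int salesTotal = emps.stream()
            .filter(e -> e.dept().equals("Sales"))  // restrict
            .mapToInt(Employee::salary)             // project one attribute
            .sum();                                 // aggregate

        System.out.println(salesTotal);
    }
}
```

Nothing in the chain mutates its input; each operator takes a value and yields a value, which is what makes the steps freely composable.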

It's the ability to use a higher mental model. It's the same movement as from spaghetti code to structured programming, from explicit for loops to foreach, from loops to streams. It's being able to think of a relation as a single entity, not a list of rows or a stream of tuples. It's being able to think: if I joined this to that and projected it onto the other, then it would have this shape and it would fit that need. I had that feeling with arrays in APL, and it's a rare feeling. It's not a pipe so much as a production line of whole assemblies built out of components.

Exactly. I get that feeling from using Rel. I also get that feeling from C# LINQ; Java Streams; various vector/matrix libraries; bash scripting using find, sed, awk, cut, grep; and strings and string operators in various languages.

Only I don't think of a list of rows or a stream of tuples. It's a container, a collection, a matrix, an array, a text stream, a string, or a relation.

In short, it's values and operators that take values as arguments and return a value.

That's the aspiration, but again, you can't think higher while you're still concerned with safety, and as long as you think container, it's still about the rows.

No more or less than "as long as you think relation, it's still about the tuples."

Well, no. The RA is fully defined over relations, with nary a tuple in sight. One of the serious flaws in TD is that it embeds tuple-notation into its version of RA, which breaks the model. Algebra A showed us a way to express selection and new values as relational operators, but again broke the model by expressing relcons as tuples. The extended RA I proposed has no tuples, anywhere. It has headings (for projection and rename) and it has functions (for selection, new values and aggregation) but absolutely no tuples. Yes, you need some kind of syntax for literals but that's a language choice and that doesn't have to be tuples either.

Dealing with possible null values or exceptions tends to be what complicates most value/operator systems like Java Streams and C# LINQ, and for that matter, strings and string operators, matrix libraries in the usual popular languages, etc. There are usually mechanisms for making these somewhat manageable, though had there never been null it would perhaps have generally been easier.

I agree: that's the safer step I've been talking about, but higher comes after that.

I've used all those tools, and almost all of them require code written at the row level: a regex, or a tuple expression or similar. You aspire to think higher, but the code you write is row level. Matrix libraries and APL, set operations, the pure RA are the exception in that they work on the whole thing ('closed over relation'), but most of those things work on rows.

The relational model is also "work on rows" (or "work on tuples"), in a simplistic sense. String operators "work on characters", and so on. For all such systems, you can either view them as values and operators on values, or view them as complex structures and operators on components of complex structures. To effectively use them, we generally understand their semantics as being both. That's the case whether we're considering the relational model, C# LINQ, Java Streams, strings and string operators, linear algebra systems, and numerous other implementations of the essential values/operators idea.

I'm talking mental model, not implementation. Your "work on" is implementation, my "think about" is the abstraction, the mental model. If we want to operate on strings we should do so with string operators (leaving the character nasties to the implementor). It's not safer to think about both, it's safer to think at the higher level and have the implementation guarantee that it works right at the lower level.

Case in point: text processing. The mental model is (should be): a text object and operators on it. We are aware that it consists of strings and delimiters, which in turn are characters and bytes, but we don't want to think about that, and it seems we can't avoid it: grep and the Unix shell tools force us to think at the level of strings (lines). We write some code, it doesn't work, and then we find it used the wrong CRLF convention, so we're back into characters. To have operators on text objects we first need safer, so strings and characters can be ignored. Then the mental model is no longer 'read lines, do something to each, write lines' but 'read text, apply operators, write text'. That's higher.
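A small sketch of that 'read text, apply operators, write text' model: the implementation handles the CRLF convention once, at the boundary, so every later operator works on the text as a whole value. The helper names here are mine, not a real library:

```java
public class TextOps {
    // Normalise once at the boundary so no later operator ever needs
    // to know which line-ending convention the input used.
    static String canonical(String text) {
        return text.replace("\r\n", "\n").replace("\r", "\n");
    }

    // An operator on the whole text value, not on individual lines;
    // the line-level mechanics are hidden inside the implementation.
    static String indent(String text) {
        return canonical(text).lines()
            .map(line -> "  " + line)
            .reduce((a, b) -> a + "\n" + b)
            .orElse("");
    }

    public static void main(String[] args) {
        String dosText = "first\r\nsecond\r\n";   // CRLF input
        System.out.println(indent(dosText));      // caller never sees \r
    }
}
```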

The Sudoku solver I wrote in Andl showed me what might be possible, but it's very different from Linq and pipelines. I'm currently working with a data model of 7 relations, but you wouldn't know that from the code.

The next step is Prolog, of course. See https://www.swi-prolog.org/pldoc/man?section=clpfd-sudoku

I did one of those when I was learning Turbo Prolog, but it wasn't fun. The mental model is too different and it seemed to be all about where to put the cut operator. I don't think I ever got to the stage where I could have written that, but the Andl one does exactly the same thing in about 50 lines of code.

Andl - A New Database Language - andl.org
Quote from dandl on April 10, 2021, 1:54 am
Quote from Dave Voorhis on April 7, 2021, 12:03 pm
Quote from dandl on April 7, 2021, 10:23 am


Then I'm not clear where you were/are going with this thread.  Are you not planning to build this post-Java/C#/C++ language?

I'm also not clear. The thread seems another of those 'yes it is' - 'no it really isn't' pointless debates. (dandl would perhaps be aware of YouTube's 'Ozzy Man', and his colourful way of describing such debates.) And I wouldn't bother intervening, except dandl has interjected some arrant nonsense ...

Or is this a general "we can but dream" discussion about Rust what it might look like?

I thought the point of the discussion was 'life after D'. Given that there are some really good ideas in TTM but the language as specified has failed to gain traction, what kind of language should we strive for.

Although I don't like Tutorial D, and the counter-reaction against it in the industry has obscured that TTM is really about a 'family of languages', it's the ideas that have failed to gain traction. In particular, the idea that we should do without nulls. See also the constant complaining on StackOverflow that NATURAL JOIN is a wholly bad idea -- IOW, the idea of relations being sets of sets of attribute-value pairs.


So now I need a project to try it out, and perhaps implementing the Extended RA is one worth trying. One thing Andl taught me was: I really miss programming with relations!

Is it the relations and relational algebra in particular -- no duplicate tuples, the particular relational operators and such -- that are appealing?

Or is it programming by writing expressions to transform input to output via immutable arguments and return values, and a set of composable operators?

Whilst I appreciate the relational model (of course), for me it's the latter. The relational model is just one example of the benefit of expressing (certain) programs using stateless transformations, but there are others: C# LINQ; Java Streams; various vector/matrix libraries; even bash scripting using find, sed, awk, cut, grep as operators with pipe to pass output from one to the next.

It's the ability to use a higher mental model. It's the same movement as from spaghetti code to structured programming, from explicit for loops to foreach, from loops to streams. It's being able to think of a relation as a single entity, not a list of rows or a stream of tuples. It's being able to think: if I joined this to that and projected it onto the other, then it would have this shape and it would fit that need. I had that feeling with arrays in APL, and it's a rare feeling. It's not a pipe so much as a production line of whole assemblies built out of components.

Exactly. I get that feeling from using Rel. I also get that feeling from C# LINQ; Java Streams; various vector/matrix libraries; bash scripting using find, sed, awk, cut, grep; and strings and string operators in various languages.

Only I don't think of a list of rows or a stream of tuples. It's a container, a collection, a matrix, an array, a text stream, a string, or a relation.

In short, it's values and operators that take values as arguments and return a value.

That's the aspiration, but again, you can't think higher while you're still concerned with safety, and as long as you think container, it's still about the rows.

No more or less than "as long as you think relation, it's still about the tuples."

Well, no. The RA is fully defined over relations, with nary a tuple in sight. One of the serious flaws in TD is that it embeds tuple-notation into its version of RA, which breaks the model. Algebra A showed us a way to express selection and new values as relational operators, but again broke the model by expressing relcons as tuples.

Arrant nonsense.

Relations are sets. Some of the relational operators are set operators. But relations are not sets of just anything. I fail to see how you could adequately express the model without characterising the elements of those sets, and for example explaining (in whatever concrete syntax you clothe it):

TUP{ X 1, Y 'foo' }     // }
TUP{ Y 'foo', X 1 }     // } are duplicates, so not allowed in the same relation value

TUP{ X 1 }              // }
TUP{ X 1, Y 'foo' }     // } are not duplicates, nevertheless are not allowed in the same relation value

TUP{ X 1, X 2 }         // not allowed in any relation value, even though elements of the TUP are not duplicates

 

The extended RA I proposed has no tuples, anywhere. It has headings (for projection and rename) and it has functions (for selection, new values and aggregation) but absolutely no tuples. Yes, you need some kind of syntax for literals but that's a language choice and that doesn't have to be tuples either.

In what sense is whatever you proposed any sort of RA? How do we for example attach characteristic predicates to relations so that we (or rather users of a database) can tell whether the database content matches the 'mini-world' of the enterprise?

Remember that the operations of the RA are a means to enquire about the salient facts, and their implications, that the database content is representing.

 

That's the aspiration, but again, you can't think higher while you're still concerned with safety, and as long as you think container, it's still about the rows.

No more or less than "as long as you think relation, it's still about the tuples."

Well, no. The RA is fully defined over relations, with nary a tuple in sight. One of the serious flaws in TD is that it embeds tuple-notation into its version of RA, which breaks the model. Algebra A showed us a way to express selection and new values as relational operators, but again broke the model by expressing relcons as tuples.

Arrant nonsense.

Relations are sets. Some of the relational operators are set operators. But relations are not sets of just anything. I fail to see how you could adequately express the model without charactering the elements of those sets, and for example explaining (in whatever concrete syntax you clothe it):

Again, you're right into implementation detail. A relation is (a) a safer data structure that conforms to certain rules (see implementation details) and (b) an argument to a higher relational operator. The point about safer (details guaranteed by the implementation) is to get to higher (don't think about the implementation).

TUP{ X 1, Y 'foo' }     // }
TUP{ Y 'foo', X 1 }     // } are duplicates, so not allowed in the same relation value

TUP{ X 1 }              // }
TUP{ X 1, Y 'foo' }     // } are not duplicates, nevertheless are not allowed in the same relation value

TUP{ X 1, X 2 }         // not allowed in any relation value, even though elements of the TUP are not duplicates

The above is implementation detail, ignored in the context of the RA.

The extended RA I proposed has no tuples, anywhere. It has headings (for projection and rename) and it has functions (for selection, new values and aggregation) but absolutely no tuples. Yes, you need some kind of syntax for literals but that's a language choice and that doesn't have to be tuples either.

In what sense is whatever you proposed any sort of RA? How do we for example attach characteristic predicates to relations so that we (or rather users of a database) can tell whether the database content matches the 'mini-world' of the enterprise?

Remember that the operations of the RA are a means to enquire about the salient facts, and their implications, that the database content is representing.

A relation exposes a heading, and the business predicate relates the business facts to the content of the relation by means of the heading. The results of an enquiry will always be another relation, with a heading. The implementation will provide a means to convert between relations and other representations, but that's not part of the RA.

I'm absolutely serious about this. The only way you can really think about any of this stuff at any level is by not thinking about the levels further down. To really think about relations you have to not think about tuples or values or strings or characters or encodings or bytes (or memory cells or chips or silicon or electrons).

Andl - A New Database Language - andl.org
Quote from dandl on April 10, 2021, 6:45 am

 

I'm absolutely serious about this. The only way you can really think about any of this stuff ...

I see no evidence you're thinking. This is a wild amorphous "vapourware bullet-point list". Any debate is going to be like trying to nail jello to a wall, because you'll just wriggle away claiming any critique is not being 'high level' enough.

D&D have thought about "this stuff". Of course they're not the only ones who have. Of course you're entitled to disagree with their specifics. But then you must come up with an alternative to the same level of detail as their specifics. Until then your criticisms are hot air.

If you don't like relations being specified in terms of tuples, nor RA operations being expressed in terms of their effects on tuples (per Appendix A and/or HH&T 1975), provide an alternative specification. Or stop using 'relations' or 'RA' -- because again you're using a private language containing terms that seem familiar, or carry a familiar connotation, but denote something different and so far make no sense.
