The Forum for Discussion about The Third Manifesto and Related Matters

Meaning of a relation

Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

Likewise Entity Framework and C#.

There is a certain breed of anti-SQL sentiment that propels use of Hibernate and EF, but the alternative isn't LINQ or Streams or some new language. The core EF/Hibernate fan doesn't like any of that, either -- though they'll grudgingly use LINQ or Streams if forced. The biggest fans of Hibernate and EF typically don't want anything remotely query-like or declarative in their code. What they want is record-by-agonising-record processing in their favourite programming language, just like they learned in (or before) first-year university.

Nested for loops (or foreach loops) for the win. :-(

I suspect that's where some of the knee-jerk NoSQL fanboyism comes from -- anything but SQL, or anything too SQL-like, which unsurprisingly LINQ gets lumped into and perhaps surprisingly (or perhaps not) Java Streams winds up there too.

Julia is getting a lot of love of late, and I know of one commercial project that is using it at the core of a relational-model-based product, but it's Datalog rather than Codd (or post-Codd, i.e., TTM et al.) RM. It's not yet released as far as I know. Or maybe it is -- I haven't checked lately. It's these guys: https://www.relational.ai/

That's a great way to go for actual reasoning from data. But it's a big step to writing programs that way and I'm not sure it covers the "I just want to generate some interesting business reports" use case. However, the time for that kind of thing may have passed when everybody wants their own datalake and machine learning?

As for compile-time checking, I agree that runtime checks are not acceptable in a compiled language, but should we really disqualify languages that aren't compiled? Users of these have developed methods for dealing with the lack of compiler help, usually by keeping things really easy to just rinse and repeat, but also by testing. Uncle Bob used to be a huge fan of compile-time static type-checking but is now saying that we have already mucked about with all possible programming languages and should just pick "the last programming language", and that it should be Clojure. Not that he's right, IMO, but what if he were? And what if the trend is towards non-compiled languages overall? Python, TypeScript + JavaScript, Dart, Clojure, Ruby, Julia. (Oh, what about TypeScript's type system, BTW? A lot of simple web apps are being built on Node, and perhaps in time some will be on Deno; TypeScript is certainly much more popular than TIOBE indicates, since it's mostly just JavaScript and many web searches would be for basic APIs.) Actually, both TypeScript and the current direction of Dart indicate that users like strong typing, but not necessarily only in compiled languages.

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

I see.

I think Dart does that; I'm not sure how well the type system would work, but I'm guessing not well. I would bet Dart is here to stay, despite resistance from the JavaScript crowd, and growing, because it is really nice to work with and has a killer app: Flutter.

Julia is only just-in-time compiled, so no, not currently, but there are people asking about static analysis and it's not entirely impossible. I'm certain Julia will at least become firmly entrenched where it was targeted, because it suits the use case: lightning fast, sometimes faster than C, really good at large vector operations, and able to substitute for Python as a quick-hack language.

C is the lingua franca for calling DLLs but a crap language for scientific computing, because it has no arrays, only pointers and memory blocks. Fortran optimises well and has been compiled into vectorised code for 50 years that I know of, but it's hard work for other stuff. Python is a great 'glue' language for calling libraries to do the actual computing, but horribly slow if you actually want to compute something.

So I can definitely believe the attraction of a language that does those three things. But without a solid extensible static type system, it won't make a D.

Andl - A New Database Language - andl.org

As for compile-time checking, I agree that runtime checks are not acceptable in a compiled language, but should we really disqualify languages that aren't compiled? Users of these have developed methods for dealing with the lack of compiler help, usually by keeping things really easy to just rinse and repeat, but also by testing. Uncle Bob used to be a huge fan of compile-time static type-checking but is now saying that we have already mucked about with all possible programming languages and should just pick "the last programming language", and that it should be Clojure. Not that he's right, IMO, but what if he were? And what if the trend is towards non-compiled languages overall? Python, TypeScript + JavaScript, Dart, Clojure, Ruby, Julia. (Oh, what about TypeScript's type system, BTW? A lot of simple web apps are being built on Node, and perhaps in time some will be on Deno; TypeScript is certainly much more popular than TIOBE indicates, since it's mostly just JavaScript and many web searches would be for basic APIs.) Actually, both TypeScript and the current direction of Dart indicate that users like strong typing, but not necessarily only in compiled languages.

The point of D and TTM is to design a language you can depend on for enterprise data processing. This is not the odd spreadsheet hack or quick throwaway query; this is the foundation of the enterprise's data. The motto is: if it compiles, it will run.

C# and Java largely achieve that: type errors are not possible except where you explicitly allow them with casts or similar. Bounds errors, conversion errors, etc. can happen but are very manageable. An enterprise data language should be at least as safe as those two. TypeScript and Dart do indeed indicate that many users prefer it that way. I know I do.
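A minimal Java sketch of that cast escape hatch (the class and method names here are invented for illustration):

```java
// In Java, type errors are compile-time errors unless you explicitly
// opt out with a cast, which defers the check to run time.
public class CastDemo {
    static String tryCast(Object o) {
        // String s = o;             // rejected at compile time: no implicit downcast
        try {
            Integer i = (Integer) o; // compiles, because we explicitly allowed it
            return "cast ok: " + i;
        } catch (ClassCastException e) {
            return "cast failed at run time";
        }
    }

    public static void main(String[] args) {
        System.out.println(tryCast("hello")); // a String is not an Integer
        System.out.println(tryCast(42));      // autoboxed Integer: cast succeeds
    }
}
```

The point being that the unsafe step is syntactically visible: without the cast, the mismatch cannot compile.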

Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

Yes, that's what I look for.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

So what happens with these anything-but-SQL programming systems if they hit a compile error in the middle of calculating the tax on an invoice? (And I'm presuming they're cycling record-by-agonising-record through the item lines.) Do they just crash leaving the invoice half-processed? Do they leave it looking like it's processed but with incorrect tax? Are any sort of ACID properties enforced? Is there commitment control?

You might say there could be many kinds of crash -- divide by zero, numeric overflow, array subscript out of bounds -- so why pick on typing errors? But for me that argues for stronger static typing, to catch at least the possibility of those before executing anything/before touching the database.

Quote from AntC on January 18, 2021, 11:12 pm
Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

Yes, that's what I look for.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

So what happens with these anything-but-SQL programming systems if they hit a compile error in the middle of calculating the tax on an invoice? (And I'm presuming they're cycling record-by-agonising-record through the item lines.) Do they just crash leaving the invoice half-processed? Do they leave it looking like it's processed but with incorrect tax? Are any sort of ACID properties enforced? Is there commitment control?

They usually crash leaving half the invoices completely processed and half the invoices not processed at all. They're running on top of ORM layers like Hibernate or Entity Framework, so all the transaction-and-SQL-query unpleasantness is hidden, but it's still there under the covers, issuing transactions at somewhat appropriate -- but not necessarily optimal -- places. Leaving a half-processed invoice requires a certain dedication to doing things badly, but if done properly per the ORM paradigm, even if an invoice is half-processed it will be left with an "unprocessed" status set (or whatever) and will be picked up and sorted out (e.g., regenerated, or the rest of the line items calculated, or whatever) in the next processing run.

To my eyes, the code it takes to do this saves nothing and adds much unreasonable complexity and labyrinthine spaghetti, but the proponents of this approach proudly proclaim that it's "code first" and "layered" (and <insert other buzzwords here>) and exposes no SQL, and is thus excellent.

They seem happy with it.

They shudder at my advocacy of "SQL amplifiers" as an alternative to ORMs, which not only expose SQL but facilitate using it -- i.e., amplify it -- by providing handy facilities to parametrise queries, turn query execution results into proper Java Streams (or C# LINQ whatsits), auto-generate update and insert methods, etc. I think that makes for much cleaner and simpler code, but critics disagree.

They say, "I can, ew, see the SQL!"

Well, yeah. That's the point. But I (kind of) understand the objection, too.

You might say there could be many kinds of crash -- divide by zero, numeric overflow, array subscript out of bounds -- so why pick on typing errors? But for me that argues for stronger static typing, to catch at least the possibility of those before executing anything/before touching the database.

Yes, stronger static typing. I didn't mean to pick on typing errors and exclude other potentially-static errors. Ideally, everything should be a static error or carry a compiler-enforced obligation to write an error handler.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on January 19, 2021, 12:01 am
Quote from AntC on January 18, 2021, 11:12 pm
Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

Yes, that's what I look for.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

So what happens with these anything-but-SQL programming systems if they hit a compile error in the middle of calculating the tax on an invoice? (And I'm presuming they're cycling record-by-agonising-record through the item lines.) Do they just crash leaving the invoice half-processed? Do they leave it looking like it's processed but with incorrect tax? Are any sort of ACID properties enforced? Is there commitment control?

They usually crash leaving half the invoices completely processed and half the invoices not processed at all. They're running on top of ORM layers like Hibernate or Entity Framework, so all the transaction-and-SQL-query unpleasantness is hidden, but it's still there under the covers, issuing transactions at somewhat appropriate -- but not necessarily optimal -- places. Leaving a half-processed invoice requires a certain dedication to doing things badly, but if done properly per the ORM paradigm, even if an invoice is half-processed it will be left with an "unprocessed" status set (or whatever) and will be picked up and sorted out (e.g., regenerated, or the rest of the line items calculated, or whatever) in the next processing run.

To my eyes, the code it takes to do this saves nothing and adds much unreasonable complexity and labyrinthine spaghetti, but the proponents of this approach proudly proclaim that it's "code first" and "layered" (and <insert other buzzwords here>) and exposes no SQL, and is thus excellent.

They seem happy with it.

They shudder at my advocacy of "SQL amplifiers" as an alternative to ORMs, which not only expose SQL but facilitate using it -- i.e., amplify it -- by providing handy facilities to parametrise queries, turn query execution results into proper Java Streams (or C# LINQ whatsits), auto-generate update and insert methods, etc. I think that makes for much cleaner and simpler code, but critics disagree.

ORMs were an easy sell on the (false) premise that you wouldn't need to learn SQL, nor would you need to do the mindless data-shuffling from result sets to domain objects; you just code up a domain object that gets magically connected to the database. It works beautifully for the "Hello World" examples, and then you're hooked. Slowly you realize you have to learn a whole new language anyway, with all the configuration needed; then you find out it's dangerous to spread those database-connected thingies around, so you have to create duplicate data-transfer objects and mindlessly copy the data items. Then you need to learn how to deal with performance problems, and it just goes on; and usually you still have to be aware of the specific SQL for your database product.

Obviously people who have spent years getting deeper in will usually have a large vested interest in keeping the ORM, but the rest of us are going "what is this huge pile of ****?" and certainly look for simpler solutions. Do you have any interesting "SQL amplifiers" on hand?

They say, "I can, ew, see the SQL!"

Well, yeah. That's the point. But I (kind of) understand the objection, too.

You might say there could be many kinds of crash -- divide by zero, numeric overflow, array subscript out of bounds -- so why pick on typing errors? But for me that argues for stronger static typing, to catch at least the possibility of those before executing anything/before touching the database.

Yes, stronger static typing. I didn't mean to pick on typing errors and exclude other potentially-static errors. Ideally, everything should be a static error or carry a compiler-enforced obligation to write an error handler.

Dart brought to mind that Dart 1.0 had an interesting vision, driven by Gilad Bracha, that type systems should be optional, but that you should be able to use as many of them in your code as you like; Dart 2.0 is unfortunately a more mundane, mandatory One True Type System (TM).

So what if it is not really, or not necessarily, a built-in type system we need, but a static analysis tool? We run plenty of them in our development pipeline, from IntelliJ's analyzers to Google's Error Prone to SonarQube.

So an interface in Java might have to jump through a few hoops to satisfy the built-in type system, e.g. ABC result = join(ABC.class, a, b, c); and you would have to declare classes with the relevant attributes for each step. But declaring records is about to become a really easy one-liner in Java, and they could be used in the interface as, e.g., rel.project(MyProjection.class);

Then you run a static analyzer that verifies that joining a, b and c really gives an ABC, and, even better, such a tool could be configured to check the table definitions in the database as well.
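A hedged sketch of what that might look like with Java records (all names here -- JoinSketch, A, B, AB, join -- are invented for illustration; the hoop-jumping is that AB must be declared by hand, and only an external static analyzer could verify that AB really is the heading of A joined with B):

```java
import java.util.List;

public class JoinSketch {
    // One-liner record declarations (Java 16+) standing in for relation headings.
    record A(int id, String name) {}
    record B(int id, int qty) {}
    record AB(int id, String name, int qty) {}

    // Naive nested-loop natural join over the common attribute `id`.
    static List<AB> join(List<A> as, List<B> bs) {
        return as.stream()
                 .flatMap(a -> bs.stream()
                                 .filter(b -> b.id() == a.id())
                                 .map(b -> new AB(a.id(), a.name(), b.qty())))
                 .toList();
    }

    public static void main(String[] args) {
        var result = join(List.of(new A(1, "x"), new A(2, "y")),
                          List.of(new B(1, 5)));
        System.out.println(result); // [AB[id=1, name=x, qty=5]]
    }
}
```

Java's type system happily accepts a wrong AB declaration (say, one missing qty); checking the heading arithmetic is exactly the job such an analyzer would take on.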

Quote from tobega on January 19, 2021, 5:12 am
Quote from Dave Voorhis on January 19, 2021, 12:01 am
Quote from AntC on January 18, 2021, 11:12 pm
Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

Yes, that's what I look for.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

So what happens with these anything-but-SQL programming systems if they hit a compile error in the middle of calculating the tax on an invoice? (And I'm presuming they're cycling record-by-agonising-record through the item lines.) Do they just crash leaving the invoice half-processed? Do they leave it looking like it's processed but with incorrect tax? Are any sort of ACID properties enforced? Is there commitment control?

They usually crash leaving half the invoices completely processed and half the invoices not processed at all. They're running on top of ORM layers like Hibernate or Entity Framework, so all the transaction-and-SQL-query unpleasantness is hidden, but it's still there under the covers, issuing transactions at somewhat appropriate -- but not necessarily optimal -- places. Leaving a half-processed invoice requires a certain dedication to doing things badly, but if done properly per the ORM paradigm, even if an invoice is half-processed it will be left with an "unprocessed" status set (or whatever) and will be picked up and sorted out (e.g., regenerated, or the rest of the line items calculated, or whatever) in the next processing run.

To my eyes, the code it takes to do this saves nothing and adds much unreasonable complexity and labyrinthine spaghetti, but the proponents of this approach proudly proclaim that it's "code first" and "layered" (and <insert other buzzwords here>) and exposes no SQL, and is thus excellent.

They seem happy with it.

They shudder at my advocacy of "SQL amplifiers" as an alternative to ORMs, which not only expose SQL but facilitate using it -- i.e., amplify it -- by providing handy facilities to parametrise queries, turn query execution results into proper Java Streams (or C# LINQ whatsits), auto-generate update and insert methods, etc. I think that makes for much cleaner and simpler code, but critics disagree.

ORMs were an easy sell on the (false) premise that you wouldn't need to learn SQL, nor would you need to do the mindless data-shuffling from result sets to domain objects; you just code up a domain object that gets magically connected to the database. It works beautifully for the "Hello World" examples, and then you're hooked. Slowly you realize you have to learn a whole new language anyway, with all the configuration needed; then you find out it's dangerous to spread those database-connected thingies around, so you have to create duplicate data-transfer objects and mindlessly copy the data items. Then you need to learn how to deal with performance problems, and it just goes on; and usually you still have to be aware of the specific SQL for your database product.

Obviously people who have spent years getting deeper in will usually have a large vested interest in keeping the ORM, but the rest of us are going "what is this huge pile of ****?" and certainly look for simpler solutions. Do you have any interesting "SQL amplifiers" on hand?

Not yet. I'm going to be releasing one for Java shortly (for an undefined "shortly" that has been dragging on a bit because I've been soooo busy), though I got the idea originally from a gig where I worked on an existing codebase that had almost nothing to do with databases, but needed to interact with a SQL database for certain operations.

It was clearly built by people -- long before I came along, and long moved on to other places -- who (a) had almost nothing to do with databases, but (b) weren't afraid of SQL.

It was little more than a thin skin over JDBC, but just the right amount of skin to make it easier to use SQL queries in Java.

That's it. That's all it did.

And it was great. No mucking about with ungainly ORMs, and no painful SQL-injection-prone JDBC-fest.

It wasn't perfect by any stretch, with much room for improvement, and it was old -- it was written for something well before Java 8.

I took the essential idea -- don't hide SQL; instead make it bigger and better in the host language -- and wrote a new one from scratch.

It's still a work in progress, but it lets you do simple things like this:

database.query("SELECT * FROM $$tester WHERE x > ? AND x < ?", TestSelect.class, 3, 7)
      .forEach(tuple -> System.out.println(tuple.x + ", " + tuple.y));

At the point where the above does .forEach(...), you can do all the usual Java Streams goodness on the query result.

Of course, like all things, it is not perfect. It requires a code generation (dev) and/or database updating (dev+prod) phase, which can be integrated into unit testing for development or invoked programmatically for deployment.

They say, "I can, ew, see the SQL!"

Well, yeah. That's the point. But I (kind of) understand the objection, too.

You might say there could be many kinds of crash -- divide by zero, numeric overflow, array subscript out of bounds -- so why pick on typing errors? But for me that argues for stronger static typing, to catch at least the possibility of those before executing anything/before touching the database.

Yes, stronger static typing. I didn't mean to pick on typing errors and exclude other potentially-static errors. Ideally, everything should be a static error or carry a compiler-enforced obligation to write an error handler.

Dart brought to mind that Dart 1.0 had an interesting vision, driven by Gilad Bracha, that type systems should be optional, but that you should be able to use as many of them in your code as you like; Dart 2.0 is unfortunately a more mundane, mandatory One True Type System (TM).

So what if it is not really, or not necessarily, a built-in type system we need, but a static analysis tool? We run plenty of them in our development pipeline, from IntelliJ's analyzers to Google's Error Prone to SonarQube.

So an interface in Java might have to jump through a few hoops to satisfy the built-in type system, e.g. ABC result = join(ABC.class, a, b, c); and you would have to declare classes with the relevant attributes for each step. But declaring records is about to become a really easy one-liner in Java, and they could be used in the interface as, e.g., rel.project(MyProjection.class);

Then you run a static analyzer that verifies that joining a, b and c really gives an ABC, and, even better, such a tool could be configured to check the table definitions in the database as well.

Yes, separate static analysis is a step in the right direction, but obviously less than ideal.

It was little more than a thin skin over JDBC, but just the right amount of skin to make it easier to use SQL queries in Java.

There is an issue here. One of the tasks of a serious-scale software project is to raise the level of the language by adding libraries, macros, and reusable code. This greatly reduces the effort of incremental work on the application for those familiar with it, but greatly increases the work for new arrivals or casual users/contributors. They want to be able to write code using principles learned elsewhere, with a relatively shallow layer of application-specific knowledge acquisition.

IMO this kind of 'thin layer over SQL' is well suited to ad hoc use and hopeless at scale. It's good if you just want to drag out some data with a bit of SQL. It's bad if you want long-term durable code that can support rapid prototyping by core developers and deep test suites, and survive heavy refactoring and incremental requirements.

I would never use this thin layer approach at scale. I rather like the 'Helsinki' approach.

Quote from tobega on January 19, 2021, 5:12 am
Quote from Dave Voorhis on January 19, 2021, 12:01 am
Quote from AntC on January 18, 2021, 11:12 pm
Quote from Dave Voorhis on January 18, 2021, 3:03 pm
Quote from tobega on January 18, 2021, 2:00 pm

...

The typical Java and C# programmer only uses basic CRUD operations, moving data back and forth between database and web page; what interest would they have? (In Java, many also got mired in JPA and Hibernate; they will never be able to extricate themselves.)

...

I use "compiled" in the loose "modern" sense, i.e., not necessarily to mean the classic sense of an emitted object file and no deployed source, but to mean all source is fully checked to the extent possible (within the semantics of the language) prior to any execution.

Yes, that's what I look for.

That means the real consideration is the extent to which errors can be identified before any or all execution, i.e., fully static type checking; and the extent to which all possible run-time errors must have explicit code paths (and this too is statically checked before any execution.)

So what happens with these anything-but-SQL programming systems if they hit a compile error in the middle of calculating the tax on an invoice? (And I'm presuming they're cycling record-by-agonising-record through the item lines.) Do they just crash leaving the invoice half-processed? Do they leave it looking like it's processed but with incorrect tax? Are any sort of ACID properties enforced? Is there commitment control?

They usually crash leaving half the invoices completely processed and half the invoices not processed at all. They're running on top of ORM layers like Hibernate or Entity Framework, so all the transaction-and-SQL-query unpleasantness is hidden, but it's still there under the covers, issuing transactions at somewhat appropriate -- but not necessarily optimal -- places. Leaving a half-processed invoice requires a certain dedication to doing things badly, but if done properly per the ORM paradigm, even if an invoice is half-processed it will be left with an "unprocessed" status set (or whatever) and will be picked up and sorted out (e.g., regenerated, or the rest of the line items calculated, or whatever) in the next processing run.

 

ORMs were an easy sell on the (false) premise that you wouldn't need to learn SQL, ...

 

You might say there could be many kinds of crash -- divide by zero, numeric overflow, array subscript out of bounds -- so why pick on typing errors? But for me that argues for stronger static typing, to catch at least the possibility of those before executing anything/before touching the database.

Yes, stronger static typing. I didn't mean to pick on typing errors and exclude other potentially-static errors. Ideally, everything should be a static error or carry a compiler-enforced obligation to write an error handler.

Dart brought to mind that Dart 1.0 had an interesting vision, driven by Gilad Bracha, that type systems should be optional, but that you should be able to use as many of them in your code as you like; Dart 2.0 is unfortunately a more mundane, mandatory One True Type System (TM).

So what if it is not really, or not necessarily, a built-in type system we need, but a static analysis tool? ...

So an interface in Java might have to jump through a few hoops to satisfy the built-in type system, e.g. ABC result = join(ABC.class, a, b, c); and you would have to declare classes with the relevant attributes for each step. But declaring records is about to become a really easy one-liner in Java, and they could be used in the interface as, e.g., rel.project(MyProjection.class);

Then you run a static analyzer that verifies that joining a, b and c really gives an ABC, and, even better, such a tool could be configured to check the table definitions in the database as well.

[Sorry, but not much, to sound like a fuddy-duddy, but ...] In the 1970s, on System/38, database schemas were visible at operating-system level/to compilers. (This was no more surprising than using COPY statements to import table definitions.) The compiler built into the object code a) a hash of the schema definition; b) the timestamp of the source file from which the schema was built. (The idea of using the source file being that you'd build tables in a test environment first, then release the sources to production and build there; thus using the timestamp of the built table wouldn't help.)

At run-time (or rather before allowing a program to run), the Operating System would grab the hash and timestamp from inside the program object, and compare to the metadata in the tables you were proposing to access. Any mismatch and the program got thrown out in short order, no harm done.
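The mechanism can be sketched in a few lines of modern Java (SchemaGuard and its method names are inventions for illustration; System/38 did this in the operating system, not in application code):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

public class SchemaGuard {
    // Hash of the schema definition, as baked into the program at build time.
    static String hash(String schemaDefinition) throws Exception {
        var md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(
            md.digest(schemaDefinition.getBytes(StandardCharsets.UTF_8)));
    }

    // Refuse to run if the live catalog no longer matches what we compiled against.
    static void checkOrThrow(String builtAgainst, String liveCatalog) throws Exception {
        if (!hash(builtAgainst).equals(hash(liveCatalog)))
            throw new IllegalStateException("schema mismatch: refusing to run");
    }

    public static void main(String[] args) throws Exception {
        String schema = "CREATE TABLE invoice (id INT, total DECIMAL(9,2))";
        checkOrThrow(schema, schema); // same definition: the program may run
        System.out.println("schema check passed");
    }
}
```

The check is cheap, language-agnostic, and happens before any row is touched, which is the whole point.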

Also tables in the database had a flag saying whether they were to be opened under Commit Control. Views over any Commit-Controlled table inherited that flag. Any program opening any of those tables had to use Commit-Control.

This doesn't need anything fancy like a 'static analyzer'. Neither is it specific to any programming language. (Of course if your language didn't have strong typing for records/tables, or your code messed up Commitment logic, you could still mash the database; but it does considerably raise the bar for "dedication to doing things badly".)

All this got lost when System/38 'advanced' to AS/400, and then 'advanced' from the native database to SQL. You can't validate anything when all access requests are string-passing, no matter how many thin/fat layers of cruft you pass them through.

Then you run a static analyzer that verifies that joining a, b and c really gives an ABC, and, even better, such a tool could be configured to check the table definitions in the database as well.

[Sorry, but not much, to sound like a fuddy-duddy, but ...] In the 1970s, on System/38, database schemas were visible at operating-system level/to compilers.

You'll be right at home here. I remember System R, and I remember Oracle when it was just a couple of guys hoping to make sales. On DEC VAX.

This doesn't need anything fancy like a 'static analyzer'. Neither is it specific to any programming language. (Of course if your language didn't have strong typing for records/tables, or your code messed up Commitment logic, you could still mash the database; but it does considerably raise the bar for "dedication to doing things badly".)

The key concept of TTM is the type system, with relations/tuples as typed values. The static analysis is precisely what the compiler does when it performs heading inference across RA operations. If the relvars used by a running program match (for type) the relvars in the database, you're good to go.
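For concreteness, the heading inference a D compiler performs for natural join can be sketched as a toy model (headings as name-to-type maps; this is an illustrative assumption, not how Rel or any real implementation works):

```java
import java.util.Map;
import java.util.TreeMap;

public class HeadingInference {
    // The heading of a natural join is the union of the operand headings
    // (attribute name -> type name); common attributes must agree on type.
    static Map<String, String> joinHeading(Map<String, String> h1, Map<String, String> h2) {
        var result = new TreeMap<>(h1);
        h2.forEach((name, type) -> {
            String existing = result.putIfAbsent(name, type);
            if (existing != null && !existing.equals(type))
                throw new IllegalArgumentException(
                    "attribute " + name + " has conflicting types: " + existing + " vs " + type);
        });
        return result;
    }

    public static void main(String[] args) {
        var joined = joinHeading(Map.of("x", "INT", "y", "CHAR"),
                                 Map.of("y", "CHAR", "z", "INT"));
        System.out.println(joined); // {x=INT, y=CHAR, z=INT}
    }
}
```

Run at compile time over every RA expression, and checked once against the database catalog at bind time, this is what turns "if it compiles, it will run" into more than a motto.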
