The Forum for Discussion about The Third Manifesto and Related Matters


Where do y'all hang out?

Quote from tobega on January 2, 2023, 11:31 am
Quote from Dave Voorhis on January 2, 2023, 11:29 am
Quote from tobega on January 2, 2023, 11:03 am

I create and consume Web-tech APIs a lot.

It's hard not to recognise that significant complexity in this area would vanish if we favoured remote-procedure-call facilities, so a relational API framework should ideally look and taste like local function calls.

Caveat emptor. It seems this would be repeating the mistake of DCOM where APIs did get easier to use, but all applications got bogged down in a morass of fine-grained remote calls.

But what's worse -- fine-grained remote calls which are no different from fine-grained local calls, or a morass of fine-grained laboriously-constructed obviously-Web-API calls?

The latter is typically constructed via builder syntax or similar at best; some arduous construction of Map<String, String>s of parameters at worst. I regularly see both, and it's obvious these are just crunchy surrogates for what could be cleaner and simpler as procedure calls.

But I suspect they'd be a lot cleaner if instead of being too fine-grained and mostly ad-hoc, they were mainly "standard" calls like insert(...), update(...), delete(...), project(...), select(...), join(...), etc.
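For illustration only -- every interface below is a hypothetical stand-in, not any particular framework's API -- here is a minimal Java sketch of that contrast: the same request expressed as an arduous map of string parameters versus ordinary relational-style procedure calls.

```java
import java.util.Map;

public class ApiStyles {
    // At worst: parameters laboriously packed into a Map<String, String>.
    static String viaWebMachinery(WebClient client) {
        return client.get("/api/customers", Map.of(
                "filter", "status=eq.active",
                "fields", "name,balance"));
    }

    // The same request as ordinary relational-style procedure calls.
    static Relation viaProcedureCalls(Database db) {
        return db.table("customers")
                 .select(row -> "active".equals(row.get("status")))
                 .project("name", "balance");
    }

    // Hypothetical stand-in interfaces, just to make the sketch self-contained.
    interface WebClient { String get(String path, Map<String, String> params); }
    interface Database  { Relation table(String name); }
    interface Relation {
        Relation select(java.util.function.Predicate<Row> predicate);
        Relation project(String... attributes);
    }
    interface Row { Object get(String attribute); }
}
```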

Well, if you're going to design your REST APIs badly, as if they were local calls, anyway, sure. But why should you make it even easier to do the dumb thing?

I can't legislate what development teams might do -- and most do define API functionality as if it were local calls, then use the clunkiest machinery to invoke it. Using REST APIs well doesn't make things much less clunky -- it's still far too much exposed machinery, and most of Web development forgot (and continues to ignore) the thirty-plus years that preceded it -- but at least I can provide facilities that are less ugly than the usual ones.

Of course, there is an argument that none of it should be exposed, at least in our own end-to-end applications. We should write applications purely in terms of meeting functional requirements without need to specify or even consider locality -- which means everything looks and tastes like local calls -- and the automated optimiser invisibly decides what code runs where.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from tobega on January 2, 2023, 11:29 am

I use relational algebra to good effect in some of the Advent of Code problems and mention it once in a while in posts (blog, forum). That's pitifully little, but if an itch starts to develop within the developer community, RA will eventually be included in languages. I wouldn't hold my breath, though: I think it takes a bit of a mental climb to advance from just iterating over simpler data structures to accomplishing everything at once with a few select operations. Mostly it tends to be "divide" that's required and most magical, but I've used "matching" and "notMatching" quite a bit, and an occasional "join" and "union".
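For readers unfamiliar with those operator names, here is a minimal sketch, using nothing beyond the JDK, of "matching" (semijoin: keep rows that have a counterpart in the other relation) and "notMatching" (antijoin: keep rows that don't). The Order/customer data is made up for the example.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

record Order(int customerId, String item) {}

public class MatchingDemo {
    public static void main(String[] args) {
        List<Order> orders = List.of(new Order(1, "book"), new Order(2, "pen"));
        Set<Integer> activeCustomers = Set.of(1, 3);

        // matching: orders whose customerId appears in activeCustomers
        List<Order> matching = orders.stream()
            .filter(o -> activeCustomers.contains(o.customerId()))
            .collect(Collectors.toList());

        // notMatching: orders whose customerId does not appear there
        List<Order> notMatching = orders.stream()
            .filter(o -> !activeCustomers.contains(o.customerId()))
            .collect(Collectors.toList());

        System.out.println(matching);     // [Order[customerId=1, item=book]]
        System.out.println(notMatching);  // [Order[customerId=2, item=pen]]
    }
}
```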

FWIW, I wrote a blog post where relational algebra is presented as an example of "closeness of mapping" according to the Cognitive Dimensions of Notation. https://tobega.blogspot.com/2022/12/evaluating-tailspin-language-after.html

 

Fine-grained is poison. REST is just fine as long as it's taken to encourage dealing in chunks, not atoms. The problem then is defining the chunk you want in the query string. And that's where the RA (and even better, an ERA or XRA) can really help.

But you've got to get past this type system hang-up. You need one for a TTM/D, but the basic 6-odd types of SQL or JSON go a long way in navigating other people's data.

 

Andl - A New Database Language - andl.org
Quote from dandl on January 2, 2023, 11:57 am
Quote from tobega on January 2, 2023, 11:29 am

I use relational algebra to good effect in some of the Advent of Code problems and mention it once in a while in posts (blog, forum). That's pitifully little, but if an itch starts to develop within the developer community, RA will eventually be included in languages. I wouldn't hold my breath, though: I think it takes a bit of a mental climb to advance from just iterating over simpler data structures to accomplishing everything at once with a few select operations. Mostly it tends to be "divide" that's required and most magical, but I've used "matching" and "notMatching" quite a bit, and an occasional "join" and "union".

FWIW, I wrote a blog post where relational algebra is presented as an example of "closeness of mapping" according to the Cognitive Dimensions of Notation. https://tobega.blogspot.com/2022/12/evaluating-tailspin-language-after.html

 

Fine-grained is poison. REST is just fine as long as it's taken to encourage dealing in chunks, not atoms. The problem then is defining the chunk you want in the query string. And that's where the RA (and even better, an ERA or XRA) can really help.

What's "the RA"?

Aren't there at least several (and possibly many) of them?

What's an ERA or an XRA?

I wish people would stop using their own acronyms as if they were well-known. If your new acronym isn't in common industry or field use, please take the extra hundred milliseconds or so to type out what you mean.

But you've got to get past this type system hang-up. You need one for a TTM/D, but the basic 6-odd types of SQL or JSON go a long way in navigating other people's data.

I think we've got to get past this notion that a few largely-arbitrary canonical types are enough and embrace typeful programming.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on January 2, 2023, 11:47 am
Quote from tobega on January 2, 2023, 11:31 am
Quote from Dave Voorhis on January 2, 2023, 11:29 am
Quote from tobega on January 2, 2023, 11:03 am

I create and consume Web-tech APIs a lot.

It's hard not to recognise that significant complexity in this area would vanish if we favoured remote-procedure-call facilities, so a relational API framework should ideally look and taste like local function calls.

Caveat emptor. It seems this would be repeating the mistake of DCOM where APIs did get easier to use, but all applications got bogged down in a morass of fine-grained remote calls.

But what's worse -- fine-grained remote calls which are no different from fine-grained local calls, or a morass of fine-grained laboriously-constructed obviously-Web-API calls?

The latter is typically constructed via builder syntax or similar at best; some arduous construction of Map<String, String>s of parameters at worst. I regularly see both, and it's obvious these are just crunchy surrogates for what could be cleaner and simpler as procedure calls.

But I suspect they'd be a lot cleaner if instead of being too fine-grained and mostly ad-hoc, they were mainly "standard" calls like insert(...), update(...), delete(...), project(...), select(...), join(...), etc.

Well, if you're going to design your REST APIs badly, as if they were local calls, anyway, sure. But why should you make it even easier to do the dumb thing?

I can't legislate what development teams might do -- and most do define API functionality as if it were local calls, then use the clunkiest machinery to invoke it. Using REST APIs well doesn't make things much less clunky -- it's still far too much exposed machinery, and most of Web development forgot (and continues to ignore) the thirty-plus years that preceded it -- but at least I can provide facilities that are less ugly than the usual ones.

Of course, there is an argument that none of it should be exposed, at least in our own end-to-end applications. We should write applications purely in terms of meeting functional requirements without need to specify or even consider locality -- which means everything looks and tastes like local calls -- and the automated optimiser invisibly decides what code runs where.

Or you go the Erlang way and make everything look and taste like a remote call, at least to the extent that you acknowledge that any call is a message send that might fail or take infinitely long.
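A small sketch of that stance in JVM terms -- the CustomerService interface is hypothetical -- treating every call as a message send that may fail or never return, so the caller always bounds the wait and handles failure explicitly:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class RemoteishCall {
    interface CustomerService { String lookup(int id); } // hypothetical

    static void call(CustomerService service, int id) {
        CompletableFuture
            .supplyAsync(() -> service.lookup(id))  // might block forever...
            .orTimeout(2, TimeUnit.SECONDS)         // ...so bound the wait
            .whenComplete((result, failure) -> {
                if (failure != null) {
                    System.err.println("call failed or timed out: " + failure);
                } else {
                    System.out.println("got: " + result);
                }
            });
    }
}
```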

On a tangent, I'm reminded of https://postgrest.org/en/stable/, which exposes your database queries directly as REST calls -- almost the opposite approach.
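For flavour, a PostgREST-style request issued from Java with the JDK's HttpClient; the eq./gte./select= query-string operators are PostgREST's own syntax, while the host and table names here are invented:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PostgrestExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Filter and projection are expressed in the query string,
        // so the server returns exactly the chunk asked for.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://example.com/orders?status=eq.active&total=gte.100&select=id,total"))
            .header("Accept", "application/json")
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // matching rows as a JSON array
    }
}
```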

On another tangent, I'm reminded of Peter Alvaro and his Dedalus extension to Datalog, where it is claimed to be possible to prove eventual consistency of network protocols. So maybe logic languages are the way to go, as Simon Peyton Jones is now doing with the Verse language https://simon.peytonjones.org/assets/pdfs/haskell-exchange-22.pdf

 

Quote from tobega on January 2, 2023, 12:35 pm
Quote from Dave Voorhis on January 2, 2023, 11:47 am
Quote from tobega on January 2, 2023, 11:31 am
Quote from Dave Voorhis on January 2, 2023, 11:29 am
Quote from tobega on January 2, 2023, 11:03 am

I create and consume Web-tech APIs a lot.

It's hard not to recognise that significant complexity in this area would vanish if we favoured remote-procedure-call facilities, so a relational API framework should ideally look and taste like local function calls.

Caveat emptor. It seems this would be repeating the mistake of DCOM where APIs did get easier to use, but all applications got bogged down in a morass of fine-grained remote calls.

But what's worse -- fine-grained remote calls which are no different from fine-grained local calls, or a morass of fine-grained laboriously-constructed obviously-Web-API calls?

The latter is typically constructed via builder syntax or similar at best; some arduous construction of Map<String, String>s of parameters at worst. I regularly see both, and it's obvious these are just crunchy surrogates for what could be cleaner and simpler as procedure calls.

But I suspect they'd be a lot cleaner if instead of being too fine-grained and mostly ad-hoc, they were mainly "standard" calls like insert(...), update(...), delete(...), project(...), select(...), join(...), etc.

Well, if you're going to design your REST APIs badly, as if they were local calls, anyway, sure. But why should you make it even easier to do the dumb thing?

I can't legislate what development teams might do -- and most do define API functionality as if it were local calls, then use the clunkiest machinery to invoke it. Using REST APIs well doesn't make things much less clunky -- it's still far too much exposed machinery, and most of Web development forgot (and continues to ignore) the thirty-plus years that preceded it -- but at least I can provide facilities that are less ugly than the usual ones.

Of course, there is an argument that none of it should be exposed, at least in our own end-to-end applications. We should write applications purely in terms of meeting functional requirements without need to specify or even consider locality -- which means everything looks and tastes like local calls -- and the automated optimiser invisibly decides what code runs where.

Or you go the Erlang way and make everything look and taste like a remote call, at least to the extent that you acknowledge that any call is a message send that might fail or take infinitely long.

In terms of messaging behaviour, definitely.

On a tangent, I'm reminded of https://postgrest.org/en/stable/, which exposes your database queries directly as REST calls -- almost the opposite approach.

Nice tangent. I like the idea -- at least in principle -- though I shudder at the idea of writing some of the stuff I do in Java in PL/SQL, or even PL/Java.

On another tangent, I'm reminded of Peter Alvaro and his Dedalus extension to Datalog, where it is claimed to be possible to prove eventual consistency of network protocols. So maybe logic languages are the way to go, as Simon Peyton Jones is now doing with the Verse language https://simon.peytonjones.org/assets/pdfs/haskell-exchange-22.pdf

A proportion of the database research community decided Datalog was the reasonable conclusion of work on the relational model and largely moved on to other things.

Logic languages probably are the way to go, and have been for the last 50 years. In another 50 years, maybe we'll more widely embrace them.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Hugh on January 1, 2023, 5:01 pm
Quote from Erwin on December 31, 2022, 10:26 pm
Quote from Hugh on December 27, 2022, 2:44 pm

If Dave's trying to tempt me with his nice bird pics (robin, eagle owl), then he's succeeded.  This one's a nice video taken by my trailcam, fortuitously set to take 30 seconds per shot.

Hugh

I've had redstarts (gekraagde roodstaart, the common redstart, I think, because it was clearly a couple and neither of them was black, as at least one of them should have been had they been zwarte roodstaart, the black redstart) in my little garden for some 2-3 months.  This was the first year I got to see them.

Thanks, Erwin.  I have never forgotten learning the song of the redstart on the Hoge Veluwe during my time in The Netherlands working on Business System 12.

I hope the forum at large doesn't mind these little off-topic diversions.  Like you, I've had to find things to occupy me since my full retirement in 2013.  I joined our Parish Council and I write articles under the rubric Nature Notes (originally just Bird Notes).

Hugh

I like little off-topic diversions.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

But you've got to get past this type system hang-up. You need one for a TTM/D, but the basic 6-odd types of SQL or JSON go a long way in navigating other people's data.

I think we've got to get past this notion that a few largely-arbitrary canonical types are enough and embrace typeful programming.

Of course they're not, not since GWBASIC. That wasn't my point.

The thing that TTM ignores is its connection to a data programming ecosystem. SQL was designed as a query language for mainframes and muddled along until MS and Simba came up with ODBC to connect it to the world. That was more than 30 years ago, and it's still going strong. Most SQL is now used in that context.

TTM envisages a uniform self-contained world, like the mainframe SQL of the 1970s, but that's no longer fit for purpose. If the programming language TTM/D has its own slightly weird type system, then good luck to its users, but if the database can contain types that are defined in TTM/D, then that data is absolutely inaccessible to others. The reason ODBC succeeds is not just SQL but the choice of a type system that is a lingua franca for programming languages. The reason JSON succeeds is not 'strings on the wire' -- everyone does that -- it's a (nearly) good enough type system.

So whatever argument you make for a better type system, it better deliver something to the ecosystem that is at least as good as ODBC or JSON (or a blend of both?).

Yes, you will want your rich typeful programming language, but if you share data with others, you'll need to dumb it down so everyone can use it.

Andl - A New Database Language - andl.org
Quote from dandl on January 3, 2023, 3:29 am

But you've got to get past this type system hang-up. You need one for a TTM/D, but the basic 6-odd types of SQL or JSON go a long way in navigating other people's data.

I think we've got to get past this notion that a few largely-arbitrary canonical types are enough and embrace typeful programming.

Of course they're not, not since GWBASIC. That wasn't my point.

The thing that TTM ignores is its connection to a data programming ecosystem.

It's out of scope. TTM describes language characteristics, not implementation-specific integrations. Implementations may connect to ecosystems, or not, as they see fit.

In Rel, for example, values are exchanged with external systems as strings, which encode the literals that denote their corresponding values in Rel.

Other systems may wish to use other approaches.

SQL was designed as a query language for mainframes and muddled along until MS and Simba came up with ODBC to connect it to the world. That was more than 30 years ago, and it's still going strong. Most SQL is now used in that context.

TTM envisages a uniform self-contained world, like the mainframe SQL of the 1970s, but that's no longer fit for purpose.

TTM doesn't really envisage any world aside from the semantics it describes. It is no more or less "fit for purpose" than any theoretical work with practical implementations.

In other words, you're describing implementation concerns that are outside the scope of TTM.

If the programming language TTM/D has its own slightly weird type system, then good luck to its users, but if the database can contain types that are defined in TTM/D, then that data is absolutely inaccessible to others. The reason ODBC succeeds is not just SQL but the choice of a type system that is a lingua franca for programming languages. The reason JSON succeeds is not 'strings on the wire' -- everyone does that -- it's a (nearly) good enough type system.

So whatever argument you make for a better type system, it better deliver something to the ecosystem that is at least as good as ODBC or JSON (or a blend of both?).

Yes, you will want your rich typeful programming language, but if you share data with others, you'll need to dumb it down so everyone can use it.

No, what it needs are exchange mechanisms -- same as we use now between, say, typical Web backends and Web frontends. Both may be (and now often are) richly typeful in their respective environments, but with distinct type definitions in the frontend and backend, sharing only an agreed exchange format -- typically strings of some structured (and structure-specifying) format like XML or JSON, etc.
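A minimal sketch of that arrangement, with hand-built JSON to stay dependency-free and an invented Invoice type: the backend keeps its rich internal type, and only the agreed string format crosses the boundary, to be parsed by the frontend into its own, separately defined types.

```java
public class ExchangeSketch {
    // The backend's rich internal type.
    record Invoice(long id, java.math.BigDecimal total) {
        // Serialise to the agreed exchange format: structured strings.
        // (Real systems would use a JSON library; strings built by hand here
        // to keep the sketch self-contained.)
        String toExchangeJson() {
            return "{\"id\":\"" + id + "\",\"total\":\"" + total.toPlainString() + "\"}";
        }
    }

    public static void main(String[] args) {
        // The frontend, in its own language and type system, parses this into
        // its own definitions; neither side imports the other's types.
        System.out.println(new Invoice(42, new java.math.BigDecimal("99.95")).toExchangeJson());
        // {"id":"42","total":"99.95"}
    }
}
```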

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

The thing that TTM ignores is its connection to a data programming ecosystem.

It's out of scope. TTM describes language characteristics, not implementation-specific integrations. Implementations may connect to ecosystems, or not, as they see fit.

In Rel, for example, values are exchanged with external systems as strings, which encode the literals that denote their corresponding values in Rel.

Which in the general case you cannot do. You cannot generate literals without a full-blown implementation of the target language.

Other systems may wish to use other approaches.

SQL was designed as a query language for mainframes and muddled along until MS and Simba came up with ODBC to connect it to the world. That was more than 30 years ago, and it's still going strong. Most SQL is now used in that context.

TTM envisages a uniform self-contained world, like the mainframe SQL of the 1970s, but that's no longer fit for purpose.

TTM doesn't really envisage any world aside from the semantics it describes. It is no more or less "fit for purpose" than any theoretical work with practical implementations.

Another way of saying the same thing.

In other words, you're describing implementation concerns that are outside the scope of TTM.

If the programming language TTM/D has its own slightly weird type system, then good luck to its users, but if the database can contain types that are defined in TTM/D, then that data is absolutely inaccessible to others. The reason ODBC succeeds is not just SQL but the choice of a type system that is a lingua franca for programming languages. The reason JSON succeeds is not 'strings on the wire' -- everyone does that -- it's a (nearly) good enough type system.

So whatever argument you make for a better type system, it better deliver something to the ecosystem that is at least as good as ODBC or JSON (or a blend of both?).

Yes, you will want your rich typeful programming language, but if you share data with others, you'll need to dumb it down so everyone can use it.

No, what it needs are exchange mechanisms -- same as we use now between, say, typical Web backends and Web frontends. Both may be (and now often are) richly typeful in their respective environments, but with distinct type definitions in the frontend and backend, sharing only an agreed exchange format -- typically strings of some structured (and structure-specifying) format like XML or JSON, etc.

Ah, I think you're starting to get it. So SQL staked out a piece of territory in the exchange world courtesy of CLI/ODBC, over which it holds a monopoly. But the more general case of exchange mechanisms is rich with competing offerings. We've had maybe 50 years of working on them, and the universal thing across all the ones I know is the dependence on a canonical type system, usually one with about the same 6-9 concrete types seen in SQL/ODBC/JSON. Exchange mechanisms never allow type systems to be exchanged or enriched; they depend on a basic type system agreed in advance.

So this is the existential question for TTM/D. It can provide a richly typeful programming model and put new types into relvars, but it then creates a walled garden, and cannot exchange data with others. Yes it could export serialised versions of its data and import literals, but only a client with a full implementation of the D language could use such an API.

Or it can simply restrict itself to the same types as everyone else, at least in public relvars and catalog, and thereby become a good citizen. Or even an SQL replacement, if that was ever the intention.

 

Andl - A New Database Language - andl.org
Quote from dandl on January 4, 2023, 3:28 am

The thing that TTM ignores is its connection to a data programming ecosystem.

It's out of scope. TTM describes language characteristics, not implementation-specific integrations. Implementations may connect to ecosystems, or not, as they see fit.

In Rel, for example, values are exchanged with external systems as strings, which encode the literals that denote their corresponding values in Rel.

Which in the general case you cannot do. You cannot generate literals without a full-blown implementation of the target language.

For Rel, I've written this and use it. You can generate and exchange literals with a tiny subset of the target language, i.e., its parser recognises:

  • Primitive literals of the language's built-in types (in Rel, that's INTEGER, CHARACTER, RATIONAL, BOOLEAN);
  • Selector invocations of the form <name>(<argument 1> [, <argument 2> ... <argument n>]), where every argument is a primitive literal or another selector invocation.

Every value can be emitted as a literal as described above.

Every literal as described above can be parsed and recognised as either being a primitive literal or a selector invocation. Of course, selector invocations can nest selector invocations, but ultimately every such nesting terminates in primitive literals.

It works.
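A rough Java sketch of that emit/parse scheme -- deliberately simplified (comma-separated arguments, no string escaping, no error handling), so it illustrates the recursion rather than Rel's actual grammar:

```java
import java.util.ArrayList;
import java.util.List;

sealed interface Value permits Primitive, Selector {}
record Primitive(String literal) implements Value {}           // e.g. 42, TRUE, 'abc'
record Selector(String name, List<Value> args) implements Value {}

class Literals {
    // Emit: render any value as a literal string.
    static String emit(Value v) {
        if (v instanceof Primitive p) return p.literal();
        Selector s = (Selector) v;
        List<String> parts = new ArrayList<>();
        for (Value arg : s.args()) parts.add(emit(arg));
        return s.name() + "(" + String.join(", ", parts) + ")";
    }

    // Parse: recognise a primitive literal or a (possibly nested) selector
    // invocation; every nesting ultimately terminates in primitive literals.
    static Value parse(String text) {
        text = text.trim();
        int open = text.indexOf('(');
        if (open < 0) return new Primitive(text);
        String name = text.substring(0, open);
        String inner = text.substring(open + 1, text.lastIndexOf(')'));
        List<Value> args = new ArrayList<>();
        int depth = 0, start = 0;
        for (int i = 0; i < inner.length(); i++) {
            char c = inner.charAt(i);
            if (c == '(') depth++;
            else if (c == ')') depth--;
            else if (c == ',' && depth == 0) {   // argument boundary at top level
                args.add(parse(inner.substring(start, i)));
                start = i + 1;
            }
        }
        if (!inner.isBlank()) args.add(parse(inner.substring(start)));
        return new Selector(name, args);
    }
}
```

For example, emit on a nested value yields POINT(1, POINT(2, 3)), and parse recovers the same structure from that string.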

Other systems may wish to use other approaches.

SQL was designed as a query language for mainframes and muddled along until MS and Simba came up with ODBC to connect it to the world. That was more than 30 years ago, and it's still going strong. Most SQL is now used in that context.

TTM envisages a uniform self-contained world, like the mainframe SQL of the 1970s, but that's no longer fit for purpose.

TTM doesn't really envisage any world aside from the semantics it describes. It is no more or less "fit for purpose" than any theoretical work with practical implementations.

Another way of saying the same thing.

No, it isn't. Your argument is akin to saying that functional programming is no longer fit for purpose because OCaml doesn't specify multi-threaded primitives, or whatever. TTM describes the semantics of an ideal (per TTM pre/pro-scriptions) database language.

Implementation details are exactly that: implementation details that are outside of scope. You are meant to address them in your implementation and/or your domain, not rely on Date & Darwen to anticipate all future IT trends and explicitly specify them.

In other words, you're describing implementation concerns that are outside the scope of TTM.

If the programming language TTM/D has its own slightly weird type system, then good luck to its users, but if the database can contain types that are defined in TTM/D, then that data is absolutely inaccessible to others. The reason ODBC succeeds is not just SQL but the choice of a type system that is a lingua franca for programming languages. The reason JSON succeeds is not 'strings on the wire' -- everyone does that -- it's a (nearly) good enough type system.

So whatever argument you make for a better type system, it better deliver something to the ecosystem that is at least as good as ODBC or JSON (or a blend of both?).

Yes, you will want your rich typeful programming language, but if you share data with others, you'll need to dumb it down so everyone can use it.

No, what it needs are exchange mechanisms -- same as we use now between, say, typical Web backends and Web frontends. Both may be (and now often are) richly typeful in their respective environments, but with distinct type definitions in the frontend and backend, sharing only an agreed exchange format -- typically strings of some structured (and structure-specifying) format like XML or JSON, etc.

Ah, I think you're starting to get it. So SQL staked out a piece of territory in the exchange world courtesy of CLI/ODBC, over which it holds a monopoly. But the more general case of exchange mechanisms is rich with competing offerings. We've had maybe 50 years of working on them, and the universal thing across all the ones I know is the dependence on a canonical type system, usually one with about the same 6-9 concrete types seen in SQL/ODBC/JSON. Exchange mechanisms never allow type systems to be exchanged or enriched; they depend on a basic type system agreed in advance.

So this is the existential question for TTM/D. It can provide a richly typeful programming model and put new types into relvars, but it then creates a walled garden, and cannot exchange data with others. Yes it could export serialised versions of its data and import literals, but only a client with a full implementation of the D language could use such an API.

Or it can simply restrict itself to the same types as everyone else, at least in public relvars and catalog, and thereby become a good citizen. Or even an SQL replacement, if that was ever the intention.

What I'm objecting to are the "6-9 concrete types seen in SQL/ODBC/JSON."

I'm suggesting there should be one and only one exchange type: string.

Though I have some sympathy for providing an explicit user-programmer-level distinction -- for ergonomic reasons rather than technical requirement -- between explicit strings and boolean, some numeric type (digits and an optional decimal point only), and perhaps a 'blob' to distinguish non-human-readable strings from human-readable ones, along with, ideally, mechanisms to define aggregates (arrays) and composites (structs).

Dates, times, datetimes, floating-point types, and sized types of any kind should be explicitly and categorically excluded. They are a minefield of pitfalls, so if needed they should be defined by agreement between application endpoints rather than provided by default.

That's essentially what GraphQL does (See https://graphql.org/learn/schema/) though I object to their 'Float' -- probably the most misused type, prone to the usual imprecision gotchas and almost never needed in business practice. It should be an optional-decimal numeric type.
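One possible shape for such a deliberately minimal exchange type system, sketched in Java (the names are illustrative, not a proposal for any existing system): strings, booleans, an optional-decimal numeric, blobs, and aggregate/composite constructors -- and pointedly no dates, times, floats, or sized types.

```java
import java.util.List;
import java.util.Map;

sealed interface ExchangeValue
        permits Str, Bool, Numeric, Blob, Aggregate, Composite {}

record Str(String value) implements ExchangeValue {}
record Bool(boolean value) implements ExchangeValue {}
// Digits and an optional decimal point only; BigDecimal avoids float gotchas.
record Numeric(java.math.BigDecimal value) implements ExchangeValue {}
record Blob(byte[] bytes) implements ExchangeValue {}           // non-human-readable
record Aggregate(List<ExchangeValue> elements) implements ExchangeValue {}
record Composite(Map<String, ExchangeValue> fields) implements ExchangeValue {}
```

Anything like a date or timestamp would then be an endpoint-level agreement about the contents of a Str or Composite, not a built-in.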

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org