The Forum for Discussion about The Third Manifesto and Related Matters


Qualified name syntax / dot `.` tightfix

Page 1 of 2

I presume table.column style has been in SQL since the beginning(s)?

Where did that syntax come from? I'm having a hard time antedating it in computer languages or mathematical notation. There are hints it was used in Iverson's APL (early '60s), but I can't find an example -- though APL exploited every possible syntax/glyph, plus plenty that proved impossible.

Maybe from early machine-level languages? Byte-position within a machine word?

Principia Mathematica (1913) used the 10.2.3 style for numbering propositions. Tractatus Logico-Philosophicus (1921) used 1.345. But I'm looking for prename.subname -- a symbolic name rather than a number.

Was it always possible in SQL to use something like Supplier.SNO, Qty.SNO -- that is, the same subname with different prenames, such that the prename in effect acted as a namespace qualifier? (And so that in some sense it was the 'same' SNO across different tables.)

PL/I had it too: https://www.ibm.com/docs/en/pli-for-aix/3.1.0?topic=unions-structure-union-qualification

IBM's mainframe assembler had the construct too: https://www.ibm.com/docs/en/hla-and-tf/1.6.0?topic=addressing-qualified

Author of SIRA_PRISE
Quote from AntC on June 28, 2025, 4:22 am

I presume table.column style has been in SQL since the beginning(s)?

Where did that syntax come from? I'm having a hard time antedating it in computer languages or mathematical notation. There are hints it was used in Iverson's APL (early '60s), but I can't find an example -- though APL exploited every possible syntax/glyph, plus plenty that proved impossible.

Many years ago, I used APL at work (in a job that would now probably be called "data science" or "analytics" depending on the day) as a glorified desk calculator.  I don't recall there being a dotted notation, but there might have been and for whatever reason, I didn't use it.

I think the first mostly-unavoidable dotted notation would be in Simula 67, which is also arguably the first glimpse of notionally modern "object oriented" programming, with classes, instances, and methods (aka procedures) invoked with a.b syntax, where a is a variable referencing an instance and b is the name of a procedure defined in a's class.

EDIT: I always thought PL/I came after Simula 67, but looks like it came a year or two before. I think Erwin's right -- blame PL/I.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

Thank you both.

(I was a little worried that Erwin's link is to recently released documentation; but I find much the same wording in a 1965 IBM manual, via Wikipedia, on the page numbered 27, QUALIFIED NAMES.)

Quote from Dave Voorhis on June 30, 2025, 7:39 pm

EDIT: I always thought PL/I came after Simula 67, but looks like it came a year or two before. I think Erwin's right -- blame PL/I.

Aww, I doubt Dahl & Nygaard knew what was going on at IBM, or vice versa. So probably they both got it from something earlier. (It seems to have spread pretty quickly: I can see an IBM Algol reference manual from 1972 using it.)

Anyhoo, it's sufficiently ancient, and used in both academe and commercial IT, that anybody designing a language in 1990 should have been very careful about where/how `.` might appear in the syntax.

use the same symbol to refer to data in different storage locations [from the mainframe Assembler link]

(and that being deliberate, knowing Qualified Names would disambiguate) should, by 1994, have been a familiar enough design pattern not to hobble a language by limiting a field name to only one data decl. Grrr.

Which language were you thinking of that was designed in 1994 and that was "hobbled by limiting a field name to only one data decl" ???

Quote from Erwin on July 2, 2025, 8:44 pm

Which language were you thinking of that was designed in 1994 and that was "hobbled by limiting a field name to only one data decl" ???

It was designed in 1990. The field-handling bit (rubbish design) was added in 1994. But `.` had already been given other roles in the syntax, so field handling couldn't use it. An alternative and much better field-handling approach was built (experimentally) in 1996 -- but still couldn't use `.`. A proposal to sort out the whole mess, and use `.` as <s>God</s> Dahl & Nygaard intended, was written up in 1999. But it never got anywhere (I guess because using `.` properly would have broken too much legacy code).

Starting ~2019, someone's been trying to sort it out. Essentially there are now two incompatible versions of the language, depending on the role of `.`.

The language is Haskell.
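For readers who don't know Haskell, a minimal sketch of where that ~2019 effort has landed (assuming it refers, as I read it, to the GHC extensions DuplicateRecordFields and OverloadedRecordDot, the latter shipped in GHC 9.2; the Supplier/Shipment names are made up to echo the Supplier.SNO example upthread):

```haskell
{-# LANGUAGE DuplicateRecordFields #-}
{-# LANGUAGE OverloadedRecordDot #-}

-- Two record declarations sharing the field name sno: rejected by
-- Haskell as originally designed, accepted with DuplicateRecordFields.
data Supplier = Supplier { sno :: String, city :: String }
data Shipment = Shipment { sno :: String, qty :: Int }

s :: Supplier
s = Supplier { sno = "S1", city = "London" }

sp :: Shipment
sp = Shipment { sno = "S1", qty = 300 }

main :: IO ()
main =
  -- With OverloadedRecordDot, the value to the left of the dot acts as
  -- the namespace qualifier, much as the table name does in SQL's
  -- Supplier.SNO vs Shipment.SNO:
  putStrLn (s.sno ++ "/" ++ s.city ++ ", qty " ++ show sp.qty)
```

Without those extensions, the bare selector `sno` is ambiguous between the two declarations, which is exactly the one-field-name-per-decl restriction being complained about.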

Quote from AntC on July 2, 2025, 9:27 pm
Quote from Erwin on July 2, 2025, 8:44 pm

Which language were you thinking of that was designed in 1994 and that was "hobbled by limiting a field name to only one data decl" ???

...

The language is Haskell.

No surprise there. The best thing about Haskell is that it's a playground for features that eventually wind up -- albeit probably in watered-down form -- in languages like C++, Java and C#.

Quote from Dave Voorhis on July 2, 2025, 9:30 pm
... The best thing about Haskell is that it's a playground for features ...
Usually, yes. And we might argue that qualified name syntax is old-hat and well understood so doesn't need any more playgrounding.
I argue that Haskell has nothing worthy of the name 'record' (other than that 1996 experiment); that Standard ML has something at least workable (dating from the mid-1980s); and that several Haskell-derived languages have at least the equivalent of SML, and have implemented record.field.subfield. (None of those attempts is underpinned by solid type theory or axiomatic semantics; they're all what I would call hacks.) So there's a great deal more playgrounding needed, rather than tinkering about [**] with a rubbish design.
For example: you can't implement The Relational Model in Haskell, with projection/remove-field/join/extend. So Haskellers subcontract that to SQL interface libraries and are back in the land of `NULL`. Or to various JSON/NoSQL tools.
The specific trigger for me asking now is that the limitations have led to a data-structuring style of deeply nested components -- which smells loudly of OOP and/or 1970s hierarchical databases -- but with an inability to update a deeply-nested component. Also awkwardness in searching for a value, except by navigating in and out of the nesting. The bleedin'-obvious (to me) approach would be to vertically split the structure, then, if you need a view of all the stuff together, join it flat. But we don't have join, except by a lot of brittle hard-coding.
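To make the "brittle hard-coding" concrete, here's the sort of thing one ends up writing (all names hypothetical): a join over two lists of records, hard-wired to these two particular types and this one key. Note even the prefixed field names supSno/shpSno are a workaround for the one-decl restriction; and nothing here generalizes the way a real relational join does -- rename a field or add a third 'table' and you write it all again.

```haskell
-- Hypothetical flat "tables", after vertically splitting the nested structure.
data Supplier = Supplier { supSno :: String, supCity :: String }
data Shipment = Shipment { shpSno :: String, shpQty :: Int }

-- A hand-rolled equijoin on the shared sno key. Everything is
-- hard-coded: the two types, the key selectors, the result shape.
joinOnSno :: [Supplier] -> [Shipment] -> [(String, String, Int)]
joinOnSno ss sps =
  [ (supSno s, supCity s, shpQty sp)
  | s <- ss, sp <- sps, supSno s == shpSno sp ]
```

E.g. `joinOnSno [Supplier "S1" "London"] [Shipment "S1" 300, Shipment "S2" 200]` gives `[("S1","London",300)]` -- the unmatched shipment drops out, as in an inner join.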
JSON is all very well as a format for transmitting data. (Although if you wanted an especially human-unreadable format for transmission you could surely do better.) That doesn't mean IMO your program has to manipulate only in JSON format -- especially since the JSON data has almost certainly come from 'flat' SQL tables or spreadsheets.
[**] If you've seen the movie 'Clockwise': the tinkering has proceeded through a number of steps, each 'logical' under its own narrow constraints, to arrive at a Morris 1100 stuck in a muddy field with tattered suit and tie and bemused heifers about to charge.

 

Quote from AntC on July 3, 2025, 1:25 am
Quote from Dave Voorhis on July 2, 2025, 9:30 pm
... The best thing about Haskell is that it's a playground for features ...
Usually, yes. And we might argue that qualified name syntax is old-hat and well understood so doesn't need any more playgrounding.
I argue that Haskell has nothing worthy of the name 'record' (other than that 1996 experiment); that Standard ML has something at least workable (dating from the mid-1980s); and that several Haskell-derived languages have at least the equivalent of SML, and have implemented record.field.subfield. (None of those attempts is underpinned by solid type theory or axiomatic semantics; they're all what I would call hacks.) So there's a great deal more playgrounding needed, rather than tinkering about [**] with a rubbish design.
For example: you can't implement The Relational Model in Haskell, with projection/remove-field/join/extend. So Haskellers subcontract that to SQL interface libraries and are back in the land of `NULL`. Or to various JSON/NoSQL tools.
The specific trigger for me asking now is that the limitations have led to a data-structuring style of deeply nested components -- which smells loudly of OOP and/or 1970s hierarchical databases -- but with an inability to update a deeply-nested component. Also awkwardness in searching for a value, except by navigating in and out of the nesting. The bleedin'-obvious (to me) approach would be to vertically split the structure, then, if you need a view of all the stuff together, join it flat. But we don't have join, except by a lot of brittle hard-coding.
JSON is all very well as a format for transmitting data. (Although if you wanted an especially human-unreadable format for transmission you could surely do better.) That doesn't mean IMO your program has to manipulate only in JSON format -- especially since the JSON data has almost certainly come from 'flat' SQL tables or spreadsheets.
[**] If you've seen the movie 'Clockwise': the tinkering has proceeded through a number of steps, each 'logical' under its own narrow constraints, to arrive at a Morris 1100 stuck in a muddy field with tattered suit and tie and bemused heifers about to charge.

 

I've felt for a long time that to be successful as a language, it had to be concocted by programmers: 'By programmers, for programmers'. Except that, for reasons that are obvious only to us few, that inevitably prevents the language from embracing the RM in full.

Author of SIRA_PRISE
Quote from AntC on July 2, 2025, 9:27 pm
Quote from Erwin on July 2, 2025, 8:44 pm

Which language were you thinking of that was designed in 1994 and that was "hobbled by limiting a field name to only one data decl" ???

It was designed in 1990. The field-handling bit (rubbish design) was added in 1994. But `.` had already been given other roles in the syntax, so field handling couldn't use it. An alternative and much better field-handling approach was built (experimentally) in 1996 -- but still couldn't use `.`. A proposal to sort out the whole mess, and use `.` as <s>God</s> Dahl & Nygaard intended, was written up in 1999. But it never got anywhere (I guess because using `.` properly would have broken too much legacy code).

Starting ~2019, someone's been trying to sort it out. Essentially there are now two incompatible versions of the language, depending on the role of `.`.

The language is Haskell.

It's baffling that they didn't come up with an alternative syntax when `.` was taken, since it shouldn't take more than one or two example programs to realize the need for that kind of namespacing.

FWIW, Stefik et al. found that `:` or `->` was more intuitive across the experience spectrum (although programmers are very well conditioned to `.`).

Side note: Java and Perl are apparently indistinguishable from line noise.

https://www.vidarholen.net/~vidar/An_Empirical_Investigation_into_Programming_Language_Syntax.pdf

Side note 2: Now I understand why a Haskell programmer reacted so strongly to the idea that in Tailspin a field name needs to have the same datatype whatever record it is in. I never imagined it could be restricted to only one record type, though...
