The Forum for Discussion about The Third Manifesto and Related Matters


Codd 1970 'domain' does not mean Date 2016 'type' [was: burble about Date's IM]

Quote from Hugh on March 20, 2020, 12:54 pm
Quote from AntC on March 20, 2020, 12:15 am
Quote from AntC on March 19, 2020, 9:10 pm
Quote from Hugh on March 19, 2020, 12:39 pm
Quote from AntC on March 19, 2020, 12:10 am

Hugh, I put it to you that you never write bare TUP{ x 1 } in even the most ad hoc of ad hoc queries.

How dare you make such an accusation!  Here's the very first saved Rel script that I looked at, knowing that in fact I make extensive use of tuple literals.

/* PCNoRec% of cases with at least one operand = Digit have no recalcitrants */
rel{
tup{Digit 1, PCNoRecs Percent(Count((Studied join CasesWith1) not matching Recalcitrant), count(Studied join CasesWith1))},
...
}
order(asc PCNoRecs)

Please let me know if you want any more.  No, on second thoughts, don't.

I agree that code is in no way easy on the eye. Presumably it's within a finger-slip to get

tup{Digit 1, PCNoRecs Percent(Count((Studied join CasesWith2) not matching Recalcitrant), count(Studied join CasesWith3))}

I'd count this amongst my earlier comments "I do want to make the coding sufficiently verbose for the programmer to stop and ask themselves whether they might be doing something daft." You are doing something daft.

 

Am I being too cruel in critiquing Hugh's schema and opining there's something daft? TTM-ers do this all the time with others' schemas: typically somebody has gone deep down a rabbit hole with a schema relying on Nullable columns, and then asks how to express a query that seems impossible. We recently had on the forum an example, 'Company Cars', of a schema where the designer's head was full of Nulls and SQL's inability to express Exclusion Dependencies. It sometimes takes a lot of careful explaining to back the designer out of the rabbit hole.

So I can only guess how Hugh got down this rabbit hole. Why on earth are there nine separate relvars with (presumably) the same schema, differing only in the relvar name? Why on earth is there any expression at all with nine near-identical lines of code differing only in which relvar name they're accessing? Clearly Digit is reference data in this application (dare I say a DOMAIN) and the nine Digits should be in a reference relvar (yes, even though we all know what the nine digits are). Clearly whatever these CasesWithn relvars are, their content should be in one relvar with an extra attribute Digit. Clearly that 9-line expression should be written as one line, with the Digit drawn from the Digits reference relvar and joined to the merged Cases relvar.

I'll speculate (remembering when Hugh earlier described his 'four fours' experiment) that there's a relvar holding formulas as CHAR. A formula might include many appearances of some digit(s). Then CasesWithn should be a Virtual:

CasesWithn := (Digits TIMES Cases) WHERE Cast_to_CHAR( Digit ) isSubStringOf( Formula );

And Hugh's ugly nine lines should be a straightforward SUMMARIZE CasesWithn ... GROUP BY Digit ....
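
To make that concrete, here's a minimal sketch of the shape I mean -- in Python rather than Tutorial D, purely to illustrate the restructuring, and with hypothetical stand-ins (cases, studied, recalcitrant) for whatever Hugh's relvars actually hold:

# Hypothetical stand-ins: each "tuple" is a dict; Cases carries a Digit
# attribute instead of being split into CasesWith1 .. CasesWith9.
cases = [
    {"Digit": 4, "Formula": "4+4*4-4"},
    {"Digit": 1, "Formula": "1+1*1-1"},
    # ... one row per (digit, formula) pair
]
studied = {"4+4*4-4", "1+1*1-1"}    # formulas under study
recalcitrant = {"4+4*4-4"}          # formulas with recalcitrants

def percent(n, d):
    return 100.0 * n / d if d else 0.0

# One grouped computation replaces nine near-identical lines:
result = []
for d in sorted({c["Digit"] for c in cases}):
    with_d = {c["Formula"] for c in cases if c["Digit"] == d} & studied
    result.append({"Digit": d,
                   "PCNoRecs": percent(len(with_d - recalcitrant),
                                       len(with_d))})
print(sorted(result, key=lambda t: t["PCNoRecs"]))

One Digit attribute and one grouped computation replace nine relvar names and nine copies of the expression.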

I'll speculate further that Hugh originally designed a schema purely for the 'four fours' cases, where there was no need for a Digits reference relvar, because it would only have contained a four. So when he expanded the exercise to other digits, he merely cloned the Cases schema to CasesWith3, CasesWith2, etc.

Sorry, but I find your arrogant speculations unbearable.  My expression serves its purpose and that's all there is to it.

Hugh, if you really think your repeated nine lines are unproblematic as code, and your nine relvars distinguished only by name, then we have nothing left to say to each other.

I did sketch how I would code it. But of course I was taking several guesses at what was the 'business domain' and how you'd organised the schema.

Frankly I found your code appalling, as indeed most of the code you bring forward from your pet projects. You just don't understand programming. Showing me some results as if that proved something is no different to an SQLer showing that Nulls and outer joins 'work'.

Quote from Dave Voorhis on March 20, 2020, 3:24 pm
Quote from dandl on March 20, 2020, 1:16 pm

I guess you could build a compiler that way, but it's not usual AFAIK.

When I taught compiler/interpreter construction, something I often pointed out was that although there are certain conventions for implementing lexers, parsers, interpreters, optimisers and code generators -- and there is a body of academic and practical work around these -- there are no compiler police who will arrest you for deviating from common practice, and there are marvellous examples of interesting deviation like FORTH and LISP. In short, there are no rules, and there is a lot of variety in the conventions. So what's usual or not doesn't really matter, as long as the result meets performance and code quality requirements.

True. But I have written compilers for both FORTH and LISP, and they both tokenise their inputs. They both recognise numbers, strings and symbols as tokens of particular types, use a symbol table, and have a syntax that can be represented in BNF. Ditto for macro pre-processors. The only compiler I recall writing that dealt with source text as a character stream was TRAC, but I do know there are a few toy languages around as well.
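
For illustration only, here's a toy tokeniser in Python -- a hypothetical sketch, not the code of any of those compilers -- recognising the number/string/symbol categories and emitting a token stream:

import re

# Each token is a (type, substring) pair. The parser downstream sees
# token types; raw characters matter only here.
TOKEN_RE = re.compile(r"""
      (?P<number>\d+(?:\.\d+)?)
    | (?P<string>"[^"]*")
    | (?P<symbol>[A-Za-z_]\w*)
    | (?P<punct>[-+*/(){}=<>,])
    | (?P<ws>\s+)
""", re.VERBOSE)

def tokenise(source):
    tokens, pos = [], 0
    while pos < len(source):
        m = TOKEN_RE.match(source, pos)
        if not m:
            raise SyntaxError(f"bad character {source[pos]!r} at {pos}")
        if m.lastgroup != "ws":          # whitespace is not a token
            tokens.append((m.lastgroup, m.group()))
        pos = m.end()
    return tokens

print(tokenise('x = 42 + f("hi")'))
# [('symbol', 'x'), ('punct', '='), ('number', '42'), ('punct', '+'),
#  ('symbol', 'f'), ('punct', '('), ('string', '"hi"'), ('punct', ')')]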

But to describe the behaviour of any modern compiler in terms of operations on raw characters or substrings of program text, other than in the construction of tokens, is to seriously misunderstand and misrepresent the entire body of modern compiler theory. It's just wrong.

Andl - A New Database Language - andl.org

What are the selectors for INTEGER, CHARACTER, RATIONAL, and BOOLEAN?

They are exactly whatever they have been defined to be. Except that...

One answer is, "they're built-in... it's compiler magic" and that would be sufficient for most purposes.

Alternatively, we can assume that the lexical parsing phase will identify literals in the source code (which is a single string of characters) and represent them as strings of characters (i.e., substrings of the string of source code) along with their lexical type, such as integer, floating_point, string, identifier, true, false.

Then we assume the following selectors: INTEGER(CHARACTER), CHARACTER(CHARACTER), RATIONAL(CHARACTER), BOOLEAN(CHARACTER).

Each lexical type is mapped to a corresponding selector, so integer maps to INTEGER(CHARACTER), floating_point maps to RATIONAL(CHARACTER), string maps to CHARACTER(CHARACTER), and both true and false map to BOOLEAN(CHARACTER).

Each selector parses its CHARACTER parameter and returns an appropriate value of the given type.

Using SELECTOR here is misleading. The compiler lexical phase can be arbitrarily complicated, especially if it includes a macro pre-processor and compile time arithmetic. All that is certain is that the output from the lexical phase is a stream of tokens; each token has a type; and some of those types may map directly into the language type system. The lexer may do string-to-integer conversion, but there is no requirement it be the same function or follow the same rules as the SELECTOR visible in the language. Simple example: the compiler intrinsic probably only converts ASCII characters (as per the BNF); the SELECTOR that does it at runtime might well come from a library that supports a full range of Unicode characters.

This 'compiler magic', mapping token types into language types, is simply part of compiler writing.
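
A hypothetical sketch of that mapping in Python, with invented names throughout, where the compile-time intrinsic is deliberately narrower than the runtime SELECTOR:

def intrinsic_int(text):
    # Compiler intrinsic: ASCII digits only, as per the BNF.
    if not (text.isascii() and text.isdigit()):
        raise SyntaxError("INTEGER literal must be ASCII digits")
    return int(text)

# Each lexical token type maps to an internal conversion routine that
# takes the token's text; these need not be the language's own SELECTORs.
TOKEN_TYPE_TO_CONVERTER = {
    "integer":        intrinsic_int,
    "floating_point": float,
    "string":         lambda text: text[1:-1],     # strip the quotes
    "boolean":        lambda text: text == "true",
}

def literal_value(token_type, token_text):
    return TOKEN_TYPE_TO_CONVERTER[token_type](token_text)

print(literal_value("integer", "42"))    # 42
# A runtime selector may be more liberal: Python's own int() accepts any
# Unicode decimal digits -- int("٤٢") == 42 -- where intrinsic_int("٤٢")
# raises SyntaxError.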

There are no visible POSSREPs, though -- that's compiler magic, or at least all hidden representation, and thus "zero possreps".

Though you can make THE_CHARACTER(...) available for each type, which returns the canonical string representation of that type and is used for display/output purposes.

Indeed, the canonical representation but usually not the source code that contributed to the original token, especially if there was a pre-processor phase.

 

Andl - A New Database Language - andl.org
Quote from AntC on March 20, 2020, 9:34 pm
[...]

Frankly I found your code appalling, as indeed most of the code you bring forward from your pet projects. You just don't understand programming. Showing me some results as if that proved something is no different to an SQLer showing that Nulls and outer joins 'work'.

Code naturally evolves over time from beautiful and elegant -- or hacked together in haste -- to ugly. The strength of a model or approach or language is its ability to remain usable and maintainable despite this almost inevitable evolution. If Tutorial D code has to meet some arbitrary standard of perfection in order to be considered here, then that isn't a realistic reflection of the real coding world in general, let alone the real work people are doing with it.

Some of my personal code is beautiful and some of it is hideous. It's a tribute to the viability of Tutorial D -- despite being a pedagogical language not intended for "real" work -- that it works well for both.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on March 20, 2020, 11:24 pm
[...]

But to describe the behaviour of any modern compiler in terms of operations on raw characters or substrings of program text, other than in the construction of tokens, is to seriously misunderstand and misrepresent the entire body of modern compiler theory. It's just wrong.

Lexical parsing -- lexing -- is fundamentally the process of categorising substrings. It's all about substrings.

The output of a typical lexer is a stream of token/substring pairs. For some operations, the substring is ignored or elided. E.g., we see '+' as a substring of the input stream and categorise it as token type PLUS; we can throw away the '+' substring. Keyword tokens don't need to preserve their corresponding source substrings, nor do comment tokens unless the language supports "smart" comments, but the substrings associated with most other tokens are crucial for obtaining specific identifiers and values.
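
A hypothetical Python fragment to make the elision concrete (the token types are invented):

# (type, substring) pairs from the lexer; for some token types the
# substring carries no further information and can be dropped.
SUBSTRING_IRRELEVANT = {"PLUS", "LPAREN", "RPAREN", "WHERE"}

def strip_redundant(tokens):
    return [(ttype, None if ttype in SUBSTRING_IRRELEVANT else text)
            for ttype, text in tokens]

print(strip_redundant([("IDENTIFIER", "Digit"), ("PLUS", "+"),
                       ("INTEGER", "1")]))
# [('IDENTIFIER', 'Digit'), ('PLUS', None), ('INTEGER', '1')]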

Language parsers are indeed typically defined in terms of token types rather than substrings (because the lexical parser is defined in terms of substring recognition, and has done that work for us) but that doesn't mean it's not about substrings. It is. It's all about substrings. We simply abstract them away at the appropriate level, but keep them as needed for identifiers, selecting values, etc.

Though some languages do more lexical-parser heavy lifting than others. In a language like FORTH, the global lexical parser (as I recall) effectively identifies only the lexical types 'number' and 'word'. Everything else is up to the individual words to handle.
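
Roughly -- and this is a guess at the shape, not FORTH's actual implementation -- that global lexical pass amounts to little more than:

def forth_lex(source):
    # FORTH-style: tokens are whitespace-separated; each is a number if
    # it parses as one, otherwise a word for the dictionary to handle.
    tokens = []
    for text in source.split():
        try:
            tokens.append(("number", int(text)))
        except ValueError:
            tokens.append(("word", text))
    return tokens

print(forth_lex(": SQUARE DUP * ; 5 SQUARE ."))
# [('word', ':'), ('word', 'SQUARE'), ('word', 'DUP'), ('word', '*'),
#  ('word', ';'), ('number', 5), ('word', 'SQUARE'), ('word', '.')]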

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on March 21, 2020, 12:47 am

[...]

Using SELECTOR here is misleading.

No, using SELECTOR here is correct, because that's what it is and what it does. There is a selection of a value given a token-text string literal representation of it. In TTM terms, that is a selector.

The compiler lexical phase can be arbitrarily complicated, especially if it includes a macro pre-processor and compile time arithmetic. All that is certain is that the output from the lexical phase is a stream of tokens; each token has a type; and some of those types may map directly into the language type system. The lexer may do string-to-integer conversion, but there is no requirement it be the same function or follow the same rules as the SELECTOR visible in the language. Simple example: the compiler intrinsic probably only converts ASCII characters (as per the BNF); the SELECTOR that does it at runtime might well come from a library that supports a full range of Unicode characters.

For preprocessing we might indeed use a separate set of preprocessor types -- particularly if the preprocessor is a distinct sublanguage -- but exactly the same process applies. Lexical types are simply mapped to selectors for the preprocessor types rather than the end language types.

This 'compiler magic', mapping token types into language types, is simply part of compiler writing.

Yes, reverting to "it's compiler magic" is fine if the semantics are not exposed in the end language; it's putting certain operations in a black box.

But it's interesting to look at what happens inside the box.

There are no visible POSSREPs, though -- that's compiler magic, or at least all hidden representation, and thus "zero possreps".

Though you can make THE_CHARACTER(...) available for each type, which returns the canonical string representation of that type and is used for display/output purposes.

Indeed, the canonical representation but usually not the source code that contributed to the original token, especially if there was a pre-processor phase.

In Rel, there is no preprocessor phase and the canonical representation is a factored representation of the source code that could have contributed the original token. This allows round-tripping -- the output literal representation of any value is precisely the literal that can be input to select it.
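
The property is analogous to (though of course not implemented as) Python's repr/eval round trip for simple values:

# Round-tripping: the printed form of a value is itself a literal that
# selects the same value back.
for v in [42, 3.25, "don't", True]:
    literal = repr(v)
    assert eval(literal) == v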

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from AntC on March 20, 2020, 9:34 pm
[...]

Hugh, if you really think your repeated nine lines are unproblematic as code, and your nine relvars distinguished only by name, then we have nothing left to say to each other.

I did sketch how I would code it. But of course I was taking several guesses at what was the 'business domain' and how you'd organised the schema.

Frankly I found your code appalling, as indeed most of the code you bring forward from your pet projects. You just don't understand programming. Showing me some results as if that proved something is no different to an SQLer showing that Nulls and outer joins 'work'.

First, the code is utterly private.  Or was so, until I exposed it in this forum.  In such privacy I feel entitled not to bother about what other people might think of my code.  Personally, I was quite pleased with it.

Secondly, here's antc's suggestion for CasesWithin, which I've just tested to my satisfaction:

with (Digits :=
rel{tup{dig 1}, tup{dig 2}, tup{dig 3}, tup{dig 4}, tup{dig 5}, tup{dig 6}, tup{dig 7}, tup{dig 8}, tup{dig 9}},
CasesWithin := (Digits join Cases) where dig = i or dig = j or dig = k or dig = l
) : CasesWithin

Notice that it inevitably uses tuple literals.  Recall that antc originally "put it to [me]" that I would never have any use for these constructs.

Thirdly, I have no idea how to proceed along the lines of antc's SUMMARIZE suggestion to achieve the desired result.  No doubt it is possible but my solution was easy to work out and write down.

Fourthly, antc's speculation regarding why I don't have a Digits relvar is 100% wrong; and if I did it would be defined as virtual using the coding above.  I question the propriety of writing such speculation and troubling me to counter it, rather than politely asking a question to which I could answer yes or no.  I hope others can understand why antc is making me so angry and I'm sorry for any offence caused.

Hugh

Coauthor of The Third Manifesto and related books.
Quote from Hugh on March 21, 2020, 1:19 pm
[...]

Fourthly, antc's speculation regarding why I don't have a Digits relvar is 100% wrong; and if I did it would be defined as virtual using the coding above.  I question the propriety of writing such speculation and troubling me to counter it, rather than politely asking a question to which I could answer yes or no.  I hope others can understand why antc is making me so angry and I'm sorry for any offence caused.

Hugh

I understand why antc is making you so angry, and no offence has been caused.

I've noticed antc occasionally likes to wind people up.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on March 21, 2020, 2:36 pm
[...]

I understand why antc is making you so angry, and no offence has been caused.

I've noticed antc occasionally likes to wind people up.

AntC thinks that if we're going to critique a DBMS for using nulls, and critique SQL for the convoluted code it needs to handle them (or the back-arsed way you have to express relational DIVIDE, for example), we'd better show we know what is 'good language design' and good coding design. Otherwise we don't have a leg to stand on.

Hugh (I suppose) is entitled to commit whatever horrors he likes in the privacy of his own pet projects. (My own pet projects I tend to use as opportunities for better coding design than whatever mind-numbing 'coding standards' I'm compelled to use at work. Nevertheless I wouldn't volunteer them here.) But as soon as Hugh makes them public here, he should expect evaluation. BTW it was Hugh who introduced the confrontational tone: "How dare you make such an accusation!"

I find Hugh's re-working not much easier on the eye -- certainly it's not "antc's suggestion": I said, and I repeat, Digits is reference data, so must be a relvar.

Addit: To expand on that last comment, and connect it back to the earlier discussion: with a relvar Digits there is a declaration giving a type for attribute x/dig. In the 'data entry' for Digits, yes, do that programmatically rather than by screen entry, and yes, there'll be tuple literals. But it won't be the "bare" TUP{ dig 1 } of Hugh's initial claim: it'll be an assignment, not a query; it'll be embedded in a REL{ } on the rhs of an :=; and the lhs gives a type for the DOMAIN dig.

What I originally "put" to Hugh, Hugh has misrepresented; egregiously so, because my original is still in the thread. I said

you never write bare TUP{ x 1 } in even the most ad hoc of ad hoc queries

 

Quote from AntC on March 21, 2020, 10:49 pm
Quote from Dave Voorhis on March 21, 2020, 2:36 pm

I understand why antc is making you so angry, and no offence has been caused.

I've noticed antc occasionally likes to wind people up.

AntC thinks that if we're going to critique a DBMS for using nulls, and critique SQL for the convoluted code it needs to handle them (or the back-arsed way you have to express relational DIVIDE, for example), we'd better show we know what is 'good language design' and good coding design. Otherwise we don't have a leg to stand on.

The measure of a better language is not that you can write good code in it, as there is some notion of "good code" in every language, no matter how awful the language is considered to be.

Though one measure of a worse language is that you must write good code in it.

A measure of a better language is that you can write any code in it -- good, bad, hasty, beautiful, awful -- and it will work without unpleasant surprises.

Hugh (I suppose) is entitled to commit whatever horrors he likes in the privacy of his own pet projects. (My own pet projects I tend to use as opportunities for better coding design than whatever mind-numbing 'coding standards' I'm compelled to use at work. Nevertheless I wouldn't volunteer them here.) But as soon as Hugh makes them public here, he should expect evaluation.

Why?

As far as I can tell, code style or the elegance of the solution wasn't in question. What was in question was this:

you never write bare TUP{ x 1 } in even the most ad hoc of ad hoc queries

All anyone has to show is that there is a reason to write TUP {x 1} -- or a similar tuple literal, I presume -- to refute the claim.

So Hugh has shown that, in fact, it is not the case that you never write bare tuple literals. Sometimes Hugh does write bare tuple literals.

I've sometimes written bare tuple literals. I don't recall why, off-hand, but I know I have.

In either case, the style of the code we've written is utterly and completely irrelevant.

I'm sure some of my code that used bare tuples was appallingly bad. I'm sure some of it was beautiful. Sometimes my personal projects have a goal of better coding design, and sometimes I just need a result. Sure, they all start out elegantly, but a few tweaks and turns in, I have to decide whether to rewrite or forge ahead. Often, "forge ahead" wins, and I wind up stapling bits together and hammering on scrap planks and using glue and expanding foam to fill gaps and tying it up with twine until the whole mess coughs up a set of numbers.

And that's absolutely fine.

Furthermore, not everyone agrees on what elegant means. There is no objective measure of code beauty. I've seen some freakish abominations that my colleagues thought were wonderful, and vice versa.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org