The Forum for Discussion about The Third Manifesto and Related Matters


Heading inference, not type inference

Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.
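
For instance, the Hs = ... transformations can be computed quite mechanically. A minimal sketch in Haskell (the encoding and names are mine, not Appendix A's):

    import qualified Data.Set as Set
    import Data.Set (Set)

    -- A heading as a set of (attribute name, type name) pairs, in the
    -- spirit of Appendix A's Hs = ... definitions.
    type Heading = Set (String, String)

    -- <REMOVE>: Hs = Ha minus the removed attribute.
    removeH :: String -> Heading -> Heading
    removeH a = Set.filter ((/= a) . fst)

    -- <RENAME>: Hs = Ha with attribute a renamed to b (type unchanged).
    renameH :: String -> String -> Heading -> Heading
    renameH a b = Set.map (\(n, t) -> if n == a then (b, t) else (n, t))

    -- <AND> (natural join): Hs = Ha union Hb. A real checker would also
    -- reject attributes common to both headings with differing types.
    andH :: Heading -> Heading -> Heading
    andH = Set.union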

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators. They are an ideal way to define all relational operators (including assignment). I'm in the process of doing that.

I disagree with the concept of relational constants. These are relational functions, not constants, and they too are best specified in terms of a heading and a (computed) body. I'm doing that too.
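
To give the flavour of what I mean, here is a sketch only (in Haskell for brevity; the encoding is mine): each operator gets a heading transformation plus a set-builder body, in the Appendix A style "Hs = ..., Bs = ...".

    import qualified Data.Map as Map
    import Data.Map (Map)
    import qualified Data.Set as Set
    import Data.Set (Set)

    -- A relation as a heading (attribute names; types elided) plus a body.
    type Tuple = Map String String
    data Relation = Relation { heading :: Set String, body :: Set Tuple }

    -- Projection: Hs = Ha intersect kept;
    -- Bs = { t restricted to kept : t in Ba }.
    project :: Set String -> Relation -> Relation
    project kept (Relation ha ba) =
      Relation (Set.intersection ha kept)
               (Set.map (Map.filterWithKey (\k _ -> Set.member k kept)) ba)

    -- Natural join: Hs = Ha union Hb;
    -- Bs = { ta union tb : ta in Ba, tb in Bb,
    --        ta and tb agree on Ha intersect Hb }.
    join :: Relation -> Relation -> Relation
    join (Relation ha ba) (Relation hb bb) =
      Relation (Set.union ha hb)
               (Set.fromList
                 [ Map.union ta tb
                 | ta <- Set.toList ba
                 , tb <- Set.toList bb
                 , all (\k -> Map.lookup k ta == Map.lookup k tb)
                       (Set.toList (Set.intersection ha hb)) ])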

More soon.

 

Andl - A New Database Language - andl.org
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

They are an ideal way to define all relational operators (including assignment). I'm in the process of doing that.

I disagree with the concept of relational constants. These are relational functions, not constants, and they too are best specified in terms of a heading and a (computed) body. I'm doing that too.

'relational constants' are a reasonable explanatory notion. You haven't defined 'relational functions' so I've no idea what you're trying to say. But anyway functions are usually treated as constants -- except in weird programming languages in which you can destructively assign to function names. (As in BCPL -- but that was widely regarded as a mis-feature.)

If you mean 'algorithmic relations', as per HHT 1975, then please call them 'algorithmic relations'. And note that Hugh didn't 'twig' to the significance of HHT 1975 until around 2014, so that sort of thinking wouldn't have appeared in DTATRM/Appendix A.

Is what you're "doing that too" any more than interpreting HHT 1975's 'effectiveness' into C#?

Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

They are an ideal way to define all relational operators (including assignment). I'm in the process of doing that.

I disagree with the concept of relational constants. These are relational functions, not constants, and they too are best specified in terms of a heading and a (computed) body. I'm doing that too.

'relational constants' are a reasonable explanatory notion. You haven't defined 'relational functions' so I've no idea what you're trying to say. But anyway functions are usually treated as constants -- except in weird programming languages in which you can destructively assign to function names. (As in BCPL -- but that was widely regarded as a mis-feature.)

In maths, a function is a binary mapping from one set of values to another. That perfectly describes the PLUS 'relational constant'.

In computing, PLUS is widely known as a library function, not a constant (the function identifier is constant, but the values are computed on demand).

In HHT, there is no mention of 'relation constants'. They use 'algorithmic relation' to refer to a similar concept, but there are important differences. They do mention a 'procedure' to calculate the SINE, which is getting close.

You are free to use whatever you like, but you should now be in a position to understand what I mean. A function relation is a function (in the computing sense) represented relationally by a heading, allowing it to be used in a relational expression (within certain prescribed limits).
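
In code terms, something like this (a Haskell sketch of the idea only; all names here are illustrative, not from my implementation):

    import qualified Data.Map as Map
    import Data.Map (Map)

    -- A function relation: a heading split into argument attributes and a
    -- result attribute, with the body computed on demand rather than stored.
    data FunRelation = FunRelation
      { argAttrs   :: [String]       -- e.g. ["X", "Y"]
      , resultAttr :: String         -- e.g. "Z"
      , mapping    :: [Int] -> Int   -- the computed mapping
      }

    -- PLUS over heading {X, Y, Z}, with Z computed as X + Y.
    plusFR :: FunRelation
    plusFR = FunRelation ["X", "Y"] "Z" (\[x, y] -> x + y)  -- partial match: sketch only

    -- Used in a join, it works in one direction only: the argument
    -- attributes must be supplied by the other operand.
    extendWith :: FunRelation -> Map String Int -> Maybe (Map String Int)
    extendWith fr t = do
      args <- mapM (`Map.lookup` t) (argAttrs fr)
      pure (Map.insert (resultAttr fr) (mapping fr args) t)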

If you mean 'algorithmic relations', as per HHT 1975, then please call them 'algorithmic relations'. And note that Hugh didn't 'twig' to the significance of HHT 1975 until around 2014, so that sort of thinking wouldn't have appeared in DTATRM/Appendix A.

Is what you're "doing that too" any more than interpreting HHT 1975's 'effectiveness' into C#?

Not at all. They use that term in discussing the 'generation properties' and I'm not interested in that. Theirs is a research paper without practical use; my interest is purely practical.

 

Andl - A New Database Language - andl.org
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

SUMMARIZE and TCLOSE are both "short hand" definitions, i.e., functions defined in terms of more primitive operators. There's no need to formally derive the type formula for SUMMARIZE or TCLOSE when a working compiler will do it for you.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

They are an ideal way to define all relational operators (including assignment). I'm in the process of doing that.

I disagree with the concept of relational constants. These are relational functions, not constants, and they too are best specified in terms of a heading and a (computed) body. I'm doing that too.

'relational constants' are a reasonable explanatory notion. You haven't defined 'relational functions' so I've no idea what you're trying to say. But anyway functions are usually treated as constants -- except in weird programming languages in which you can destructively assign to function names. (As in BCPL -- but that was widely regarded as a mis-feature.)

In maths, a function is a binary mapping from one set of values to another.

That is, a constant mapping. In maths there's a great deal more than scalars and TTM's 'non-scalars' -- for example, functions, quantifiers, the whole of category theory. In particular, I would criticise TTM's type model for not adequately categorising function types, and thereby being far too hand-wavey about generic relational operators.

That perfectly describes the PLUS 'relational constant'.

In computing, PLUS is widely known as a library function, not a constant (the function identifier is constant, but the values are computed on demand).

The library function is a constant, yes, full stop. Values are not computed if you're using it on the rhs of a MATCHING -- they're checked. But anyhow, how a function (constant) behaves wrt its arguments and result doesn't stop it being a constant.

In HHT, there is no mention of 'relation constants'. They use 'algorithmic relation' to refer to a similar concept,

We might be toying with semantics but I'd say: Appendix A's relcons are meeting the same objective as HHT's algorithmic relations. The mechanisms to achieve that objective are markedly different.

but there are important differences. They do mention a 'procedure' to calculate the SINE, which is getting close.

In a procedural language (which is what most of their audience would be thinking about), an algorithm would be expressed as a 'procedure'. No mystery there. They also give hints they're thinking about lambda-defined functions -- maybe even something LISP-like or ISWIM-like [Landin 1966] -- but I guess that would scare their audience.

You are free to use whatever you like, but you should now be in a position to understand what I mean.

Nope no clearer.

A function relation is a function (in the computing sense) represented relationally by a heading, allowing it to be used in a relational expression (within certain prescribed limits).

Nope, doesn't clarify any of the points I made earlier. You've not even taken on board the point about key(s). Note that key(s) are a direct consequence of Functional Dependencies, so called because they reflect possible 'functions' -- just not expressed algorithmically.

If you mean 'algorithmic relations', as per HHT 1975, then please call them 'algorithmic relations'. And note that Hugh didn't 'twig' to the significance of HHT 1975 until around 2014, so that sort of thinking wouldn't have appeared in DTATRM/Appendix A.

Is what you're "doing that too" any more than interpreting HHT 1975's 'effectiveness' into C#?

Not at all. They use that term in discussing the 'generation properties' and I'm not interested in that. Theirs is a research paper without practical use; my interest is purely practical.

Looks ever-so like you are interested in 'generation properties' -- as needed for implementing a function in EXTEND, COMPOSE.

I found the paper eminently practical/implementable for my POC of algorithmic relations in Haskell. Since algorithmic relations don't contain scalar values, I made their headings 'phantom types' (something else TTM doesn't cater for). I made the key(s) -- or actually the corresponding FDs -- also phantom types. I paired each FD with a non-phantom constant, being the lambda-expression implementing the mapping from determining attributes to dependent. Type inference for relational operator invocations over algrels and/or physrels -- which means inferring not only the heading of the result but also the embedded FDs and composing the lambda expressions -- needed exactly HHT's rules for 'effectiveness'. No need to re-invent the wheel.
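
Roughly this shape (a much-reduced sketch, not my actual POC code):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    import GHC.TypeLits (Symbol)

    -- The heading and the FD are phantoms: they exist only at the type
    -- level, for the type checker. The runtime value is just the lambda
    -- implementing the mapping from determining to dependent attributes.
    newtype AlgRel (heading :: [Symbol]) (fd :: ([Symbol], Symbol)) a b
      = AlgRel (a -> b)

    -- PLUS over heading {X, Y, Z}, carrying the FD {X, Y} -> Z.
    plus :: AlgRel '["X", "Y", "Z"] '( '["X", "Y"], "Z") (Int, Int) Int
    plus = AlgRel (\(x, y) -> x + y)

Type inference over relational-operator invocations then composes these phantoms, and that composition is where HHT's effectiveness rules earn their keep.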

Quote from Dave Voorhis on May 25, 2020, 4:10 pm
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

SUMMARIZE and TCLOSE are both "short hand" definitions, i.e., functions defined in terms of more primitive operators. There's no need to formally derive the type formula for SUMMARIZE or TCLOSE when a working compiler will do it for you.

TCLOSE is not a shorthand; alternatively, it is a shorthand for 'while'. In either case it is a primitive operation. There is formal treatment of the other 5 operators, but not TCLOSE. I don't think it's at all easy to complete the set. This is an omission.

SUMMARIZE as a shorthand rests on an aggregation primitive that IMO cannot be reproduced by those 5 operators. To be compliant with TTM OO Pre 6 it would have to make use of an iterated aggregate operator; or it would be one of the operators in TD. No attempt has been made to derive either of these.

I have implemented both of these using function relations.
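
For TCLOSE, the essential ingredient is a fixpoint. A sketch (Haskell, over bare pairs, my encoding):

    import qualified Data.Set as Set
    import Data.Set (Set)

    -- Transitive closure as a least fixpoint: keep composing the relation
    -- with itself and unioning until nothing new appears. It is this
    -- iteration that no finite combination of the 5 operators expresses.
    tclose :: Ord a => Set (a, a) -> Set (a, a)
    tclose r
      | next == r = r
      | otherwise = tclose next
      where
        next = Set.union r
                 (Set.fromList [ (x, z)
                               | (x, y1) <- Set.toList r
                               , (y2, z) <- Set.toList r
                               , y1 == y2 ])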

Andl - A New Database Language - andl.org
Quote from AntC on May 26, 2020, 5:12 am
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

They are an ideal way to define all relational operators (including assignment). I'm in the process of doing that.

I disagree with the concept of relational constants. These are relational functions, not constants, and they too are best specified in terms of a heading and a (computed) body. I'm doing that too.

'relational constants' are a reasonable explanatory notion. You haven't defined 'relational functions' so I've no idea what you're trying to say. But anyway functions are usually treated as constants -- except in weird programming languages in which you can destructively assign to function names. (As in BCPL -- but that was widely regarded as a mis-feature.)

In maths, a function is a binary mapping from one set of values to another.

That is, a constant mapping.

The library function is a constant, yes, full stop. Values are not computed if you're using it on the rhs of a MATCHING -- they're checked. But anyhow, how a function (constant) behaves wrt its arguments and result doesn't stop it being a constant.

You've lost me. The App-A description of a relation constant means (a) something just like a relvar that does not change and (b) something with exactly one tuple for each x,y,z that satisfies the predicate. As such, it has keys for each pair and can be used in reverse (to do division).

A function relation is nothing like that. It has no tuples; it implements a mapping from arguments to return value; it returns a value in tuple form so that it can act as an argument to a relational operator (such as a join), but only in one direction: it cannot be reversed.

In HHT, there is no mention of 'relation constants'. They use 'algorithmic relation' to refer to a similar concept,

We might be toying with semantics but I'd say: Appendix A's relcons are meeting the same objective as HHT's algorithmic relations. The mechanisms to achieve that objective are markedly different.

If you like to look at it that way then function relations are my take on the same concept, with a different mechanism.

A function relation is a function (in the computing sense) represented relationally by a heading, allowing it to be used in a relational expression (within certain prescribed limits).

Nope, doesn't clarify any of the points I made earlier. You've not even taken on board the point about key(s). Note that key(s) are a direct consequence of Functional Dependencies, so called because they reflect possible 'functions' -- just not expressed algorithmically.

Sorry, missed the question. No, keys are not relevant here (and IMO keys are not used for accessing relvars either). This is quite simply a mapping function, arguments to return value, dressed up as a relation by means of a heading. It fulfils the promise of App-A and HHT with something that can be implemented.

If you mean 'algorithmic relations', as per HHT 1975, then please call them 'algorithmic relations'. And note that Hugh didn't 'twig' to the significance of HHT 1975 until around 2014, so that sort of thinking wouldn't have appeared in DTATRM/Appendix A.

Is what you're "doing that too" any more than interpreting HHT 1975's 'effectiveness' into C#?

Not at all. They use that term in discussing the 'generation properties' and I'm not interested in that. Theirs is a research paper without practical use; my interest is purely practical.

Looks ever-so like you are interested in 'generation properties' -- as needed for implementing a function in EXTEND, COMPOSE.

Generation here means taking the return value and generating the arguments that would produce that return. In terms of PLUS, it means joining on Y,Z to derive X. I have no interest in doing that.
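
Concretely, assuming the usual App-A reading of PLUS (heading {X, Y, Z} with Z = X + Y):

    -- Forward use: supply X and Y, compute Z = X + Y.
    forwardZ :: Int -> Int -> Int
    forwardZ x y = x + y

    -- 'Generation': join on Y and Z to derive X, i.e. solve X + Y = Z.
    -- For PLUS the inverse happens to be cheap (X = Z - Y); for an
    -- arbitrary function relation no usable inverse need exist.
    generateX :: Int -> Int -> Int
    generateX y z = z - y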

I found the paper eminently practical/implementable for my POC of algorithmic relations in Haskell. Since algorithmic relations don't contain scalar values, I made their headings 'phantom types' (something else TTM doesn't cater for). I made the key(s) -- or actually the corresponding FDs -- also phantom types. I paired each FD with a non-phantom constant, being the lambda-expression implementing the mapping from determining attributes to dependent. Type inference for relational operator invocations over algrels and/or physrels -- which means inferring not only the heading of the result but also the embedded FDs and composing the lambda expressions -- needed exactly HHT's rules for 'effectiveness'. No need to re-invent the wheel.

Interesting. You're describing something almost identical to my proposal, dressed up in Haskell. My headings are your phantom types. My functions are your lambda expressions. But I didn't pursue 'effectiveness' -- no need.

Andl - A New Database Language - andl.org
Quote from dandl on May 26, 2020, 6:00 am
Quote from Dave Voorhis on May 25, 2020, 4:10 pm
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

SUMMARIZE and TCLOSE are both "short hand" definitions, i.e., functions defined in terms of more primitive operators. There's no need to formally derive the type formula for SUMMARIZE or TCLOSE when a working compiler will do it for you.

TCLOSE is not a shorthand; alternatively, it is a shorthand for 'while'. In either case it is a primitive operation. There is formal treatment of the other 5 operators, but not TCLOSE. I don't think it's at all easy to complete the set. This is an omission.

SUMMARIZE as a shorthand rests on an aggregation primitive that IMO cannot be reproduced by those 5 operators. To be compliant with TTM OO Pre 6 it would have to make use of an iterated aggregate operator; or it would be one of the operators in TD. No attempt has been made to derive either of these.

I have implemented both of these using function relations.

TCLOSE is defined in terms of more primitive operators in Chapter 6 of DTATRM, at the end of the RM Prescription 19 section.

SUMMARIZE is defined in terms of more primitive operators in Appendix A of DTATRM, starting with the heading "Summarization."

The (type) signatures for aggregation operators like COUNT, SUM, AVG, etc. are not explicitly given but should be self-evident.
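
For instance, plausible signatures, sketched in Haskell over a simple encoding (the encoding and names are mine, not DTATRM's):

    import qualified Data.Map as Map
    import Data.Map (Map)
    import qualified Data.Set as Set
    import Data.Set (Set)

    type Tuple = Map String Int
    type Body  = Set Tuple

    count :: Body -> Int
    count = Set.size

    -- SUM and AVG additionally name the attribute to aggregate over.
    sumA :: String -> Body -> Int
    sumA a = sum . map (Map.findWithDefault 0 a) . Set.toList

    -- Partial: undefined on an empty body, as AVG itself is.
    avgA :: String -> Body -> Double
    avgA a b = fromIntegral (sumA a b) / fromIntegral (count b)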

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on May 26, 2020, 9:10 am
Quote from dandl on May 26, 2020, 6:00 am
Quote from Dave Voorhis on May 25, 2020, 4:10 pm
Quote from dandl on May 25, 2020, 2:20 pm
Quote from AntC on May 25, 2020, 9:29 am
Quote from dandl on May 25, 2020, 3:59 am
Quote from Dave Voorhis on May 24, 2020, 9:22 am
Quote from dandl on May 24, 2020, 2:13 am

As far as the point I am making in this thread, a grouped attribute value (such as PQ on p23 of DTATRM) is an argument to a selector operator and since both value and selector specify a heading, those headings must be checked (recursively in this case). But it is not necessary to generate a unique type in order to do that. I'm proposing to distinguish the concepts of (strict) type safety from (rule-based) heading compatibility.

I'm continuing to not see what's at issue here. "Heading inference, not type inference". This seems like a distinction without a difference. If the compiler is inferring the Heading for all intermediate sub-terms within an expression, what is it not doing that means it's not inferring the type?

In a sense I agree. My proposal is to consider separately the conventional type checking used for the scalar types from the rather unconventional heading inference of the RA operators (which, BTW are never spelled out in TTM or DTATRM, except by reference to TD).

Heading transformations are spelled out in Appendix A of DTATRM, starting with "Now we can define the operators per se."

<NOT>, <REMOVE>, <RENAME>, <AND> and <OR> are formally defined, each explicitly specifying a heading transformation in the form Hs = ...

Then the Tutorial D relational operators are defined in terms of the Appendix A relational algebra, so the semantics of heading transformations are clear.

You're right of course. Obviously I was referring to the main text of DTATRM, but that omission is indeed addressed in App-A. I find it confusing that its main purpose is to define a 'new RA', but its greatest value is these careful definitions of familiar operators.

<OR>, <NOT> are very far from familiar operators, as at the time of writing DTATRM.

It's disappointing that the heading and body definitions are only provided for the 5 proposed new operators.

The operators are claimed to be (more than) relationally complete. The translation from Tutorial D to the core of operators is defined in Appendix A. I would have thought it an easy exercise for the reader to give set-builder specs for any derived operator.

These goals, while of some interest in their own right, are irrelevant to my current pursuit. And no, it's not trivial. You try and do that, particularly for SUMMARIZE and TCLOSE.

SUMMARIZE and TCLOSE are both "short hand" definitions, i.e., functions defined in terms of more primitive operators. There's no need to formally derive the type formula for SUMMARIZE or TCLOSE when a working compiler will do it for you.

TCLOSE is not a shorthand; alternatively, it is a shorthand for 'while'. In either case it is a primitive operation. There is formal treatment of the other 5 operators, but not TCLOSE. I don't think it's at all easy to complete the set. This is an omission.

SUMMARIZE as a shorthand rests on an aggregation primitive that IMO cannot be reproduced by those 5 operators. To be compliant with TTM OO Pre 6 it would have to make use of an iterated aggregate operator; or it would be one of the operators in TD. No attempt has been made to derive either of these.

I have implemented both of these using function relations.

TCLOSE is defined in terms of more primitive operators in Chapter 6 of DTATRM, at the end of the RM Prescription 19 section.

No, that's wrong. The definition of TCLOSE in DTATRM p184 is written as a recursive operator in TD. Recursion is a primitive operation not available from any combination of relational operators. App-A does not acknowledge this. The heading and body in set-builder notation should have been included.
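
To sketch what I mean, for a binary relation a with heading {X T, Y T} (my wording, not DTATRM's):

    Hs = Ha
    Bs = the smallest superset of Ba such that whenever {X x, Y y} and
         {X y, Y z} are both in Bs, {X x, Y z} is also in Bs

That 'smallest superset closed under composition' clause is precisely the recursion the 5 operators lack.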

SUMMARIZE is defined in terms of more primitive operators in Appendix A of DTATRM, starting with the heading "Summarization."

The (type) signatures for aggregation operators like COUNT, SUM, AVG, etc. are not explicitly given but should be self-evident.

App-A proposes that all such operators be replaced by relcons. I can see how to construct the relcon PLUS (leaving aside the issue that it is self-evidently infinite), but I do not see how to construct relcons to fill the roles of COUNT or SUM. App-A does not address this. Perhaps you could show me.

Andl - A New Database Language - andl.org