# Tuple visibility through a view

Quote from p c on October 25, 2019, 12:56 am

So you say that the use of that truth table is not obvious. I’ll have to take your word for that, because I would have thought it obvious that a truth table represents propositional logic. I didn’t form the table very well for my purpose, and I admit I got confused myself when trying to explain it further, so I’ll do something simpler that doesn’t require you to look at truth tables, with a little more detail about usage as opposed to purpose. At the end I’ll include a few argument results.

Codd described the qualities of “a universal data sub-language based on an applied predicate calculus.” That should be enough to conclude that plain English can’t prove dbms correctness. The seemingly endless clarifications and limitations the 1000+ page SQL spec is subject to, plus fifteen years of inconclusive online misunderstandings here in plain English about TTM, would seem to support that conclusion. TTM is more carefully thought out and written than the SQL spec, and admirably succinct, but it doesn’t address all logically possible updates. Nor did Codd 1970, but he’s the obvious relational starting point.

I realize that modern software development philosophy is “make it bigger”, but Codd didn’t intend for a dbms to implement predicate logic; he just wanted a logical application, a limited, specialized application to support a data language as opposed to a host language. No doubt Datalog could be used to “make it bigger”.

By logically possible, I mean logically valid, in other words logically unambiguous, updates. Logical implication alone doesn’t guarantee non-ambiguity. With a few minutes’ thought it should be obvious that predicate calculus proofs aren’t needed, because they can be replaced with elementary propositional logic and basic set theory equivalents once several easy simplifications are applied to an argument.

Such a proof has the advantage that it can also serve as a simulation validation of actual dbms operation, which could have advantages that aren’t discussed here and are far removed from the mundane interests of this group.

Whether such a proof can be improved, e.g., simplified, to handle more data possibilities than I mention by replacing basic sets with programming type systems is up for grabs as far as I know, but I won’t try. Codd assumed domains, I assume partly because of the importance of functions, but that doesn’t matter at the moment. An obvious simplification for definition purposes excludes type questions when an argument needs only identical domains, so that when a single domain is assumed, mentioning it in a formal argument is logically redundant.

It’s astounding to read “...The structure, types and constraints of the database should be sufficient to ensure that the data in it is always valid, and thus the specific values should be irrelevant”. In other words, this is saying that specific answers don’t matter for deciding the validity of updates. As executive management bumpf in the commercial world it might pass scrutiny, but it doesn’t tell a dbms developer or data designer how to implement a dbms or data design. Executives might not ask “does valid mean logically valid?”

The first simplification assumes a tautology due to the definition of natural join: A Join B = (A{common domains} Join B{common domains}) Join A Join B. Since the common attributes in this Invoices example, as I apply them below, use identical domains, domain name(s) can be dropped when an argument avoids expressing the second and third joins. This allows a propositional argument to use only the projections and postpone the second and third joins.

Another obvious simplification eliminates types and constraints from that quote. Assuming “type” means a TTM type, it doesn’t need to matter in a logical update argument. Secondly, since all constraints are equivalent to update constraints, they only need to be reflected in a logical update argument. That argument is what determines validity.

After removing types and constraints from the quote, a logical structure is left. It is an arrangement of relations that are subsets of cartesian products of domains. The minimal arrangement, in other words the actual logical-only structure without external semantics other than the explicit assumptions, is more than those subsets, because it logically connects them based on identical domains and a minimal number of logical connectives that correspond, under a certain logical interpretation, to set operators such as complement, union, difference and intersection. I assume those operators correspond with combinations of the propositional connectives negation, disjunction and conjunction.

The structure that is left is at once an algebraic expression and an argument that is expressed solely in terms of propositional logic. A dbms language could reflect many such logical structures, either by associating one structure with one type of relation, or one structure with one operator, or just by associating each logical connective with one operator. In any of these ways a logical argument can be understood as a simulation of actual dbms behaviour.

Apparently it’s either not obvious or not understood how I’ve been applying truth tables as a kind of simulation of arguments and dbms behaviour, so I’ll try to explain even further. All along I’ve assumed that, given a very few general and constant simplifications, a formal spelling-out of my interpretation of Codd 1970 is possible without predicate logic notation, allowing truth tables that reflect a data design, plus propositional logic and logical validity checks, to serve as a formal equivalent of predicate manipulations or endless plain English. Such tables are logical structures that theoretically could reflect a very extensive database, but I think representing smaller database fragments, when they can be logically isolated, goes a long way, maybe as long a way as logically necessary. There is nothing new in any of this; it’s just an application of widely known elementary techniques.

For a logician, truth tables might not be needed; propositional arguments could suffice. I use them solely because I’m not confident enough in my logical ability to do without them, so I use them to double-check argument validity. I don’t write them out by hand, because that is too error-prone and needs more paper than I have; instead I use a tiny generator (the Android app Logic++; it has its flaws, but for me it’s better than the others), so I only need to type in propositional expressions.
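The Logic++ app’s internals aren’t documented here, but the check it performs — enumerate every truth assignment and look for a row where all premises are true and the conclusion false — can be sketched in a few lines of Python (the `valid` helper and its signature are my own invention, not the app’s):

```python
from itertools import product

def valid(premises, conclusion, letters):
    """An argument is valid iff no truth assignment makes every
    premise true while making the conclusion false."""
    for values in product([False, True], repeat=len(letters)):
        env = dict(zip(letters, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample row
    return True

imp = lambda p, q: (not p) or q  # the generator's '>' read as material implication

# b>a / -a > -a  -- the subset premise with a tautologous conclusion
print(valid([lambda e: imp(e['b'], e['a'])],
            lambda e: imp(not e['a'], not e['a']),
            ['a', 'b']))  # True
```

The same helper rejects arguments whose conclusion can fail in some row of the table, which is all the generator’s validity verdicts amount to.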

To reinforce the above a little, the main simplifications are:

..1 Substitute projections of relation values on common attributes in order to decide values of joins. This is logically possible when A (Natural) Join B is rewritten with three joins because of the tautology:

A Join B = (A{common attributes} Join B{common attributes}) Join A Join B.

A simulation used as an update argument can postpone the second and third joins, simulating the dbms, provided the argument assumes that A and B are fixed values and not variables. They and their projections remain fixed as far as the argument is concerned. If necessary, a conclusion in terms of real values can be determined after the argument by applying the second and third joins. Beyond a paper argument, this also applies to a dbms because, operating on extensions, it always has A and B available when it applies an update argument.
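The three-join rewriting in ..1 can be sanity-checked mechanically. Here is a small Python sketch treating relations as lists of dicts; the attribute names (`inv`, `cust`, `line`) and sample values are made up for illustration:

```python
def njoin(r, s):
    """Natural join of two relations given as lists of dicts."""
    out = []
    for t in r:
        for u in s:
            shared = t.keys() & u.keys()
            if all(t[k] == u[k] for k in shared):
                merged = {**t, **u}
                if merged not in out:
                    out.append(merged)
    return out

def project(r, attrs):
    out = []
    for t in r:
        p = {k: t[k] for k in attrs}
        if p not in out:
            out.append(p)
    return out

def same(r, s):
    key = lambda t: tuple(sorted(t.items()))
    return sorted(map(key, r)) == sorted(map(key, s))

A = [{'inv': 1, 'cust': 'x'}, {'inv': 2, 'cust': 'y'}]
B = [{'inv': 1, 'line': 10}, {'inv': 1, 'line': 20}, {'inv': 3, 'line': 30}]
common = ['inv']

lhs = njoin(A, B)
rhs = njoin(njoin(njoin(project(A, common), project(B, common)), A), B)
print(same(lhs, rhs))  # True
```

This only exercises one sample pair of relations, of course; the identity itself follows from the definition of natural join.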

..2 The common attribute of the three relations declared by the Invoices deletion scenarios is Invoice Number. Let the symbols a, b and c stand for the respective projections InvoiceHeader{Invoice Number}, InvoiceDetail{Invoice Number} and their join Invoices{Invoice Number}, i.e., c = a Join b. In Codd’s terms, a, b and c can be understood in any of several ways besides wffs that assume the common domain of Invoice Number, so type theory is irrelevant to a logical argument. In dbms terms they can also be understood as standing for relation values. In algebraic terms they can be understood as boolean values or set values. If you insist, in TTM terms they could even be understood as relvars, provided it’s remembered that within this kind of simulation their values don’t vary; they are fixed as far as this kind of simulation is concerned.

..3 When the common attributes mention primary keys, such a simulation doesn’t need to embody those explicitly, because they are inherent in the substitute projections. In the Invoices scenario, the b and c projections don’t mention the whole key of InvoiceDetail and Invoices, but those keys will be embodied by the second and third post-argument joins.

..4 It’s very important here to note that simple doesn’t mean simplistic. Many casual database arguments in the guise of ordinary discussions are simplistic because they exclude relevant information. When simulating a join, for example, an argument must mention, in one or more ways, the whole database fragment that its value affects, or a sufficient representative. For example, if several foreign key constraints in a database mention one primary key and several relations, only one of those relations needs to be reflected in the argument.

..5 An overall effect of these simplifications is that the logical structure of the simulation, and of the argument if any, is not affected by whether a relation is base or derived. This is because only the logical relationships are specified, and it means, for example, that alternate physical designs could be checked for equivalence against given relationships.

..6 The proposition terms and arguments can be read in several ways, e.g., ‘-a’ has the simultaneous meanings of ‘not a’ as a boolean and ‘a is empty’ as a set; ‘b-a’ means ‘b and not a’ as well as ‘b minus a’.
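The dual boolean/set reading in ..6 can be made concrete by fixing a tuple t and reading each letter as the proposition “t is a member of that set”; then set difference and ‘and not’ coincide pointwise. A quick Python check (the sample sets are invented):

```python
a_set, b_set = {1, 2}, {2, 3}
for t in a_set | b_set | {9}:
    in_a, in_b = t in a_set, t in b_set
    # 'b - a' read as set difference agrees with 'b and not a'
    # read as a proposition about the fixed tuple t
    assert (t in (b_set - a_set)) == (in_b and not in_a)
print("pointwise reading agrees")
```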

Following are logical expressions for several arguments/simulations based on the declarations in the Invoices database fragment.

( I have to mention the instructions for the generator I’m using: “Language: propositional letters (A, B, .., p, q, …), connectives (and ‘&’, or ‘+’, conditional ‘>’, biconditional ‘=’, not ‘-’) and the parentheses. … To perform an analysis on an inference, type in a list of premises separated by commas, then a ‘/’ followed by the conclusion.” Personally, I don’t bother with the commas; I just write my desired conclusion as the rightmost wff, surrounded by parentheses if necessary. Although not stated, the generator appears to follow customary precedence, ordering -, &, + and >, = from highest to lowest. For readability, I also use more parentheses than are strictly necessary.)

A simulation or argument using this particular generator assumes that x>y is a logical implication akin to x->y, or “x implies y”, and in set terms it means that x represents a subset of y, in the sense that if t is a member of x then t is a member of y. Depending on other wffs in an argument, it may also mean that x is an empty subset of y.
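That subset reading of x>y is easy to spot-check: x ⊆ y exactly when, for every t in some universe, t ∈ x implies t ∈ y. A Python sketch with invented sample sets:

```python
x, y = {1, 2}, {1, 2, 3}
universe = x | y | {4}
# membership-wise implication: (t in x) > (t in y) for every t
subset_as_implication = all((t not in x) or (t in y) for t in universe)
print(subset_as_implication, x <= y)  # True True
```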

An elementary simulation expression for the a,b fragment of the Invoices fragment could be a > a + b + a & b. It is not very useful, because the > implication is both premise and conclusion and is always true. It’s a tautology, just like de Morgan’s laws. The truth table does show that it is logically valid, because there is only one conclusion regardless of the truth of the a,b values.

A more accurate simulation connects a and b in the form of a different conclusion. Since there is a constraint on the Invoice Number projection in the Invoices database fragment such that b is a subset of a, the connection can be written as the implication b>a. This is not only useful for a simulation but necessary, because all constraints are effectively update constraints. There are two possible conclusions from the implication alone, so it is not by itself a logically valid argument.

Now simulate the a,b fragment, including the subset implication, with deletion from a, simulating that a is to become empty, with a conjunction expression having an implication: (b>a) & (-a > -a ). The intended conclusion is that when I intend a to be made empty, it can be concluded that a will always become empty. When I tell this generator to treat the second conjunct as an argument conclusion, as in b>a / -a > -a, it tells me the simulation is logically valid, which is another way of saying it’s not ambiguous. But I could have concluded the same from a truth table for (b>a) > (-a > -a ) (two implications, not one), because the conclusion is a tautology. Likewise (b>a) & (x=a) > (-a > -x) is valid, but (b>a) & (-a > -b) is not valid, because a truth table shows that its conjunction is not true unless b is empty, whereas (b>a) > (-a > -b) is valid because it’s always true.
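These claims are easy to brute-force over the four (a,b) rows. A Python sketch, reading ‘>’ as material implication:

```python
from itertools import product

imp = lambda p, q: (not p) or q  # '>' as material implication

rows = list(product([False, True], repeat=2))
valid_1 = all(imp(imp(b, a), imp(not a, not a)) for a, b in rows)  # (b>a) > (-a > -a)
valid_2 = all(imp(imp(b, a), imp(not a, not b)) for a, b in rows)  # (b>a) > (-a > -b)
taut_3  = all(imp(b, a) and imp(not a, not b) for a, b in rows)    # (b>a) & (-a > -b)
print(valid_1, valid_2, taut_3)  # True True False
```

The first two are tautologies (the second is just the contrapositive); the bare conjunction fails in the row where b is true and a false.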

Now introduce c in a simulation that uses the fragment a,b,c and simulates a multiple assignment, as if a and b are base relvars: ‘delete b from a&b giving empty a, empty b and empty a&b’: a > b & c = a&b / - a > - a & -b > -b & -(a&b). The generator says this deletion simulation is a valid argument, but notice that the conclusion doesn’t actually mention the join by the name c. This is because, unlike D-language relvars, propositional logic doesn’t have variables, that lack being one of the reasons propositional logic is logically complete, unlike D-languages. The last time I looked, Appendix A defined all deletions in terms of a blunt set-minus operator.

(I have to wonder if this is part of what HW Buff meant in a letter Codd mentioned but of which I’ve never seen a copy.)

So sometimes in a logically complete language relvars need to be simulated. One way to do this is to replace the argument with a > b & c = a&b / - a > - a & -b > -b > -(a & b), which is a logically valid simulation for reflecting base deletion, with an additional implication > -(a&b) using the same wff a&b that’s equivalent to a join expression such as a Join b.

But now, when I revise the simulation by replacing the rightmost term of the argument conclusion with -c, as in a > b & c = a&b / - a > - a & -b > -b > -c, the argument is logically invalid! This means that any relational language that defines deletion according to this particular argument form is not capable of reflecting a change to the join as a result of base deletion.

The first obvious change to simulate a deletion from c is to use Codd’s natural join definition instead of Appendix A’s/TTM’s. This is because values matter, values matter, values matter. They matter in this scenario because TD allows an a operand that has tuples that don’t match tuples in b. Add a couple of conjuncts, (a=c) and (b=c), to the premises to reflect the fact that the scenario involves equal projections. This adds Codd’s natural join conditions. In this case I’ll just add one. Secondly, simulate the choosing of a subset of c in the conclusion by simulating the joining of it with one of the projections, as in the wff (c & b). This gives (b>a) & (a=c) & (c = a&b) / -(c & b) > -a & - b, which the generator says is valid. (The worth of a generator really shows as the arguments get longer; so do the validity tableaux.)

This argument’s premises also show that the conclusion -(c & b) > -a & - b & -c is valid too. In effect, a projection has been deleted, which begs the question: who needs base relvars?

This argument handles the join deletion case whether c has just one tuple or many.
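This argument — premises (b>a), (a=c), (c = a&b), conclusion -(c & b) > -a & -b & -c — can be brute-forced over the eight (a,b,c) rows, again reading ‘>’ as material implication and ‘=’ as the biconditional. A Python sketch:

```python
from itertools import product

imp = lambda p, q: (not p) or q   # '>' as material implication
iff = lambda p, q: p == q         # '=' as the biconditional

def check():
    for a, b, c in product([False, True], repeat=3):
        premises = imp(b, a) and iff(a, c) and iff(c, a and b)
        conclusion = imp(not (c and b), (not a) and (not b) and (not c))
        if premises and not conclusion:
            return False  # counterexample row
    return True

print(check())  # True
```

Only two rows satisfy the premises (all three true, or all three false), and the conclusion holds in both.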

The first Invoices deletion scenario is a round-about description of another scenario that is much more frequent in practice than expecting union workers to continue their session through lunch, namely foreign key references from Invoice Number to relations that were left out of the Invoices schema for so-called “simplicity”. It’s very likely that most applications won’t want a{Invoice Number} nor a{all attributes} to be deleted most of the time. To handle this, or indeed the equivalent SQL CASCADE keyword, the argument developed so far could be a specific simulation defined with a view of A Join B restricted to specific users or user groups.

For everyday join deletions, an argument that introduces union in the form of ‘+’, such as the following, could serve as the common join deletion proof and would prevent deletions from relation a:

(b>a) & (a=c) & (d=a&x + a&y) & (c = b&d&y) & (x > y) & (d>y) & -(d&x = d&y ) / -(c & b&y) > a&x & -b & -c &-(-a).

It is also logically valid (and the truth tableau is much longer). It introduces a fourth relation, d. Note that d, as used by a language, needn’t be a declared relation; it could be a literal, its name not known outside the argument. Or it needn’t be mentioned at all, because there is really no need to implement the argument, in the sense that it is already implemented; it can be used just as a definition of a language operator. I won’t bother explaining it in detail, but a clue is that it introduces set union, aka the ‘+’ connective, for the first time in this post. The same approach should also apply to using another simulation proof, rather than the above deletion simulation, for defining and proving insertions.
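For what it’s worth, the six-letter argument above can be brute-forced the same way. A Python sketch, following the stated precedence (- over & over + over > and =); it also counts how many of the 64 rows satisfy the premises:

```python
from itertools import product

imp = lambda p, q: (not p) or q   # '>' as material implication
iff = lambda p, q: p == q         # '=' as the biconditional

models, counterexamples = 0, 0
for a, b, c, d, x, y in product([False, True], repeat=6):
    premises = (imp(b, a) and iff(a, c)
                and iff(d, (a and x) or (a and y))   # d = a&x + a&y
                and iff(c, b and d and y)            # c = b&d&y
                and imp(x, y) and imp(d, y)
                and not iff(d and x, d and y))       # -(d&x = d&y)
    conclusion = imp(not (c and b and y),
                     (a and x) and (not b) and (not c) and (not (not a)))
    if premises:
        models += 1
        if not conclusion:
            counterexamples += 1
print(models, counterexamples)  # 1 0
```

No counterexample rows exist, so the argument is valid; note that these premises are satisfied by exactly one assignment, in which the conclusion’s antecedent is false.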

So you say that the use of that truth table is not obvious. I’ll have to take your word for that because I would have thought it obvious that truth table represents propositional logic. I didn’t form the table very well for my purpose and I admit I got confused myself when trying to explain it further, so I’ll do something simpler so you don’t have to look at truth tables and with a little more detail about usage as opposed to purpose. At the end I’ll include a few argument results.

Codd described the qualities of “a universal data sub-language based on an applied predicate calculus.” That should be enough to conclude that plain English can’t prove dbms correctness. The seemingly endless clarifications and limitations the 1000+ page SQL spec is subject to plus fifteen years of inconclusive online discussions misunderstandings here in plain English about TTM would seem to support that conclusion.. TTM is more carefully thought out and written than the SQL spec and admirably succinct but it doesn’t address all logically possible updates. Nor did Codd 1970 but he’s the obvious relational starting point.

I realize that modern software development philosophy is “make it bigger” but Codd didn’t intend for a dbms to implement predicate logic he just wanted a logical application, a limited specialize application to support a data language as opposed to a host language. No doubt Datalog could be used to “make it bigger”.

By logically possible, I mean logically valid, in other words logically unambiguous updates. Logical implication alone doesn’t guarantee non-ambiguity. With a few minutes’ thought it should be obvious that predicate calculus proofs aren’t needed because they can be replaced with elementary propositional logic and basic set theory equivalents when several easy simplifications are applied to an argument.

Such a proof has the advantage that it can also serve as a simulation validation of actual dbms operation which could have advantages that aren’t discussed here and are far away from the mundane interests of this group.

Whether such a proof can be improved, eg., simplified, to handle more data possibilities than I mention by replacing basic sets with programming type systems is up for grabs as far as I know but I won’t try. Codd assumed domains, I assume partly because of the importance of functions but that doesn’t matter at the moment. An obvious simplification for definition purposes excludes type questions when an argument proof needs only identical domains so that when a single domain is assumed, mentioning it in a formal argument is logically redundant.

It’s astounding to read “...The structure, types and constraints of the database should be sufficient to ensure that the data in it is always valid, and thus the specific values should be irrelevant”. In other words, this is saying that specific answers don’t matter for deciding validity of updates. As executive management bumpf in the commercial world, it might pass scrutiny but it doesn’t tell a dbms developer or data designer how to implement a dbms or data design. Executives might not ask “does valid mean logically valid?”

The first simplification assumes a tautology due to the definition of natural join: A Join B = (A{common domains} Join B{common domains}) Join A Join B. Since the common attributes in this Invoices example as I apply them below use identical domains, domain name(s) can be dropped when an argument avoids expressing the second and third joins. This allows a propositional argument to use only the projections and postpone the second and third joins.

Another obvious simplification eliminates types and constraints from that quote. Assuming type means a TTM-type it doesn’t need to matter in a logical update argument. Secondly, since all constraints are equivalent to update constraints they only need to be reflected in a logical update argument..That argument is what determines validity.

After removing types and constraints from the quote, a logical structure is left. It is an arrangement of relations that are subsets of cartesian products of domains. The minimal arrangement, in other words the actual logical-only structure without external semantics other than the explicit assumptions, This structure is more than those subsets because it logically connects them based on identical domains and a minimal number of logical connectives that correspond in a certain logical interpretation of set operators such as complement, union, difference and intersection. I assume those operators correspond with combinations of proposition connectives negation,disjunction and conjunction.

The structure that is left is at once an algebraic expression and an argument that is expressed solely in terms of propositional logic A dbms language could reflect many such logical structures, either by associating one structure to one type of relation or one structure to one operator or just by associating each logical connective to one operator. In any of these ways a logical argument can be understood as a simulation of actual dbms behaviour.

Apparently it’s either not obvious or not understood how I’ve been applying truth tables as a kind of simulation of arguments and dbms behaviour, so I’ll try to explain even further. All along, I’ve assumed that given a very few general and constant simplifications, a formal spelling out of my interpretation of Codd 1970 is possible without predicate logic notation, allowing truth tables that reflect a data design and propositional logic plus logical validity checks to serve as a formal equivalent of predicate manipulations or endless plain English. Such tables are logical structures that theoretically could reflect a very extensive database but I think representing smaller database fragments when they can be logically isolated goes a long way, maybe as long a way as logically necessary. There is nothing new in any of this, it’s just an application of widely known elementary techniques.

For a logician, truth tables might not be needed, propositional arguments could suffice. I happen to use them solely because I’m not confident enough in my logical ability to do that so I use them to double-check argument validity. I don’t write them out by hand because that is too error-prone and needs more paper than I have, instead I use a tiny generator (Android app Logic++, it has its flaws but for me it’s better than the others) so I only need to type in propositional expessions.

To reinforce the above a little, the main simplifications are:

..1 Substitute projections of relation values on common attributes in order to decide values of joins. This is logically possible when A (Natural)Join B is rewritten with three joins because of the tautology::

A Join B = (A{common attributes} Join B{common attributes}) Join A Join B.

A simulation used as an update argument can postpone the second and third joins, simulating the dbms, provided the argument assumes that A and B are fixed values and not variables. They and their projections remain fixed as far as the argument is concerned. If necessary a conclusion in terms of real values can be determined after the argument by applying the second and third joins. Beyond a paper argument, this also applies to a dbms because operating on extensions it always has A and B available when it applies an update argument.

..2 The common attribute of the three relations declared by the Invoices deletion scenarios is Invoice Number. Let the symbols a, b and c stand for the respective projections InvoiceHeader{Invoice Number}, InvoiceDetail{Invoice Number} and their join Invoices{Invoice Number}, ie., c = a Join b.. In Codd’s terms, a, b and c can be understood in any of several ways besides wff’s that assume the common domain of Invoice Number so type theory is irrelevant to a logical argument. In dbms terms they can also be understood as standing for relation values. In algebraic terms they can be understood as boolean values or set values. If you insist, in TTM terms they could even be understood as relvars provided that it’s remembered that within this kind of simulation their values don’t vary, are fixed as far as this kind of simulation is concerned..

..3 When the common attributes mention primary keys, such a simulation doesn’t need to embody those explicitly because they are inherent in the substitute projections. In the Invoices scenario, the b and c projections don’t mention the whole key of InvoiceDetail and Invoices but they will be embodied by the second and third post-argument joins.

..4 It’s very important here to note that simple doesn’t mean simplistic. Many casual database arguments in the guise of ordinary discussions are simplistic because they exclude relevant information. When simulating a join for example, an argument must mention in one or more ways, the whole database fragment that its value affects or a sufficient representative. For example if several foreign key constraints in a database mention one primary key and several relations, only one of those relations needs to be reflected in the argument.

..5 An overall effect of these simplifications is that the logical structure of the simulation and argument if any is that it they are not affected by whether a relation is base or derived. This is because only the logical relationships are specified and it means for example that alternate physical designs could be checked for equivalence against given relationships.

..6 The proposition terms and arguments can be read n several ways, eg., ‘-a’ has the simultaneus meanngs of ‘negative a boolean’ and ‘empty a set’, ‘b-a’ means ‘b and not a’ as well as ‘b minus a’.

Following are logical expressions for several arguments/simulations based on the declarations in the Invoices database fragment.

( I have to mention the instructions for the generator i’m using: “Language propositional letters (A,B, .., p,q, …,), connectives (and ‘&’, or ‘+, conditional ‘>’, biconditional ‘=’, not ‘-’) and the parentheses. …… To perform an analysis on an inference, type in a list of premises separated by commas, then a ‘/’ following by the conclusion.” Personally, I don’t bother with the commas, I just write my desired conclusion as the rightmost wff surrounded by parentheses if necessary. Although not stated, the generator appears to follow customary precedence, ordering -, &,+ and >,= from highest to lowest. For readability, I also use more parentheses than are strictly necessary.)

A simulation or argument using this particular generator assumes that x>y is a logical implication akin to x->y or x implies y and in set terms it means that x represents a subset of y in the sense that if t is a member of x, then t is a member of y. Depending on other wffs in an argument it may also mean that x is an empty subset of y.

An elementary simulation expression for the a,b fragment of the Invoices fragment could be a > a + b + a & b. It is not very useful because the > implication is both premise and conclusion and is always true. It’s a tautology just like de Morgan’s laws. The truth table does show that it is logically valid because there is only one conclusion regardless of the truth of the a,b values.

A more accurate simulation connects a and b in the form of a different conclusion. Since there is a constraint on the Invoice Number projection in the Invoices database fragment such that b is a subset of a, the connection can be written as implication b>a. This is not only needed for a simulation but necessary too because all constraints are effectively update constraints. There are two possible conclusions from the implication so it’s not a logically valid argument.

Now simulate the a,b fragment including the subset implication with deletion from a simulating that a is to become empty with a conjunction expression having an implication: (b>a) & (-a > -a ). The intended conclusion is that when I intend a to be made empty, it can be concluded that a will always become empty. When I tell this generator to treat the second conjunct as an argument conclusion as in b>a / -a > -a it tells me the simulation is logically valid which is another way of saying it’s not ambiguous. But I could have told the same from a truth table for (b>a) > (-a > -a ) (two implications, not one) because the conclusion is a tautology. Likewise (b>a) & (x=a) > (-a > -x) is valid but (b>a) & (-a > -b) is not valid because a truth table shows that its conjunction is not true unless b is empty but (b>a) > (-a > -b) is valid because it’s always true.

Now introduce c in a simulation that uses the fragment a,b,c and simulates a multiple assignment as if a and b are base relvars ‘delete b from a&b giving empty a, empty b and empty a&b ’: a > b & c = a&b / - a > - a & -b > -b & -(a&b). The generator says this deletion simulation is a valid argument, but notice that the conclusion doesn’t actually mention the join by the name c. This is because unlke D-language relvars, prop logic doesn’t have variables, that lack being one of the reasons that makes prop logic logically complete, unlike D-languages. The last time I looked, Appendix A defined all deletions in terms of a blunt set minus operator.

(I have to wonder if this is part of what HW Buff meant n a letter Codd mentioned but which I’ve never seen a copy of.)

So sometimes in a logically complete language relvars need to be simulated. One way to do this is to replace the argument with a > b & c = a&b / - a > - a & -b > -b > -(a & b) which is a logically valid simulation for reflecting base deletion with an additional implication > -(a&b) using the same wff a&b that’s equivalent to a join expression such as a join b..

But now when I revise the simulation replacing the rightmost term of the argument conclusion by -c, as in: a > b & c = a&b / - a > - a & -b > -b > -c, this argument is logically invalid! This means that any relational language that defines deletion according to this particular argument form is not capable of reflecting a change to the join as a result of base deletion.

The first obvious change to simulate a deletion from c is to use Codd’s natural join defnition instead of Appendex A’s/TTM’s.. This is because values matter, values matter, values matter. They matter in this scenario because TD allows an a operand that has tuples that don’t match tuples in b. Add a couple of conjuncts, (a=c),(b=c) to the premises to reflect the fact that the scenario involves equal projections. This adds Codd’s natural join conditions. In this case I’ll just add one. Secondly, simulate the choosing of a subset of c in the conclusion by simulating the joining of it with one of the projections,as in the wff (c & b ). This gives (b>a) & (a=c) & (c = a&b) / -(c & b) > -a & - b which the generator says is valid. (The worth of a generator really shows as the arguments get longer, so do the validity tableaux.)

This arguments’s conclusion also shows that the conclusion -(c & b) > -a & - b & -c is valid too. In effect, a projection has been deleted which begs the question “who needs base relvars?”

This argument handles the join deletion case whether c has just one tuple or many.
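As a sanity check, that join-deletion argument can be run through the same kind of exhaustive truth-table test. This is my own explicit parenthesization of the thread's notation, with '>' as material implication and '=' as the biconditional:

```python
# Exhaustive truth-table check of the join-deletion argument:
# premises b>a, a=c, c = a&b; conclusion -(c & b) > -a & -b.
from itertools import product

def imp(p, q): return (not p) or q
def iff(p, q): return p == q

def is_valid(names, premises, conclusion):
    envs = (dict(zip(names, v))
            for v in product([False, True], repeat=len(names)))
    return all(conclusion(e) for e in envs if all(p(e) for p in premises))

names = ["a", "b", "c"]
premises = [lambda e: imp(e["b"], e["a"]),             # b > a
            lambda e: iff(e["a"], e["c"]),             # a = c
            lambda e: iff(e["c"], e["a"] and e["b"])]  # c = a&b

# -(c & b) > -a & -b
print(is_valid(names, premises,
               lambda e: imp(not (e["c"] and e["b"]),
                             (not e["a"]) and (not e["b"]))))  # True

# The strengthened conclusion -(c & b) > -a & -b & -c is valid too.
print(is_valid(names, premises,
               lambda e: imp(not (e["c"] and e["b"]),
                             (not e["a"]) and (not e["b"])
                             and (not e["c"]))))  # True
```

Only two assignments satisfy all three premises (all of a, b, c true, or all false), and the conclusion holds in both, which is why the argument comes out valid.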

The first Invoices deletion scenario is a round-about description of another scenario that is much more frequent in practice than expecting union workers to continue their session through lunch, namely foreign key references from Invoice Number to relations that were left out of the Invoices schema for so-called “simplicity”. It’s very likely that most applications won’t want a{Invoice Number} nor a{all attributes} to be deleted most of the time. To handle this, or indeed the equivalent of the SQL CASCADE keyword, the argument developed so far could be a specific simulation defined with a view of A Join B restricted to specific users or user groups.

For everyday join deletions, an argument that introduces union in the form of ‘+’, such as the following, could serve as the common join deletion proof and would prevent deletions from relation a:

(b>a) & (a=c) & (d = a&x + a&y) & (c = b&d&y) & (x > y) & (d>y) & -(d&x = d&y) / -(c & b&y) > a&x & -b & -c & -(-a).

It is also logically valid (and the truth tableau is much longer). It introduces a fourth relation, d. Note that d needn’t be a relation declared by a language; it could be a literal, its name not known outside the argument. Or it needn’t be mentioned at all, because there is really no need to implement the argument in the sense that it is already implemented; it can be used just as a definition of a language operator. I won’t bother explaining it in detail, but a clue is that it introduces set union, aka the ‘+’ connective, for the first time in this post. The same approach should also apply to defining and proving insertions, using another simulation proof rather than the above deletion simulation.
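The longer union argument can be checked the same way; with six letters the table has 64 rows, which is where a generator earns its keep. The encoding below is my own explicit parenthesization of the argument, assuming ‘+’ is disjunction, ‘=’ the biconditional, and ‘>’ implication:

```python
# Truth-table check of the six-letter union argument:
# (b>a) & (a=c) & (d = a&x + a&y) & (c = b&d&y) & (x > y) & (d>y)
#   & -(d&x = d&y)  /  -(c & b&y) > a&x & -b & -c & -(-a)
from itertools import product

def imp(p, q): return (not p) or q
def iff(p, q): return p == q

def is_valid(names, premises, conclusion):
    for v in product([False, True], repeat=len(names)):
        e = dict(zip(names, v))
        if all(p(e) for p in premises) and not conclusion(e):
            return False
    return True

names = ["a", "b", "c", "d", "x", "y"]
premises = [
    lambda e: imp(e["b"], e["a"]),                           # b > a
    lambda e: iff(e["a"], e["c"]),                           # a = c
    lambda e: iff(e["d"], (e["a"] and e["x"])
                          or (e["a"] and e["y"])),           # d = a&x + a&y
    lambda e: iff(e["c"], e["b"] and e["d"] and e["y"]),     # c = b&d&y
    lambda e: imp(e["x"], e["y"]),                           # x > y
    lambda e: imp(e["d"], e["y"]),                           # d > y
    lambda e: not iff(e["d"] and e["x"],
                      e["d"] and e["y"]),                    # -(d&x = d&y)
]
conclusion = lambda e: imp(not (e["c"] and e["b"] and e["y"]),
                           e["a"] and e["x"] and not e["b"]
                           and not e["c"] and not (not e["a"]))

print(is_valid(names, premises, conclusion))  # True
```

Under this reading the premises admit exactly one model (a, b, c, d and y true, x false), and the conclusion's antecedent is false there, so the argument comes out valid.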

Quote from Dave Voorhis on October 25, 2019, 8:22 am

Quote from p c on October 25, 2019, 12:56 am

You've obviously put a lot of work into this, but I struggle to follow much of it. Am I correct that in this...

(b>a) & (a=c) & (d=a&x + a&y) & (c = b&d&y) & (x > y) & (d>y) & -(d&x = d&y ) / -(c & b&y) > a&x & -b & -c &-(-a)

...*a* is InvoiceHeading {InvoiceNumber} ?

...*b* is InvoiceDetail {InvoiceNumber} ?

...*c* is InvoiceHeading {InvoiceNumber} JOIN InvoiceDetail {InvoiceNumber} ?

...*d* is... What?

If so, what am I (or any database developer, or DBMS implementor) expected to do with that expression?

In other words, how does it impact the schema I gave in post #20?

How, and where, do I turn it into **Tutorial D**?

I find this statement of yours to be somewhat curious:

*It’s astounding to read “...The structure, types and constraints of the database should be sufficient to ensure that the data in it is always valid, and thus the specific values should be irrelevant”. In other words, this is saying that specific answers don’t matter for deciding validity of updates.*

Do you really find it astounding to read that a database should be a *database*, capable of correctly representing all the data in the relevant domain without having to specifically concern ourselves -- as database designers -- with each and every record that our users might enter?

Is it not a fundamental property of a database to be able to ensure the integrity of data without individually considering each datum?

I should think it is *absolutely* the case that specific answers (answers?) don't matter -- and *can't* matter -- for deciding the validity of updates. They *have* to be abstracted, or considered as variables rather than values. Otherwise, do you think it reasonable that corporate database systems be redesigned on, say, an invoice by invoice basis?


*I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org*

Quote from p c on October 25, 2019, 11:05 am

You can never verify correctness without getting down to cases.

When talking about logical technique to do this it's best to avoid porridge words such as integrity.

They don't clarify a specific technique nor situation, they only make the problem bigger and amorphous.

It's not productive thought to question preconditions before trying to understand an argument.


Quote from Dave Voorhis on October 25, 2019, 11:11 am

Quote from p c on October 25, 2019, 11:05 am

Individual cases may certainly be used to test a system. ("Let's see if it works when we do *this*.") You appeared to suggest that they be considered whilst it's running. I don't see any problem with operations being conditional. E.g., "if InvoiceHeading has related InvoiceDetail tuples do *this*, otherwise do *that*."

Is that what you're suggesting?

If so, that's not a specific case but a general condition.

We still don't seem to be any closer to identifying your suggested alternative to the schema I gave in post #20, or your alternative to TTM's approaches in general.



Quote from p c on October 25, 2019, 12:14 pm

“...*d* is... What?” d is a logical device only needed by some arguments. It introduces deliberate ambiguity, the ambiguity of union. Logically it is a relation; outside the argument it is nothing more than synthetic syntax. So are x and y. Logically d is a projection of d&x union d&y. Since the argument only expresses deletions from d&y, d&x simulates the preservation of the original a relation value, because the argument deletes from d&y, not d&x union d&y. In other words the argument doesn’t reflect any deletions from a. It might reflect insertions but I haven’t gone so far as to check that.

“If so, what am I (or any database developer, or DBMS implementor) expected to do with that expression?

“In other words, how does it impact the schema I gave in post #20?

“How, and where, do I turn it into Tutorial D?...”

These are amazing, unexpected questions; assuming you’ve absorbed the post, they are extremely imaginary. The replies, in order, are: 1) understand it; 2) read the post: a, b and c correspond to your schema’s given relations, and the arguments reflect logically valid replacements of them or, if you prefer, changes to their relvars; 3) whatever makes you think a logical argument needs to be turned into Tutorial D? For the topic at hand, the question should be: does TD reflect/respect this logical argument or that logical argument?

There seem to be three kinds of posters here. I’d guess there are many more kinds of readers. There are posters who have fixed, ossified approaches, posters who want to be told what to do and how to do it, and posters who are neither.

Without intending any criticism of TTM, it does seem to attract people of the second kind, who often also think implementing TD is equivalent to dbms implementation -- as if implementing one aspect of relational theory were equivalent to complete dbms development. One needs to know which kind one is and, if necessary, face the facts of life.

Codd ended the 1970 paper with this: “ … the material should be adequate for experienced systems programmers to visualize several approaches”.

Not everybody is suited to be a system programmer. That’s not a bad thing, actually it’s probably a good thing because there’s more demand for other talents.

A system programmer will think about dbms aspects far beyond TD. An industrial TD might have different modes of operation, for example an isolated mode for regression tests or a compatibility mode for comparing different schemas or different dbms’es. These might be candidates for physical implementation of logical update arguments. As I keep saying, system programmers are different because they don't just apply known solutions to make systems bigger, they re-define problems into better problems. The very few I've known seemed always to turn big problems into small problems.


Quote from p c on October 25, 2019, 12:38 pm

Forgot to say that, to me, the last argument is much more interesting than the simpler ones. That's because the first ones amount to abstractions that parallel dbms behaviour. But fundamentally the last one is more than abstraction, it's also a synthesis. Most of us can abstract, but it's a fairly small minority that can synthesize with ease. Usually mental synthesis applied to practical problems is treated as a kind of direct translation, as if it amounted to "making it bigger", but that's not its purpose at all. I find that talent hard, which is why it interests me. As they say, sometimes you have to make an equation bigger before you can make it smaller.

Although it's logically valid and so could form part of the definition of a deletion operator, I'm not 100% sure, when going into questions beyond such a definition, whether it contains all the premises it ought to for complete understanding. That's why I offered only hints about its difference and assumed that anybody with true interest would think about it for themselves. I mean think about the argument, not assume translation for implementation!

Quote from Dave Voorhis on October 25, 2019, 12:44 pm

Quote from p c on October 25, 2019, 12:14 pm

“... dis... What?” d is a logical device only needed by some arguments. It introduces deliberate ambiguity, the ambiguity of union. Logically it is a relation; outside the argument it is nothing more than synthetic syntax. So are x and y. Logically d is a projection of d&x union d&y. Since the argument only expresses deletions from d&y, d&x simulates the preservation of the original relation value a, because the argument deletes from d&y, not from d&x union d&y. In other words the argument doesn't reflect any deletions from a. It might reflect insertions but I haven't gone so far as to check that.

“If so, what am I (or any database developer, or DBMS implementor) expected to do with that expression?

“In other words, how does it impact the schema I gave in post #20?

“How, and where, do I turn it into Tutorial D?...”

These are amazing questions; assuming you've absorbed the post, they are extremely imaginary. The replies in order are 1) understand it; 2) read the post: a, b and c correspond to your schema's given relations, and the arguments reflect logically valid replacements of them or, if you prefer, changes to their relvars; 3) whatever makes you think a logical argument needs to be turned into Tutorial D? For the topic at hand, the question should be: does TD reflect/respect this logical argument or that logical argument?
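The d / d&x / d&y device quoted above can be illustrated with plain sets. This is only one reading, not p c's formalism: relations are modelled here as Python sets, & is read as intersection, and the specific values of d, x and y are invented for illustration. Under that reading, a deletion expressed only against d&y leaves a unchanged whenever d&x still covers the deleted tuples, which matches the claim that "the argument doesn't reflect any deletions from a".

```python
# Illustrative only: d, x, y are invented values, & read as set intersection.
d = {1, 2, 3, 4}
x = {1, 2}         # hypothetical piece that preserves the original value
y = {2, 3, 4}      # hypothetical piece that receives the deletions

a = (d & x) | (d & y)          # a is covered by the two joined pieces

to_delete = {2}                 # a deletion expressed only against d&y
dy_after = (d & y) - to_delete

a_after = (d & x) | dy_after    # d&x still contributes tuple 2
print(a_after == a)             # → True: no deletion is reflected in a
```

The point of the sketch is only that deletions confined to one operand of a union are invisible in the union while the other operand still covers the deleted tuples; it makes no claim about p c's intended semantics beyond that.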

There seem to be three kinds of posters here. I'd guess there are many more kinds of readers. There are posters who have fixed, ossified approaches, posters who want to be told what to do and how to do it, and posters who are neither.

Without intending any criticism of TTM, it does seem to attract people of the second kind, who often also think implementing TD is equivalent to dbms implementation, as if implementing one aspect of relational theory were equivalent to complete dbms development. One needs to know which kind one is and, if necessary, face the facts of life.

Sorry, as usual I find your responses baffling. I *am* a systems programmer, or at least that's what I've been called on a few occasions.

Does *anyone* here understand @p-c's replies?

I'm left wondering if you're the database equivalent to Irwin Corey.

*I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org*

Quote from p c on October 25, 2019, 1:04 pm

I've been called a system programmer too, but I knew I wasn't, except on rare occasions when I had help from one, so to outsiders it might have looked like I was. But I've known people who could take what I wrote and refine it to make it practical; at one time, for several years, all I wrote were proofs of concept that customers never saw. A very few other people would redefine some of them into reality or replace them with better ideas, but most went into the biodegradable software bin. Without getting too personal, I would say you are a talented application analyst and programmer.

You only need to look at LinkedIn resumes to see that so many people couldn't possibly be what they say they are.

Quote from Dave Voorhis on October 25, 2019, 1:16 pm

Quote from p c on October 25, 2019, 1:04 pm

I've been called a system programmer too but I knew I wasn't except on rare occasions when I had help from one so to outsiders it might have looked like I was. But I've known people who could take what I wrote and refine it to make it practical, at one time all I wrote for several years were nothing but proofs of concepts that customers never saw. A very few other people would refine some of them into reality or replace them with better ideas but most went into the biodegradable software bin. Without getting too personal, I would say you are a talented application analyst and programmer.

I was called a systems programmer because I am one.

To be perfectly blunt, you are either deliberately writing gibberish -- hence the Irwin Corey reference -- or you're incomprehensible without being aware of it.

If you're not aware of it, then it's not clear whether that's because you haven't thought your ideas through, or because you have thought them through and can't explain them. Either way, the result is the same: your writing is baffling.

If you *have* thought your ideas through, please, please, please make more effort to explain them clearly. You may think you've been leading your readers through some Socratic exercise, but you haven't.

If you *haven't* thought your ideas through, please, please, please make more effort to think them through. *Then* please, please, please make an effort to explain them clearly.

Otherwise, these discussions accomplish nothing but waste our time and yours.

Quote from p c on October 25, 2019, 1:19 pm

Quote from Dave Voorhis on October 25, 2019, 12:44 pm

Quote from p c on October 25, 2019, 12:14 pm

dis... What?” d is a logical device only needed by some arguments. It introduces deliberate ambiguity, the ambiguity of union. Logically it is a relation; outside the argument it is nothing more than synthetic syntax. So are x and y. Logically d is a projection of d&x union d&y. Since the argument only expresses deletions from d&y, d&x simulates the preservation of the original relation value a, because the argument deletes from d&y, not from d&x union d&y. In other words the argument doesn't reflect any deletions from a. It might reflect insertions but I haven't gone so far as to check that.

“In other words, how does it impact the schema I gave in post #20?

“How, and where, do I turn it into Tutorial D?...”

These are amazing questions, assuming you’ve absorbed the post they are extremely imaginary. The replies in order are 1) understand it, 2) read the post, a, b and c correspond to your schemas given relations, the arguments reflect logically valid replacements of them or if you prefer changes to their relvars. 3) whatever makes you think a logical argument needs to be turned into Tutorial D? For the topic at hand, the question should be does TD reflect/respect this logical argument or that logical argument?

A system programmer will think about dbms aspects far beyond TD. An industrial TD might have different modes of operation, for example an isolated mode for regression tests or a compatibility mode for comparing different schemas or different dbms’es. These might be candidates for physical implementation of logical update arguments.

Quote from p c on October 25, 2019, 12:38 pm

Sorry, as usual I find your responses baffling. I *am* a systems programmer, or at least that's what I've been called on a few occasions.

Does *anyone* here understand @p-c's replies?

I'm left wondering if you're the database equivalent to Irwin Corey.

Don't apologize. You are letting comments intended for perspective prevent you from discerning the meat. Jumping to implementation, even when implementation doesn't mean code, doesn't exhibit any attempt to understand the meat. More productive would be trying to show the meat is wrong.

Try to focus on the meat, which is half a dozen one-liners that are formal arguments. They are very brief, which means succinct, precisely because they are formal. Understand them with truth tables or, better still, with proofs of logical validity, say by using truth tableaux to show the detailed logical steps that obey established logical rules for reasoning.
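Checking a propositional one-liner for validity by truth table can be done mechanically: enumerate every assignment of truth values to the variables, and the argument is valid exactly when the conclusion holds in every row where all premises hold. A minimal sketch follows; the formulas and names are illustrative examples (modus ponens and affirming the consequent), not the specific arguments from the earlier post.

```python
from itertools import product

def valid(premises, conclusion, names):
    """Brute-force truth-table test: in every row where all premises
    are true, the conclusion must be true as well."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample row
    return True

# Example 1: modus ponens -- from p and (p -> q), infer q.
p = lambda e: e["p"]
p_implies_q = lambda e: (not e["p"]) or e["q"]
q = lambda e: e["q"]
print(valid([p, p_implies_q], q, ["p", "q"]))  # → True

# Example 2: affirming the consequent -- from q and (p -> q), infer p.
print(valid([q, p_implies_q], p, ["p", "q"]))  # → False
```

For arguments over relations rather than propositions, this only applies after the simplifications the post alludes to, which reduce the predicate-calculus form to propositional equivalents.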
