Relational Lattice Axiomatizations in various signatures
Quote from Vadim on July 8, 2021, 10:34 pm
Quote from Vadim on July 7, 2021, 5:29 pm
Quote from AntC on July 7, 2021, 7:42 am
(How) are you expressing distributivity? (#47, #49 and #74, #76 in my post.)
Unfortunately, the LMH algebra doesn't feature the R11 constant needed to express #74 and #76.
The last statement warrants some clarification. It is quite easy to add one equation to LMH so that both #74 and #76 become theorems. I hope the summary below of the axioms that are already known will help to quickly dismiss weaker proposals, while we continue extending the equational basis with genuinely independent axioms, which might someday help to fix the operations, such as R00.
1. We start with the most basic signature [latex]\{\wedge,\vee,R_{00}\}[/latex]. (LMH consider the even more restrictive pure lattice signature [latex]\{\wedge,\vee\}[/latex], but we'll ignore it here.) The equational basis is:
x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
(R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y).
As you might have already noticed, I sloppily referred to it as LMH, while a more precise name for it is AxRH1+AxRH2.
There are two more candidates, discussed here and here.
In this basis various conditional distributivity assertions such as distributivity of the headers sublattice are theorems.
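For concreteness, this basis drops straight into a Prover9/Mace4 input file. The sketch below is not the exact input behind the proofs quoted later in this post; it assumes Prover9's default infix declarations for ^ and v, and the goal is only a trivial placeholder where a candidate identity (to be proved by prover9 or refuted by a finite mace4 countermodel) would go:
% Basis for the {^, v, R00} signature: lattice axioms plus AxRH1 and AxRH2.
formulas(assumptions).
  x ^ y = y ^ x.                  % meet commutativity
  (x ^ y) ^ z = x ^ (y ^ z).      % meet associativity
  x ^ (x v y) = x.                % absorption
  x v y = y v x.                  % join commutativity
  (x v y) v z = x v (y v z).      % join associativity
  x v (x ^ y) = x.                % absorption
  % AxRH1
  x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
  % AxRH2
  (R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y).
end_of_list.

formulas(goals).
  % Replace with the candidate identity under test; prover9 searches for a
  % proof, mace4 for a finite countermodel (handy for dismissing weak proposals).
  x v x = x.
end_of_list.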
2. [latex]\{\wedge,\vee,R_{00},R_{11}\}[/latex]. The axiomatization for it is AxRH1+AxRH2+FDI:
x = (x ^ R00) v (x ^ R11).
The aforementioned conditional distributivity assertions #74 and #76 are theorems:
% -------- Comments from original proof --------
% Proof 1 at 8.31 (+ 0.14) seconds.
% Length of proof is 314.
% Level of proof is 27.
% Maximum clause weight is 65.
% Given clauses 346.
1 (x v R11) v ((y v R11) ^ (z v R11)) = ((x v R11) v (y v R11)) ^ ((x v R11) v (z v R11)) # label(non_clause) # label(goal). [goal].
2 x ^ y = y ^ x. [assumption].
3 (x ^ y) ^ z = x ^ (y ^ z). [assumption].
4 x ^ (x v y) = x. [assumption].
5 x v y = y v x. [assumption].
6 (x v y) v z = x v (y v z). [assumption].
7 x v (x ^ y) = x. [assumption].
8 x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))). [assumption].
9 (x ^ (y v (R00 ^ z))) v (x ^ (z v (R00 ^ y))) = x ^ (z v y). [copy(8),flip(a)].
10 (x ^ y) v (x ^ z) = x ^ (((x v z) ^ y) v ((x v y) ^ z)). [assumption].
11 x ^ (((x v y) ^ z) v ((x v z) ^ y)) = (x ^ z) v (x ^ y). [copy(10),flip(a)].
12 (R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y). [assumption].
13 (x ^ y) v (R00 ^ (z ^ (x v y))) = ((R00 ^ (z ^ x)) v y) ^ ((R00 ^ (z ^ y)) v x). [copy(12),rewrite([5(6)])].
14 x = (x ^ R00) v (x ^ R11). [assumption].
...
1 (x v R11) ^ ((y v R11) v (z v R11)) = ((x v R11) ^ (y v R11)) v ((x v R11) ^ (z v R11)) # label(non_clause) # label(goal). [goal].
2 x ^ y = y ^ x. [assumption].
3 (x ^ y) ^ z = x ^ (y ^ z). [assumption].
4 x ^ (x v y) = x. [assumption].
5 x v y = y v x. [assumption].
6 (x v y) v z = x v (y v z). [assumption].
7 x v (x ^ y) = x. [assumption].
8 x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))). [assumption].
9 (x ^ (y v (R00 ^ z))) v (x ^ (z v (R00 ^ y))) = x ^ (z v y). [copy(8),flip(a)].
10 (x ^ y) v (x ^ z) = x ^ (((x v z) ^ y) v ((x v y) ^ z)). [assumption].
11 x ^ (((x v y) ^ z) v ((x v z) ^ y)) = (x ^ z) v (x ^ y). [copy(10),flip(a)].
12 (R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y). [assumption].
13 (x ^ y) v (R00 ^ (z ^ (x v y))) = ((R00 ^ (z ^ x)) v y) ^ ((R00 ^ (z ^ y)) v x). [copy(12),rewrite([5(6)])].
14 x = (x ^ R00) v (x ^ R11). [assumption].
...
where in both proofs step #10 is the dependent RL1 equality from the LMH paper.
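For anyone who wants to re-run these, the numbered assumptions and goals above translate directly into a Prover9 input file. A sketch (again assuming the default ^ and v declarations; goal #76 goes in a second run):
formulas(assumptions).
  % lattice axioms (clauses 2-7 above)
  x ^ y = y ^ x.
  (x ^ y) ^ z = x ^ (y ^ z).
  x ^ (x v y) = x.
  x v y = y v x.
  (x v y) v z = x v (y v z).
  x v (x ^ y) = x.
  % AxRH1 (clause 8)
  x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
  % RL1, the dependent equality from the LMH paper (clause 10)
  (x ^ y) v (x ^ z) = x ^ (((x v z) ^ y) v ((x v y) ^ z)).
  % AxRH2 (clause 12)
  (R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y).
  % FDI (clause 14)
  x = (x ^ R00) v (x ^ R11).
end_of_list.

formulas(goals).
  % #74; for #76 substitute the goal of the second proof above.
  (x v R11) v ((y v R11) ^ (z v R11)) = ((x v R11) v (y v R11)) ^ ((x v R11) v (z v R11)).
end_of_list.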
3. [latex]\{\wedge,\vee,',`\}[/latex], which I call the 2x2 system:
x = (x ^ y') v (x ^ y`).
x ^ (y' ^ z')' = ((x ^ y)' ^ (x ^ z)')'.
(x' ^ (y ^ z)')' = (x' ^ y')' ^ (x' ^ z')'.
(y` ^ x') v (y' ^ x`) = (y' v x') ^ (y` v x`).
x' ^ x = x ^ R00.
x' v x = x v R11.
x` ^ x = R11 ^ x.
x` v x = R00 v x.
The following expressions for the [latex]R_{00},R_{01},R_{10}[/latex] and [latex]R_{11}[/latex] are theorems:
R00 = (x' v x`)'.
R01 = (x' v x`).
R10 = (x' v x`)'`.
R11 = (x' v x`)`.
Conjecture: AxRH1 and AxRH2 are theorems in this system:
x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
(R00 ^ (x ^ (y v z))) v (y ^ z) = ((R00 ^ (x ^ y)) v z) ^ ((R00 ^ (x ^ z)) v y).
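One mechanical way to attack the conjecture is to hand the 2x2 axioms (plus the lattice axioms, as in the proofs above) to Prover9 with AxRH1 as the goal, then AxRH2 in a second run. A sketch, untested: it assumes ' is Prover9's default postfix operation and that the backtick needs an explicit op declaration along these lines:
op(300, postfix, "`").   % assumed declaration for the second unary operation

formulas(assumptions).
  % lattice axioms
  x ^ y = y ^ x.
  (x ^ y) ^ z = x ^ (y ^ z).
  x ^ (x v y) = x.
  x v y = y v x.
  (x v y) v z = x v (y v z).
  x v (x ^ y) = x.
  % the 2x2 system
  x = (x ^ y') v (x ^ y`).
  x ^ (y' ^ z')' = ((x ^ y)' ^ (x ^ z)')'.
  (x' ^ (y ^ z)')' = (x' ^ y')' ^ (x' ^ z')'.
  (y` ^ x') v (y' ^ x`) = (y' v x') ^ (y` v x`).
  x' ^ x = x ^ R00.
  x' v x = x v R11.
  x` ^ x = R11 ^ x.
  x` v x = R00 v x.
end_of_list.

formulas(goals).
  % AxRH1; run AxRH2 as a separate goal.
  x ^ (y v z) = (x ^ (z v (R00 ^ y))) v (x ^ (y v (R00 ^ z))).
end_of_list.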
Quote from AntC on November 22, 2023, 11:06 am
Quote from Vadim on July 8, 2021, 10:34 pm
Quote from Vadim on July 7, 2021, 5:29 pm
Quote from AntC on July 7, 2021, 7:42 am
(How) are you expressing distributivity? (#47, #49 and #74, #76 in my post.)
...
In this basis various conditional distributivity assertions such as distributivity of the headers sublattice are theorems.
I seem to have been labouring under a misunderstanding all this time: Distributivity is only half the story -- and indeed LMH seem to kinda know it. Section 5 'Concept Structure':
It is worth noting that R(D,A) naturally divides into what we may call boolean H-fibres ... additional boolean slices, which we may call the lower attribute slice and ... [and several other uses of 'boolean' in that section]
I now realise this is saying a relation lattice contains two varieties of **Boolean** sub-lattices:
- All relation values with the same Heading form a Boolean sub-lattice for that Heading. That is, the values form a powerset of all tuples with that Heading.
- The empty relations form a Boolean sub-lattice/powerset of all possible Attributes.
- (I'm focusing on the empty relations as what LMH call the 'lower attribute slice'; there's a parallel powerset of the maximally full relations, which I won't mention further.)
The last sentence in the paragraph:
The intention of our definition should be clear then: ...
Nope. Nothing was clear to me. I plead stupidity. Wikipedia:
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice.
The articles on 'complemented' and on 'distributive' are talking only about whole lattices. We've investigated in these posts the Distributivity of those two varieties of sub-lattice. AFAICT LMH have not addressed the Complementation. We need some notion of Complement within a sub-lattice, aka 'Sectionally complemented lattice' -- which Wikipedia seems not to have heard of; that link is to planetmath.
It would appear you can't express that complementation structure in the 'quasi-equational' style LMH are restricting themselves to. And yet gorblimey Appendix A has them oven-ready:
- If you want a relative difference/sectional complement within relvals of the same Heading, that's NOT MATCHING. The applicable lattice interval is [a JOIN DUM, a] -- that is from 'emptified a' up to a. We don't have to worry about the sub-lattice top -- which would be a relval with same Heading as a, all possible tuples, and which would introduce Domain Dependence.
- If you want a relative difference/sectional complement within a heading, that's REMOVE (where we express the subtrahend as a relval). The applicable lattice interval is [DUM, a JOIN DUM] -- that is from a's Heading up to DUM. This is the opposite way round, so we're taking advantage of the 'dually sectionally complemented lattice' on that planetmath entry.
Or have I misunderstood, and LMH are axiomatising the complementation? How?
I now have a bunch of (ugly) axioms to express the complements, and am making some progress on finding equivalents amongst formulas. I'm expressing the complements using an existential quantifier (for all minuends and subtrahends, there exists a difference such that ...). Solution search is horrendously slow.
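For what it's worth, one possible shape for such an axiom, in Prover9's quantified-formula syntax, is sketched below. This is only a guess at the form, not the actual axioms described above: it states that the same-Heading interval [R00 ^ x, x] (reading R00 ^ x as 'emptified x', i.e. x JOIN DUM) is complemented, under the usual order convention a <= b iff a ^ b = a. It also shows why the search slows down: the existential Skolemizes into a new function symbol the prover then has to reason about.
formulas(assumptions).
  % Hypothetical sectional-complement axiom, illustrative only:
  % every y in the interval [R00 ^ x, x] has a complement d there,
  % i.e. y ^ d hits the bottom of the interval and y v d hits the top.
  all x all y
    ( ((R00 ^ x) ^ y = R00 ^ x) & (x ^ y = y)
      -> (exists d ((y ^ d = R00 ^ x) & (y v d = x))) ).
end_of_list.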