The shortest, most intuitive proof of R00 uniqueness
Quote from Vadim on October 16, 2019, 4:45 pm
Quote from AntC on October 16, 2019, 5:00 am
No that goal is not one of "those" I was asking about. Those last two equations from me use only lattice top and bottom, not R00. If they're provable, it tells us something about tuple negation and header inversion, not about R00.
Certainly, if R00 = R01'. R00 = R10`. (where R00 is actually a shorthand for something like R00 = ((x' ^ x)` ^ x)`.), then the system would have no trouble deducing R01' = R10`.
Please note that uniqueness of a constant is not needed for this kind of deduction.
Here is the formal proof:
% Proof 1 at 0.39 (+ 0.01) seconds.
% Length of proof is 90.
% Level of proof is 18.
% Maximum clause weight is 35.
% Given clauses 178.
1 R01' = R10 ` # label(non_clause) # label(goal).  [goal].
2 x ^ y = y ^ x.  [assumption].
3 (x ^ y) ^ z = x ^ (y ^ z).  [assumption].
4 x ^ (x v y) = x.  [assumption].
5 x v y = y v x.  [assumption].
6 (x v y) v z = x v (y v z).  [assumption].
7 x v (x ^ y) = x.  [assumption].
8 x = (x ^ y') v (x ^ y `).  [assumption].
9 (x ^ y') v (x ^ y `) = x.  [copy(8),flip(a)].
10 x ^ (y' ^ z')' = ((x ^ y)' ^ (x ^ z)')'.  [assumption].
11 ((x ^ y)' ^ (x ^ z)')' = x ^ (y' ^ z')'.  [copy(10),flip(a)].
12 (x' ^ (y ^ z)')' = (x' ^ y')' ^ (x' ^ z')'.  [assumption].
14 x' ^ x = x ^ ((x' ^ x) ` ^ x) `.  [assumption].
15 x ^ (x ^ (x ^ x') `) ` = x ^ x'.  [copy(14),rewrite([2(2),2(4),2(6)]),flip(a)].
16 x' v x = x v (x' v x `) `.  [assumption].
17 x v (x' v x `) ` = x v x'.  [copy(16),rewrite([5(2)]),flip(a)].
18 x ` ^ x = (x' v x `) ` ^ x.  [assumption].
19 x ^ (x' v x `) ` = x ^ x `.  [copy(18),rewrite([2(2),2(7)]),flip(a)].
22 R01 ^ x = x.  [assumption].
23 R10 ^ x = R10.  [assumption].
...
x ^ x = x. x ^ y = y ^ x. x ^ (y ^ z) = (x ^ y) ^ z. R01 ^ x = x. R10 ^ x = R10. x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).

Firstly define lattice meet as idempotent, commutative, associative. That's sufficient to fix ^ as unique. Therefore its identity element and absorbing element (R01, R10) are fixed unique. ...
In the lower semilattice the lattice top is not unique. Take the 3-element semilattice ({1,2,3}, 1<2, 1<3): which element is the top, 2 or 3?
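This can be checked mechanically. A minimal Python sketch (the set encoding of the order relation is my own) that searches the poset ({1,2,3}, 1<2, 1<3) for a top and a bottom:

```python
# The poset ({1,2,3}, 1<2, 1<3): 1 sits below both 2 and 3,
# while 2 and 3 are incomparable.
elements = [1, 2, 3]
# Reflexive order relation as (lower, upper) pairs.
leq = {(1, 1), (2, 2), (3, 3), (1, 2), (1, 3)}

# A top must sit above every element; a bottom below every element.
tops = [t for t in elements if all((x, t) in leq for x in elements)]
bottoms = [b for b in elements if all((b, x) in leq for x in elements)]

print(tops)     # -> []  : 2 and 3 are both maximal, but neither is the top
print(bottoms)  # -> [1] : the bottom, by contrast, is unique here
```

So this lower semilattice has a unique bottom but no top at all: the semilattice axioms alone don't force a top to exist, let alone be unique.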
To be precise: there are three possible reasons Prover9 hangs. Yes, one is that theorem proving is NP-complete. Another is Gödel incompleteness: given a bunch of axioms, there are always theorems that are neither implied by the axioms nor is their negation implied. Thirdly, there are theorems that need higher-order unification to prove from the axioms. Prover9 (SAT solvers in general) don't do that. I think it's the third that's biting in the case of your negation/inversion operators.
Can you please be more specific? In relational lattice theory we have (or aim to have) a purely equational theory (which is another way of saying that there are no disjunctions or existential quantifiers). Can you give an example of an equational theory where you need to recruit second-order logic to prove something?
If by "doubt" you mean I think some axioms are false or inconsistent: no, I haven't looked in enough detail. What I mean is that your equations are in a form that's not tractable for inference: in the third, both operators appear (and multiple times) on both sides of the =; in the second, only ' appears, but again it appears multiple times on both sides of =; in the first, both operators appear once only, attached to y, which isn't 'anchored' on the other side of the =.

Contrast that with my above definition for v: the definiendum appears on only one side of <->; it appears twice but with the same operands each time; those operands are anchored on the other side of the <->; the other universally quantified variable also appears on both sides. Now of course I could give equivalent definitions not using <->, but they'd still have those 'anchoring' characteristics. So I remain perplexed why Litak et al want to limit themselves to quasiequational forms of axioms.
To cite Tarski, there is a certain appeal to equational theory. The way one derives new facts from existing ones is reminiscent of high-school algebra: apply an operator to both sides of an equality, follow a chain of equalities, etc. There is no requirement for the operations to be introduced in some sort of cascading fashion, nor any limitation on the particular form of those equations, in general. Therefore, I don't follow your comment about the tractability of inference.
Yes, the deductions generated by an automatic theorem proving system are not human-readable. But there are techniques for transforming them for human consumption.
Quote from AntC on October 16, 2019, 10:13 pm
Quote from Vadim on October 16, 2019, 4:45 pm
Quote from AntC on October 16, 2019, 5:00 am
No that goal is not one of "those" I was asking about. Those last two equations from me use only lattice top and bottom, not R00. If they're provable, it tells us something about tuple negation and header inversion, not about R00.
Certainly, if R00 = R01'. R00 = R10`. (where R00 is actually a shorthand for something like R00 = ((x' ^ x)` ^ x)`.), then the system would have no trouble deducing R01' = R10`.
Please note that uniqueness of a constant is not needed for this kind of deduction.
Ok. I was thinking R00 is shorthand for R01'. Yes, that doesn't fix R00 unless ' is uniquely determined. A constant is a shorthand for an existential: exists z. z = R01'. (With a similarly existential definition for R01, see below.)
Here is the formal proof:
...

x ^ x = x. x ^ y = y ^ x. x ^ (y ^ z) = (x ^ y) ^ z. R01 ^ x = x. R10 ^ x = R10. x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).

Firstly define lattice meet as idempotent, commutative, associative. That's sufficient to fix ^ as unique. Therefore its identity element and absorbing element (R01, R10) are fixed unique. ...
In the lower semilattice the lattice top is not unique. Take the 3-element semilattice ({1,2,3}, 1<2, 1<3): which element is the top, 2 or 3?
I said above that a constant is a shorthand for an existential. R01's definition is equivalent to an axiom exists z. all x. z ^ x = x. Note the all is within the scope of the exists. Then in your posited 3-element semilattice, that axiom does not hold. Another way to see the definitions for lattice top/bottom is that Relational Lattices must be bounded. Your 3-element semilattice could not be a Relational Lattice.
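The failure of that existential axiom in the 3-element semilattice can be exhibited directly. A small Python sketch (the meet table is worked out by hand from the order 1<2, 1<3; since 2 and 3 are incomparable, their only common lower bound is 1):

```python
# Meet (greatest lower bound) for the semilattice ({1,2,3}, 1<2, 1<3).
def meet(x, y):
    return x if x == y else 1  # distinct elements only meet at the bottom

elements = [1, 2, 3]

# The axiom for a lattice top: exists z. all x. z ^ x = x.
has_top = any(all(meet(z, x) == x for x in elements) for z in elements)
print(has_top)  # -> False: no candidate z works, the axiom rules this model out
```

Mace4 would report the same thing as a counterexample model; the point is that adding exists z. all x. z ^ x = x excludes such semilattices, i.e. it forces the lattice to be bounded above.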
It's really rather dispiriting that the 'discoverer' of Relational Lattices appears to know so little about lattice theory and axiomatisation. (I say that based on many of the recent discussions here, not just the above.) I'm drawing from wikipedia and other basic texts, nothing obscure.
To be precise: there are three possible reasons Prover9 hangs. Yes, one is that theorem proving is NP-complete. Another is Gödel incompleteness: given a bunch of axioms, there are always theorems that are neither implied by the axioms nor is their negation implied. Thirdly, there are theorems that need higher-order unification to prove from the axioms. Prover9 (SAT solvers in general) don't do that. I think it's the third that's biting in the case of your negation/inversion operators.
Can you please be more specific? In relational lattice theory we have (or aim to have) a purely equational theory (which is another way of saying that there are no disjunctions or existential quantifiers).
OK, I can see that in Litak's axioms. And I see the appeal of using only equations, as you cite from Tarski. Alas, I don't see how you could ever describe Relational Lattices adequately. Everything I'm trying in order to limit the forms of non-distributivity uses disjunctions. Perhaps there are equivalents without disjunctions, but I'm not enough of an algebraicist to figure it out.
Can you give an example of an equational theory where you need to recruit second-order logic to prove something?
I'll try to dig out some references for where I learnt that. Again, I'm not an algebraicist, and the examples went way over my head.
If by "doubt" you mean I think some axioms are false or inconsistent: no, I haven't looked in enough detail. What I mean is that your equations are in a form that's not tractable for inference: in the third, both operators appear (and multiple times) on both sides of the =; in the second, only ' appears, but again it appears multiple times on both sides of =; in the first, both operators appear once only, attached to y, which isn't 'anchored' on the other side of the =.

Contrast that with my above definition for v: the definiendum appears on only one side of <->; it appears twice but with the same operands each time; those operands are anchored on the other side of the <->; the other universally quantified variable also appears on both sides. Now of course I could give equivalent definitions not using <->, but they'd still have those 'anchoring' characteristics. So I remain perplexed why Litak et al want to limit themselves to quasiequational forms of axioms.

To cite Tarski, there is a certain appeal to equational theory. The way one derives new facts from existing ones is reminiscent of high-school algebra: apply an operator to both sides of an equality, follow a chain of equalities, etc. There is no requirement for the operations to be introduced in some sort of cascading fashion, nor any limitation on the particular form of those equations, in general. Therefore, I don't follow your comment about the tractability of inference.
Yes, the deductions generated by an automatic theorem proving system are not human-readable. But there are techniques for transforming them for human consumption.
Quote from Vadim on October 16, 2019, 11:33 pm
Quote from AntC on October 16, 2019, 5:00 am

x ^ x = x. x ^ y = y ^ x. x ^ (y ^ z) = (x ^ y) ^ z. R01 ^ x = x. R10 ^ x = R10. x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).

Firstly define lattice meet as idempotent, commutative, associative. That's sufficient to fix ^ as unique.
Can you write your proposition formally (in predicate logic)? Here is what I'm getting, assumptions:
x ^ x = x. x ^ y = y ^ x. (x ^ y) ^ z = x ^ (y ^ z).
x * x = x. x * y = y * x. (x * y) * z = x * (y * z).
Goal:
x * y = x ^ y.
The goal doesn't follow from the assumptions (there is a 2-element counterexample model). In what sense is the lattice meet unique?
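The 2-element counterexample is concrete: on the carrier {0,1}, both Boolean AND and Boolean OR are idempotent, commutative and associative, yet they are different operations. A quick Python check (my own rendering of the kind of counterexample Mace4 finds):

```python
from itertools import product

carrier = (0, 1)
def m1(x, y): return x & y   # "^" interpreted as Boolean AND
def m2(x, y): return x | y   # "*" interpreted as Boolean OR

# Both interpretations satisfy all three semilattice axioms...
for op in (m1, m2):
    assert all(op(x, x) == x for x in carrier)                               # idempotent
    assert all(op(x, y) == op(y, x) for x, y in product(carrier, repeat=2))  # commutative
    assert all(op(op(x, y), z) == op(x, op(y, z))
               for x, y, z in product(carrier, repeat=3))                    # associative

# ...but the goal x * y = x ^ y fails:
diffs = [(x, y) for x, y in product(carrier, repeat=2) if m1(x, y) != m2(x, y)]
print(diffs)  # -> [(0, 1), (1, 0)]
```

So the three axioms alone admit several non-isomorphic interpretations of ^; "uniqueness" can only mean something relative to a fixed carrier and ordering.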
Quote from AntC on October 17, 2019, 2:18 am
Quote from Vadim on October 16, 2019, 11:33 pm
Quote from AntC on October 16, 2019, 5:00 am

x ^ x = x. x ^ y = y ^ x. x ^ (y ^ z) = (x ^ y) ^ z. R01 ^ x = x. R10 ^ x = R10. x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).

Firstly define lattice meet as idempotent, commutative, associative. That's sufficient to fix ^ as unique.

Can you write your proposition formally (in predicate logic)?
See the wiki for the definition of meet-semilattice, and using the conventional definition of the partial ordering in terms of ^. I suppose I'm appealing to TTM and/or Litak et al's definition of 'relation' to generate all the elements of the poset; and to the intuition that ^ as NatJoin is closed over those relations.
Definition of lattice: "An algebraic structure (L, ∨, ∧), consisting of a set L and two binary operations ∨, and ∧, on L is a lattice if the following axiomatic identities hold ... " (Those identities can be proved from the above definitions.)
The above definition for v fixes it in terms of ^ -- you can prove that using the same technique. So I have sufficient to define a lattice. We are then modelling this defined lattice's ^ to be NatJoin and v to be InnerUnion; with the consequence that ^'s identity models DEE.
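The "same technique" claim can be illustrated on a toy finite model (my own choice: the powerset of {a,b} with ^ as set intersection). Once the meet table is fixed, the partial order x <= y iff x ^ y = x is fixed, and the join of two elements is recoverable as their least upper bound, with no separate definition of v needed:

```python
from itertools import product

# Carrier: the four subsets of {'a', 'b'}; meet = intersection.
carrier = [frozenset(), frozenset('a'), frozenset('b'), frozenset('ab')]
def meet(x, y): return x & y

# The conventional ordering, defined from the meet alone.
def leq(x, y): return meet(x, y) == x

# Join recovered from that ordering: the least among the upper bounds.
def join(x, y):
    ubs = [z for z in carrier if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, u) for u in ubs))

# On this model the recovered join coincides with set union everywhere,
# i.e. fixing ^ has fixed v too.
assert all(join(x, y) == x | y for x, y in product(carrier, repeat=2))
print("v is determined by ^ on this 4-element lattice")
```

The sketch only shows the phenomenon on one finite lattice, but the construction (order from meet, join as least upper bound) is the general one.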
I could have another algebraic structure defined over the same elements, say (L, +, *). That says nothing about any correspondence between the two lattice meet/join operations.
Quote from AntC on October 18, 2019, 12:09 am
Quote from Vadim on October 16, 2019, 4:45 pm
Quote from AntC on October 16, 2019, 5:00 am
In relational lattice theory we have (or aim to have) a purely equational theory (which is another way of saying that there are no disjunctions or existential quantifiers).
Yes, I can see the appeal in "purely equational". The theorems look more elegant. But even Litak et al allow implications -- that is 'quasiequational'. Note my definition for v is a bi-implication; I could turn that into two axioms, one for each implication; so am I breaking the rules? And note that (p -> q) <-> (¬p | q). So we've introduced both disjunction and negation. (And if p has an embedded universal quantifier, we could turn the negation into an existential.)
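The propositional identity invoked here is a finite check: material implication unfolds into a disjunction with a negation under every valuation. A Python sketch:

```python
from itertools import product

def implies(p, q):
    # p -> q by its truth table: false exactly when p holds and q fails.
    return not (p and not q)

# (p -> q) <-> (¬p | q) holds for all four valuations of p, q.
ok = all(implies(p, q) == ((not p) or q)
         for p, q in product((False, True), repeat=2))
print(ok)  # -> True
```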
Is there a technique for turning any formula, or any set of axioms together into (quasi)equational form?
To cite Tarski, there is a certain appeal to equational theory. The way one derives new facts from existing ones is reminiscent of high-school algebra: apply an operator to both sides of an equality, follow a chain of equalities, etc.
But with FOPL in general (whatever the form of the equations) you can "derive new facts from existing". Specifically, the most productive way to do that is via Horn clauses, which are disjunctions of negations. My definition for v yields exactly the usual lattice definition axioms, so sticking to equational forms is no more expressive.
There is no requirement for the operations to be introduced in some sort of cascading fashion, nor any limitation on the particular form of those equations, in general.
The 'cascading fashion' is, I guess, a programmers' habit. Particular form? See if you can still get to this question from me: I see a particular form in Litak et al's axioms.
Therefore, I don't follow your comment about tractability of inference.
Oh, I meant to add: restricting the forms of equations seems to me like restricting to Presburger arithmetic, on the grounds it makes theorems decidable. Or restricting to Naturals (Peano arithmetic) on the grounds that subtraction or division are partial functions. An 'arithmetic' without multiplication, subtraction, division, is not an arithmetic IMO.
Similarly, a Relational Lattice in which you can't adequately describe and infer from the properties of DUM/R00 is not relational. So if I manage to fix R00 using a bunch of ugly axioms (and all the ones I have so far are wretched), I'm happy to hand over to an expert to turn them into (quasi)equational elegance.
Quote from Vadim on October 18, 2019, 4:17 am
Quote from AntC on October 18, 2019, 12:09 am

Yes, I can see the appeal in "purely equational". The theorems look more elegant. But even Litak et al allow implications -- that is 'quasiequational'. Note my definition for v is a bi-implication; I could turn that into two axioms, one for each implication; so am I breaking the rules? And note that (p -> q) <-> (¬p | q). So we've introduced both disjunction and negation. (And if p has an embedded universal quantifier, we could turn the negation into an existential.)

Is there a technique for turning any formula, or any set of axioms together, into (quasi)equational form?
That is a very intriguing question, but first let me give some background on the Litak & Mikulas & Hidders system, which I will refer to from now on as LMH. Before the LMH 2009 paper, several axioms of conditional distributivity were known, with the Spight criteria being the most profound. In that 2009 paper LMH established that all those implicational axioms follow from a system of 3 equational axioms. This is why, like you, I'm also wondering whether they are aware of some general technique for transforming implications into equalities.
But with FOPL in general (whatever the form of the equations) you can "derive new facts from existing". Specifically, the most productive way to do that is via Horn clauses, which are disjunctions of negations. My definition for v yields exactly the usual lattice definition axioms, so sticking to equational forms is no more expressive.

This is a logician's perspective, and it is perfectly fine -- especially since Prover9/Mace4 has no problem with an "impure" equational system spiced up with logical connectives and quantifiers.
The other perspective is algebraic/geometric. I will illustrate those ideas with the help of linear algebra, in the hope that most readers still remember it from their college years. The basic object of study in linear algebra is a system of linear equations. The fundamental problem in linear algebra is solving a system of equations, or demonstrating that some other equation follows from that system. Notice that deriving equations from the given system is actually more general than solving the system. A solution is actually a set of derived equations of a particularly simple kind (e.g. x=5). I hope you see the analogy here between solving a linear system and deriving new axioms in a logical system.

Now, the derivation in linear algebra is performed via Gaussian elimination, and I have never heard of Horn clauses in that context. That is OK, you may say, because those are two completely different areas. Are they?
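As a concrete illustration of the analogy, here is Gaussian elimination on a toy 2x2 system (the system is my own example): eliminating a variable derives a new equation that follows from the originals, and the fully reduced system is exactly a set of derived equations of that "particularly simple" kind.

```python
# Toy system:   x + y = 3
#               x - y = 1
# Each row is (coefficient of x, coefficient of y, right-hand side).
r1 = [1.0, 1.0, 3.0]
r2 = [1.0, -1.0, 1.0]

# Eliminate x from r2 by subtracting r1: derives the new equation -2y = -2.
r2 = [a - b for a, b in zip(r2, r1)]
y = r2[2] / r2[1]            # the simple derived equation y = 1
x = r1[2] - r1[1] * y        # back-substitution derives x = 2
print(x, y)  # -> 2.0 1.0
```

Each elimination step is a consequence-preserving rewrite of the system, which is what makes the comparison with derivations in a logical calculus apt.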
Polynomial algebra is a much more sophisticated field of study, which loosely can be viewed as "linear algebra on steroids". Solutions of polynomial equations are called [algebraic] varieties -- this is a hint of where the term "equational variety" is coming from. Again, there is a universal method for solving polynomial systems; moreover, it has nothing to do with logic. Quite the opposite: this method turned out to be so powerful that researchers started to use it in areas where it didn't appear to belong. The Gröbner basis (that is the name of the method) is used for automatic theorem proving in Euclidean geometry (Ch 6), SAT solving, and a bunch of other areas. No, the Gröbner folks haven't won any SAT competitions yet, but the fact that they dared to offer the Gröbner basis method in a logic context is remarkable.
How can this development possibly be relevant to our discussion? The geometric picture. A logical implication is a set-theoretic inclusion of two spaces. Or, speaking in algebraic-geometry terms, of varieties...
The 'cascading fashion' is, I guess, a programmers' habit. Particular form? See if you can still get to this question from me: I see a particular form in Litak et al's axioms.
The link appears to be dead.
Quote from AntC on October 18, 2019, 12:09 am

Yes, I can see the appeal in "purely equational". The theorems look more elegant. But even Litak et al allow implications -- that is 'quasiequational'. Note my definition for v is a bi-implication; I could turn that into two axioms, one for each implication; so am I breaking the rules? And note that (p -> q) <-> (¬p | q). So we've introduced both disjunction and negation. (And if p has an embedded universal quantifier, we could turn the negation into an existential one.)

Is there a technique for turning any formula, or any set of axioms taken together, into (quasi)equational form?
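That propositional identity is easy to check exhaustively over the four truth assignments; a minimal Python sketch (illustrative only, not from the original exchange):

```python
from itertools import product

# Truth-table semantics: p -> q is false exactly when p holds and q fails.
def implies(p, q):
    return not (p and not q)

# Verify (p -> q) <-> (¬p | q) on every assignment: implication is
# definable from negation and disjunction, so admitting '->' smuggles
# both connectives in.
for p, q in product([False, True], repeat=2):
    assert implies(p, q) == ((not p) or q)
print("equivalence holds on all four assignments")
```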
That is a very intriguing question, but first let me give some background on the Litak & Mikulas & Hidders system, which I will refer to from now on as LMH. Before the LMH 2009 paper, several axioms of conditional distributivity were known, with the Spight criterion being the most profound. In that 2009 paper LMH established that all those implicational axioms follow from a system of three equational axioms. This is why, like you, I am wondering whether they are aware of some general technique for transforming implications into equalities.
But with FOPL in general (whatever the form of the equations) you can "derive new facts from existing". Specifically, the most productive way to do that is via Horn clauses: disjunctions of literals with at most one unnegated literal. My definition for v yields exactly the usual lattice definition axioms, so sticking to equational is no more expressive.
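That claim can be checked by brute force on a small concrete lattice. The sketch below uses the divisor lattice of 12 (meet = gcd, join = lcm) -- my own illustrative choice, not anything from the thread -- and confirms that the least-upper-bound (bi-implicational) characterisation of v singles out exactly the equationally defined join, and that the usual absorption axioms then hold:

```python
from math import gcd

# Divisors of 12 under divisibility form a lattice: meet = gcd, join = lcm.
L = [1, 2, 3, 4, 6, 12]
meet = gcd

def join(x, y):
    return x * y // gcd(x, y)  # lcm

# Bi-implicational characterisation: z = x v y iff z is an upper bound of
# x and y, and z is below every other upper bound w.
def is_lub(x, y, z):
    upper = meet(x, z) == x and meet(y, z) == y
    least = all(meet(z, w) == z for w in L
                if meet(x, w) == x and meet(y, w) == y)
    return upper and least

for x in L:
    for y in L:
        # The order-based definition picks out exactly lcm(x, y)...
        assert is_lub(x, y, join(x, y))
        assert all(not is_lub(x, y, z) for z in L if z != join(x, y))
        # ...and the equational absorption axioms hold.
        assert meet(x, join(x, y)) == x
        assert join(x, meet(x, y)) == x
print("bi-implicational join agrees with the equational lattice axioms")
```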
This is the logician's perspective, and it is perfectly fine -- especially since Prover9/Mace4 have no problem with an "impure" equational system spiced up with logical connectives and quantifiers.
The other perspective is algebraic/geometric. I will illustrate those ideas with the help of linear algebra, in the hope that most readers still remember it from their college years. The basic object of study in linear algebra is a system of linear equations. The fundamental problem is solving such a system, or demonstrating that some other equation follows from it. Notice that deriving equations from the given system is actually more general than solving it: a solution is just a set of derived equations of a particularly simple kind (e.g. x = 5). I hope you see the analogy between solving a linear system and deriving new theorems in a logical system.

Now, derivation in linear algebra is performed via Gaussian elimination, and I have never heard of Horn clauses in that context. That is OK, you may say, because those two are completely different areas. Are they?
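To make the analogy concrete: Gaussian elimination literally derives new equations as linear combinations of the given ones, and a 'solution' is just a derived equation of the simplest possible shape. A tiny illustration with a toy system of my own:

```python
from fractions import Fraction

# System: x + y = 3
#         x - y = 1
# Each equation is [a, b, c], meaning a*x + b*y = c.
e1 = [Fraction(1), Fraction(1), Fraction(3)]
e2 = [Fraction(1), Fraction(-1), Fraction(1)]

# One elimination step derives a NEW equation, e2 - e1, which reads -2y = -2.
e3 = [b - a for a, b in zip(e1, e2)]
assert e3[0] == 0 and e3[1] == Fraction(-2)

y = e3[2] / e3[1]                 # derived equation of simplest shape: y = 1
x = (e1[2] - e1[1] * y) / e1[0]   # back-substitution derives: x = 2

# The 'solution' x = 2, y = 1 is just a pair of derived equations,
# exactly as in the analogy with logical derivation.
print(x, y)  # 2 1
```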
Polynomial algebra is a much more sophisticated field of study, which can loosely be viewed as "linear algebra on steroids". Solution sets of polynomial equations are called [algebraic] varieties -- this hints at where the term "equational variety" comes from. Again, there is a universal method for solving polynomial systems; moreover, it has nothing to do with logic. Quite the opposite: the method turned out to be so powerful that researchers started to use it in areas where it didn't appear to belong. The Gröbner basis (that is the name of the method) is used for automatic theorem proving in Euclidean geometry (Ch 6), SAT solving, and a bunch of other areas. No, the Gröbner folks haven't won any SAT competitions yet, but the fact that they dared to offer the Gröbner basis method in a logic context is remarkable.
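The polynomial counterpart of 'deriving a new equation' is ideal membership, which the Gröbner basis method decides by reduction to zero. Even without a computer-algebra system one can exhibit a membership certificate by hand and verify it; the system and multipliers below are a toy example of mine, not anything from the literature:

```python
from random import randint

# From the system  x^2 + y^2 - 1 = 0  and  x - y = 0  derive  2x^2 - 1 = 0.
# Certificate:  2x^2 - 1 = 1*(x^2 + y^2 - 1) + (x + y)*(x - y),
# i.e. the derived polynomial lies in the ideal generated by the system.
# (A Gröbner basis computation would find such multipliers automatically.)
def f1(x, y): return x**2 + y**2 - 1
def f2(x, y): return x - y
def derived(x, y): return 2 * x**2 - 1

# Verify the identity exactly (integer arithmetic) at many sample points;
# the two sides are polynomials, so agreement on a large grid is decisive.
for _ in range(1000):
    x, y = randint(-50, 50), randint(-50, 50)
    assert derived(x, y) == 1 * f1(x, y) + (x + y) * f2(x, y)
print("2x^2 - 1 lies in the ideal of the system")
```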
How can this development possibly be relevant to our discussion? Through the geometric picture. A logical implication is a set-theoretic inclusion of two spaces -- or, speaking in algebraic-geometry terms, of two varieties...
The 'cascading fashion' is, I guess, a programmers' habit. Particular form? See if you can still get to this question of mine: I see a particular form in Litak et al's axioms.
The link appears to be dead.
Quote from AntC on October 18, 2019, 6:54 am

Quote from Vadim on October 18, 2019, 4:17 am

Quote from AntC on October 18, 2019, 12:09 am
The 'cascading fashion' is, I guess, a programmers' habit. Particular form? See if you can still get to this question of mine: I see a particular form in Litak et al's axioms.
The link appears to be dead.
Hmm. The question was about 'quasiequational'; it got downvoted by someone rude enough to give no explanation, got no answers, and after a month got auto-deleted. (I guess I can still see it because I was the author.) StackOverflow is becoming more and more of a cesspool.
I looked at the pattern of axioms in the most recent LMH paper, and conjectured:
- 'equational' axioms are those which are a plain equality of terms (comprising variables, constants and function applications);
- 'quasi-equations' are two 'equations' connected by an implication;
- there are no negations, no ≠, and no other logical connectives;
- the variables are implicitly universally quantified across the whole (quasi)equation;
- in any (quasi)equation, all variables are mentioned at least once on each side of the = resp. ⇒;
- there is nothing existentially quantified -- which wouldn't of itself be a restriction, except that negations aren't allowed.
It's point 5, 'variables mentioned on each side', that I'm calling 'anchoring' above.
I want an equation that at least partially constrains R00:

R00 ^ x ≠ x <-> (x v R00) ^ z = z.

That breaks most of my conjectured points. Is there a (quasi)equational equivalent?

To express the pentagon lattice with R01 topmost and R00 on the 2-interval side, I think I need to appeal to 'lattice cover', to capture that R01 covers R00 and R00 covers the bottom of the pentagon.

"A lattice element y is said to cover another element x, if y > x, but there does not exist a z such that y > z > x. Here, y > x means x ≤ y and x ≠ y."
That sounds awfully non-quasiequational: "does not exist a z"; "x ≠ y". I can express it as

(x ^ z = x & z ^ y = z) <-> (z = x | z = y).

OK, I've got rid of the ≠ and the not exists, but I have a disjunction on the rhs and a conjunction on the lhs of the (bi-)implication. (Prover9 expands <-> into two disjunctions with one negation in each. Not at all quasi-equational.)
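The suspicion can at least be sanity-checked on the pentagon itself. The brute-force sketch below is my own; the element names 0 < a < b < 1 and 0 < c < 1 are an assumed labelling, with 1 playing the role of R01 and c the role of R00 on the 2-interval side. It confirms that the formula, with z read as universally quantified, holds exactly when y covers x or x = y -- so the ≠ really has been lost:

```python
from itertools import product

# Pentagon N5: 0 < a < b < 1 and 0 < c < 1, with c incomparable to a and b.
leq = {('0','0'), ('0','a'), ('0','b'), ('0','c'), ('0','1'),
       ('a','a'), ('a','b'), ('a','1'),
       ('b','b'), ('b','1'),
       ('c','c'), ('c','1'),
       ('1','1')}
elems = ['0', 'a', 'b', 'c', '1']

def meet(x, y):
    # Greatest lower bound: the lower bound that sits above all the others.
    lbs = [z for z in elems if (z, x) in leq and (z, y) in leq]
    return next(z for z in lbs if all((w, z) in leq for w in lbs))

def covers(y, x):
    # y covers x: x < y with no element strictly between them.
    return (x, y) in leq and x != y and not any(
        (x, z) in leq and (z, y) in leq and z not in (x, y) for z in elems)

def formula(x, y):
    # (x ^ z = x & z ^ y = z) <-> (z = x | z = y), z universally quantified.
    return all((meet(x, z) == x and meet(z, y) == z) == (z in (x, y))
               for z in elems)

for x, y in product(elems, repeat=2):
    # The formula captures 'y covers x OR x = y', not the strict cover.
    assert formula(x, y) == (covers(y, x) or x == y)
print("on N5 the formula holds iff y covers x or x = y")
```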