Possreps, Objects, Mutability, Immutability, and Developing Non-database Applications.
Quote from Erwin on June 23, 2019, 11:38 am

Quote from johnwcowan on June 22, 2019, 10:48 pm

In principle it is always possible to write an expression for each relvar in the database giving its new value in terms of the existing values of some or all of the relvars in the database, and then a multiple assignment to the relvars will achieve a correct result. But the resulting code might be a tad difficult for a human being to understand, even with the introduction of private relvars prior to the assignment to capture common subexpressions. It would be quite an optimizer that could reliably turn a multi-relvar transformation of arbitrary complexity into a minimal number of relvar inserts, deletes, and updates at the physical level.
Indeed, if multiple assignments like this were the rule rather than the exception, we could dispense with transactions, and simply say that the unit of work is a relational assignment, single or multiple.
I believe I once wrote somewhere
EFFECTIVE_INSERTS(RX) === PROPOSED_INSERTS(RX) MINUS RX
EFFECTIVE_DELETES(RX) === PROPOSED_DELETES(RX) INTERSECT RX
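Modelling relations as plain sets of tuples, the two identities are just set difference and set intersection; a sketch in Python, not Tutorial D (attribute names elided for brevity):

```python
# Erwin's identities, with a relation modelled as a frozenset of row tuples.

def effective_inserts(proposed_inserts, rx):
    # EFFECTIVE_INSERTS(RX) === PROPOSED_INSERTS(RX) MINUS RX
    return proposed_inserts - rx

def effective_deletes(proposed_deletes, rx):
    # EFFECTIVE_DELETES(RX) === PROPOSED_DELETES(RX) INTERSECT RX
    return proposed_deletes & rx

rx = frozenset({("a", 1), ("b", 2)})
ins = effective_inserts(frozenset({("b", 2), ("c", 3)}), rx)  # only ("c", 3) is new
del_ = effective_deletes(frozenset({("b", 2), ("z", 9)}), rx)  # only ("b", 2) is present
```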
IIRC, the reviewer proposed to replace "EFFECTIVE" with "ACTUAL", but at any rate, I fail to see how it takes "quite an optimizer" to achieve that.
And transactions are still a useful concept even if their typical usage pattern could indeed be replaced with single-MA invocations that "do everything in one go".
Note that there is also a notion of "user transaction" consisting of, e.g., the user fetches data from the db (one db transaction), edits the data on screen (no db transaction), and submits the edits to update the db (second db transaction).
Quote from Erwin on June 23, 2019, 11:50 am

Quote from johnwcowan on June 23, 2019, 12:56 am

I think it is "quite beyond the powers of all the dwarves put together, if they could all be collected again from the four corners of the world" to convert such a single assignment in full generality to a minimal set of physical-level inserts, deletes, and updates.
Thank you.
Quote from johnwcowan on June 23, 2019, 1:43 pm

Quote from Erwin on June 23, 2019, 11:38 am

I believe I once wrote somewhere
EFFECTIVE_INSERTS(RX) === PROPOSED_INSERTS(RX) MINUS RX
EFFECTIVE_DELETES(RX) === PROPOSED_DELETES(RX) INTERSECT RX
That's fine. Now show me your execution plan for this situation. Suppose relation R has a heading of {A, B, C, D} and relation S a heading of {C, D, E, F}, and what I want to do is:
R, S:= RENAME(S SEMIJOIN R) {E -> A, F -> B, RENAME(R SEMIJOIN S) {A -> E, F -> B};
Note that there is also a notion of "user transaction" consisting of, e.g., the user fetches data from db (one db transaction), edits the data on screen (no db transaction) and submits the edits to update the db (second db transaction).
Quite so, and that needs to be solved by a notion of the database version (timestamp, whatever) which can be read by the first transaction and checked by the second transaction so that the transaction aborts if the version is wrong.
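That version-check scheme (commonly known as optimistic concurrency control) can be sketched as follows; VersionedStore and StaleUpdateError are illustrative names, not from any particular DBMS:

```python
# The first transaction reads a version stamp along with the data; the
# second transaction's update is rejected if the stamp has changed since.

class StaleUpdateError(Exception):
    pass

class VersionedStore:
    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        # first db transaction: fetch data plus its version stamp
        return self.value, self.version

    def submit(self, new_value, expected_version):
        # second db transaction: abort if the version is wrong
        if expected_version != self.version:
            raise StaleUpdateError("database changed since the data was fetched")
        self.value = new_value
        self.version += 1

store = VersionedStore({"qty": 10})
data, ver = store.read()        # user fetches data
data = dict(data, qty=7)        # user edits on screen (no db transaction)
store.submit(data, ver)         # succeeds: version still matches
```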
Quote from Erwin on June 23, 2019, 4:33 pm

Quote from johnwcowan on June 23, 2019, 1:43 pm

Quote from Erwin on June 23, 2019, 11:38 am

I believe I once wrote somewhere
EFFECTIVE_INSERTS(RX) === PROPOSED_INSERTS(RX) MINUS RX
EFFECTIVE_DELETES(RX) === PROPOSED_DELETES(RX) INTERSECT RX
That's fine. Now show me your execution plan for this situation. Suppose relation R has a heading of {A, B, C, D} and relation S a heading of {C, D, E, F}, and what I want to do is:
R, S:= RENAME(S SEMIJOIN R) {E -> A, F -> B, RENAME(R SEMIJOIN S) {A -> E, F -> B};
Answer 2: detailed execution plans depend on physical design information that you did not provide; ergo, no answer is possible.

Answer 1:
Assuming that was meant to be (TD syntactic style for MA is not the same as PL/1 syntactic style for same)
R := RENAME(S SEMIJOIN R) {E -> A, F -> B} , S := RENAME(R SEMIJOIN S) {A -> E, B -> F};
(and assuming the A/B and E/F types correspond appropriately, and that R and S are base relvars, or at least entirely independent) The execution plan is exactly what RM Pre 21 prescribes:
(1) The semijoins and renames are computed/evaluated, and let's say the results are R_TARGET_VALUE and S_TARGET_VALUE
(2) The actual inserts and actual deletes for both R and S are then computed as (R_TARGET_VALUE MINUS R) and (R MINUS R_TARGET_VALUE) respectively, and likewise for S
(bis) Steps (1) and (2) can be merged into computing RENAME(S SEMIJOIN R) {E -> A, F -> B} MINUS R for the actual inserts to R (and three other similar computations for the rest), if there's any hope of the optimizer spotting a gain there - and that's a very big if, because the expression itself clearly shows that there's no avoiding evaluating every tuple in S: its {A B} match in R, and also its "after-the-rename" match in R, for every tuple.
(3) The actual inserts and actual deletes for each relvar are then matched according to key attributes to detect whether there is stuff in there that could be regarded as an UPDATE which could give rise to an update-in-place as opposed to actual physical delete-then-insert.
And that's the best anyone can do. The actual inserts/deletes are needed for efficiently checking all the constraints that apply to R and S. If there are no constraints on either R or S, I'm going to ask you for the number of such use cases there would be in real life and for the ROI of actually building an optimizer that exploits the circumstance.
MA handles it. Period. (Within which timeframe is of no concern. If they know (or reasonably expect) it's going to take hours and hours to complete, then just reserve a weekend for doing it. Or find an alternative solution doing it in smaller chunks if weekends cannot be reserved. Point is : MA as defined and implemented correctly handles it.)
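For what it's worth, steps (1) and (2) of that plan can be sketched in Python, modelling a relation as a set of frozensets of (attribute, value) pairs. The SEMIJOIN and RENAME here are toy stand-ins for the Tutorial D operators, and the B -> F direction of the second rename is my assumption about the intended attribute correspondence:

```python
# Toy relational operators over relations-as-sets-of-rows.

def heading(rel):
    return {a for row in rel for a, _ in row} if rel else set()

def semijoin(rel, other):
    # tuples of rel that match some tuple of other on the common attributes
    common = heading(rel) & heading(other)
    other_proj = {frozenset((a, v) for a, v in row if a in common) for row in other}
    return {row for row in rel
            if frozenset((a, v) for a, v in row if a in common) in other_proj}

def rename(rel, mapping):
    return {frozenset((mapping.get(a, a), v) for a, v in row) for row in rel}

def plan(r, s):
    # (1) evaluate every right-hand side against the *old* relvar values
    r_target = rename(semijoin(s, r), {"E": "A", "F": "B"})
    s_target = rename(semijoin(r, s), {"A": "E", "B": "F"})
    # (2) actual inserts and actual deletes per relvar
    return (r_target - r, r - r_target,
            s_target - s, s - s_target)
```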
Quote from dandl on June 24, 2019, 12:05 am

Quote from Erwin on June 21, 2019, 6:55 am
And not to forget : this way security rules can be altered without having to recompile. Independence, anyone ?
That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.
So that's not to say there ***could not be*** security-like rules checked by the compiler. It's still just checking [a particular instantiation of] a predicate at some point in time. Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.
My proposition is that a language derives great benefit from having a visibility mechanism, regardless of whether it also has a rights mechanism.
Assume an application A has two modules M1 and M2. M2 exports a range of services to be used by A, but none of them are visible to M1. Therefore M1 has no dependency on M2; a bug in M1 could not be caused by M2 and M2 can be changed without needing to consider M1.
Now replace 'none of them are visible' by 'none of them are permitted to be used by M1'. M1 is now dependent on M2 at compile time, because the permissions might change with no recompile. We lose the ability to make those confident statements about bugs and consequences of change.
My experience tells me this is a serious loss. We have it now, and for all its faults OO does this well, for both stateful and stateless dependencies. I would like to know how it is proposed that a TTM type system would do it, if it can do it at all.
Quote from Dave Voorhis on June 24, 2019, 9:16 am

Quote from dandl on June 24, 2019, 12:05 am

Quote from Erwin on June 21, 2019, 6:55 am
And not to forget : this way security rules can be altered without having to recompile. Independence, anyone ?
That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.
So that's not to say there ***could not be*** security-like rules checked by the compiler. It's still just checking [a particular instantiation of] a predicate at some point in time. Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.
My proposition is that a language derives great benefit from having a visibility mechanism, regardless of whether it also has a rights mechanism.
Assume an application A has two modules M1 and M2. M2 exports a range of services to be used by A, but none of them are visible to M1. Therefore M1 has no dependency on M2; a bug in M1 could not be caused by M2 and M2 can be changed without needing to consider M1.
Now replace 'none of them are visible' by 'none of them are permitted to be used by M1'. M1 is now dependent on M2 at compile time, because the permissions might change with no recompile. We lose the ability to make those confident statements about bugs and consequences of change.
My experience tells me this is a serious loss. We have it now, and for all its faults OO does this well, for both stateful and stateless dependencies. I would like to know how it is proposed that a TTM type system would do it, if it can do it at all.
If we're talking about just the type system, Date & Darwen make mention of "protected operators" that implement (among other things, perhaps) mappings between the exposed interface of a type -- i.e., selectors, THE_ operators and so forth -- and the physical or internal representation. Search DTATRM for the phrase "protected operators".
As for whatever other protected modularity you might wish to implement, that's outside of TTM and falls into the RM Pre 26 category.
Quote from dandl on June 24, 2019, 4:33 pm

Quote from Dave Voorhis on June 24, 2019, 9:16 am

Quote from dandl on June 24, 2019, 12:05 am

Quote from Erwin on June 21, 2019, 6:55 am
And not to forget : this way security rules can be altered without having to recompile. Independence, anyone ?
That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.
So that's not to say there ***could not be*** security-like rules checked by the compiler. It's still just checking [a particular instantiation of] a predicate at some point in time. Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.
My proposition is that a language derives great benefit from having a visibility mechanism, regardless of whether it also has a rights mechanism.
Assume an application A has two modules M1 and M2. M2 exports a range of services to be used by A, but none of them are visible to M1. Therefore M1 has no dependency on M2; a bug in M1 could not be caused by M2 and M2 can be changed without needing to consider M1.
Now replace 'none of them are visible' by 'none of them are permitted to be used by M1'. M1 is now dependent on M2 at compile time, because the permissions might change with no recompile. We lose the ability to make those confident statements about bugs and consequences of change.
My experience tells me this is a serious loss. We have it now, and for all its faults OO does this well, for both stateful and stateless dependencies. I would like to know how it is proposed that a TTM type system would do it, if it can do it at all.
If we're talking about just the type system, Date & Darwen make mention of "protected operators" that implement (among other things, perhaps) mappings between the exposed interface of a type -- i.e., selectors, THE_ operators and so forth -- and the physical or internal representation. Search DTATRM for the phrase "protected operators".
As for whatever other protected modularity you might wish to implement, that's outside of TTM and falls into the RM Pre 26 category.
When you say "exposed interface" you are already singing my song. Exposed refers to the visibility of the interface, the degree to which a caller may have a dependency on it, regardless of the rights of the caller.
I have no idea what to implement, that's why I'm asking. The proposition has been put that a value-based type system such as TTM describes can adequately replace one based on OO. I want to know what feature or principle is proposed to replace or implement encapsulation: packets of state with extended lifetime but restricted visibility.
If you have the handle for an open file, where does the file state go?
Quote from Dave Voorhis on June 24, 2019, 4:44 pm

Quote from dandl on June 24, 2019, 4:33 pm

Quote from Dave Voorhis on June 24, 2019, 9:16 am

Quote from dandl on June 24, 2019, 12:05 am

Quote from Erwin on June 21, 2019, 6:55 am
And not to forget : this way security rules can be altered without having to recompile. Independence, anyone ?
That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.
So that's not to say there ***could not be*** security-like rules checked by the compiler. It's still just checking [a particular instantiation of] a predicate at some point in time. Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.
My proposition is that a language derives great benefit from having a visibility mechanism, regardless of whether it also has a rights mechanism.
Assume an application A has two modules M1 and M2. M2 exports a range of services to be used by A, but none of them are visible to M1. Therefore M1 has no dependency on M2; a bug in M1 could not be caused by M2 and M2 can be changed without needing to consider M1.
Now replace 'none of them are visible' by 'none of them are permitted to be used by M1'. M1 is now dependent on M2 at compile time, because the permissions might change with no recompile. We lose the ability to make those confident statements about bugs and consequences of change.
My experience tells me this is a serious loss. We have it now, and for all its faults OO does this well, for both stateful and stateless dependencies. I would like to know how it is proposed that a TTM type system would do it, if it can do it at all.
If we're talking about just the type system, Date & Darwen make mention of "protected operators" that implement (among other things, perhaps) mappings between the exposed interface of a type -- i.e., selectors, THE_ operators and so forth -- and the physical or internal representation. Search DTATRM for the phrase "protected operators".
As for whatever other protected modularity you might wish to implement, that's outside of TTM and falls into the RM Pre 26 category.
When you say "exposed interface" you are already singing my song. Exposed refers to the visibility of the interface, the degree to which a caller may have a dependency on it, regardless of the rights of the caller.
I have no idea what to implement, that's why I'm asking. The proposition has been put that a value-based type system such as TTM describes can adequately replace one based on OO. I want to know what feature or principle is proposed to replace or implement encapsulation: packets of state with extended lifetime but restricted visibility.
If you have the handle for an open file, where does the file state go?
I suppose my -- perhaps flippant -- answer is, implement whatever you like. TTM certainly doesn't preclude implementing a classic object-oriented approach for modularising run-time state and code, but might (but that's only a might, not necessarily a would) frown on its instances winding up in the database.
Or, explicitly provide modules, with module-local scope.
Or, strictly pass the file handle from operator to operator via parameters.
Or, pass a "state" tuple from operator to operator via parameters.
Or, provide higher-order operators and pass operators -- perhaps with closures -- from operator to operator via parameters.
Etc. There isn't one right answer here. Whatever approach is "right" depends on the nature of the language.
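The 'pass a "state" tuple from operator to operator' alternative, for instance, can be sketched as follows; FileState, open_file and read_line are illustrative names, not TTM constructs, and the in-memory lines stand in for a real OS-level handle:

```python
# Instead of an object holding an open-file handle as hidden mutable state,
# each operator takes a state value and returns an updated one; the caller
# threads the state explicitly through the calls.
from typing import NamedTuple, Tuple

class FileState(NamedTuple):
    lines: Tuple[str, ...]   # contents (stands in for the OS-level handle)
    position: int            # read cursor

def open_file(lines):
    return FileState(tuple(lines), 0)

def read_line(state):
    # returns the line read plus the successor state
    line = state.lines[state.position]
    return line, FileState(state.lines, state.position + 1)

st = open_file(["first", "second"])
line1, st = read_line(st)
line2, st = read_line(st)
```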
Quote from dandl on June 24, 2019, 4:33 pmQuote from Dave Voorhis on June 24, 2019, 9:16 amQuote from dandl on June 24, 2019, 12:05 amQuote from Erwin on June 21, 2019, 6:55 am
And not to forget : this way security rules can be altered without having to recompile. Independence, anyone ?
That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.
So that's not to say there ***could not be*** security-like rules checked by the compiler. It's still just checking [a particular instantiation of] a predicate at some point in time. Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.
My proposition is that a language derives great benefit from having a visibility mechanism, regardless of whether it also has a rights mechanism.
Assume an application A has two modules M1 and M2. M2 exports a range of services to be used by A, but none of them are visible to M1. Therefore M1 has no dependency on M2; a bug in M1 could not be caused by M2 and M2 can be changed without needing to consider M1.
Now replace 'none of them are visible' by 'none of them are permitted to be used by M1. M1 is now dependent on M2 at compile time, because the permissions might change with no recompile. We lose the ability to make those confident statements about bugs and consequences of change.
My experience tells me this is a serious loss. We have it now, and for all its faults OO does this well, for both stateful and stateless dependencies. I would like to know how it is proposed that a TTM type system would do it, if it can do it at all.
does it.If we're talking about just the type system, Date & Darwen make mention of "protected operators" that implement (among other things, perhaps) mappings between the exposed interface of a type -- i.e., selectors, THE_ operators and so forth -- and the physical or internal representation. Search DTATRM for the phrase "protected operators".
As for whatever other protected modularity you might wish to implement, that's outside of TTM and falls into the RM Pre 26 category.
When you say "exposed interface" you are already singing my song. Exposed refers to the visibility of the interface, the degree to which a caller may have a dependency on it, regardless of the rights of the caller.
I have no idea what to implement, that's why I'm asking. The proposition has been put that a value-based type system such as TTM describes can adequately replace one based on OO. I want to know what feature or principle is proposed to replace or implement encapsulation: packets of state with extended lifetime but restricted visibility.
If you have the handle for an open file, where does the file state go?
I suppose my -- perhaps flippant -- answer is, implement whatever you like. TTM certainly doesn't preclude implementing a classic object-oriented approach for modularising run-time state and code, but might (but that's only a might, not necessarily a would) frown on its instances winding up in the database.
Or, explicitly provide modules, with module-local scope.
Or, strictly pass the file handle from operator to operator via parameters.
Or, pass a "state" tuple from operator to operator via parameters.
Or, provide higher-order operators and pass operators -- perhaps with closures -- from operator to operator via parameters.
Etc. There isn't one right answer here. Whatever approach is "right" depends on the nature of the language.
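As a concrete illustration of the closure-based option above, here is a minimal sketch (Python is used purely for illustration; `open_counter`, `increment` and `current` are invented names, not anything from TTM or a particular D):

```python
# Sketch: encapsulating mutable state without objects, via a closure.
# The state variable `count` has extended lifetime but is reachable
# only through the operators returned by open_counter.

def open_counter(start=0):
    count = start  # private state captured by the closures below

    def increment():
        nonlocal count
        count += 1

    def current():
        return count

    return increment, current

increment, current = open_counter(10)
increment()
increment()
print(current())  # 12; `count` itself is not visible to any caller
```

The same shape would apply to a file handle: the open-file state lives in the closure's environment, and callers see only the operators that were handed back.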
Quote from Erwin on June 24, 2019, 11:17 pm
Quote from dandl on June 24, 2019, 4:33 pm
The proposition has been put that a value-based type system such as TTM describes can adequately replace one based on OO. I want to know what feature or principle is proposed to replace or implement encapsulation: packets of state with extended lifetime but restricted visibility.
Seems to me like you are assuming that "restricted visibility mechanisms" such as Java's are a fundamental part of OO [and in particular its feature called "encapsulation"].
It is not. Encapsulation is the property of internal state being invisible by definition (except through exposed methods). Java does not do that, because Java allows internal state to be protected or even public, i.e. *directly* accessible.
And the visibility technique as applied to the methods themselves has proved crippled too: see the story of the com.sun packages (none of them was intended to be visible except to the methods of the Java runtime itself, but that was a kind of visibility predicate the visibility mechanism could not express, so everyone knew they were there and the less careful ones stepped into the pitfall).
Quote from dandl on June 24, 2019, 4:33 pm
The proposition has been put that a value-based type system such as TTM describes can adequately replace one based on OO. I want to know what feature or principle is proposed to replace or implement encapsulation: packets of state with extended lifetime but restricted visibility.
Seems to me like you are assuming that "restricted visibility mechanisms" such as Java's are a fundamental part of OO [and in particular its feature called "encapsulation"].
It is not. Encapsulation is the property of internal state being invisible by definition (except through exposed methods). Java does not do that, because Java allows internal state to be protected or even public, i.e. *directly* accessible.
And the visibility technique as applied to the methods themselves has proved crippled too: see the story of the com.sun packages (none of them was intended to be visible except to the methods of the Java runtime itself, but that was a kind of visibility predicate the visibility mechanism could not express, so everyone knew they were there and the less careful ones stepped into the pitfall).
Quote from johnwcowan on June 24, 2019, 11:28 pm
Quote from Erwin on June 24, 2019, 11:17 pm
Seems to me like you are assuming that "restricted visibility mechanisms" such as Java's are a fundamental part of OO [and in particular its feature called "encapsulation"].
It is not. Encapsulation is the property of internal state being invisible by definition (except through exposed methods). Java does not do that, because Java allows internal state to be protected or even public, i.e. *directly* accessible.
Apparently the term encapsulation, like many other technical terms, is used with two different meanings by different people. Per Wikipedia (see the page for hyperlinks and references):
In object oriented programming languages, encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
- A language mechanism for restricting direct access to some of the object's components.
- A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Some programming language researchers and academics use the first meaning alone or in combination with the second as a distinguishing feature of object-oriented programming, while some programming languages that provide lexical closures view encapsulation as a feature of the language orthogonal to object orientation.
The second definition is motivated by the fact that in many of the OOP languages hiding of components is not automatic or can be overridden; thus, information hiding is defined as a separate notion by those who prefer the second definition.
You are evidently using it in the first meaning, but with the further requirement that only methods and not other components can be exposed.
(The Unicode glossary shows that the term character is used with four different meanings, so this is nothing unusual.)
Quote from Erwin on June 24, 2019, 11:17 pm
Seems to me like you are assuming that "restricted visibility mechanisms" such as Java's are a fundamental part of OO [and in particular its feature called "encapsulation"].
It is not. Encapsulation is the property of internal state being invisible by definition (except through exposed methods). Java does not do that, because Java allows internal state to be protected or even public, i.e. *directly* accessible.
Apparently the term encapsulation, like many other technical terms, is used with two different meanings by different people. Per Wikipedia (see the page for hyperlinks and references):
In object oriented programming languages, encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
- A language mechanism for restricting direct access to some of the object's components.
- A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
Some programming language researchers and academics use the first meaning alone or in combination with the second as a distinguishing feature of object-oriented programming, while some programming languages that provide lexical closures view encapsulation as a feature of the language orthogonal to object orientation.
The second definition is motivated by the fact that in many of the OOP languages hiding of components is not automatic or can be overridden; thus, information hiding is defined as a separate notion by those who prefer the second definition.
You are evidently using it in the first meaning, but with the further requirement that only methods and not other components can be exposed.
(The Unicode glossary shows that the term character is used with four different meanings, so this is nothing unusual.)
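The contrast between the two Wikipedia meanings can be made concrete with a small sketch (Python is used purely for illustration; `Account` and `make_account` are invented names):

```python
# Sense 2 of "encapsulation": bundling data with the methods that
# operate on it, with NO access restriction -- the state is a public
# component and nothing stops a caller from mutating it directly.
class Account:
    def __init__(self, balance):
        self.balance = balance      # fully exposed component

    def deposit(self, amount):
        self.balance += amount

# Sense 1: a mechanism restricting direct access to the state -- here a
# closure rather than any class-based visibility keyword.
def make_account(balance):
    def deposit(amount):
        nonlocal balance
        balance += amount

    def current():
        return balance

    return deposit, current

a = Account(100)
a.balance = -999                    # legal: bundling alone hides nothing

deposit, current = make_account(100)
deposit(50)
print(current())  # 150; `balance` is unreachable except via the operators
```

This also illustrates the Wikipedia remark that languages with lexical closures can treat information hiding as orthogonal to object orientation: the second version restricts access without using a class at all.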