The Forum for Discussion about The Third Manifesto and Related Matters


Possreps, Objects, Mutability, Immutability, and Developing Non-database Applications.

Quote from dandl on June 20, 2019, 1:42 pm

Though I rather like the TTM implication that global mutable state == the database. It's also certainly reasonable to distinguish local state and global state, but that's outside of the scope of TTM.

One of the TTM writings suggests using security mechanisms to control access rather than access modifiers. I think that's an interesting approach to explore.

Yes. I deeply distrust this suggestion, but that could be because I've never seen a detailed explanation of how it might work. It suggests a program might compile (because X is visible) and then not run (because X is not accessible). Doesn't sound like a good idea.

Except that's exactly how security mechanisms in languages and operating systems normally work. Don't have permission to write to directory X? Compiled code runs until it tries to write to X, then errors occur, exceptions get thrown, or the program stops.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from AntC on June 20, 2019, 6:33 am

I'd actually propose that the only global variable you need is the database; and that local variables be considered views into it or holding slots for to-be-transactions.

Quite so.  In Clean and Mercury, both pre-Haskell pure functional programming languages, there is a type World that is passed around to anything involving I/O or state generally.  You get back another object of type World, but you have to be sure that you never use the old World object again.  "A whole new world / A new fantastic point of view."  The Clean compiler enforces uniqueness typing to make sure that you can't.  It's equivalent in power to the Haskell I/O monad, but more explicit.
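The Clean/Mercury idiom can be roughly sketched in Java (the `World` type and its `print` method here are invented purely for illustration): every effectful operation consumes a world value and hands back a fresh one, and the caller must never touch the old one again. Clean's uniqueness typing enforces that discipline at compile time; in Java it can only be a convention.

```java
// Hypothetical sketch of Clean/Mercury-style explicit world-passing.
// Each operation takes a World and returns a new one; Clean's uniqueness
// typing would forbid reusing the old value, whereas here the "use each
// world exactly once" rule is merely a convention the caller follows.
final class World {
    private final StringBuilder output; // stands in for "the outside world"

    private World(StringBuilder output) { this.output = output; }

    static World initial() { return new World(new StringBuilder()); }

    // Clean's signature would be roughly: print :: String World -> World
    World print(String s) {
        output.append(s).append('\n');
        return new World(output); // "a whole new world"
    }

    String rendered() { return output.toString(); }
}

class WorldDemo {
    public static void main(String[] args) {
        World w0 = World.initial();
        World w1 = w0.print("hello");
        World w2 = w1.print("world"); // thread the world through; never reuse w0 or w1
        System.out.print(w2.rendered());
    }
}
```

The explicit threading makes the sequencing of effects visible in the data flow, which is exactly what the Haskell I/O monad hides behind `>>=`.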

Why does the power of OO come from encapsulation?

In my opinion, the best thing about OO is generic functions.

I'm not quite sure what John's getting at with the claim that FP is the dual of OO,

Impure FP is the dual of OO.  Pure FP is the dual of OO-without-state.

 

Quote from dandl on June 20, 2019, 11:27 am

My reading is that a 'closure' with access only to 'effectively final' variables is not a closure.

Absolutely (not a closure in the updatable sense I am using here; immutable closures are still closures in some sense).

   

Each call prints a different value, because the state in the closure is mutable. Can Java do that? Can Rel?

Java can do it somewhat painfully.  Although access to variables in outer classes is read-only, the variable can hold either a one-element array (which is mutable) or one of the obscure org.omg.CORBA classes BooleanHolder, StringHolder, IntHolder, FixedHolder (exact decimal values), DoubleHolder, etc. etc., all of which hold exactly one value of the specified type and are mutable.  As I noted far, far above, such mutable boxes are equivalent in power to mutable variables.
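The one-element-array trick fits in a few lines (`makeCounter` is a made-up name for illustration). The captured reference `box` is effectively final, which satisfies the compiler, but the cell behind it is freely mutable, so the closure carries genuine mutable state:

```java
import java.util.function.Supplier;

class CounterDemo {
    // The reference `box` is effectively final (so the lambda may capture
    // it), but the array cell it points to is mutable -- a poor man's
    // mutable variable, equivalent in power to a Holder class.
    static Supplier<Integer> makeCounter() {
        int[] box = {0};
        return () -> ++box[0];
    }

    public static void main(String[] args) {
        Supplier<Integer> counter = makeCounter();
        System.out.println(counter.get()); // prints 1
        System.out.println(counter.get()); // prints 2: each call sees the updated state
    }
}
```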

 

Quote from johnwcowan on June 20, 2019, 2:34 pm

Each call prints a different value, because the state in the closure is mutable. Can Java do that? Can Rel?

Java can do it somewhat painfully.  Although access to variables in outer classes is read-only, the variable can hold either a one-element array (which is mutable) or one of the obscure org.omg.CORBA classes BooleanHolder, StringHolder, IntHolder, FixedHolder (exact decimal values), DoubleHolder, etc. etc., all of which hold exactly one value of the specified type and are mutable. [...]

Before Java 8, maybe. I doubt anyone would do that now. See https://forum.thethirdmanifesto.com/forum/topic/possreps-objects-mutability-immutability-and-developing-non-database-applications/?part=2#postid-984894
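Since Java 8, the idiomatic version of the same closure would reach for a standard mutable cell such as java.util.concurrent.atomic.AtomicInteger rather than a one-element array or a CORBA holder (a sketch only; `makeCounter` is a hypothetical name):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

class ModernCounterDemo {
    // The captured reference is still effectively final, as Java requires,
    // but AtomicInteger is a standard (and thread-safe) mutable box.
    static Supplier<Integer> makeCounter() {
        AtomicInteger count = new AtomicInteger(0);
        return count::incrementAndGet;
    }

    public static void main(String[] args) {
        Supplier<Integer> counter = makeCounter();
        System.out.println(counter.get()); // prints 1
        System.out.println(counter.get()); // prints 2
    }
}
```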

Quote from Dave Voorhis on June 20, 2019, 2:00 pm
Quote from dandl on June 20, 2019, 1:42 pm

Though I rather like the TTM implication that global mutable state == the database. It's also certainly reasonable to distinguish local state and global state, but that's outside of the scope of TTM.

One of the TTM writings suggests using security mechanisms to control access rather than access modifiers. I think that's an interesting approach to explore.

Yes. I deeply distrust this suggestion, but that could be because I've never seen a detailed explanation of how it might work. It suggests a program might compile (because X is visible) and then not run (because X is not accessible). Doesn't sound like a good idea.

Except that's exactly how security mechanisms in languages and operating systems normally work. Don't have permission to write to directory X? Compiled code runs until it tries to write to X, then errors occur, exceptions get thrown, or the program stops.

And not to forget : this way security rules can be altered without having to recompile.  Independence, anyone ?

That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.

So that's not to say there ***could not be*** security-like rules checked by the compiler.  It's still just checking [a particular instantiation of] a predicate at some point in time.  Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.

Quote from Erwin on June 21, 2019, 6:55 am
Quote from Dave Voorhis on June 20, 2019, 2:00 pm
Quote from dandl on June 20, 2019, 1:42 pm

Though I rather like the TTM implication that global mutable state == the database. It's also certainly reasonable to distinguish local state and global state, but that's outside of the scope of TTM.

One of the TTM writings suggests using security mechanisms to control access rather than access modifiers. I think that's an interesting approach to explore.

Yes. I deeply distrust this suggestion, but that could be because I've never seen a detailed explanation of how it might work. It suggests a program might compile (because X is visible) and then not run (because X is not accessible). Doesn't sound like a good idea.

Except that's exactly how security mechanisms in languages and operating systems normally work. Don't have permission to write to directory X? Compiled code runs until it tries to write to X, then errors occur, exceptions get thrown, or the program stops.

And not to forget : this way security rules can be altered without having to recompile.  Independence, anyone ?

That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.

Hmm, hmm. The sort of 'security rules' encapsulated by methods are more of the form: you the user, or you the programmer acting as client for my method, cannot be trusted to understand the very delicate internal logic of these fields/tuples/relations. So I won't let the user poke the database direct, and neither will I let client programmers poke the database direct. Instead you(s) must call my method, and I'll take responsibility for poking the database correctly.

  1. This is the sort of encapsulation I strongly detest/the sort of encapsulation that is a leading reason to be deeply suspicious of OO.
  2. This is the sort of subtlety that general-purpose security rules are going to struggle to express.
  3. So does the security mechanism embody a notion of 'trusted' method vs. untrusted user? That's prone to piggy-in-the-middle attacks and various kinds of spoofery. (Not insurmountable, but all adds to complexity/fragile to maintain.)
  4. Why not (for the love of Mike!) express this as constraints in the database? Then the rules can't be evaded by spoofery/anybody can be allowed to 'have a go' at an update (that upholds the constraints)/it'll catch logic errors in the supposedly 'trustworthy' method. ("very delicate internal logic" is longhand for 'bugs'.)

Why not? Of course because SQL's support for constraints is a joke. This should be a strong argument for TTM over SQL.

So that's not to say there ***could not be*** security-like rules checked by the compiler.  It's still just checking [a particular instantiation of] a predicate at some point in time.  Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.

There seems to be a conspiracy between DBAs, "security guys", and developers that rules (whether they're security or business logic) should not be allowed across the threshold into the database/schema definition. Something about a performance impact. Of course this is propeller-headed 'performance'. Nothing sinks the enterprise's performance like having non-compliant data in the database; and having to pay an army of data analysts to come in and sort it out/suspend transactions so they can work on a frozen database state. (Speaking as somebody who's worked as a major-general in several such armies, and having charged handsomely for it.)

Quote from AntC on June 21, 2019, 9:59 am
Quote from Erwin on June 21, 2019, 6:55 am
Quote from Dave Voorhis on June 20, 2019, 2:00 pm
Quote from dandl on June 20, 2019, 1:42 pm

Though I rather like the TTM implication that global mutable state == the database. It's also certainly reasonable to distinguish local state and global state, but that's outside of the scope of TTM.

One of the TTM writings suggests using security mechanisms to control access rather than access modifiers. I think that's an interesting approach to explore.

Yes. I deeply distrust this suggestion, but that could be because I've never seen a detailed explanation of how it might work. It suggests a program might compile (because X is visible) and then not run (because X is not accessible). Doesn't sound like a good idea.

Except that's exactly how security mechanisms in languages and operating systems normally work. Don't have permission to write to directory X? Compiled code runs until it tries to write to X, then errors occur, exceptions get thrown, or the program stops.

And not to forget : this way security rules can be altered without having to recompile.  Independence, anyone ?

That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.

Hmm, hmm. The sort of 'security rules' encapsulated by methods are more of the form: you the user, or you the programmer acting as client for my method, cannot be trusted to understand the very delicate internal logic of these fields/tuples/relations. So I won't let the user poke the database direct, and neither will I let client programmers poke the database direct. Instead you(s) must call my method, and I'll take responsibility for poking the database correctly.

  1. This is the sort of encapsulation I strongly detest/the sort of encapsulation that is a leading reason to be deeply suspicious of OO.
  2. This is the sort of subtlety that general-purpose security rules are going to struggle to express.
  3. So does the security mechanism embody a notion of 'trusted' method vs. untrusted user? That's prone to piggy-in-the-middle attacks and various kinds of spoofery. (Not insurmountable, but all adds to complexity/fragile to maintain.)
  4. Why not (for the love of Mike!) express this as constraints in the database? Then the rules can't be evaded by spoofery/anybody can be allowed to 'have a go' at an update (that upholds the constraints)/it'll catch logic errors in the supposedly 'trustworthy' method. ("very delicate internal logic" is longhand for 'bugs'.)

Why not? Of course because SQL's support for constraints is a joke. This should be a strong argument for TTM over SQL.

So that's not to say there ***could not be*** security-like rules checked by the compiler.  It's still just checking [a particular instantiation of] a predicate at some point in time.  Nothing special about it even if the security guy is obsessed with keeping those predicates hidden from the developers.

There seems to be a conspiracy between DBAs, "security guys", and developers that rules (whether they're security or business logic) should not be allowed across the threshold into the database/schema definition. Something about a performance impact. Of course this is propeller-headed 'performance'. Nothing sinks the enterprise's performance like having non-compliant data in the database; and having to pay an army of data analysts to come in and sort it out/suspend transactions so they can work on a frozen database state. (Speaking as somebody who's worked as a major-general in several such armies, and having charged handsomely for it.)

Grin.  Theoretically I should now send you a cask of Westvleteren.

Quote from AntC on June 21, 2019, 9:59 am
Quote from Erwin on June 21, 2019, 6:55 am
Quote from Dave Voorhis on June 20, 2019, 2:00 pm
Quote from dandl on June 20, 2019, 1:42 pm

Though I rather like the TTM implication that global mutable state == the database. It's also certainly reasonable to distinguish local state and global state, but that's outside of the scope of TTM.

One of the TTM writings suggests using security mechanisms to control access rather than access modifiers. I think that's an interesting approach to explore.

Yes. I deeply distrust this suggestion, but that could be because I've never seen a detailed explanation of how it might work. It suggests a program might compile (because X is visible) and then not run (because X is not accessible). Doesn't sound like a good idea.

Except that's exactly how security mechanisms in languages and operating systems normally work. Don't have permission to write to directory X? Compiled code runs until it tries to write to X, then errors occur, exceptions get thrown, or the program stops.

And not to forget : this way security rules can be altered without having to recompile.  Independence, anyone ?

That said, of course "not allowed to write to directory X" and "not allowed to invoke operator Y" have different targets (and purposes) and ***of course*** the latter kind would be checked by a compiler if all needed information is available at compile-time.

Hmm, hmm. The sort of 'security rules' encapsulated by methods are more of the form: you the user, or you the programmer acting as client for my method, cannot be trusted to understand the very delicate internal logic of these fields/tuples/relations. So I won't let the user poke the database direct, and neither will I let client programmers poke the database direct. Instead you(s) must call my method, and I'll take responsibility for poking the database correctly.

  1. This is the sort of encapsulation I strongly detest/the sort of encapsulation that is a leading reason to be deeply suspicious of OO.

You mean you hate bad programming? ;-)

I'm sure we all do, but object oriented programming isn't to blame; it's misuse of object oriented programming that is to blame. The idea behind information hiding[1] and access modifiers is to allow the same level of portability and implementation-invisibility that almost all programming languages provide for built-in types like integer, float, decimal, and string, but extend it to user-defined types. How often do you need to "break encapsulation" or peer through information hiding to see how your integers and strings are implemented?

Information hiding has a valuable purpose: it helps keep programs from becoming brittle, difficult to read, difficult to maintain, and non-portable by preventing developers from using system-dependent, changeable, or otherwise-risky private internal implementation mechanisms in constructs that should otherwise only present a well-defined, stable, and portable public interface.

Of course, we've all probably seen badly-written object-oriented code where what should have been part of the public interface was inadvertently or intentionally made private, which forces developers to use ugly workarounds like reflection or duplicated code, or give up in frustration. That's where security-based access modifiers might make sense: it would potentially allow straightforward privileged access to internal implementation details when required, whilst still preventing application developers from using risky internal implementation mechanisms that result in brittle, unreadable, unmaintainable, non-portable code.
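The integers-and-strings analogy can be made concrete with a toy user-defined type (`Temperature` here is invented for illustration): clients see only a stable public interface, and the private representation could switch from celsius to, say, kelvin without breaking a single caller.

```java
// Hypothetical illustration of information hiding: Temperature exposes a
// stable public interface while its private representation (a celsius
// field) remains invisible, so it could be swapped for kelvin later
// without any client code changing.
final class Temperature {
    private final double celsius;   // private: clients never see this choice

    private Temperature(double celsius) { this.celsius = celsius; }

    public static Temperature ofCelsius(double c)    { return new Temperature(c); }
    public static Temperature ofFahrenheit(double f) { return new Temperature((f - 32) * 5.0 / 9.0); }

    public double toCelsius()    { return celsius; }
    public double toFahrenheit() { return celsius * 9.0 / 5.0 + 32; }
}

class TemperatureDemo {
    public static void main(String[] args) {
        Temperature boiling = Temperature.ofFahrenheit(212);
        System.out.println(boiling.toCelsius()); // prints 100.0
    }
}
```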

--

[1] I'm careful not to use the term "encapsulation", because it's so often misused to mean information hiding. Encapsulation and information hiding often appear together, but they are distinct and orthogonal.

Quote from Dave Voorhis on June 21, 2019, 11:03 am

You mean you hate bad programming? ;-)

I'm sure we all do, but object oriented programming isn't to blame; it's misuse of object oriented programming that is to blame. The idea behind information hiding[1] and access modifiers is to allow the same level of portability and implementation-invisibility that almost all programming languages provide for built-in types like integer, float, decimal, and string, but extend it to user-defined types. How often do you need to "break encapsulation" or peer through information hiding to see how your integers and strings are implemented?

Information hiding has a valuable purpose: it helps keep programs from becoming brittle, difficult to read, difficult to maintain, and non-portable by preventing developers from using system-dependent, changeable, or otherwise-risky private internal implementation mechanisms in constructs that should otherwise only present a well-defined, stable, and portable public interface.

Of course, we've all probably seen badly-written object-oriented code where what should have been part of the public interface was inadvertently or intentionally made private, which forces developers to use ugly workarounds like reflection or duplicated code, or give up in frustration. That's where security-based access modifiers might make sense: it would potentially allow straightforward privileged access to internal implementation details when required, whilst still preventing application developers from using risky internal implementation mechanisms that result in brittle, unreadable, unmaintainable, non-portable code.

--

[1] I'm careful not to use the term "encapsulation", because it's so often misused to mean information hiding. Encapsulation and information hiding often appear together, but they are distinct and orthogonal.

All those poor sods neglecting the warning "don't use the com.sun packages - they're ***NOT*** part of the API" come to mind.  Truth be told: if they hadn't been collected into separate packages named com.sun, then the methods involved wouldn't have had to be made public, and the hacking, open-source-code-inspecting part of the audience wouldn't have been trapped by the very pitfall they apparently needed to be protected from.

Quote from AntC on June 21, 2019, 9:59 am

... Of course because SQL's support for constraints is a joke. This should be a strong argument for TTM over SQL.

Wouldn't SQL's CREATE ASSERTION be adequate for the task?

It is the implementation of database assertions that needs more thought. Whether in SQL or TTM, evaluating assertions can be extremely expensive.

In real life, constraints are everywhere; they just go by different names. Humans gave up on enforcing the law flawlessly (i.e. without any incidental violations) a long time ago. Therefore, one can break a rule (or two). Some mess is created that has to be cleaned up afterwards, but the troublemaker is penalized (if caught). The idea of constraint enforcement via applications is of that kind.