The Forum for Discussion about The Third Manifesto and Related Matters


Pre-relational database models' influence on "theoreticians"

Page 3 of 4
Quote from Dave Voorhis on August 4, 2019, 7:44 pm
Quote from Erwin on August 4, 2019, 6:33 pm
Quote from Erwin on August 4, 2019, 6:24 pm
Quote from Erwin on August 4, 2019, 6:18 pm
Quote from AntC on August 3, 2019, 6:53 am

There's somebody I would describe as an RM troll who just materialised on StackOverflow. I don't think it's Fabian under an alias(?)

No, PerformanceDBA on SO is I-don't-remember-his-name but it's the guy who managed to get kicked out of the old discussion list.

I'm rather surprised he's surfacing again.

The guy who kept moaning that if people would just use IDEF1X modeling they would never even run into any of the problems TTM claimed to solve.

 

I intentionally avoided mentioning his name in post #2 in this thread, ...

Yes, he thought IDEF1X was the solution to everything, Sybase was the ultimate DBMS, Codd was infallible (and perhaps omniscient), and his post series on transactions was pure gold (hint: all you need is a transaction number, apparently.)

But I didn't ban him for any of those things. ...

Chiefly I remember the length of his posts, and their frequency. Did he not have a day job? Dilbert as ever is right on time.

If you ever 'landed' a point that contradicted him, he just would not let it go. I suppose if an argument is not worth making once, it's not worth making 17 times.

Re "Codd was infallible/omniscient", I wonder if his SO answer is saying NULL is not the best of ideas? That's what I was hinting at by commenting that Codd's post-~1975 work is of doubtful value. So he counters me by linking to Codd 1970? At that point I dropped it: I could see I was only in for a bout of pig-wrestling, even before knowing who the pig was.

I well remember the banning: in the vein of "just would not let it go", I'd suffered a series of nearly-personal attacks from him. Suddenly he was banned and sending me emails off-forum saying how friendly he'd always been, and what an outrage he'd been banned, and he was sure I would support him in getting restored to the forum.

AFAIK, nobody objected to Dave's action.

How do people like this maintain a career? (By "people" I include Fabian, David McG, Brian Selzer.) Yes the industry has plenty of people with 'hobby horses' they can bore you with privately. Do they seethe under their breaths when they have to deal with ordinary mortals with ordinary schemas/applications that are just plain 'wrong'? How do they avoid being so disruptive they just get sacked -- no matter how deep their claimed knowledge?

Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders. I very rarely have any reason to think about object identity. It just doesn't come up. On the other hand, I often have to think about how to define one instance of a given class as being equal to another.

My background is somewhat similar, but with a strong focus on building software with 'walls': stringent controls over dependencies and visibility. I was writing OO-like code in C before there was C++, for this sole purpose. I care about abstraction, encapsulation, separation of concerns far more than inheritance and polymorphism.

I also care about abstraction, encapsulation, and separation of concerns. Inheritance and polymorphism are a means -- not the only one, obviously -- of implementing separation of concerns between general abstractions and specific implementations.

Just pointing out this was the primary reason why people moved (from Cobol, Fortran, Algol, PL/I, VB, C, etc). Not ideology, just really useful stuff for managing messy lumps of software across orders of magnitude of code size, memory and CPU. I've wondered at times whether there was a different path that would have led to a different paradigm, and what that might have been. Nothing I can easily identify. Ada certainly wasn't it.

Andl - A New Database Language - andl.org
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

I very rarely have any reason to think about object identity. It just doesn't come up. On the other hand, I often have to think about how to define one instance of a given class as being equal to another.

My background is somewhat similar, but with a strong focus on building software with 'walls': stringent controls over dependencies and visibility. I was writing OO-like code in C before there was C++, for this sole purpose. I care about abstraction, encapsulation, separation of concerns far more than inheritance and polymorphism.

I also care about abstraction, encapsulation, and separation of concerns. Inheritance and polymorphism are a means -- not the only one, obviously -- of implementing separation of concerns between general abstractions and specific implementations.

Just pointing out this was the primary reason why people moved (from Cobol, Fortran, Algol, PL/I, VB, C, etc). Not ideology, just really useful stuff for managing messy lumps of software across orders of magnitude of code size, memory and CPU.

Not sure I've seen any evidence that OO in general has turned out any more useful for managing "messy lumps", etc. I expect the response that mediocre practitioners can make anything mediocre, whatever the toolset. Then perhaps we should be looking for toolsets that prevent the mediocre from fouling things up too much? Rather than toolsets with intellectually-appealing abstractions.

I've wondered at times whether there was a different path that would have led to a different paradigm, and what that might have been. Nothing I can easily identify. Ada certainly wasn't it.

There are different interpretations/implementations of polymorphism, inheritance, encapsulation, etc. Some fit much better with a public/shareable data structure -- i.e. a multi-user/multi-programmed database.

But you can see on SO every day how the mediocre can foul up every abstraction.

Quote from AntC on August 5, 2019, 4:32 am

How do people like this maintain a career? (By "people" I include Fabian, David McG, Brian Selzer.) Yes the industry has plenty of people with 'hobby horses' they can bore you with privately. Do they seethe under their breaths when they have to deal with ordinary mortals with ordinary schemas/applications that are just plain 'wrong'? How do they avoid being so disruptive they just get sacked -- no matter how deep their claimed knowledge?

They usually survive by having face-to-face personae quite different from their online personae.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

Some ORM systems do create autonumbered record IDs in every table, along with meaningless column names, meaningless table names, tables that map meaningless column/table names to class attribute names and class names, and assorted other cruft. But there are reasons for that: there may be no visible notion of a "primary key" in the object-oriented client (hence numeric "record IDs"), identifier validity rules may differ between the object-oriented client language(s) and the SQL DBMS (hence meaningless table/column names), etc. These come about as a result of shoehorning a SQL DBMS into a badly-fitting persistence-engine role for object-oriented programming languages, which is quite different from using a SQL DBMS as a database management system that may or may not be accessed from object-oriented languages.

When database designers choose to use meaningless numeric record IDs as primary keys and they're not using ORMs, I don't think they're reflecting object oriented thinking. They may not be thinking at all, and have picked up viral bad advice on StackOverflow or whatever, or are instructed to do it by their managers. Or, they belong to the not-insignificant cohort that genuinely believes surrogate primary keys are the only proper primary keys (guaranteed stable and immutable, etc). Or, they've sat through the inevitable lecture in some database class that led everyone through choosing and evaluating candidates for primary keys, which often points out that every natural primary key is potentially unstable and -- in horror -- have decided to never subject themselves to such uncertainty again, so surrogate keys all the way. Or, they've noticed that spreadsheets have numbered rows, so...

Again, object identity is something that OO outsiders make much of and OO insiders rarely think about, so I doubt record IDs come from object oriented thinking. If anything, spreadsheet thinking is more likely.

Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

I very rarely have any reason to think about object identity. It just doesn't come up. On the other hand, I often have to think about how to define one instance of a given class as being equal to another.

My background is somewhat similar, but with a strong focus on building software with 'walls': stringent controls over dependencies and visibility. I was writing OO-like code in C before there was C++, for this sole purpose. I care about abstraction, encapsulation, separation of concerns far more than inheritance and polymorphism.

I also care about abstraction, encapsulation, and separation of concerns. Inheritance and polymorphism are a means -- not the only one, obviously -- of implementing separation of concerns between general abstractions and specific implementations.

Just pointing out this was the primary reason why people moved (from Cobol, Fortran, Algol, PL/I, VB, C, etc). Not ideology, just really useful stuff for managing messy lumps of software across orders of magnitude of code size, memory and CPU.

Not sure I've seen any evidence that OO in general has turned out any more useful for managing "messy lumps", etc. I expect the response that mediocre practitioners can make anything mediocre, whatever the toolset. Then perhaps we should be looking for toolsets that prevent the mediocre from fouling things up too much? Rather than toolsets with intellectually-appealing abstractions.

That's apparently a big part of the motivation for Google's Go language. It's also a strong motivation for using spreadsheets.

I don't know whether OO in general has turned out any more useful for managing "messy lumps", particularly as we have so few examples (do we have any?) comparing developing a given system in notionally-equivalent OO and non-OO languages.

I do know that when I started programming in C++, it was a delight to discover that all the things I'd been awkwardly doing in C to make the code more manageable (using structs to define types, using function pointers to implement polymorphism, using composition of structs with function pointers to resemble inheritance and polymorphism) were now easy-to-use built-in parts of the language. That's the benefit of object oriented programming -- it makes improving cohesion and coupling easier than doing it in non-OO procedural languages. I.e., it's an iterative refinement to procedural programming.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on August 5, 2019, 5:19 am

I've wondered at times whether there was a different path that would have led to a different paradigm, and what that might have been. Nothing I can easily identify. Ada certainly wasn't it.

Lisp, the programmable programming language family.

Myths: Lisp is slow, Lisp only has lists, Lisp is entirely dynamically typed.

Truths: Lisp is stable (two people working independently got the same Lisp program from 1959 working today with minimal-to-no change); Lisp gives you otherwise unprecedented power (not only procedural abstraction but syntactic abstraction); Lisp gives you exactly as much encapsulation, abstraction, and separation of concerns as you need to solve your particular problem, no more and no less.

To within ε, everything begins in Lisp and then filters out into other languages about thirty years later, generally in dumbed-down form.  (Relational databases are certainly an exception.)

Lisp tries to solve the problem of mediocre programmers by making good programmers so efficient that you don't need any mediocre programmers.  "Lisp programmers do not write code; their macros write it for them." (Anon.)

Of course Lisp has the vices of its virtues:  the Curse of Lisp.

Quote from Dave Voorhis on August 5, 2019, 11:57 am

I do know that when I started programming in C++, it was a delight to discover that all the things I'd been awkwardly doing in C to make the code more manageable (using structs to define types, using function pointers to implement polymorphism, using composition of structs with function pointers to resemble inheritance and polymorphism) were now easy-to-use built-in parts of the language. That's the benefit of object oriented programming -- it makes improving cohesion and coupling easier than doing it in non-OO procedural languages. I.e., it's an iterative refinement to procedural programming.

Nailed it. From pre-OO to OO was a quantum leap comparable to going from ASM to (any) HLL. I can think of similar but smaller steps since, including generics/templates, GC (vs C/C++), maybe LINQ, but nothing with that same level of impact.

Andl - A New Database Language - andl.org
Quote from johnwcowan on August 5, 2019, 1:51 pm
Quote from dandl on August 5, 2019, 5:19 am

I've wondered at times whether there was a different path that would have led to a different paradigm, and what that might have been. Nothing I can easily identify. Ada certainly wasn't it.

Lisp, the programmable programming language family.

To within ε, everything begins in Lisp and then filters out into other languages about thirty years later, generally in dumbed-down form.  (Relational databases are certainly an exception.)

Lisp tries to solve the problem of mediocre programmers by making good programmers so efficient that you don't need any mediocre programmers.  "Lisp programmers do not write code; their macros write it for them." (Anon.)

Of course Lisp has the vices of its virtues:  the Curse of Lisp.

I wondered if someone would bring up Lisp.

To be clear, I don't hate Lisp. However I do regard it as (a) a religion, with its own subjective reality and (b) the founding member of a small number of write-only languages.

All the criticisms in the linked article apply. Every significant chunk of Lisp code I've seen created its own new language (or more than one), so after the first page every program is a foreign land, with its own culture, lingo, geography, government, laws, friends and enemies. I seriously don't believe a language as capable, undisciplined and downright lawless as Lisp could ever found a paradigm. It's just the Wild West forever more.

Andl - A New Database Language - andl.org
Quote from Dave Voorhis on August 5, 2019, 11:57 am
Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

Some ORM systems do ...

Again, object identity is something that OO outsiders make much of and OO insiders rarely think about, so I doubt record IDs come from object oriented thinking. If anything, spreadsheet thinking is more likely.

I'm struggling to separate the Truly OOTM thinkers/insiders here from the people who call themselves OO thinkers but (apparently) aren't. Saying this here, because there's more confusion later in your message -- esp when read with David B's response.

Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

 

I very rarely have any reason to think about object identity. It just doesn't come up. On the other hand, I often have to think about how to define one instance of a given class as being equal to another.

My background is somewhat similar, but with a strong focus on building software with 'walls': stringent controls over dependencies and visibility. I was writing OO-like code in C before there was C++, for this sole purpose. I care about abstraction, encapsulation, separation of concerns far more than inheritance and polymorphism.

I also care about abstraction, encapsulation, and separation of concerns. Inheritance and polymorphism are a means -- not the only one, obviously -- of implementing separation of concerns between general abstractions and specific implementations.

Just pointing out this was the primary reason why people moved (from Cobol, Fortran, Algol, PL/I, VB, C, etc). Not ideology, just really useful stuff for managing messy lumps of software across orders of magnitude of code size, memory and CPU.

Not sure I've seen any evidence that OO in general has turned out any more useful for managing "messy lumps", etc. I expect the response that mediocre practitioners can make anything mediocre, whatever the toolset. Then perhaps we should be looking for toolsets that prevent the mediocre from fouling things up too much? Rather than toolsets with intellectually-appealing abstractions.

That's apparently a big part of the motivation for Google's Go language. It's also a strong motivation for using spreadsheets.

Aside: eh? Spreadsheets are the toolset for ensuring the mediocre do foul things up, and even the very able can barely avoid fouling up.

I don't know whether OO in general has turned out any more useful for managing "messy lumps", particularly as we have so few examples (do we have any?) comparing developing a given system in notionally-equivalent OO and non-OO languages.

I do know that when I started programming in C++, ...

David's agreement says

From pre-OO to OO was a quantum leap comparable to going from ASM to (any) HLL.

I'm confused: I do not count C++ as a HLL, any more than C is. In particular, no Garbage Collection; no protection of pointers from dereferencing abuse.

it was a delight to discover that all the things I'd been awkwardly doing in C to make the code more manageable (using structs to define types, using function pointers to implement polymorphism, using composition of structs with function pointers to resemble inheritance and polymorphism) were now easy-to-use built-in parts of the language. That's the benefit of object oriented programming -- it makes improving cohesion and coupling easier than doing it in non-OO procedural languages. I.e., it's an iterative refinement to procedural programming.

Using structs to define types is emphatically not a feature specific to OO. Polymorphism is emphatically not a feature specific to OO. Taking functions as first-class is emphatically not OO. (Using pointers to functions is not HLL and not first-class and not OO: BCPL was doing it in 1965.) Combining functions with 'data' into structs is emphatically not a feature specific to OO. You could hide the low-level cruft away in protected libraries, then give end-users something like a HLL interface.

I'm not disputing those features are powerful abstractions for organising programs. In C++ they're far too low-level to be reliable as abstractions. (Unless you wrap it in type system bolt-ons like generics/templates, and even then ...)

Is it possible you're actually a Functional Programmer masquerading as something else, and you're not Truly OOTM at all?

Quote from AntC on August 6, 2019, 11:26 am
Quote from Dave Voorhis on August 5, 2019, 11:57 am
Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

Some ORM systems do ...

Again, object identity is something that OO outsiders make much of and OO insiders rarely think about, so I doubt record IDs come from object oriented thinking. If anything, spreadsheet thinking is more likely.

I'm struggling to separate the Truly OOTM thinkers/insiders here from the people who call themselves OO thinkers but (apparently) aren't.

I suppose it depends how you define "Truly OOTM thinkers/insiders" and "people who call themselves OO thinkers but (apparently) aren't."

This is beginning to smack of the "No true Scotsman" fallacy.

Quote from AntC on August 6, 2019, 11:26 am
Quote from Dave Voorhis on August 5, 2019, 11:57 am
Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

I consider myself an "OO thinker", having written object-oriented code on an almost daily basis since the mid-1980's. The notion that object-orientation revolves around "object identity" is a categorisation that often comes from OO outsiders, almost never from insiders.

Who is it that starts every table definition with a 'meaningless' record ID, deliberately intending it not be a business-recognisable value? Isn't that some sort of standard with ORM tooling? And whoever's doing it is what, if not an OO insider?

What does it IDentify if not record-as-object? Why intentionally support changing any business-oriented field in the record, including those the business thinks of as identifiers, if you're not thinking the record is an object and the ID is its (only persistent) identity?

Some ORM systems do ...

Again, object identity is something that OO outsiders make much of and OO insiders rarely think about, so I doubt record IDs come from object oriented thinking. If anything, spreadsheet thinking is more likely.

I'm struggling to separate the Truly OOTM thinkers/insiders here from the people who call themselves OO thinkers but (apparently) aren't. Saying this here, because there's more confusion later in your message -- esp when read with David B's response.

Quote from AntC on August 5, 2019, 5:59 am
Quote from dandl on August 5, 2019, 5:19 am
Quote from Dave Voorhis on August 4, 2019, 11:31 am
Quote from dandl on August 4, 2019, 11:11 am
Quote from Dave Voorhis on August 4, 2019, 8:54 am
Quote from AntC on August 4, 2019, 7:49 am

Then can we say GUIDs (database-global) are a more reliable indicator of OO thinking?

 

I very rarely have any reason to think about object identity. It just doesn't come up. On the other hand, I often have to think about how to define one instance of a given class as being equal to another.

My background is somewhat similar, but with a strong focus on building software with 'walls': stringent controls over dependencies and visibility. I was writing OO-like code in C before there was C++, for this sole purpose. I care about abstraction, encapsulation, separation of concerns far more than inheritance and polymorphism.

I also care about abstraction, encapsulation, and separation of concerns. Inheritance and polymorphism are a means -- not the only one, obviously -- of implementing separation of concerns between general abstractions and specific implementations.

Just pointing out this was the primary reason why people moved (from Cobol, Fortran, Algol, PL/I, VB, C, etc). Not ideology, just really useful stuff for managing messy lumps of software across orders of magnitude of code size, memory and CPU.

Not sure I've seen any evidence that OO in general has turned out any more useful for managing "messy lumps", etc. I expect the response that mediocre practitioners can make anything mediocre, whatever the toolset. Then perhaps we should be looking for toolsets that prevent the mediocre from fouling things up too much? Rather than toolsets with intellectually-appealing abstractions.

That's apparently a big part of the motivation for Google's Go language. It's also a strong motivation for using spreadsheets.

Aside: eh? Spreadsheets are the toolset for ensuring the mediocre do foul things up, and even the very able can barely avoid fouling up.

I was being somewhat sarcastic, and that's indeed how they turn out.

But outside of the IT world they're typically not perceived that way at all. Among administrative, management and executive staff, they're often seen as the way to solve information management problems and do it "right" without the IT Department fouling it up, as usual.

Quote from AntC on August 6, 2019, 11:26 am

David's agreement says

From pre-OO to OO was a quantum leap comparable to going from ASM to (any) HLL.

I'm confused: I do not count C++ as a HLL, any more than C is. In particular, no Garbage Collection; no protection of pointers from dereferencing abuse.

C++ is much higher level than C, in spite of manual memory management and bare pointers (though both can be hidden). The notionally "object oriented" facilities of C++ make a vast difference compared to programming in C.

C++ is a curious language in that it is simultaneously lower level than, say, C# and Java (with automated memory management and no bare pointers), and higher level due to the power of templates.

Though I'm not sure "high level" vs "low level" is particularly useful to consider, outside of gross distinctions like Assembly being (obviously) low level and, say, Haskell being (obviously) high level.

Quote from AntC on August 6, 2019, 11:26 am

it was a delight to discover that all the things I'd been awkwardly doing in C to make the code more manageable (using structs to define types, using function pointers to implement polymorphism, using composition of structs with function pointers to resemble inheritance and polymorphism) were now easy-to-use built-in parts of the language. That's the benefit of object oriented programming -- it makes improving cohesion and coupling easier than doing it in non-OO procedural languages. I.e., it's an iterative refinement to procedural programming.

Using structs to define types is emphatically not a feature specific to OO. Polymorphism is emphatically not a feature specific to OO. Taking functions as first-class is emphatically not OO. (Using pointers to functions is not HLL and not first-class and not OO: BCPL was doing it in 1965.) Combining functions with 'data' into structs is emphatically not a feature specific to OO. You could hide the low-level cruft away in protected libraries, then give end-users something like a HLL interface.

I'm not disputing those features are powerful abstractions for organising programs. In C++ they're far too low-level to be reliable as abstractions. (Unless you wrap it in type system bolt-ons like generics/templates, and even then ...)

Using bare structs to awkwardly define types was a painful characteristic of raw C, significantly improved by "new structs" aka classes in C++.

No one suggested that polymorphism, first-class functions, etc., are unique to object orientation, which is typically characterised by the presence of all three of encapsulation, inheritance and polymorphism. These three may be found in other paradigms too. What was notable was that adding these to bare C (C++ and Objective C) made C so much better.

In C++ they're not too low-level to be reliable as abstractions, but the assumption is that you're using generics and templates, though C++ without these is still much superior to C. Yes, C++ is potentially brittle and requires more diligence than, say, Java or Smalltalk or C# to "get it right", but it is what it is.

Quote from AntC on August 6, 2019, 11:26 am

Is it possible you're actually a Functional Programmer masquerading as something else, and you're not Truly OOTM at all?

I'm not really sure what that means, but capable multi-paradigm programmers are almost inevitably appreciative of functional programming and object oriented programming, and tend to blend the best features of both -- as applicable, and where possible -- in their object oriented code. For example, I don't know any good "true" object oriented programmer who doesn't try to minimise and encapsulate statefulness.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org