"Ruinous inheritance" (again)
Quote from AntC on April 7, 2026, 3:25 am
(Quoted from DBE Ch 19, attrib Gaius)
Would you be surprised by the claim
the actual central concept in Object Orientation is Inheritance, a mechanism for programming by modularly extending partial specifications of code.
I was, though I readily confess I don't really 'get' OOP. More discussion on Lambda-the-Ultimate.
I'm afraid the doco (supposedly a draft of a text? book) is long and rambling, and chiefly a vehicle for some highly opinionated invective. Does anyone know this François-René Rideau? Is anything of what he says a valid contribution to the theory of OOP? (It seems to me to mostly contradict the 'conventional wisdom', as found for example on wikip.)
It'll come as no surprise round here that opinionated opinions disagreeing with wikip don't put me off in themselves. Making one characteristic the (only) "central concept" of a whole programming paradigm does, though, seem ... ambitious. Specifically, Inheritance (without further specificity) strikes me as neither sufficient, nor even necessary. YMMV.
It did spur me to come up with a counter-example, extending (hah!) the "all circles are ellipses" thread.
* A square is-a rectangle is-a quadrilateral.
* A square is-a regular polygon (which rectangles generally aren't).
* A rectangle is-a orientable figure (has property prone vs upright);
as is an ellipse; as neither square nor circle is.
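The cross-cutting is-a relations above can be sketched in Java (all type names hypothetical). A single inheritance chain Square → Rectangle → Quadrilateral can't also express that squares are regular polygons while rectangles generally aren't, nor that rectangles are orientable while squares aren't — interfaces at least let each relation be declared independently:

```java
// Hypothetical traits for the counter-example.
interface Quadrilateral { double area(); }
interface RegularPolygon { double sideLength(); }
interface Orientable { boolean isProne(); }   // prone vs upright

// A rectangle is-a quadrilateral and is-a orientable figure, but not regular.
class Rectangle implements Quadrilateral, Orientable {
    final double width, height;
    Rectangle(double w, double h) { width = w; height = h; }
    public double area() { return width * height; }
    public boolean isProne() { return width > height; }
}

// A square is-a quadrilateral and is-a regular polygon, but NOT orientable.
// If Square extended Rectangle it would inherit isProne(), which is
// meaningless for a square -- so here it deliberately doesn't.
class Square implements Quadrilateral, RegularPolygon {
    final double side;
    Square(double s) { side = s; }
    public double area() { return side * side; }
    public double sideLength() { return side; }
}
```

Note the sketch dodges rather than solves the problem: it drops the square-is-a-rectangle edge entirely, which is exactly the design tension the counter-example is pointing at.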
Quote from dandl on April 7, 2026, 9:12 am
My take is: bollocks! The core benefit of OO is quite simply: objects. Little packets of scope with related data and function that you can think about, implement, create and destroy, etc, etc. It took me years to really 'get it', and this constant focus on inheritance just blurred those benefits.
Turns out inheritance does have its place, but it's hard to get right and mostly best avoided.
Quote from AntC on April 8, 2026, 5:29 am
Quote from dandl on April 7, 2026, 9:12 am
*** ... quite simply: ...
Thank you for saying the *** out loud. That is what I'd thought. You've surely learnt by now there's nothing "simply" about PLT(?)
objects. Little packets of scope with related data and function that you can think about, implement, create and destroy, etc, etc.
Yes, that's what I would have said. You mean specifically packets of persistent data, right? (And with functions to access/update it; no direct access to how/where that data is stored inside the packet.) Rideau claims encapsulation/modularity can't be the defining characteristic, because lots of HLLs support modules/even separate compilation with name-hiding inside the module. I find that objection beside the point: other HLLs might have private declarations of data structures/types/Abstract Data Types (not to be confused with Algebraic Data Types); with access from outside only via privileged functions; but they don't hold persistent data of that type. Rather, they return the data to the caller -- who can't look inside it because Abstract.
The caller could even output that data to a database for persistence; but still no application can look inside except via a trusted function.
BTW across on LtU, somebody's volunteered some other/contrary criterion. And again my reaction is that's a common characteristic; but also common in other HLLs; so not sufficient to define OOP (and probably not even necessary).
Do we have a Family resemblance? Characterisation A has a lot in common with characterisation B (but they're not identical). Characterisation C has a lot in common with B, but less with A ... characterisation Z has dwindling amounts in common with Y, X, ...; but nothing in common with A. (For example I don't see where message-passing per the original Smalltalk fits in the chain of stuff in common.) Or do we say that when all sorts of languages jumped on the OOP bandwagon/claimed to be multiparadigm, the claims were/are just bogus?
Possible comparison: back in the 1960's LISP was the best crack at Functional Programming. A whole series of languages grew out of LISP (and its mistakes). I wouldn't these days count LISP as FP. (I might allow Scheme.) But what would be my characterisation of FP to exclude LISP?
Quote from Dave Voorhis on April 9, 2026, 11:45 pm
Quote from AntC on April 7, 2026, 3:25 am
(Quoted from DBE Ch 19, attrib Gaius)
Would you be surprised by the claim
the actual central concept in Object Orientation ...
There is no agreement on what the central concept is, though there's probably a majority consensus on it being encapsulation, polymorphism and inheritance. That said, there is no definitive definition of these, nor any indication of what amount of each is required to manifest the "object oriented" whole. E.g., does inheritance require explicit language features for it to be "object orientation" or can it be kludged? If the language supports it but the programmer doesn't use it, is it still "object orientation"? Etc.
Re dandl's "little packets of scope with related data and function that you can think about, implement, create and destroy, etc, etc." that's roughly encapsulation, but plenty of around-the-water-cooler debates revolve around whether it's sufficient to be "object oriented" or not. E.g., VB6 had "little packets of scope with related data and function that you can think about, implement, create and destroy, etc, etc." but most wouldn't consider it "object oriented."
Though some would.
I was, though I readily confess I don't really 'get' OOP.
I get it. As a result, I find it unhelpful to use the term "object oriented" for anything.
The real problem is the term, not the collection of concepts and programming language features that may or may not belong to its various definitions. They're useful, well-defined and worthy of discussion.
The term "object oriented" isn't.
Quote from dandl on April 10, 2026, 12:24 am
The concept that makes OO useful, as compared with earlier languages, is that of little packets of state bundled with operations on that state. Fortran, Cobol, Basic, Pascal, C did not have that. C++ and Java do. Why that got the OO moniker really does not matter a whole lot. Inheritance and polymorphism are add-ons to that core feature.
The major split is byref vs byval. FP languages do state bundles byval, OO languages do them byref. That has major implications, but is not captured by the naming choices. My take is a language with both is to be preferred, but there are not so many of them (C++, C#, ?).
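The byref/byval split can be made concrete in Java (names hypothetical; records assume Java 16+). A mutable class gives reference semantics: two variables can alias one packet of state. A record gives value-style semantics: immutable, compared by content, "updated" only by constructing a new value:

```java
// byref: shared, mutable state -- an alias sees every update.
class Counter {
    int n;
    void bump() { n++; }
}

// byval-ish: an immutable value; "moving" it yields a new Point.
record Point(int x, int y) {
    Point movedBy(int dx, int dy) { return new Point(x + dx, y + dy); }
}
```

Usage: after `Counter b = a; b.bump();` the update is visible through `a` as well, whereas `p.movedBy(3, 0)` leaves `p` untouched — which is why the split has the "major implications" noted above.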
Quote from AntC on April 10, 2026, 2:45 am
Quote from Dave Voorhis on April 9, 2026, 11:45 pm
..., I find it unhelpful to use the term "object oriented" for anything.
The real problem is the term, ...
I can see that "object" is so vague/general-purpose as to be empty of meaning. OTOH the industry seems to cope with the synonymous "entity" as in Entity-Attribute or Entity-Relationship.
IIRC, OOP came to the fore with interactive/WYSIWYG systems, where the programming challenge is to capture and respond to mouse and keyboard activity more-or-less at random hitting different controls on the screen. (This was the 'discrete event simulations' of Simula; also fitted the message-passing event-driven model of Smalltalk.) The program needs to keep track of the current state of each window, and each control in the window, by capturing user actions. Yes, I programmed to those concepts in VB6.
This is @dandl's "little packets of state". Contrast Imperative programming where the whole program state/whole of memory (vonNeumann)/whole tape (Turing) is in principle accessible from any instruction; any 'subroutine' might appear to do the job asked of it, but as a side-effect trample all over some random data structure without warning. (Which is why handing out addresses/pointers/byref is hazardous.)
Then the supposed merit of the concept for program design is that you can also think of 'business objects'/Entities (customers, cansofbeans, warehouse locations, ...) as having discrete autonomous state that the application needs to keep track of.
I agree it's debatable whether polymorphism or inheritance add anything specific here; or are just best practice programming disciplines for orthogonal design: Don't Repeat Yourself/separation of concerns. It's an open case-by-case design question whether cansofbeans are sufficiently similar to videogames to share some ur-type inheritance hierarchy.
And yeah, how much C++/C#/Java code could equally be plain C code? Probably it's the auto-garbage collection that's important; memory safety must draw on at least some explicit features in the language for the compiler to understand allocation lifetimes. (If I had a penny for every memory leak I've had to chase down ...)
@dandl FP languages do state bundles byval
I think rather the key feature is Abstract (private) Data Types. The PhysRep might be a vanilla record structure of vanilla Int, String, Bool, but it's encapsulated by the type/module system such that any non-privileged function only knows it's got a CanofBeans, and must call privileged methods to find out anything about it. Similarly any transition in the 'object's state must be implemented by calling a method that returns a new 'object' of the same inscrutable Type. (As the Rideau 'textbook' points out, FP had a poor story on this until the discovery of 'Monads' -- or rather until somebody figured out how an entirely 'useless concept' from Category theory could be implemented in a language.)
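The ADT reading of 'objects' can be sketched in Java using the CanofBeans example (all names hypothetical). The PhysRep is a vanilla pair of String/boolean, but it's private: outside code only ever holds a CanOfBeans and must go through privileged methods, and a state transition returns a fresh, equally inscrutable value rather than mutating in place:

```java
// Hypothetical ADT: the representation is hidden; only privileged
// methods can construct, observe, or transition a CanOfBeans.
final class CanOfBeans {
    private final String label;     // hidden PhysRep
    private final boolean opened;

    private CanOfBeans(String label, boolean opened) {
        this.label = label;
        this.opened = opened;
    }

    // Privileged constructor: the only way to obtain a value.
    static CanOfBeans sealed(String label) {
        return new CanOfBeans(label, false);
    }

    // Privileged observer: the only way to find anything out.
    boolean isOpened() { return opened; }

    // State transition: returns a new 'object' of the same inscrutable type.
    CanOfBeans open() { return new CanOfBeans(label, true); }
}
```

Nothing outside the class can see `label` or `opened`; callers can only ask, and only via the trusted functions.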
Quote from Dave Voorhis on April 10, 2026, 7:11 pm
Quote from dandl on April 10, 2026, 12:24 am
The concept that makes OO useful, as compared with earlier languages, is that of little packets of state bundled with operations on that state. Fortran, Cobol, Basic, Pascal, C did not have that.
Pascal and C provide that via C structs, Pascal records, and functions/procedures that restrict parameter arguments to specified struct/record types.
For Fortran, COBOL and BASIC, it depends which Fortran, COBOL or BASIC (or Basic) you're talking about.
What they didn't have (largely -- depends which Pascal, at least) are visibility modifiers to implement information hiding.
Quote from Dave Voorhis on April 10, 2026, 8:58 pm
Quote from AntC on April 10, 2026, 2:45 am
Quote from Dave Voorhis on April 9, 2026, 11:45 pm
..., I find it unhelpful to use the term "object oriented" for anything.
The real problem is the term, ...
I can see that "object" is so vague/general-purpose as to be empty of meaning. OTOH the industry seems to cope with the synonymous "entity" as in Entity-Attribute or Entity-relation.
IIRC, OOP came to the fore with interactive/WYSIWYG systems, where the programming challenge is to capture and respond to mouse and keyboard activity more-or-less at random hitting different controls on the screen.
Yes, that and video games, for similar reasons.
And yeah, how much C++/C#/Java code could equally be plain C code?
All of it could be, though the string handling and exposed pointers would be unpleasant. A lot of typical business C++/C#/Java code is essentially classically procedural with classes being used to define modules. Most instances are singleton, or effectively singleton as there may be one instance per API session but that's largely hidden. Indeed, a lot of typical business code in general is just defining API endpoints that invoke SQL queries or other API endpoints and sending the results back, plus the scaffolding to hold that up and appropriately secure it, etc. No particular benefit from polymorphism, inheritance, etc., there.
That said, multiple class instances, polymorphism, inheritance, and every other language feature becomes appropriate at some point. Then what negatively sticks out is when some developer eschews modern facilities like polymorphism, inheritance, interfaces, type inference, functional programming flavoured features (e.g., Java Streams or C# LINQ), pattern matching and so forth when these are appropriate. It hurts the eyes.
Sure, you can write pure procedural code in Java or C# or C++ or whatever and for the most part it will do the job, but it's painfully un-ergonomic and ugly -- except, presumably, to the folks who do it, usually old-timers (even older than us!) who refuse to grow, or business consultants playing at software engineering.
Probably it's the auto-garbage collection that's important; memory safety must draw on at least some explicit features in the language for the compiler to understand allocation lifetimes. (If I had a penny for every memory leak I've had to chase down ...)
Garbage collection is indeed probably the biggest benefit of C# and Java, aside from the massive scale of their respective ecosystems and associated libraries and frameworks.
@dandl FP languages do state bundles byval
I think rather the key feature is Abstract (private) Data Types. The PhysRep might be a vanilla record structure of vanilla Int, String, Bool, but it's encapsulated by the type/module system such that any non-privileged function only knows it's got a CanofBeans, and must call privileged methods to find out anything about it. Similarly any transition in the 'object's state must be implemented by calling a method that returns a new 'object' of the same inscrutable Type.
This. It's a fundamental of good programming, equally applicable to whatever "object oriented" programming might be, though the new 'object' could be the old 'object' with a different state than before the method was called. But mutability is increasingly deprecated unless necessary.
Quote from dandl on April 11, 2026, 1:55 am
Quote from Dave Voorhis on April 10, 2026, 7:11 pm
Quote from dandl on April 10, 2026, 12:24 am
The concept that makes OO useful, as compared with earlier languages, is that of little packets of state bundled with operations on that state. Fortran, Cobol, Basic, Pascal, C did not have that.
Pascal and C provide that via C structs, Pascal records, and functions/procedures that restrict parameter arguments to specified struct/record types.
Believe me, I know it, used it, lived it, and no, they really didn't. They're not bundled until you get them into the type system as class types.
For Fortran, COBOL and BASIC, it depends which Fortran, COBOL or BASIC (or Basic) you're talking about.
The ones in existence when C++ and then Java went mainstream. Anything added since then is copycat.
What they didn't have (largely -- depends which Pascal, at least) are visibility modifiers to implement information hiding.
There are OO languages without private scope.
Quote from dandl on April 11, 2026, 5:12 am
Quote from AntC on April 10, 2026, 2:45 am
Quote from Dave Voorhis on April 9, 2026, 11:45 pm
..., I find it unhelpful to use the term "object oriented" for anything.
The real problem is the term, ...
I can see that "object" is so vague/general-purpose as to be empty of meaning. OTOH the industry seems to cope with the synonymous "entity" as in Entity-Attribute or Entity-relation.
I agree. Which is why I always refer to OO.
IIRC, OOP came to the fore with interactive/WYSIWYG systems, where the programming challenge is to capture and respond to mouse and keyboard activity more-or-less at random hitting different controls on the screen. (This was the 'discrete event simulations' of Simula; also fitted the message-passing event-driven model of Smalltalk.) The program needs to keep track of the current state of each window, and each control in the window, by capturing user actions. Yes, I programmed to those concepts in VB6.
This was a major driver in Smalltalk, but not I think in Simula or C++. Simula was all about a general purpose Algol-like language to be used for discrete event simulation, never UI. C++ was intended for systems programming, large systems, tight constraints, not UI. Java was probably the first strongly typed OO language with UI as a priority.
Then the supposed merit of the concept for program design is that you can also think of 'business objects'/Entities (customers, cansofbeans, warehouse locations, ...) as having discrete autonomous state that the application needs to keep track of.
I've never been a fan. OO works well for defining the behaviour of software, but not for modelling real world entities. The relational model is a good start, and the OR mismatch is a good indicator of where the problems lie. I don't have a better solution, but a start would be a language that could implement the RA directly.
I agree it's debatable whether polymorphism or inheritance add anything specific here; or are just best practice programming disciplines for orthogonal design: Don't Repeat Yourself/separation of concerns. It's an open case-by-case design question whether cansofbeans are sufficiently similar to videogames to share some ur-type inheritance hierarchy.
And yeah, how much C++/C#/Java code could equally be plain C code? Probably it's the auto-garbage collection that's important; memory safety must draw on at least some explicit features in the language for the compiler to understand allocation lifetimes. (If I had a penny for every memory leak I've had to chase down ...)
I think rather the key feature is Abstract (private) Data Types. The PhysRep might be a vanilla record structure of vanilla Int, String, Bool, but it's encapsulated by the type/module system such that any non-privileged function only knows it's got a CanofBeans, and must call privileged methods to find out anything about it.
It's interesting to note that C++ struggled with replacing macros by templates, but the final result (including the STL) is really powerful, and can make writing new C++ code very safe (including value types). Java bailed on that one, to its eternal shame. Value types and generics are the main reason I (strongly) prefer C#.