Meaning of a relation
Quote from AntC on January 23, 2021, 1:26 am
Quote from Dave Voorhis on January 22, 2021, 1:52 pm
Quote from dandl on January 22, 2021, 1:41 pm
Quote from Dave Voorhis on January 20, 2021, 2:42 pm
Quote from dandl on January 20, 2021, 1:29 pm
My experience in using tools like this to build software is limited: Rails, MS EF, couple of others. I defer to experts.
But I know about building tools that are used in this way by others, and I know how hard it is to create the right abstraction and defend it over time, at scale and across change. I think the fundamental premise of the ORM is wrong, but while I think a 'layer over SQL' is probably a better way, at least for smaller apps, I don't think it scales or delivers the other benefits of a true abstraction layer.
These days, I get paid to use (and sometimes build) tools like this. ORMs are kind of like dynamically typed languages -- they're deceptively simple and make you think you're writing less code, and maybe you are, but you actually wind up going to a lot more effort.
I think a layer over SQL probably is a better way. Over time, I'll find out as I use it.
Having used true abstraction layers of at least the ORM variety -- such as Entity Framework -- with large applications, they seem to wind up encouraging a certain amount of code bloat and a lot of awfully SQL-like non-SQL, plus some actual SQL shoehorned in because something runs too slowly.
That doesn't seem particularly scalable.
The ORM is wrong because it maintains a fiction of an OO database. If you had an OO database you wouldn't use it, so why pretend?
Layer over SQL is less wrong (because it keeps the good bits) but it really only gives you better glue code, it doesn't raise the level of the language (or the database).
I would prefer a solution that was function oriented (like Linq or Java Streams) and composable, but providing the benefits of TTM: extensible type system, RA, MA. Visible SQL won't get you there.
Yes, the full benefits of the TTM ideal would be nice.
But, tradeoffs, as usual.
Writing less code is almost always good, but the main thing at scale is to change less code. Say you move between Postgres and MySQL and you find you have to change the case in every query. Or even the convention for joins is different. Or it's Oracle, which is even weirder. If the SQL queries are all abstracted into a generator, that's one thing to change; if they're all hand-written that could be thousands. Even automated tools can struggle.
In roughly 30+ years of developing, supporting, and consulting on SQL-based database-driven applications, I've seen a requirement to move from one DBMS to a different DBMS once.
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel that they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor's point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh, but of course the UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
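The "one thing to change" argument about query generators can be made concrete. A minimal sketch, assuming hypothetical function names (this is not any real library's API): when identifier quoting lives in one generator function, switching DBMS dialects is a change in one place rather than an edit to every hand-written query.

```python
# Hypothetical sketch: one generator function owns the per-DBMS syntax
# differences, so a dialect change touches one place, not thousands of queries.

def quote_ident(name: str, dialect: str) -> str:
    """Quote an identifier in the target DBMS's convention."""
    if dialect == "sqlserver":
        return f"[{name}]"      # SQL Server's bracket quoting
    if dialect == "mysql":
        return f"`{name}`"      # MySQL's backtick quoting
    return f'"{name}"'          # standard double quotes (Postgres, Oracle)

def select_all(table: str, dialect: str) -> str:
    """Generate a trivial query against the chosen dialect."""
    return f"SELECT * FROM {quote_ident(table, dialect)}"
```

The sketch only covers quoting, of course; join conventions, case sensitivity, and performance quirks need the same centralizing treatment, which is where the real cost lives.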
Quote from dandl on January 23, 2021, 7:52 am
Quote from AntC on January 23, 2021, 1:26 am
Quote from Dave Voorhis on January 22, 2021, 1:52 pm
Quote from dandl on January 22, 2021, 1:41 pm
Quote from Dave Voorhis on January 20, 2021, 2:42 pm
Quote from dandl on January 20, 2021, 1:29 pm
My experience in using tools like this to build software is limited: Rails, MS EF, couple of others. I defer to experts.
But I know about building tools that are used in this way by others, and I know how hard it is to create the right abstraction and defend it over time, at scale and across change. I think the fundamental premise of the ORM is wrong, but while I think a 'layer over SQL' is probably a better way, at least for smaller apps, I don't think it scales or delivers the other benefits of a true abstraction layer.
These days, I get paid to use (and sometimes build) tools like this. ORMs are kind of like dynamically typed languages -- they're deceptively simple and make you think you're writing less code, and maybe you are, but you actually wind up going to a lot more effort.
I think a layer over SQL probably is a better way. Over time, I'll find out as I use it.
Having used true abstraction layers of at least the ORM variety -- such as Entity Framework -- with large applications, they seem to wind up encouraging a certain amount of code bloat and a lot of awfully SQL-like non-SQL, plus some actual SQL shoehorned in because something runs too slowly.
That doesn't seem particularly scalable.
The ORM is wrong because it maintains a fiction of an OO database. If you had an OO database you wouldn't use it, so why pretend?
Layer over SQL is less wrong (because it keeps the good bits) but it really only gives you better glue code, it doesn't raise the level of the language (or the database).
I would prefer a solution that was function oriented (like Linq or Java Streams) and composable, but providing the benefits of TTM: extensible type system, RA, MA. Visible SQL won't get you there.
Yes, the full benefits of the TTM ideal would be nice.
But, tradeoffs, as usual.
Writing less code is almost always good, but the main thing at scale is to change less code. Say you move between Postgres and MySQL and you find you have to change the case in every query. Or even the convention for joins is different. Or it's Oracle, which is even weirder. If the SQL queries are all abstracted into a generator, that's one thing to change; if they're all hand-written that could be thousands. Even automated tools can struggle.
In roughly 30+ years of developing, supporting, and consulting on SQL-based database-driven applications, I've seen a requirement to move from one DBMS to a different DBMS once.
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel that they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor's point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh, but of course the UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
Yes, that's the world I know. So in your judgment, is the concept of a layer over SQL, whether you call it an ORM or not, using TTM-like principles, at all possible? Or do we tell 'em they're dreaming?
Quote from Dave Voorhis on January 23, 2021, 10:15 am
Quote from AntC on January 23, 2021, 1:26 am
Quote from Dave Voorhis on January 22, 2021, 1:52 pm
Quote from dandl on January 22, 2021, 1:41 pm
Quote from Dave Voorhis on January 20, 2021, 2:42 pm
Quote from dandl on January 20, 2021, 1:29 pm
My experience in using tools like this to build software is limited: Rails, MS EF, couple of others. I defer to experts.
But I know about building tools that are used in this way by others, and I know how hard it is to create the right abstraction and defend it over time, at scale and across change. I think the fundamental premise of the ORM is wrong, but while I think a 'layer over SQL' is probably a better way, at least for smaller apps, I don't think it scales or delivers the other benefits of a true abstraction layer.
These days, I get paid to use (and sometimes build) tools like this. ORMs are kind of like dynamically typed languages -- they're deceptively simple and make you think you're writing less code, and maybe you are, but you actually wind up going to a lot more effort.
I think a layer over SQL probably is a better way. Over time, I'll find out as I use it.
Having used true abstraction layers of at least the ORM variety -- such as Entity Framework -- with large applications, they seem to wind up encouraging a certain amount of code bloat and a lot of awfully SQL-like non-SQL, plus some actual SQL shoehorned in because something runs too slowly.
That doesn't seem particularly scalable.
The ORM is wrong because it maintains a fiction of an OO database. If you had an OO database you wouldn't use it, so why pretend?
Layer over SQL is less wrong (because it keeps the good bits) but it really only gives you better glue code, it doesn't raise the level of the language (or the database).
I would prefer a solution that was function oriented (like Linq or Java Streams) and composable, but providing the benefits of TTM: extensible type system, RA, MA. Visible SQL won't get you there.
Yes, the full benefits of the TTM ideal would be nice.
But, tradeoffs, as usual.
Writing less code is almost always good, but the main thing at scale is to change less code. Say you move between Postgres and MySQL and you find you have to change the case in every query. Or even the convention for joins is different. Or it's Oracle, which is even weirder. If the SQL queries are all abstracted into a generator, that's one thing to change; if they're all hand-written that could be thousands. Even automated tools can struggle.
In roughly 30+ years of developing, supporting, and consulting on SQL-based database-driven applications, I've seen a requirement to move from one DBMS to a different DBMS once.
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel that they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor's point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh, but of course the UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
Yup, there are often multiple distinct database abstraction layers -- one per DBMS -- of which you install one during deployment, though there might be facilities for auto-conversions of SQL syntax (e.g., '"' converted to '[' and ']' or whatever) so simple queries can target multiple DBMSs and the (usually more complex) non-syntax-convertible ones defined distinctly for each DBMS. I.e., the ones that have to be completely rewritten for each DBMS because, for example, DBMS A supports UPSERT/MERGE and DBMS B doesn't.
Or, you can wrap it up in an all-for-one ORM and take the performance hit and developer annoyance. And the hand-written bypass-the-ORM SQL (one per DBMS) to get around the performance problems.
Or, you can say "screw it" to supporting multiple DBMSs. You don't run PostgreSQL natively? No problem, here's a Docker container definition to run PostgreSQL for just this application, and here's how you hook this application into your existing single-sign-on system.
Etc.
Trade-offs, as usual.
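The syntax auto-conversion described in this post (e.g., '"' converted to '[' and ']') can be sketched as a purely textual rewrite. A minimal, hypothetical illustration — and its very simplicity shows why it only handles the simple queries: no textual rewrite can bridge a real feature gap like one DBMS supporting UPSERT/MERGE and another not.

```python
import re

# Hypothetical sketch of naive dialect auto-conversion: rewrite standard
# double-quoted identifiers into SQL Server-style brackets. This is a
# textual transformation only; queries relying on features the target
# DBMS lacks (e.g. UPSERT/MERGE) still have to be rewritten by hand.

def to_brackets(sql: str) -> str:
    return re.sub(r'"([^"]+)"', r'[\1]', sql)
```

Even this toy version is leaky: it would also rewrite a double-quoted string literal, which is roughly how real conversion layers leak too.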
Quote from AntC on January 23, 2021, 10:23 am
Quote from dandl on January 23, 2021, 7:52 am
Quote from AntC on January 23, 2021, 1:26 am
Quote from Dave Voorhis on January 22, 2021, 1:52 pm
Quote from dandl on January 22, 2021, 1:41 pm
Quote from Dave Voorhis on January 20, 2021, 2:42 pm
Quote from dandl on January 20, 2021, 1:29 pm
My experience in using tools like this to build software is limited: Rails, MS EF, couple of others. I defer to experts.
But I know about building tools that are used in this way by others, and I know how hard it is to create the right abstraction and defend it over time, at scale and across change. I think the fundamental premise of the ORM is wrong, but while I think a 'layer over SQL' is probably a better way, at least for smaller apps, I don't think it scales or delivers the other benefits of a true abstraction layer.
These days, I get paid to use (and sometimes build) tools like this. ORMs are kind of like dynamically typed languages -- they're deceptively simple and make you think you're writing less code, and maybe you are, but you actually wind up going to a lot more effort.
I think a layer over SQL probably is a better way. Over time, I'll find out as I use it.
Having used true abstraction layers of at least the ORM variety -- such as Entity Framework -- with large applications, they seem to wind up encouraging a certain amount of code bloat and a lot of awfully SQL-like non-SQL, plus some actual SQL shoehorned in because something runs too slowly.
That doesn't seem particularly scalable.
The ORM is wrong because it maintains a fiction of an OO database. If you had an OO database you wouldn't use it, so why pretend?
Layer over SQL is less wrong (because it keeps the good bits) but it really only gives you better glue code, it doesn't raise the level of the language (or the database).
I would prefer a solution that was function oriented (like Linq or Java Streams) and composable, but providing the benefits of TTM: extensible type system, RA, MA. Visible SQL won't get you there.
Yes, the full benefits of the TTM ideal would be nice.
But, tradeoffs, as usual.
Writing less code is almost always good, but the main thing at scale is to change less code. Say you move between Postgres and MySQL and you find you have to change the case in every query. Or even the convention for joins is different. Or it's Oracle, which is even weirder. If the SQL queries are all abstracted into a generator, that's one thing to change; if they're all hand-written that could be thousands. Even automated tools can struggle.
In roughly 30+ years of developing, supporting, and consulting on SQL-based database-driven applications, I've seen a requirement to move from one DBMS to a different DBMS once.
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel that they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor's point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh, but of course the UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
Yes, that's the world I know. So in your judgment, is the concept of a layer over SQL, whether you call it an ORM or not, using TTM-like principles, at all possible? Or do we tell 'em they're dreaming?
"using TTM-like principles" no: package design suffers the 'missing data' problem in spades: client X uses features A, B; client Y uses features A, C, so has 'missing data' for the fields in feature B; client Z uses features B, C, so has 'missing data' for the fields in feature A; new client W hates feature A and won't buy the package unless it's completely re-engineered into feature D, so has 'missing data' for feature A, whereas all the other clients have 'missing data' for feature D; and so on; and so forth.
If you look at the schemas, nearly all the non-key fields are nullable. Hugh would have apoplexy.
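The feature-overlap arithmetic above can be sketched directly (the client and feature names are the post's own hypotheticals): one shared schema across clients with different licensed feature sets forces every field of an unlicensed feature to be 'missing data'.

```python
# Each client licenses a different feature subset, but all share one schema,
# so every field belonging to an unlicensed feature is 'missing data' (NULL).
features_used = {
    "X": {"A", "B"},
    "Y": {"A", "C"},
    "Z": {"B", "C"},
}
all_features = set().union(*features_used.values())  # union over all clients

def missing_for(client: str) -> set:
    """Features whose fields will be NULL in this client's database."""
    return all_features - features_used[client]
```

As new feature D arrives for client W, `all_features` grows and every existing client acquires another block of nullable fields — which is exactly why nearly all the non-key fields end up nullable.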
Most of the software architects I've worked with have been around the traps enough to know what we know about ORMs. Furthermore, package/generic ORMs just aren't scalable, nor do they output efficiently tailored SQL for a broad variety of DBMSs or target (tens of) thousands of tables, of which ~25% contain maybe 1 row (some sort of header) and another ~50% contain ~40% null fields.
One vendor I worked with (or, rather against), as the client went live and started filling up their tables (phased transition rather than 'big bang'), found they were continually taking tables out of the generic ORM and custom-coding a handler. It was still useful to use the generic ORM for the low-volume configuration settings.
I saw very little use of Relational Algebra thinking: queries were plodding
SELECT * FROM r INNER JOIN s ON ... WHERE (filter on indexed fields only)
then pipe the lot into additional program logic RBAR. How/why could you compose something? Lookups to decode status values into meaningful descriptions were usually done in-program, from an in-memory cache of the codes table. Type system? What could you possibly want to put in persistent storage except numbers, dates and text? Oh and of course hashes as record-Ids for Foreign keys -- IOW the worst possible bit of OO thinking.

We could have a poll about whether changing DBMS platform is more or less frequent than renumbering the whole Customer base. Usually if your Customer numbering scheme (all those number-ranges for different categories of customers) is creaking at the seams, it's time for a new package.
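The plodding pattern described above looks roughly like this in program code. A sketch with made-up data and names, using plain Python structures as a stand-in for the result set of the broad join:

```python
# Sketch of the anti-pattern described: fetch a broad join from the DBMS,
# then process it row-by-agonizing-row (RBAR) in program logic, decoding
# status values from an in-memory cache of the codes table -- rather than
# expressing the lookup as part of a composable query.

status_codes = {1: "Open", 2: "Closed"}   # in-memory cache of the codes table

# Stand-in for: SELECT * FROM r INNER JOIN s ON ... WHERE (indexed filter)
rows = [{"id": 10, "status": 1}, {"id": 11, "status": 2}]

decoded = []
for row in rows:                          # RBAR: one row at a time, in-program
    decoded.append({**row, "status": status_codes[row["status"]]})
```

Nothing here composes: the decode is welded into a loop, so any further projection or restriction means more loops over intermediate lists instead of one algebraic expression the DBMS could optimize.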
Quote from dandl on January 23, 2021, 7:52 am
But, tradeoffs, as usual.
Writing less code is almost always good, but the main thing at scale is to change less code. Say you move between Postgres and MySQL and you find you have to change the case in every query. Or even the convention for joins is different. Or it's Oracle, which is even weirder. If the SQL queries are all abstracted into a generator, that's one thing to change; if they're all hand-written, that could be thousands. Even automated tools can struggle.
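The "one thing to change" point can be made concrete with a toy generator (a hypothetical API, not any particular ORM's): the dialect quirks, such as identifier quoting and case folding, live in one table instead of in every hand-written query.

```python
# Identifier quoting per dialect. The case-folding notes are real quirks:
# Postgres folds unquoted identifiers to lower case, Oracle to upper case,
# so quoting consistently in one place sidesteps the whole problem.
QUOTING = {
    "postgres": '"{}"',
    "mysql":    "`{}`",
    "oracle":   '"{}"',
}

def select(table, columns, dialect):
    """Generate a simple SELECT for the given dialect."""
    q = QUOTING[dialect].format
    return "SELECT {} FROM {}".format(
        ", ".join(q(c) for c in columns), q(table))

print(select("Customer", ["Id", "Name"], "mysql"))
# -> SELECT `Id`, `Name` FROM `Customer`
print(select("Customer", ["Id", "Name"], "postgres"))
# -> SELECT "Id", "Name" FROM "Customer"
```

Porting to a new DBMS then means adding one entry (and whatever other twiddles that dialect needs) rather than touching thousands of query strings.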
In roughly 30+ years of developing, supporting, and consulting on SQL-based database-driven applications, I've seen a requirement to move from one DBMS to a different DBMS once.
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMS's. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel AMOT they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh but of course UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
Yes, that's the world I know. So in your judgment, is the concept of a layer over SQL, whether you call it an ORM or not, using TTM-like principles, at all possible? Or do we tell 'em they're dreaming?
"using TTM-like principles" no: package design suffers the 'missing data' problem in spades: client X uses features A, B; client Y uses features A, C, so has 'missing data' for the fields in feature B; client Z uses features B, C, so has 'missing data' for the fields in feature A; new client W hates feature A and won't buy the package unless it's completely re-engineered into feature D, so has 'missing data' for feature A, whereas all the other clients have 'missing data' for feature D; and so on; and so forth.
If you look at the schemas, nearly all the non-key fields are nullable. Hugh would have apoplexy.
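The two designs being contrasted here can be sketched side by side (hypothetical feature fields, sqlite used purely for illustration): the package vendor's one-wide-table-of-nullable-columns versus the decomposition a TTM-minded designer would reach for, where absence of a row, not a NULL, records that a client doesn't use a feature.

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Package style: one wide table; every feature's fields must be nullable
# because no client uses all the features.
db.execute("""CREATE TABLE client_wide (
    client_id INTEGER PRIMARY KEY,
    feature_a_setting TEXT,   -- NULL for clients without feature A
    feature_b_setting TEXT,   -- NULL for clients without feature B
    feature_c_setting TEXT    -- NULL for clients without feature C
)""")

# Decomposed style: one relation per feature, no nullable columns at all.
db.executescript("""
    CREATE TABLE client (client_id INTEGER PRIMARY KEY);
    CREATE TABLE feature_a (client_id INTEGER PRIMARY KEY REFERENCES client,
                            setting TEXT NOT NULL);
    CREATE TABLE feature_b (client_id INTEGER PRIMARY KEY REFERENCES client,
                            setting TEXT NOT NULL);
    INSERT INTO client VALUES (1), (2);
    INSERT INTO feature_a VALUES (1, 'on');  -- client 2 simply has no row
""")

uses_a = [r[0] for r in db.execute("SELECT client_id FROM feature_a")]
print(uses_a)  # [1]
```

The decomposed form is clean but multiplies the table count per feature, which is presumably part of why package vendors don't do it.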
Most of the software architects I've worked with have been around the traps enough to know what we know about ORMs. Furthermore, package/generic ORMs just aren't scalable: they don't output efficiently tailored SQL for a broad variety of DBMSs, nor can they cope with targeting (tens of) thousands of tables, of which ~25% contain maybe 1 row (some sort of header) and another ~50% contain ~40% null fields.
Quote from Dave Voorhis on January 23, 2021, 10:31 am
Quote from dandl on January 23, 2021, 7:52 am
Yes, that's the world I know. So in your judgment, is the concept of a layer over SQL, whether you call it an ORM or not, using TTM-like principles, at all possible? Or do we tell 'em they're dreaming?
From the application point of view, the database abstraction layer is a bunch of method/function calls.
From the database abstraction layer point of view, it's a bunch of SQL queries.
I suppose it's not unreasonable for the outside view of the database abstraction layer to embody some TTM-like principles, like no duplicate tuples and no null values (if the host language supports null) even if for whatever reason (e.g., integration with a pre-existing database) the database allows NULL values. Within the database abstraction layer, it's SQL, so it can be a matter of policy to deprecate NULL and use SELECT DISTINCT, etc.
But this is mostly orthogonal to TTM.
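The "outside view" policy described above can be sketched as a small wrapper (a hypothetical layer, not a real library): the application sees duplicate-free tuples and never sees a NULL, even though the underlying pre-existing database permits both.

```python
import sqlite3

class DbLayer:
    """Outside view of the database: no duplicate tuples, no NULLs."""

    def __init__(self, conn):
        self.conn = conn

    def query(self, sql, params=()):
        cur = self.conn.execute(sql, params)
        cols = [d[0] for d in cur.description]
        seen, out = set(), []
        for row in cur:
            # Policy: a NULL reaching the application is a bug in the layer.
            if any(v is None for v in row):
                raise ValueError("NULL leaked through the abstraction layer")
            # Policy: the outside view is a set, not a bag.
            if row not in seen:
                seen.add(row)
                out.append(dict(zip(cols, row)))
        return out

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE t (a TEXT, b TEXT);          -- nullable, duplicates allowed
    INSERT INTO t VALUES ('x', 'y'), ('x', 'y');
""")
layer = DbLayer(db)
print(layer.query("SELECT a, b FROM t"))      # deduplicated to one tuple
```

In practice one would push the SELECT DISTINCT and NOT NULL filtering down into the generated SQL rather than filtering client-side, but the boundary contract is the same either way.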
Quote from Dave Voorhis on January 23, 2021, 4:38 pm
Quote from AntC on January 23, 2021, 10:23 am
"using TTM-like principles" no: package design suffers the 'missing data' problem in spades: client X uses features A, B; client Y uses features A, C, so has 'missing data' for the fields in feature B; client Z uses features B, C, so has 'missing data' for the fields in feature A; new client W hates feature A and won't buy the package unless it's completely re-engineered into feature D, so has 'missing data' for feature A, whereas all the other clients have 'missing data' for feature D; and so on; and so forth.
...
Yes, this.
Depending on the domain (I suspect) and the belief system of the technical staff on the buyer side, some potential buyers will insist that primary keys only ever be GUIDs. Others will insist that they be natural only, never GUIDs. Some will expect full data integrity constraints in the database. Others will regard that as an abomination; surely everyone knows by now that a SQL database is only good for persistence, and not much good even at that. Therefore the application will maintain all integrity, and there must be no CHECK constraints, foreign key constraints, primary key constraints, or unique constraints, and all columns must be nullable.
Etc.
Quote from tobega on January 24, 2021, 3:56 pm
Quote from AntC on January 23, 2021, 1:26 am
In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMS's. Yes it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel AMOT they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor point of view, multi-platform/multi-DBMS needs hugely much more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh but of course UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
I suppose this is really the crux: even if we do manage to find a suitable language and create a great abstraction layer, we're still down to the SQL-interfacing to multiple products, which is never going to be "better", for some versions of "better", than just using SQL.
So if one were on a mission to improve the database world, one would probably have to come up from the/a database product itself. So what if one added a mode to e.g. PostgreSQL to run set-based and null-free? That could become popular, given that nulls are very much out of favour these days. If one could then demonstrate performance improvements when running in that mode, so much the better.
I suppose SQL would still work in that mode, which would be good for adoption, but there is still a market for something less string-based, even if any protocol will necessarily be serialized the way things are built these days.
Going full circle in my reasoning here, one could of course just focus on one open-source database and provide a non-SQL interface for some reasonably popular language.
Of course none of that is going to make any money, despite lots of organizations still insisting on paying Oracle too much money. Both databases and programming languages have to be free these days, then you have to make money on consulting and services.
Quote from Dave Voorhis on January 24, 2021, 4:16 pm
Quote from tobega on January 24, 2021, 3:56 pm
I suppose this is really the crux: even if we do manage to find a suitable language and create a great abstraction layer, we're still down to the SQL-interfacing to multiple products, which is never going to be "better", for some versions of "better", than just using SQL.
So if one was on a mission to improve the database world, one would probably have to come up from the/a database product itself. So what if one added a mode to e.g. postgresql to run set-based and null free? That could become popular given that nulls are very much out of favour these days. If one then could demonstrate performance improvements when running in that mode, so much the better.
Many years ago, Ingres and later Postgres supported an arguably better query language, QUEL. (See https://en.wikipedia.org/wiki/QUEL_query_languages)
The problem is that for most developers/architects/CIOs outside of a small group of zealots (i.e., us), a marginal (to most) difference in query language capability simply isn't a compelling reason to dispense with SQL. Indeed, for a significant proportion of developers/architects/CIOs there simply can't be a compelling reason to dispense with SQL, full stop. It's not even that they like or dislike it. It simply is, and dispensing with it is as inconceivable as, say, dispensing with the Web and the HTTP protocol.
I suppose SQL would still work in that mode, which would be good for adoption, but there is still a market for something less string-based, even if any protocol will necessarily be serialized the way things are built these days.
There is Microsoft's LINQ to SQL and its "modern" equivalent in Entity Framework which is not string based, and all the many ORMs and database-layer APIs sporting either a new query language or a native query API.
But... So many tradeoffs.
And of course there is Embedded SQL, which has been around almost as long as SQL and re-emerges in some new guise every so often. Here too are many tradeoffs.
Quote from tobega on January 25, 2021, 7:07 am Quote from Dave Voorhis on January 24, 2021, 4:16 pm Quote from tobega on January 24, 2021, 3:56 pm Quote from AntC on January 23, 2021, 1:26 am In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes, it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh but of course UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
I suppose this is really the crux: even if we do manage to find a suitable language and create a great abstraction layer, we're still down to SQL interfacing with multiple products, which is never going to be "better", for some versions of "better", than just using SQL.
So if one were on a mission to improve the database world, one would probably have to come at it from the database product itself. So what if one added a mode to e.g. PostgreSQL to run set-based and null-free? That could become popular given that nulls are very much out of favour these days. If one then could demonstrate performance improvements when running in that mode, so much the better.
Many years ago, Ingres and later Postgres supported an arguably better query language, QUEL. (See https://en.wikipedia.org/wiki/QUEL_query_languages)
The problem is that for most developers/architects/CIOs outside of a small group of zealots (i.e., us), a marginal (to most) difference in query language capability simply isn't a compelling reason to dispense with SQL. Indeed, for a significant proportion of developers/architects/CIOs there simply can't be a compelling reason to dispense with SQL, full stop. It's not even that they like or dislike it. It simply is, and dispensing with it is as inconceivable as, say, dispensing with the Web and the HTTP protocol.
Right, they don't see any reason to change; it's just there (and as wrong as the perception is in practice, it is a "standard").
I've been used to it taking 5-10 years before anyone understands what I'm saying; I suppose you've been going about 25 now? But in astrophysics, scientists ignored an actual dataset for 50 years because it didn't fit their models. Eventually "dark matter" was theorized. And in medicine, Ioannidis's paper arguing that 90% or more of medical research has false or grossly exaggerated conclusions (and it's not much better in other fields) is entirely uncontroversial, yet everyone sees it as somebody else's problem.
In databases, a misstep into NoSQL was made because there was a need to store "any object", generally serialized as JSON, but now you can store JSON objects in relational databases too, so that's one thing we've been after. OK, the type-checking of it is a little so-so, but it's a step.
Currently nulls are very much frowned upon in programming languages so perhaps now the time is ripe for frowning upon them in the database as well. Of course, anybody can just stop using nulls even while still using SQL, but it could be a step. A database-flag to enforce this could be helpful.
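That discipline is enforceable today with ordinary DDL. A minimal sketch, using Python's built-in sqlite3 and a made-up employee table purely for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Declaring every column NOT NULL approximates a "null-free mode"
# within plain SQL: the DBMS rejects any attempt to store a null.
con.execute("""
    CREATE TABLE employee (
        emp_id INTEGER NOT NULL PRIMARY KEY,
        name   TEXT    NOT NULL,
        dept   TEXT    NOT NULL
    )
""")
con.execute("INSERT INTO employee VALUES (1, 'Alice', 'Sales')")

try:
    con.execute("INSERT INTO employee VALUES (2, NULL, 'Ops')")
except sqlite3.IntegrityError as e:
    print(e)  # NOT NULL constraint failed: employee.name
```

The hypothetical database flag would then simply make the NOT NULL declarations implicit rather than opt-in.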
I don't know enough yet to favour the set-based approach over the bag-based approach, but I suppose anyone could introduce a unique constraint on the compound value of the whole row on every table and mandate the use of SELECT DISTINCT, while still using SQL. A database-flag to enforce this could be helpful.
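A quick sketch of that idea, again with sqlite3 and an invented shipment table: a UNIQUE constraint spanning every column gives the base table set semantics, and SELECT DISTINCT keeps derived results set-valued too.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A UNIQUE constraint over the whole row turns the bag into a set:
# duplicate rows are rejected instead of silently accumulating.
con.execute("""
    CREATE TABLE shipment (
        supplier TEXT NOT NULL,
        part     TEXT NOT NULL,
        UNIQUE (supplier, part)
    )
""")
con.execute("INSERT INTO shipment VALUES ('S1', 'P1')")
con.execute("INSERT INTO shipment VALUES ('S1', 'P2')")

try:
    con.execute("INSERT INTO shipment VALUES ('S1', 'P1')")  # duplicate row
except sqlite3.IntegrityError as e:
    print(e)  # UNIQUE constraint failed: shipment.supplier, shipment.part

# Projection can reintroduce duplicates, so DISTINCT is still needed
# on queries to stay set-valued end to end.
print(con.execute("SELECT supplier FROM shipment").fetchall())           # [('S1',), ('S1',)]
print(con.execute("SELECT DISTINCT supplier FROM shipment").fetchall())  # [('S1',)]
```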
I suppose SQL would still work in that mode, which would be good for adoption, but there is still a market for something less string-based, even if any protocol will necessarily be serialized the way things are built these days.
There is Microsoft's LINQ to SQL and its "modern" equivalent in Entity Framework which is not string based, and all the many ORMs and database-layer APIs sporting either a new query language or a native query API.
But... So many tradeoffs.
And of course there is Embedded SQL, which has been around almost as long as SQL and re-emerges in some new guise every so often. Here too are many tradeoffs.
Right, but I think we concluded they were all somewhat lacking?
I still hear people starry-eyed about LINQ, having it as a wished-for feature in whatever language they use. I don't know enough about it to know what they see in it.
As I said previously, there is a lot of disenchantment with ORMs in Java but JDBC hasn't gotten better. Named parameters in prepared statements help a lot though, it's amazing they're still not in the SDK. So SQL and JDBC amplifiers are definitely useful.
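For comparison, Python's DB-API already offers the named style JDBC lacks. A tiny sqlite3 sketch (table and names invented) shows what named placeholders buy over positional ? markers:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (name TEXT NOT NULL, dept TEXT NOT NULL)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("Alice", "Sales"), ("Bob", "Ops")])

# Named placeholders label each parameter at its point of use,
# rather than making the caller count positional markers by hand.
rows = con.execute(
    "SELECT name FROM emp WHERE dept = :dept ORDER BY name",
    {"dept": "Sales"},
).fetchall()
print(rows)  # [('Alice',)]
```

A "JDBC amplifier" in the sense above would bolt the same convenience onto PreparedStatement.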
I don't know how to get all the way with the language, but I think it's good that some people hold a vision.
There is also a possibility that perhaps SQL is the most practical compromise that allows people to shoot themselves in the foot any way they like. Or it just is perceived as inevitable. Looking back, there was the almost entirely unmotivated move to use C as a programming language for everything, probably only because it was "free" and freely available. It did manage to kill a large plethora of programming languages, arguably a good thing, and probably was largely instrumental in fuelling the open source and free software movements. Sometimes perhaps worse is better even though we don't yet understand how or why?
Quote from Dave Voorhis on January 25, 2021, 11:35 am Quote from tobega on January 25, 2021, 7:07 am Quote from Dave Voorhis on January 24, 2021, 4:16 pm Quote from tobega on January 24, 2021, 3:56 pm Quote from AntC on January 23, 2021, 1:26 am In my career, I've mostly worked for software houses providing the same package on multiple platforms/multiple DBMSs. Yes, it's rare for clients to move database platforms, but being able to run on multiple platforms is a usual requirement for any software tender. I guess clients feel 'hooked in' to one vendor, and are moving to a package to feel they have more leverage over a DBMS vendor/feel less 'hooked in'. (Yeah, they're dreaming.)
From the package vendor point of view, multi-platform/multi-DBMS needs hugely more code. SQL is not a standard; each vendor has different twiddles; even if the same query runs on different platforms, it often has hugely different performance characteristics. Software houses try to hide the complexity inside a DBMS interface layer, so that the business logic/UI handling doesn't have to care. (Oh but of course UI has to handle multiple platforms also.) But the abstractions are very leaky. I've even seen so-called Business Logic with stuff like: If Client = X then run this query else run that.
I suppose this is really the crux: even if we do manage to find a suitable language and create a great abstraction layer, we're still down to SQL interfacing with multiple products, which is never going to be "better", for some versions of "better", than just using SQL.
So if one were on a mission to improve the database world, one would probably have to come at it from the database product itself. So what if one added a mode to e.g. PostgreSQL to run set-based and null-free? That could become popular given that nulls are very much out of favour these days. If one then could demonstrate performance improvements when running in that mode, so much the better.
Many years ago, Ingres and later Postgres supported an arguably better query language, QUEL. (See https://en.wikipedia.org/wiki/QUEL_query_languages)
The problem is that for most developers/architects/CIOs outside of a small group of zealots (i.e., us), a marginal (to most) difference in query language capability simply isn't a compelling reason to dispense with SQL. Indeed, for a significant proportion of developers/architects/CIOs there simply can't be a compelling reason to dispense with SQL, full stop. It's not even that they like or dislike it. It simply is, and dispensing with it is as inconceivable as, say, dispensing with the Web and the HTTP protocol.
Right, they don't see any reason to change; it's just there (and as wrong as the perception is in practice, it is a "standard").
I've been used to it taking 5-10 years before anyone understands what I'm saying; I suppose you've been going about 25 now? But in astrophysics, scientists ignored an actual dataset for 50 years because it didn't fit their models. Eventually "dark matter" was theorized. And in medicine, Ioannidis's paper arguing that 90% or more of medical research has false or grossly exaggerated conclusions (and it's not much better in other fields) is entirely uncontroversial, yet everyone sees it as somebody else's problem.
In databases, a misstep into NoSQL was made because there was a need to store "any object", generally serialized as JSON, but now you can store JSON objects in relational databases too, so that's one thing we've been after. OK, the type-checking of it is a little so-so, but it's a step.
Currently nulls are very much frowned upon in programming languages so perhaps now the time is ripe for frowning upon them in the database as well. Of course, anybody can just stop using nulls even while still using SQL, but it could be a step. A database-flag to enforce this could be helpful.
I don't know enough yet to favour the set-based approach over the bag-based approach, but I suppose anyone could introduce a unique constraint on the compound value of the whole row on every table and mandate the use of SELECT DISTINCT, while still using SQL. A database-flag to enforce this could be helpful.
Flags introduce modality, which invariably seems like a good idea and turns out to be a bad one.
Modal languages are almost categorically bad.
I suspect the real problem here is a broader, almost philosophical one: We simply haven't reached that zeitgeist point in our industrial evolution where SQL is seen as a problem.
SQL-wise, we're like the mid 1980's with C, or early 1990's with C++, or late 1970's with COBOL. I.e., it seems perfectly fine, good even.
But in a year or three or ten or twenty, it will all change and there'll be an avalanche of anti-SQL sentiment and our work here will seem uncommonly prescient.
I suppose SQL would still work in that mode, which would be good for adoption, but there is still a market for something less string-based, even if any protocol will necessarily be serialized the way things are built these days.
There is Microsoft's LINQ to SQL and its "modern" equivalent in Entity Framework which is not string based, and all the many ORMs and database-layer APIs sporting either a new query language or a native query API.
But... So many tradeoffs.
And of course there is Embedded SQL, which has been around almost as long as SQL and re-emerges in some new guise every so often. Here too are many tradeoffs.
Right, but I think we concluded they were all somewhat lacking?
Indeed. Much tradeoffs.
I still hear people starry-eyed about LINQ, having it as a wished-for feature in whatever language they use. I don't know enough about it to know what they see in it.
The LINQ "query syntax" is a (very) SQL-like syntax embedded in C#. It's almost embedded SQL, but for various gotchas and quirks. E.g., quick -- write a left outer join that (a) runs on the SQL DBMS rather than running locally and (b) doesn't require an hour and a half to find a useful example because you've forgotten how to do it.
LINQ "method syntax" is sometimes assumed to be a C#-only innovation, but it isn't. Java has had its Streams equivalent since Java 8, and various 3rd party equivalents have been around for years and pre-dated LINQ.
Its main benefit is avoiding record-by-agonising-record processing. Which, as I've mentioned before, is its biggest criticism from some quarters.
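The contrast is easy to see in miniature. A hypothetical salary list, sketched in Python rather than C# or Java, though the shape is the same in LINQ method syntax or Streams:

```python
# Invented sample data, purely for illustration.
employees = [("Alice", "Sales", 50000),
             ("Bob",   "Ops",   40000),
             ("Carol", "Sales", 60000)]

# Record-by-agonising-record: explicit loop and mutable accumulator.
total = 0
for name, dept, salary in employees:
    if dept == "Sales":
        total += salary

# Set-at-a-time: one declarative pipeline, no visible loop state.
total_pipeline = sum(salary for _, dept, salary in employees
                     if dept == "Sales")

assert total == total_pipeline == 110000
```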
As I said previously, there is a lot of disenchantment with ORMs in Java but JDBC hasn't gotten better. Named parameters in prepared statements help a lot though, it's amazing they're still not in the SDK. So SQL and JDBC amplifiers are definitely useful.
I don't know how to get all the way with the language, but I think it's good that some people hold a vision.
There is also a possibility that perhaps SQL is the most practical compromise that allows people to shoot themselves in the foot any way they like. Or it just is perceived as inevitable. Looking back, there was the almost entirely unmotivated move to use C as a programming language for everything, probably only because it was "free" and freely available. It did manage to kill a large plethora of programming languages, arguably a good thing, and probably was largely instrumental in fuelling the open source and free software movements. Sometimes perhaps worse is better even though we don't yet understand how or why?
In time, views will inevitably change and SQL will fall out of fashion.
We might be waiting a while, though.