The future of TTM
Quote from dandl on July 23, 2021, 4:33 am
I think it would be fair to say that the original goals of TTM, however expressed, are unlikely to be realised. SQL is centre stage, and probably has been since about the mid 1990s. So quo vadis TTM?
For managing and accessing a shared corporate application database, SQL serves well. For business application development accessing a shared database, modern GP languages such as Java and C# are sufficient. Despite the need for an ORM and 'glue' code, the combination is good enough, and no-one is looking to replace it. For everything else, it has (or should have) competition.
Features that get me looking for something other than SQL+GP include:
- embedded or in-process database (no server, not shared)
- non-business data eg images, audio, time series, geographic, real-time
- non-database data eg file hierarchies, CSV files, spreadsheets, documents
- anytime setting up and maintaining a server does not seem feasible/justified
- anytime writing SQL plus ORM/ODBC glue code does not seem feasible/justified.
There is competition, including:
- SQLite: in-process, but still SQL+ORM
- NoSQL: but you don't get relational queries
- LINQ/Java Streams: but you don't get update
- bare metal raw files plus various libraries.
So what does TTM offer? Stripping out the new language (which no-one seems to want), the main features seem to be:
- A type system that can be applied to attributes
- An in-language DQL based on the RA
- Updates based on relational assignment
- Database features including transactions and constraints.
So the addressable market is everyone using data that is not a good match for SQL+GP (because of any of the features listed above), especially those who are already using SQLite or NoSQL or raw files, and who would benefit from any of the TTM features. That's huge.
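To make that feature list concrete, here is a minimal sketch, in Java, of what "an in-language DQL based on the RA" plus "updates based on relational assignment" and a simple constraint might look like. Every name in it (Relation, Relvar, restrict, project, and so on) is invented for illustration; it is not the API of any existing TTM implementation.

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// A toy in-memory "relation": an immutable set of attribute-name -> value maps.
final class Relation {
    private final Set<Map<String, Object>> tuples;

    Relation(Set<Map<String, Object>> tuples) { this.tuples = Set.copyOf(tuples); }

    static Relation of(List<Map<String, Object>> rows) { return new Relation(new HashSet<>(rows)); }

    // RESTRICT: keep only the tuples satisfying the predicate.
    Relation restrict(Predicate<Map<String, Object>> p) {
        return new Relation(tuples.stream().filter(p).collect(Collectors.toSet()));
    }

    // PROJECT: keep only the named attributes; duplicates collapse, as in the RA.
    Relation project(String... attrs) {
        return new Relation(tuples.stream().map(t -> {
            Map<String, Object> out = new HashMap<>();
            for (String a : attrs) out.put(a, t.get(a));
            return out;
        }).collect(Collectors.toSet()));
    }

    // UNION: used below to express INSERT as relational assignment.
    Relation union(Relation other) {
        Set<Map<String, Object>> all = new HashSet<>(tuples);
        all.addAll(other.tuples);
        return new Relation(all);
    }

    boolean isEmpty() { return tuples.isEmpty(); }

    @Override public String toString() { return tuples.toString(); }
}

// A relation variable whose every assignment is checked against a constraint --
// the "updates based on relational assignment" plus "constraints" items of the list.
final class Relvar {
    private final Predicate<Relation> constraint;
    private Relation value;

    Relvar(Relation initial, Predicate<Relation> constraint) {
        this.constraint = constraint;
        assign(initial);
    }

    void assign(Relation newValue) {
        if (!constraint.test(newValue))
            throw new IllegalArgumentException("constraint violated; assignment rejected");
        value = newValue;
    }

    Relation value() { return value; }
}

public class TtmSketch {
    public static void main(String[] args) {
        Relation parts = Relation.of(List.of(
                Map.<String, Object>of("pno", "P1", "city", "London", "weight", 12),
                Map.<String, Object>of("pno", "P2", "city", "Paris", "weight", 17)));

        // Database-style constraint: no part may weigh more than 100.
        Relvar p = new Relvar(parts,
                r -> r.restrict(t -> (int) t.get("weight") > 100).isEmpty());

        // In-language query over the RA: cities of parts lighter than 15.
        System.out.println(p.value().restrict(t -> (int) t.get("weight") < 15).project("city"));

        // Update via relational assignment: p := p UNION {one new tuple}.
        p.assign(p.value().union(Relation.of(List.of(
                Map.<String, Object>of("pno", "P3", "city", "Oslo", "weight", 9)))));
        System.out.println(p.value().project("pno"));
    }
}
```

The point of the sketch is only that the query, the update and the constraint all live in the host language, with no SQL and no ORM glue in between.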
Quote from Darren Duncan on July 23, 2021, 7:31 am
For what I want, TTM provides a slew of well thought out language design and type system ideas that can guide the design of my own industrial languages/formats/systems/etc.
I consider what I'm working on to be a full industrial implementation of TTM+IM, but I interpret those more liberally, more to the spirit than to the letter: I would consider it fully conforming, though some others might not, due to certain technical details.
One targeted usage scenario is a general high-level in-memory virtual machine with an emphasis on DBMS functionality, implemented for multiple host programming languages.
Another targeted usage scenario is a high-level strongly-typed interchange format for data and code that is an alternative for the likes of JSON/XML/SQL or REST/RPC/etc.
Another usage scenario is providing a uniform way of declaring strong data types that is portable between programming languages, so you can define your various strong business types and use them with any common language, whether it is inherently strongly or weakly typed.
Another targeted usage scenario is a toolkit or foundation for making ORMs or translators, or for porting database stored procedures between different DBMSs, e.g. helping to free people from DBMS vendor lock-in due to their investment in, say, Oracle-specific stored procedures.
And so on.
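As a very rough illustration of the "declare strong business types once, use them from any host language" scenario, here is a hypothetical sketch in Java. The TypeDecl record and its selector method are invented for this post; in practice the declaration would live in the interchange format described above, and each host language would have a small binding like this one.

```java
import java.util.function.Predicate;

// A hypothetical, host-language-neutral "type declaration": a name, a base
// representation, and a constraint. Nothing here is from any real TTM
// implementation; it only illustrates declaring a strong business type once
// and enforcing it in whichever host language consumes it.
record TypeDecl(String name, Class<?> base, Predicate<Object> constraint) {
    Object selector(Object possrep) {
        if (!base.isInstance(possrep) || !constraint.test(possrep))
            throw new IllegalArgumentException(possrep + " is not a valid " + name);
        return possrep;
    }
}

public class PortableTypes {
    public static void main(String[] args) {
        // Declared once (in practice in an interchange document, not in Java
        // source) and then bound in each host language.
        TypeDecl percentage = new TypeDecl("Percentage", Integer.class,
                v -> (Integer) v >= 0 && (Integer) v <= 100);

        System.out.println(percentage.selector(42));    // accepted
        try {
            percentage.selector(150);                    // rejected by the shared constraint
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```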
Quote from Dave Voorhis on July 23, 2021, 9:19 am
Quote from dandl on July 23, 2021, 4:33 am
I think it would be fair to say that the original goals of TTM, however expressed, are unlikely to be realised. SQL is centre stage, and probably has been since about the mid 1990s. So quo vadis TTM?
For managing and accessing a shared corporate application database, SQL serves well. For business application development accessing a shared database, modern GP languages such as Java and C# are sufficient. Despite the need for an ORM and 'glue' code the combination is good enough, and no-one is looking to replace it. For everything else, it has (or should have) competition.
Features that get me looking for something other than SQL+GP include:
- embedded or in-process database (no server, not shared)
- non-business data eg images, audio, time series, geographic, real-time
- non-database data eg file hierarchies, CSV files, spreadsheets, documents
- anytime setting up and maintaining a server does not seem feasible/justified
- anytime writing SQL plus ORM/ODBC glue code does not seem feasible/justified.
There is competition, including:
- SQLite: in-process, but still SQL+ORM
- NoSQL: but you don't get relational queries
- LINQ/Java Streams: but you don't get update
- bare metal raw files plus various libraries.
So what does TTM offer? Stripping out the new language (which no-one seems to want), the main features seem to be:
- A type system that can be applied to attributes
- An in-language DQL based on the RA
- Updates based on relational assignment
- Database features including transactions and constraints.
So the addressable market is everyone using data that is not a good match for SQL+GP (because of any of the features listed above), especially those who are already using SQLite or NoSQL or raw files, and who would benefit from any of the TTM features. That's huge.
I agree. There are undoubtedly a vast number of tools that can be written -- in and/or for any number of different languages, platforms, and environments -- which embody one or more of the TTM & relational model & relational algebra ideas. SQL itself isn't yet ready to be replaced -- and promoting products solely on being faithful to the relational model doesn't work as nobody but us cares and the general assumption is that relational == SQL -- but languages, platforms, tools, utilities, you-name-it that do something good (i.e., that save/earn money and/or save effort and/or improve quality) and are based on TTM ideas even if not explicitly mentioned as such in the brochure?
Absolutely.
Quote from Erwin on July 23, 2021, 6:50 pm
Quote from Dave Voorhis on July 23, 2021, 9:19 am
promoting products solely on being faithful to the relational model doesn't work as nobody but us cares and the general assumption is that relational == SQL -- but languages, platforms, tools, utilities, you-name-it that do something good (i.e., that save/earn money and/or save effort and/or improve quality) and are based on TTM ideas even if not explicitly mentioned as such in the brochure?
Absolutely.
Cock and bull. Everyone managing data suffers from "data quality issues". EVERYONE. And "data quality issues" is a euphemism for "logical contradictions", as per how they can be shown/proved from "data as they are" and "formal specifications of rules". And the means to tackle that billion-dollar problem (once and for all) should be obvious to anyone with half a brain: make the DBMS (be able to) "understand" the [formal specifications of the] rules and make it enforce them. The solution has existed for >10 yrs by now, and who cares? Nobody. ***NOBODY*** believes there is "save/earn money" or "save effort" or "improve quality" (not even that!) along those lines. The Ceri/Widom paper (1990s) that showed the way to solve the first 25-30% of the problem was awarded a "best paper of a decade" prize. And where are we today? The entire world is still suffering the very same "data quality issues" that caused that paper to be written in the first place; we're just suffering them in larger volumes now. Moore's law does account for something ... the number of "data quality issues" we're suffering will double every 18 months ... Oh well, if that's how they want it ...
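A toy sketch (in Java, with invented names, and in no way a substitute for the derivation in the Ceri/Widom paper) of the idea being argued for here: the rule is stated once, declaratively, and the per-update checks are what such a system would derive mechanically, instead of application code re-implementing the rule at every call site.

```java
import java.util.*;

// Toy illustration of deriving per-update checks from one declarative rule.
public class DerivedChecks {
    static final Set<String> customers = new HashSet<>(Set.of("C1", "C2"));
    static final Map<String, String> orders = new HashMap<>();  // order -> customer

    // Declarative rule: every order refers to an existing customer.
    static boolean ruleHolds() {
        return customers.containsAll(orders.values());
    }

    // Check "derived" for INSERT INTO orders: only the new tuple can
    // introduce a violation, so only it needs checking.
    static void insertOrder(String orderId, String customerId) {
        if (!customers.contains(customerId))
            throw new IllegalStateException("order " + orderId + " references unknown customer " + customerId);
        orders.put(orderId, customerId);
    }

    // Check "derived" for DELETE FROM customers: only orders that still
    // reference the deleted customer can introduce a violation.
    static void deleteCustomer(String customerId) {
        if (orders.containsValue(customerId))
            throw new IllegalStateException("customer " + customerId + " still has orders");
        customers.remove(customerId);
    }

    public static void main(String[] args) {
        insertOrder("O1", "C1");
        System.out.println("rule holds: " + ruleHolds());
        try { insertOrder("O2", "C9"); } catch (IllegalStateException e) { System.out.println(e.getMessage()); }
        try { deleteCustomer("C1"); }   catch (IllegalStateException e) { System.out.println(e.getMessage()); }
    }
}
```

In the paper the derivation of those per-update checks from the declarative rule is mechanical; here they are written out by hand only to show the shape of the result.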
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Quote from Erwin on July 23, 2021, 6:50 pm
Quote from Dave Voorhis on July 23, 2021, 9:19 am
promoting products solely on being faithful to the relational model doesn't work as nobody but us cares and the general assumption is that relational == SQL -- but languages, platforms, tools, utilities, you-name-it that do something good (i.e., that save/earn money and/or save effort and/or improve quality) and are based on TTM ideas even if not explicitly mentioned as such in the brochure?
Absolutely.
Cock and bull. Everyone managing data suffers from "data quality issues". EVERYONE. And "data quality issues" is a euphemism for "logical contradictions", as per how they can be shown/proved from "data as they are" and "formal specifications of rules". And the means to tackle that billion-dollar problem (once and for all) should be obvious to anyone with half a brain: make the DBMS (be able to) "understand" the rules and make it enforce them. The solution has existed for >10 yrs by now, and who cares? Nobody. ***NOBODY*** believes there is "save/earn money" or "save effort" or "improve quality" (not even that!) along those lines. The Ceri/Widom paper
This one?
https://www.vldb.org/conf/1990/P566.PDF
(1990s) that showed the way to solve the first 25-30% of the problem was awarded a "best paper of a decade" prize.
http://www.vldb.org/conf/2000/P254.pdf
And where are we today? The entire world is still suffering the very same "data quality issues" that caused that paper to be written in the first place; we're just suffering them in larger volumes now. Moore's law does account for something ...
Just improving quality isn't enough. I probably shouldn't have written "and/or".
I should have written "and".
Solutions need to save/earn money, save effort, and improve quality. The link between "improve quality" and "save/earn money" (and between "save effort" and "save/earn money") probably needs to be spelled out.
It helps too if it doesn't require burning the disk packs and starting over. Too many grand solutions require that, and there aren't many willing to pay for it let alone do it.
Even grand solutions that look like starting over ("we're going to scrap Java and switch to .NET Core!") aren't really.
Quote from Erwin on July 23, 2021, 8:38 pm
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Solutions need to save/earn money, save effort, and improve quality. The link between "improve quality" and "save/earn money" (and between "save effort" and "save/earn money") probably needs to be spelled out.
It helps too if it doesn't require burning the disk packs and starting over. Too many grand solutions require that, and there aren't many willing to pay for it let alone do it.
Even grand solutions that look like starting over ("we're going to scrap Java and switch to .NET Core!") aren't really.
The result of an assessment of "save/earn money" depends too crucially on the time frame taken into account. And today's managers operate on the premise that "short term" is the next fiscal quarter and "long term" is two years, because that's how long they intend to stay at the company where they hold that manager position.
I contend that "burning the disc packs and starting over" ***WILL*** prove to pay off, say over a period of 20 yrs, at least if you're clever enough to start the burning with only the least crucial disc packs. 20 yrs does give you the slack to do that.
Quote from Dave Voorhis on July 23, 2021, 8:51 pm
Quote from Erwin on July 23, 2021, 8:38 pm
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Solutions need to save/earn money, save effort, and improve quality. The link between "improve quality" and "save/earn money" (and between "save effort" and "save/earn money") probably needs to be spelled out.
It helps too if it doesn't require burning the disk packs and starting over. Too many grand solutions require that, and there aren't many willing to pay for it let alone do it.
Even grand solutions that look like starting over ("we're going to scrap Java and switch to .NET Core!") aren't really.
The result of an assessment of "save/earn money" depends too crucially on the time frame taken into account. And today's managers operate on the premise that "short term" is the next fiscal quarter and "long term" is two years, because that's how long they intend to stay at the company where they hold that manager position.
I contend that "burning the disc packs and starting over" ***WILL*** prove to pay off, say over a period of 20 yrs, at least if you're clever enough to start the burning with only the least crucial disc packs. 20 yrs does give you the slack to do that.
Yeah, that's too long. No middle manager -- and very few senior managers -- will buy into that. Even two years is on the long side.
Maybe instead of "save/earn money, save effort, and improve quality" it need only be "enhance your CV", but disguised to be indistinguishable from technical documentation. Then instead of skepticism from potential clients' techies, they'll be demanding it.
This approach works amazingly well for the big cloud providers.
Quote from Erwin on July 23, 2021, 9:13 pm
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Quote from Erwin on July 23, 2021, 6:50 pm
The Ceri/Widom paper
This one?
https://www.vldb.org/conf/1990/P566.PDF
(1990s) that showed the way to solve the first 25-30% of the problem was awarded a "best paper of a decade" prize.
http://www.vldb.org/conf/2000/P254.pdf
Yup, the first one. There appears to be another one, "Deriving Production Rules for Incremental View Maintenance", but De Haan/Koppelaars observe quite some time later that the problem of enforcing ASSERTIONs of any arbitrary data rule and that of maintaining the physical records for any arbitrary "materialized view" are "exactly the same". With apologies to those who want to nitpick on the word combo "exactly" and "same".
The second one seems to be an utterly unscientific survey of what implementations (engineers) have been able to achieve, and what not. E.g., where they write "Even in the presence of well-defined trigger semantics, behavior can be surprising. For example, row-level triggers are activated once for each modified (inserted, deleted, or updated) tuple, but no triggers are activated until the modification statement is complete.", that seems to appeal to a predictably doomed attempt to mix RBAR semantics ("row-level triggers") with set-level semantics ("no triggers are activated until ...").
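A small sketch, again with invented names and in Java, of the De Haan/Koppelaars observation mentioned above: treat the set of constraint violations as a materialized view, maintain it incrementally as the base data changes, and the assertion holds exactly when that view stays empty.

```java
import java.util.*;

// Toy illustration: the assertion "no salary exceeds 100000" enforced by
// incrementally maintaining a materialized "violations" view that must stay empty.
// All names are invented; no transaction/rollback machinery is shown.
public class ViolationsView {
    static final Map<String, Integer> salaries = new HashMap<>();   // base relvar
    static final Set<String> violations = new HashSet<>();          // materialized view

    static void upsertSalary(String emp, int salary) {
        salaries.put(emp, salary);
        // Incremental view maintenance: only the changed tuple is re-evaluated.
        if (salary > 100_000) violations.add(emp); else violations.remove(emp);
        // Assertion enforcement is just "the violations view is empty";
        // a real DBMS would roll the offending change back at this point.
        if (!violations.isEmpty())
            throw new IllegalStateException("assertion violated by: " + violations);
    }

    public static void main(String[] args) {
        upsertSalary("E1", 90_000);
        try {
            upsertSalary("E2", 250_000);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```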
Quote from Erwin on July 23, 2021, 9:29 pm
Quote from Dave Voorhis on July 23, 2021, 8:51 pm
Quote from Erwin on July 23, 2021, 8:38 pm
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Solutions need to save/earn money, save effort, and improve quality. The link between "improve quality" and "save/earn money" (and between "save effort" and "save/earn money") probably needs to be spelled out.
It helps too if it doesn't require burning the disk packs and starting over. Too many grand solutions require that, and there aren't many willing to pay for it let alone do it.
Even grand solutions that look like starting over ("we're going to scrap Java and switch to .NET Core!") aren't really.
The result of an assessment of "save/earn money" depends too crucially on the time frame taken into account. And today's managers operate on the premise that "short term" is the next fiscal quarter and "long term" is two years, because that's how long they intend to stay at the company where they hold that manager position.
I contend that "burning the disc packs and starting over" ***WILL*** prove to pay off, say over a period of 20 yrs, at least if you're clever enough to start the burning with only the least crucial disc packs. 20 yrs does give you the slack to do that.
Yeah, that's too long. No middle manager -- and very few senior managers -- will buy into that. Even two years is on the long side.
Maybe instead of "save/earn money, save effort, and improve quality" it need only be "enhance your CV", but disguised to be indistinguishable from technical documentation. Then instead of skepticism from potential clients' techies, they'll be demanding it.
This approach works amazingly well for the big cloud providers.
And even if it's provable that "burn and start over" really is the only option, they won't understand the proof. Parenthetical remark backspaced out again.
Quote from Dave Voorhis on July 23, 2021, 10:20 pm
Quote from Erwin on July 23, 2021, 9:29 pm
Quote from Dave Voorhis on July 23, 2021, 8:51 pm
Quote from Erwin on July 23, 2021, 8:38 pm
Quote from Dave Voorhis on July 23, 2021, 7:50 pm
Solutions need to save/earn money, save effort, and improve quality. The link between "improve quality" and "save/earn money" (and between "save effort" and "save/earn money") probably needs to be spelled out.
It helps too if it doesn't require burning the disk packs and starting over. Too many grand solutions require that, and there aren't many willing to pay for it let alone do it.
Even grand solutions that look like starting over ("we're going to scrap Java and switch to .NET Core!") aren't really.
The result of an assessment of "save/earn money" depends too crucially on the time frame taken into account. And today's managers operate on the premise that "short term" is the next fiscal quarter and "long term" is two years, because that's how long they intend to stay at the company where they hold that manager position.
I contend that "burning the disc packs and starting over" ***WILL*** prove to pay off, say over a period of 20 yrs, at least if you're clever enough to start the burning with only the least crucial disc packs. 20 yrs does give you the slack to do that.
Yeah, that's too long. No middle manager -- and very few senior managers -- will buy into that. Even two years is on the long side.
Maybe instead of "save/earn money, save effort, and improve quality" it need only be "enhance your CV", but disguised to be indistinguishable from technical documentation. Then instead of skepticism from potential clients' techies, they'll be demanding it.
This approach works amazingly well for the big cloud providers.
And even if it's provable that "burn and start over" really is the only option, they won't understand the proof. Parenthetical remark backspaced out again.
Is there any technical/engineering field that doesn't need a good "burn and start over"?
How many do it?
In electronics, the move from tubes/valves to discrete semiconductors, and from discrete semiconductors to integrated circuits, and from manual VLSIC definitions to using VHDL, and from point-to-point wiring to through-hole PC boards to SMDs, were arguably closer to "burn and start over" revolutions than evolutions, but in each case -- with some grumbling, admittedly -- the move was inevitable. The cost savings and performance gains for each step were remarkable.
Over here in IT, do we have an equivalent?
But then, in electronics when you want a new whuzzit, you buy a new one and throw the old one away (or sell it on eBay and a vintage electronics collector like me buys it) and keep going without a hiccup.
Try that with your banking software, or air traffic control software, or hospital records system, or ERP system, or ...
And everyone understands unreliability and instability in physical devices. It's surprising (or perhaps not) that a level of flakiness which would have most folks chucking a gadget or tool in the bin is not only tolerated but weirdly embraced when it's software.