The Forum for Discussion about The Third Manifesto and Related Matters


History and future of TTM

We all know the history of TTM.  We (the readers of this forum, past and present) were all massively impressed by its coherence and its "being obviously the right way to go".

But is there a future? With neither Hugh nor Chris pushing these ideas any longer (*), is there anyone around who can keep the spark alive?

(*) They did what they had to do, and saw it through, without exemption.

Author of SIRA_PRISE

When the current AI hype fades -- and there are indications it's already going that way -- and we get tired of laboriously and slowly grinding out low-quality applications by shoving marbles through straws using a dozen awkward languages and a gaggle of poorly-documented, half-baked libraries...

Then we'll be ready to try something different, and the ideas of TTM -- necessarily combined with more elegant base languages -- will be ripe for resurgence.

But not yet.

I'm sometimes surprised that despite the obvious rubbishness of the typical enterprise application development toolset -- the usual combination of modern Java/Kotlin/whatever or C# plus libraries to embed SQL (there are a lot of them, though the classic over-reliance on SQL-hiding ORMs seems to be falling out of favour) -- it can get the job done, again and again and again, like industrial machinery that's clunky, ugly, noisy, leaky, smelly, and breaks down regularly, but...

...It works, and that's all it needs to do.

So it's not even that "good is the enemy of great". It's that "barely adequate is the enemy of better".

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

It's on my list of jobs to do once I retire

  • empty out the loft
  • sort the foundation for future database systems
  • finish repointing the garden wall

(my wife might have a few items to add to the list, so I'll have to see how it goes)

More seriously, I think it won't happen until folks stop thinking about "replacing SQL" (https://db.cs.cmu.edu/2025/01/sql-or-death-seminar-series-spring-2025/) and start thinking about throwing (almost) everything away and really just starting again.

I gave a lot of thought to this, but it's been a while. I wrote about bits of it here: http://www.andl.org/ but it's incomplete.

IMO TTM is a step in the right direction but it's not a step on the right path. TTM is built on the right foundations but to the wrong plans.

So the Fourth Manifesto concerns a language that fully implements an Extended RA (ERA). This ERA is, in effect, a full superset of SQL with a replaceable type system. It is implemented natively on a DBMS server that provides transaction support, etc.

It is also implemented embedded in an end-user/client programming language supporting the same type system. ERA expressions are evaluated interchangeably and transparently, either locally or on the server. The client language is free to implement additional types, including OO. No existing 3GL can do this, and Tutorial D falls well short.
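To make the "evaluated interchangeably locally or on the server" idea concrete, here is a minimal sketch, assuming an invented `Rel` class: the same expression tree can either be evaluated in-process over in-memory rows or compiled to SQL for server-side execution. All names here (`Rel`, `restrict`, `eval_local`, `to_sql`) are hypothetical, not from any real ERA implementation.

```python
# Illustrative sketch only: one ERA expression, two evaluation targets.
class Rel:
    def __init__(self, name, rows):
        self.name, self.rows = name, rows   # rows: list of dicts (attribute -> value)
        self.preds = []                     # accumulated restriction predicates

    def restrict(self, attr, value):
        # Build up the expression tree without evaluating anything yet.
        r = Rel(self.name, self.rows)
        r.preds = self.preds + [(attr, value)]
        return r

    def eval_local(self):
        # Evaluate in-process, over the in-memory rows.
        out = self.rows
        for attr, value in self.preds:
            out = [row for row in out if row[attr] == value]
        return out

    def to_sql(self):
        # Compile the same expression for server-side evaluation.
        where = " AND ".join(f"{a} = {v!r}" for a, v in self.preds)
        return f"SELECT * FROM {self.name}" + (f" WHERE {where}" if where else "")

emp = Rel("EMP", [{"name": "Ann", "dept": "D1"}, {"name": "Bob", "dept": "D2"}])
q = emp.restrict("dept", "D1")
print(q.eval_local())   # [{'name': 'Ann', 'dept': 'D1'}]
print(q.to_sql())       # SELECT * FROM EMP WHERE dept = 'D1'
```

The point of the sketch is that the client code never says *where* the expression runs; a real implementation would choose local or server evaluation based on where the data lives.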

As an aside, a 'full superset of SQL' must provide support for NULL-like missing values and for operations on ordered datasets (such as moving averages), which TTM refused to contemplate.
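A moving average is a good example of why ordered datasets matter: it is inherently order-dependent, so it falls outside the classical RA's unordered relations (SQL handles it with window functions such as `AVG(x) OVER (ORDER BY t ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)`). A minimal stdlib sketch of the computation itself:

```python
from collections import deque

def moving_average(values, window):
    # Average over a sliding window; early outputs use however many
    # values have arrived so far (as SQL's ROWS ... PRECEDING does).
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

print(moving_average([10, 20, 30, 40], 3))  # [10.0, 15.0, 20.0, 30.0]
```

Note that the input is a *sequence*, not a relation: permuting the rows changes the answer, which is exactly the property TTM's relations exclude.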

Andl - A New Database Language - andl.org
Quote from Erwin on March 24, 2025, 9:18 pm

But is there a future?

No. CompSci education perpetuates the myth that 'Relational' or data modelling is tantamount to SQL. There's now such a barrage of tools or 'frameworks' [**] that hide SQL's most dangerous weaknesses, and such an investment in legacy code using them, nobody's going to boil the ocean to install some other theoretical structure behind the same framework.

To the extent that Haskell represents the bleeding edge in programming language design (especially typing), the work there over the past ~5 years, and the work planned for the foreseeable future, provides no way to express the typing needed to support anything like "the Relational model of data". Indeed, some 'Functional'-paradigm languages of the 80s/early 90s did better. Work is going into type-safe access to the insides of more-and-more convoluted hierarchical structures, with fields defined positionally, not 'logically' by attribute name alone. I have given up on Haskell.


[**] AFAICT we haven't discussed on the forum 'Dataframes' -- or if we did it's in the write-only archives. At the moment, the term seems to be used rather amorphously, tied to different vendors' interpretations. Is there any 'there' there? Is it worth starting a thread? Sub-topics could be:

  • Is a dataframe any more than an in-memory dataset created programmatically?
  • Does it avoid duplicate rows (records)? Some vendors do, some don't AFAICT.
  • Does it avoid the perils of 'missing data'? All vendors seem to provide for missing data, with a variety of techniques AFAICT. Because ...
  • Dataframing seems to be a mechanism for clumping together data from heterogeneous sources: not only corporate databases, but also JSON, CSV, spreadsheet exports, and worse unstructured/unvalidated/potentially ill-typed cruft.
  • Pivot tables and other fancy statistical manipulation seem to be the main use cases. Does the RM provide a sufficient theoretical underpinning to distinguish meaningful results vs GIGO?
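The first two bullets can be illustrated without tying ourselves to any vendor: modelling a relation as a *set* of tuples silently eliminates duplicates, while a dataframe-style *list* of rows keeps them and admits missing values (`None` here; NaN or NA markers in most dataframe tools). A toy contrast:

```python
# Duplicate row plus a missing value -- the two pain points above.
rows = [("Ann", 3), ("Ann", 3), ("Bob", None)]

relation = set(rows)     # set semantics: the duplicate collapses
dataframe = list(rows)   # bag semantics: the duplicate survives

print(len(relation))     # 2
print(len(dataframe))    # 3
print(dataframe[2][1])   # None -- the 'missing data' the RM refuses to admit
```

Which of the two behaviours a given dataframe product exhibits is, as the bullets suggest, something each vendor decides for itself.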