The Forum for Discussion about The Third Manifesto and Related Matters


Clarification re local relvar and KEY


To answer your objection to using Tropashko-style operators, and to teach you something about type inference and polymorphism:

The type for attributes removed by projection is the type they're at in the relation they're getting removed from. Just as the type for attributes projected-in is taken from the source relation. Then what you're waffling about being a "heading" as opposed to a relation could be merely a relation with attribute names at polymorphic types.

  • In S REMOVE REL{CITY a}{}, in which REMOVE is a relational operator that takes two relations as operands and returns the first with the attributes of the second projected away; a is a polymorphic attribute type that gets unified with whatever type CITY is at in S (CHAR in this case).
  • In S ON REL{S# a, SNAME b, STATUS c}{}, in which ON is a relational operator that takes two relations as operands and returns the first projected on the attributes of the second; a, b, c are polymorphic attribute types that get unified with whatever types S#, SNAME, STATUS are at in S.

I prefer to use headings in those roles: less fattening. So REL{CITY a}{} is just {CITY}, but the effect is identical. In fact, you can mechanically translate my headings into T's relations.

  • Writing out a relation literal like that merely to give some attribute names is clunky; I'd expect there to be a shorthand for that; the shorthand might even look like Tutorial D's yeuch.
  • The semantics, though, is that everything is a relation. We don't need some different gizmo with so-far incomprehensible typing or semantics.
  • More realistically, the r.h. operand for those operators would be a variable (relvar or WITH ... definition), could be passed as an argument, returned from a function, etc, etc.

I don't think that's realistic or desirable.

If you don't understand my mention of type unification: it's what the programming language would need anyway to type-check that in a JOIN same-named attributes are at the same type in the operands.
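The unification mentioned above can be shown concretely. In this hedged C# sketch (none of these types come from any real library; all names are illustrative), making the shared attribute's type a single generic parameter forces the compiler to find one type for it in both operands, which is exactly a very simple form of type unification:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    // Hypothetical operand types: the shared attribute CITY has one type
    // parameter T, so the compiler must find a single T for both operands.
    record Supplier<T>(string SNo, T City);
    record Location<T>(T City, string Country);

    // The compiler infers one T for both arguments: the "unification".
    static IEnumerable<(string SNo, T City, string Country)> Join<T>(
        IEnumerable<Supplier<T>> s, IEnumerable<Location<T>> l)
        where T : IEquatable<T> =>
        from a in s
        from b in l
        where a.City.Equals(b.City)
        select (a.SNo, a.City, b.Country);

    static void Main()
    {
        var s = new[] { new Supplier<string>("S1", "Athens") };
        var l = new[] { new Location<string>("Athens", "Greece") };
        Console.WriteLine(Join(s, l).Single());  // (S1, Athens, Greece)
        // Join(s, new[] { new Location<int>(1, "X") });  // rejected at compile time
    }
}
```

The commented-out call illustrates the point: same-named attributes at different types make unification fail, and the program does not compile.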

I have no lack of understanding. Can you say the same?

I told you: I do, but it rests on a custom-built type system. The whole point of this exercise is to show that this is not necessary; the type system of a regular GP language is quite good enough.

You have utterly failed to demonstrate that. If your "headings" are not types, then you can't be using the GP language's type system/type inference to infer the type of (say) the result of a JOIN. Then you can't be using the type system to statically type-check that the result of a JOIN is correct for assigning to some pre-declared relvar. Then whatever you're doing can't be industrial strength.

Headings are not types in their own right, but you could view them as a kind of type parameter. A specific relation type is a generic relation type parameterised by a heading.
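As a hedged illustration of "heading as type parameter" (illustrative names only, not any actual implementation): the heading is modelled as a record type, and a generic relation type is parameterised by it.

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    // A heading modelled as a type: its members are the attribute name/type pairs.
    record SHeading(string SNo, string City, int Status);

    // A generic relation type, parameterised by its heading.
    class Relation<THeading> where THeading : notnull
    {
        public HashSet<THeading> Body { get; } = new();
    }

    static void Main()
    {
        var s = new Relation<SHeading>();
        s.Body.Add(new SHeading("S1", "Athens", 30));
        s.Body.Add(new SHeading("S1", "Athens", 30));  // duplicate: record value equality dedupes it
        Console.WriteLine(s.Body.Count);  // 1
    }
}
```

Record value equality gives set semantics for the body for free, which is one reason this shape is attractive in a GP language.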

The only language extension is: headings. They work fine as strings, but they would work better if they were written differently and compiled into strings. Perhaps they could help with I/O and function calls, but that's about the extent of it. The rest of the language is fine just as is.

If anything works fine (we've only your word for it), then as strings they are not reaching type inference. This is the classic problem with passing dynamic SQL as strings from a program to the SQL engine: no type-checking of the request; not even syntax-checking until run-time; no type-checking of the result against the types expected by the calling program. Production applications fail in production with ghastly, incomprehensible errors in front of the users.

Write me a query in TD; I'll show you the same query and the output. I plan to publish it on GitHub soon, if that helps.

That's why industrial-strength SQL applications use stored procedures as far as possible/statically compiled and type-checked against the schema. So does your approach support a stored procedures mechanism?

They're just procedures, aren't they?

Andl - A New Database Language - andl.org

No, not a kludge, a unifying principle across operators that need types and those that don't. What is the type for the attribute(s) removed by projection? Why should that be a type? I say no, it's a heading.

It's an attribute name list, not a heading. A heading is a set of name/type pairs. Projection (or its REMOVE inverse) only requires an attribute name list.
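A sketch of why an attribute name list suffices: with reflection, projection-away can be driven by names alone, each surviving attribute keeping whatever type it had in the source. This is hypothetical code, not from any of the libraries discussed:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Demo
{
    record S(string SNo, string SName, int Status, string City);

    // Projection-away by attribute NAMES only: no types appear in the call,
    // because each kept attribute keeps whatever type it had in the source.
    static Dictionary<string, object> Remove(object tuple, params string[] names) =>
        tuple.GetType().GetProperties()
             .Where(p => !names.Contains(p.Name))
             .ToDictionary(p => p.Name, p => p.GetValue(tuple));

    static void Main()
    {
        var t = new S("S1", "Smith", 20, "Athens");
        var r = Remove(t, "City");
        Console.WriteLine(r.Count);               // 3
        Console.WriteLine(r.ContainsKey("City")); // False
        Console.WriteLine(r["Status"].GetType()); // System.Int32 -- type carried from S
    }
}
```

The reflection approach trades away compile-time checking, which is precisely the tension debated in the rest of this thread; a generics-based version would recover it at the cost of more API machinery.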

Call it GH for Generic Heading.

At the risk of repeating myself, a GH is an attribute name list. It becomes a TTM heading when bound to values, which bring types with them.

You'd pass a different construct to RENAME: a set of renaming pairs.

You'd pass a different construct to WHERE: a lambda expression of type boolean.

And so on. There may be good reasons to have a first-class Heading -- to declare it once and define multiple tuples and/or relations from the same Heading, for example -- but what you're passing to RENAME, WHERE, Projection, REMOVE, etc., isn't a Heading.

So it's a GH.

I think a relational model library should be universal, not application-specific. It should be the same whether it's used in a novel language, a language extension, or stand-alone.

Impossible. A library has to have an API. An API has to use types. As soon as you make choices about which types to use, it's not universal.

In other words, it's no different from creating, say, an encryption library or a matrix math library -- you'd use the same encryption library or matrix math library no matter what the application. It's not like matrix multiplication or encryption somehow differs in a language implementation vs, say, a video game.

Likewise, it's the same relational model whether it's a novel language, a language extension, or a stand-alone library.

Andl is in category (a), my 'C# as D' project is in category (c). Andl is 'implemented using' C#; this project is C#. I know you know all this, so I see no point in trying to explain further. Why do you say they are the same?

I'm not sure why they wouldn't be the same.

So I guess with 40 years of doing languages and compilers I find this so intuitively obvious, I just don't know how to explain it any better. An implementation language is not a target language.

But that's not the point. This really depends on your definition of Industrial D, which to my recollection hasn't been formally defined other than to suggest features it should include, like exception handling, authentication and connection management, any of which -- along with other desired features -- could easily be added to any implementation. You appear to be defining Industrial D around the popularity and richness of a particular syntax and ancillary toolset, which is not really about the language at all but the ecosystem in which the language resides.

My definition revolves around a language that is designed for and suitable for use by a wide community of users in building modern production software.

I told you: I do, but it rests on a custom-built type system. The whole point of this exercise is to show that this is not necessary; the type system of a regular GP language is quite good enough.

Can't you just unplug the custom type system?

The RA engine itself is tiny, a few hundred lines of code. Most of the work goes into building the API, and most of the hard bits are deciding what types to use for the API. Once you unplug the type system and the related API, there is almost nothing left.

You've already seen the generated C#, near enough. It's plain, simple, readable, debuggable C# code, exactly as it was written. ...

You've got things like this:

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

Apparently, "CITY,STATUS" is some dynamic construct that has to (presumably) match some elements of S (I guess...) That's not readable, nor robust, nor easily debuggable, nor compile-time checkable.

You're reading it wrong, but it isn't complicated. The arguments of the lambda match the heading. It's not an 'open expression', the heading and lambda define a relcon/relation function per App-A. It's the same as this:

Func<string, int, bool> fcs = (city, status) => city != "Athens" && status >= 20 && status <= 30;
S.Where("CITY,STATUS", fcs);

 

But this...

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

...and this...

Extend("Qty,ExpQty,ExpQty", TupExtend.F(v => (decimal)v[0] * (decimal)v[1]))

...with its mysterious array references to 0 and 1 (which I presume would break at runtime if I put in the wrong numbers?) just highlights the usual limitations of popular general-purpose programming languages when implementing this kind of relational model.

Still a work in progress. Current version:

Extend("Qty,ExpQty,ExpQty", new FuncValue<decimal,decimal>((value,accum) => value * accum));

No matter how much it might in some fashion adhere to the TTM pre/pro-scriptions, I don't think this is what a D was meant to be.

Indeed. A D was obviously 'meant to be' an alternative to SQL, but we're way past that. What D could be now is the power of SQL in a language you already know and use.

Quote from dandl on June 8, 2020, 2:01 am

No, not a kludge, a unifying principle across operators that need types and those that don't. What is the type for the attribute(s) removed by projection? Why should that be a type? I say no, it's a heading.

It's an attribute name list, not a heading. A heading is a set of name/type pairs. Projection (or its REMOVE inverse) only requires an attribute name list.

Call it GH for Generic Heading.

At the risk of repeating myself, a GH is an attribute name list. It becomes a TTM heading when bound to values, which bring types with them.

Of course you can call it anything you like, but this does differ from TTM terminology which clearly distinguishes attribute name lists from headings, both syntactically (e.g., in Tutorial D) and conceptually.

I'm not sure where it "becomes a TTM heading when bound to values" except in a tuple or relation. Or is that what you mean?

You'd pass a different construct to RENAME: a set of renaming pairs.

You'd pass a different construct to WHERE: a lambda expression of type boolean.

And so on. There may be good reasons to have a first-class Heading -- to declare it once and define multiple tuples and/or relations from the same Heading, for example -- but what you're passing to RENAME, WHERE, Projection, REMOVE, etc., isn't a Heading.

So it's a GH.

I think a relational model library should be universal, not application-specific. It should be the same whether it's used in a novel language, a language extension, or stand-alone.

Impossible. A library has to have an API. An API has to use types. As soon as you make choices about which types to use, it's not universal.

Not only is it possible, it's conventional. Language standard libraries do it -- e.g., the .NET framework, the Java framework, C++'s standard library and Boost -- and so forth. The classic way to get around predefined types is to define your library to manipulate instances of Object, which means you can manipulate instances of anything -- though you may have to do run-time type checking. The modern way is to use templates/generics, or require implementation of an interface. Then type checking can be done at compile time.
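The two approaches described here can be contrasted in a few lines of C# (schematic, not taken from any particular library): one traffics in Object and fails at run time, the other is generic and the same mistake becomes a compile-time error.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

class Demo
{
    // Classic style: the library traffics in object, so type mistakes
    // only surface as run-time cast failures.
    static object FirstClassic(ArrayList xs) => xs[0];

    // Modern style: a generic method; the same mistake is a compile-time error.
    static T FirstGeneric<T>(IReadOnlyList<T> xs) => xs[0];

    static void Main()
    {
        var classic = new ArrayList { "hello" };
        try { int n = (int)FirstClassic(classic); }        // compiles fine...
        catch (InvalidCastException) { Console.WriteLine("run-time failure"); }

        var generic = new List<int> { 42 };
        Console.WriteLine(FirstGeneric(generic));          // 42, checked by the compiler
        // string s = FirstGeneric(generic);               // would not compile
    }
}
```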

In other words, it's no different from creating, say, an encryption library or a matrix math library -- you'd use the same encryption library or matrix math library no matter what the application. It's not like matrix multiplication or encryption somehow differs in a language implementation vs, say, a video game.

Likewise, it's the same relational model whether it's a novel language, a language extension, or a stand-alone library.

Andl is in category (a), my 'C# as D' project is in category (c). Andl is 'implemented using' C#; this project is C#. I know you know all this, so I see no point in trying to explain further. Why do you say they are the same.?

I'm not sure why they wouldn't be the same.

So I guess with 40 years of doing languages and compilers I find this so intuitively obvious, I just don't know how to explain it any better. An implementation language is not a target language.

I think we may be talking past each other. I meant that a library to implement the relational model should be the same whether it's at the core of a new language, at the core of an existing language extension, or used on its own in various applications. That's "the same" (library) to which I was referring.

But that's not the point. This really depends on your definition of Industrial D, which to my recollection hasn't been formally defined other than to suggest features it should include, like exception handling, authentication and connection management, any of which -- along with other desired features -- could easily be added to any implementation. You appear to be defining Industrial D around the popularity and richness of a particular syntax and ancillary toolset, which is not really about the language at all but the ecosystem in which the language resides.

My definition revolves around a language that is designed for and suitable for use by a wide community of users in building modern production software.

I think that may differ from how D&D defined Industrial D, which referred to specific features intentionally not addressed in Tutorial D but obviously needed for production use.

I told you: I do, but it rests on a custom-built type system. The whole point of this exercise is to show that this is not necessary; the type system of a regular GP language is quite good enough.

Can't you just unplug the custom type system?

The RA engine itself is tiny, a few hundred lines of code. Most of the work goes into building the API, and most of the hard bits are deciding what types to use for the API. Once you unplug the type system and the related API, there is almost nothing left.

I wouldn't expect there to be much, but I would expect it to be templatable. I.e., unplug the Andl-specific types and replace them with templates, and then you should have something quite universal.

You've already seen the generated C#, near enough. It's plain, simple, readable, debuggable C# code, exactly as it was written. ...

You've got things like this:

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

Apparently, "CITY,STATUS" is some dynamic construct that has to (presumably) match some elements of S (I guess...) That's not readable, nor robust, nor easily debuggable, nor compile-time checkable.

You're reading it wrong, but it isn't complicated. The arguments of the lambda match the heading. It's not an 'open expression', the heading and lambda define a relcon/relation function per App-A. It's the same as this:

Func<string, int, bool> fcs = (city, status) => city != "Athens" && status >= 20 && status <= 30;
S.Where("CITY,STATUS", fcs);

I get it, but it's quite awkward. I can see it working as an internal library -- indeed, it has notional similarities with the Rel innards -- but as a developer-oriented library you're pushing much checking onto runtime, aren't you?

For example, what if I misspell "CITY,STATUS" as "CTIY;STASUS". Doesn't that break it at runtime?

But this...

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

...and this...

Extend("Qty,ExpQty,ExpQty", TupExtend.F(v => (decimal)v[0] * (decimal)v[1]))

...with its mysterious array references to 0 and 1 (which I presume would break at runtime if I put in the wrong numbers?) just highlights the usual limitations of popular general-purpose programming languages when implementing this kind of relational model.

Still a work in progress. Current version:

Extend("Qty,ExpQty,ExpQty", new FuncValue<decimal,decimal>((value,accum) => value * accum));

No matter how much it might in some fashion adhere to the TTM pre/pro-scriptions, I don't think this is what a D was meant to be.

Indeed. A D was obviously 'meant to be' an alternative to SQL, but we're way past that. What D could be now is the power of SQL in a language you already know and use.

In that sense, it already exists. It's Streams and/or jOOλ/JOOQ for Java. It's LINQ in C#. Indeed, for C#'s LINQ there is even a "query syntax" language extension to make LINQ look and feel like SQL.

But that's not what I meant. I meant that a D was meant to adhere to RM Pre 26. C# itself certainly does (though some might reasonably argue otherwise), and libraries written in C# certainly can (though again, some might argue otherwise.) I'm not convinced your library -- with its "heading" mini-language embedded in strings (e.g., "Qty,ExpQty,ExpQty") does.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on June 8, 2020, 9:12 am
Quote from dandl on June 8, 2020, 2:01 am

No, not a kludge, a unifying principle across operators that need types and those that don't. What is the type for the attribute(s) removed by projection? Why should that be a type? I say no, it's a heading.

It's an attribute name list, not a heading. A heading is a set of name/type pairs. Projection (or its REMOVE inverse) only requires an attribute name list.

Call it GH for Generic Heading.

At the risk of repeating myself, a GH is an attribute name list. It becomes a TTM heading when bound to values, which bring types with them.

Of course you can call it anything you like, but this does differ from TTM terminology which clearly distinguishes attribute name lists from headings, both syntactically (e.g., in Tutorial D) and conceptually.

TTM has no syntax and it's hard to know what a heading should be until after it has been used to generate a type for a value. I'm happy to draw the distinction, but I don't think it's real.

I'm not sure where it "becomes a TTM heading when bound to values" except in a tuple or relation. Or is that what you mean?

You'd pass a different construct to RENAME: a set of renaming pairs.

You'd pass a different construct to WHERE: a lambda expression of type boolean.

And so on. There may be good reasons to have a first-class Heading -- to declare it once and define multiple tuples and/or relations from the same Heading, for example -- but what you're passing to RENAME, WHERE, Projection, REMOVE, etc., isn't a Heading.

So it's a GH.

I think a relational model library should be universal, not application-specific. It should be the same whether it's used in a novel language, a language extension, or stand-alone.

Impossible. A library has to have an API. An API has to use types. As soon as you make choices about which types to use, it's not universal.

Not only is it possible, it's conventional. Language standard libraries do it -- e.g., the .NET framework, the Java framework, C++'s standard library and Boost -- and so forth. The classic way to get around predefined types is to define your library to manipulate instances of Object, which means you can manipulate instances of anything -- though you may have to do run-time type checking. The modern way is to use templates/generics, or require implementation of an interface. Then type checking can be done at compile time.

That's silly; you might just as well go back to BCPL, where everything is a machine word. That's universal for you: universally useless.

The only reasonably universal API I know is defined by "C", and even that is not universal across architectures.

The Andl RA uses a type system based on a base class of DataType and 18 subclasses. It almost never uses object and I don't think there is any runtime type checking, except perhaps in external interfaces.

In other words, it's no different from creating, say, an encryption library or a matrix math library -- you'd use the same encryption library or matrix math library no matter what the application. It's not like matrix multiplication or encryption somehow differs in a language implementation vs, say, a video game.

Likewise, it's the same relational model whether it's a novel language, a language extension, or a stand-alone library.

Andl is in category (a), my 'C# as D' project is in category (c). Andl is 'implemented using' C#; this project is C#. I know you know all this, so I see no point in trying to explain further. Why do you say they are the same?

I'm not sure why they wouldn't be the same.

So I guess with 40 years of doing languages and compilers I find this so intuitively obvious, I just don't know how to explain it any better. An implementation language is not a target language.

I think we may be talking past each other. I meant that a library to implement the relational model should be the same whether it's at the core of a new language, at the core of an existing language extension, or used on its own in various applications. That's "the same" (library) to which I was referring.

I don't know any way to build a library of any kind without first deciding on a type system. My two implementations of the RA for this project expose two different type systems, one heavily based on generics and type parameters and the other more conventional function calls. Here are the API calls:

public RelValue<T> Join<T1, T>(RelValue<T1> other)
    where T : TupBase, new()
    where T1 : TupBase, new() { ... }

public RelNode Join(RelNode other) { ... }

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

Apparently, "CITY,STATUS" is some dynamic construct that has to (presumably) match some elements of S (I guess...) That's not readable, nor robust, nor easily debuggable, nor compile-time checkable.

You're reading it wrong, but it isn't complicated. The arguments of the lambda match the heading. It's not an 'open expression', the heading and lambda define a relcon/relation function per App-A. It's the same as this:

Func<string, int, bool> fcs = (city, status) => city != "Athens" && status >= 20 && status <= 30;
S.Where("CITY,STATUS", fcs);

I get it, but it's quite awkward. I can see it working as an internal library -- indeed, it has notional similarities with the Rel innards -- but as a developer-oriented library you're pushing much checking onto runtime, aren't you?

For example, what if I misspell "CITY,STATUS" as "CTIY;STASUS". Doesn't that break it at runtime?

So correctly written programs work just fine. The whole RA is supported (and if I missed anything it's easy to add).

Yes, error checking is a weakness. A language extension/pre-processor would have to track headings, operators and relcon functions for correctness. There is no type inference as such; it's just a matter of tracking each attribute value from where it is created to where it is finally assigned to some typed C# variable. A small language extension would make that easier, and avoid having to parse raw C# code. (C#, Java, Rust, Go, all much the same.)

But this...

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

...and this...

Extend("Qty,ExpQty,ExpQty", TupExtend.F(v => (decimal)v[0] * (decimal)v[1]))

...with its mysterious array references to 0 and 1 (which I presume would break at runtime if I put in the wrong numbers?) just highlights the usual limitations of popular general-purpose programming languages when implementing this kind of relational model.

Still a work in progress. Current version:

Extend("Qty,ExpQty,ExpQty", new FuncValue<decimal,decimal>((value,accum) => value * accum));

No matter how much it might in some fashion adhere to the TTM pre/pro-scriptions, I don't think this is what a D was meant to be.

Indeed. A D was obviously 'meant to be' an alternative to SQL, but we're way past that. What D could be now is the power of SQL in a language you already know and use.

In that sense, it already exists. It's Streams and/or jOOλ/JOOQ for Java. It's LINQ in C#. Indeed, for C#'s LINQ there is even a "query syntax" language extension to make LINQ look and feel like SQL.

Is it as good as LINQ/Streams? Arguably better, at least for non-SQL data. Certainly there are things it can do that LINQ can't, but is it enough? For now, the project is just to show it's possible, not that it's a good idea.

But that's not what I meant. I meant that a D was meant to adhere to RM Pre 26. C# itself certainly does (though some might reasonably argue otherwise), and libraries written in C# certainly can (though again, some might argue otherwise.) I'm not convinced your library -- with its "heading" mini-language embedded in strings (e.g., "Qty,ExpQty,ExpQty") does.

My aim was to demonstrate that 'C# is D'. Turns out you can't quite get there but you can get very close, with not too much effort. Also, you get some benefits (like GTC and generic RA ops). Is it worth going the extra mile for a 'real' D? I'm still pondering that one.

Quote from dandl on June 8, 2020, 11:27 am
Quote from Dave Voorhis on June 8, 2020, 9:12 am
Quote from dandl on June 8, 2020, 2:01 am

No, not a kludge, a unifying principle across operators that need types and those that don't. What is the type for the attribute(s) removed by projection? Why should that be a type? I say no, it's a heading.

It's an attribute name list, not a heading. A heading is a set of name/type pairs. Projection (or its REMOVE inverse) only requires an attribute name list.

Call it GH for Generic Heading.

At the risk of repeating myself, a GH is an attribute name list. It becomes a TTM heading when bound to values, which bring types with them.

Of course you can call it anything you like, but this does differ from TTM terminology which clearly distinguishes attribute name lists from headings, both syntactically (e.g., in Tutorial D) and conceptually.

TTM has no syntax and it's hard to know what a heading should be until after it has been used to generate a type for a value. I'm happy to draw the distinction, but I don't think it's real.

It's quite clearly defined in the D&D writings. In DTATRM, "heading" first appears and is defined in Chapter 2 under "TUPLES". In short, "The ordered pair <Ai,Ti> is an attribute of [the tuple], and it is uniquely identified by the attribute name Ai. ... The type Ti is the corresponding attribute type. ... The complete set of attributes is the heading of [the tuple]."

So if your projection operator expects as input a set of attributes, where each attribute is a name/type pair, then it's reasonable to state that your input is a heading. If it's just a set of attribute names, then it's a set of attribute names, not a heading.

I'm not sure where it "becomes a TTM heading when bound to values" except in a tuple or relation. Or is that what you mean?

You'd pass a different construct to RENAME: a set of renaming pairs.

You'd pass a different construct to WHERE: a lambda expression of type boolean.

And so on. There may be good reasons to have a first-class Heading -- to declare it once and define multiple tuples and/or relations from the same Heading, for example -- but what you're passing to RENAME, WHERE, Projection, REMOVE, etc., isn't a Heading.

So it's a GH.

I think a relational model library should be universal, not application-specific. It should be the same whether it's used in a novel language, a language extension, or stand-alone.

Impossible. A library has to have an API. An API has to use types. As soon as you make choices about which types to use, it's not universal.

Not only is it possible, it's conventional. Language standard libraries do it -- e.g., the .NET framework, the Java framework, C++'s standard library and Boost -- and so forth. The classic way to get around predefined types is to define your library to manipulate instances of Object, which means you can manipulate instances of anything -- though you may have to do run-time type checking. The modern way is to use templates/generics, or require implementation of an interface. Then type checking can be done at compile time.

That's silly; you might just as well go back to BCPL, where everything is a machine word. That's universal for you: universally useless.

The only reasonably universal API I know is defined by "C", and even that is not universal across architectures.

The Andl RA uses a type system based on a base class of DataType and 18 subclasses. It almost never uses object and I don't think there is any runtime type checking, except perhaps in external interfaces.

Sounds like it should be straightforward to replace DataType with an interface, and then define any type you like to implement it and then you've got a nice universal implementation of the relational model that would work in a novel language, an extension to an existing language, or any application you like.
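Such an interface might look like the following sketch. It is purely hypothetical: the member names are illustrative, not taken from Andl.

```csharp
using System;

class Demo
{
    // Hypothetical: a DataType-style contract expressed as an interface,
    // so any host type can participate.
    interface IDataType
    {
        string Name { get; }
        bool Accepts(object value);
    }

    // Any implementation of the interface plugs straight in.
    class TextType : IDataType
    {
        public string Name => "text";
        public bool Accepts(object value) => value is string;
    }

    static void Main()
    {
        IDataType t = new TextType();
        Console.WriteLine($"{t.Name}: {t.Accepts("Athens")}, {t.Accepts(42)}");  // text: True, False
    }
}
```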

In other words, it's no different from creating, say, an encryption library or a matrix math library -- you'd use the same encryption library or matrix math library no matter what the application. It's not like matrix multiplication or encryption somehow differs in a language implementation vs, say, a video game.

Likewise, it's the same relational model whether it's a novel language, a language extension, or a stand-alone library.

Andl is in category (a), my 'C# as D' project is in category (c). Andl is 'implemented using' C#; this project is C#. I know you know all this, so I see no point in trying to explain further. Why do you say they are the same?

I'm not sure why they wouldn't be the same.

So I guess with 40 years of doing languages and compilers I find this so intuitively obvious, I just don't know how to explain it any better. An implementation language is not a target language.

I think we may be talking past each other. I meant that a library to implement the relational model should be the same whether it's at the core of a new language, at the core of an existing language extension, or used on its own in various applications. That's "the same" (library) to which I was referring.

I don't know any way to build a library of any kind without first deciding on a type system. My two implementations of the RA for this project expose two different type systems, one heavily based on generics and type parameters and the other more conventional function calls. Here are the API calls:

Obviously, any implementation in host language X is going to be within the type system of X. Of course, that doesn't preclude using the same language to create novel type systems, but you probably won't get the benefit of the compiler's static type checking in your novel type system.

I don't know what the following code is meant to show.

public RelValue<T> Join<T1, T>(RelValue<T1> other)
    where T : TupBase, new()
    where T1 : TupBase, new() { ... }

public RelNode Join(RelNode other) { ... }

S.WHERE("CITY,STATUS", (a1,a2) => a1 != "Athens" && a2 >= 20 && a2 <= 30);

Apparently, "CITY,STATUS" is some dynamic construct that has to (presumably) match some elements of S (I guess...) That's not readable, nor robust, nor easily debuggable, nor compile-time checkable.

You're reading it wrong, but it isn't complicated. The arguments of the lambda match the heading. It's not an 'open expression', the heading and lambda define a relcon/relation function per App-A. It's the same as this:

Func<string, int, bool> fcs = (city, status) => city != "Athens" && status >= 20 && status <= 30;
S.Where("CITY,STATUS", fcs);

I get it, but it's quite awkward. I can see it working as an internal library -- indeed, it has notional similarities with the Rel innards -- but as a developer-oriented library you're pushing much checking onto runtime, aren't you?

For example, what if I misspell "CITY,STATUS" as "CTIY;STASUS". Doesn't that break it at runtime?

So correctly written programs work just fine. The whole RA is supported (and if I missed anything it's easy to add).

Yes, error checking is a weakness.
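For concreteness, the heading-string style discussed above can at least fail fast: validating the names against the relation's attributes before evaluating the predicate turns a misspelling like "CTIY" into an immediate, descriptive error rather than silent misbehaviour. A minimal sketch in Java (all names hypothetical, not dandl's actual library):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiPredicate;

// Hypothetical sketch: tuples are name->value maps; where() checks the
// heading names against the tuples before evaluating the predicate, so
// a misspelt attribute fails immediately with a clear message.
class Rel {
    final List<Map<String, Object>> tuples;

    Rel(List<Map<String, Object>> tuples) { this.tuples = tuples; }

    Rel where(String heading, BiPredicate<Object, Object> pred) {
        String[] names = heading.split(",");
        for (Map<String, Object> t : tuples)       // validate the heading
            for (String n : names)
                if (!t.containsKey(n))
                    throw new IllegalArgumentException("no such attribute: " + n);
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> t : tuples)       // then restrict
            if (pred.test(t.get(names[0]), t.get(names[1])))
                out.add(t);
        return new Rel(out);
    }
}
```

With that check in place, `s.where("CITY,STATUS", ...)` runs normally, while `s.where("CTIY,STASUS", ...)` throws before the predicate is ever called. It is still a run-time check, of course, which is exactly the weakness conceded above.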

Then, once again, you've created exactly the same kind of library that is at the core of Rel, SIRA_PRISE, Duro, RAQUEL, and Andl, and have demonstrated -- though we'd discussed it often enough before -- the limitations of using the usual popular programming languages as a native D. Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on June 8, 2020, 3:03 pm

Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

The essence of those limitations is: no, you can't.

A D doesn't condone any form of NULL; all of the popular GPPLs offer it as a feature.

The TTM authors have been pretty adamant they won't condone any coercions; all of the popular GPPLs offer them as a feature, at least for their native numeric types.  (Maybe after Tony Hoare's billion-dollar mistake has vanished from people's recollection, coercions might stand a chance of being recognized as the half-a-billion-dollar mistake made by someone who thought it would be a good thing to make things easy on the programmer.)

The TTM authors have gone out of their way to explain what they expect from a ***compiled*** language in the realm of static type checking; all our friend has to offer is "yes type checking is a weakness".  Some of us, e.g. me, will for that very same reason readily admit that their system is "not a D sensu stricto".  Our friend just wants to find a point of view that would allow him to claim that the existing languages of his preference "already comply, and there's really nothing to add".  "Lawyer your way into compliance" as you've appropriately put it.  (The fact that the authors cannot rule out interpreters from complying, because interpreters, after all, are still a valid way of implementing a language, is a very fortuitous circumstance for that kind of lawyerese folk to exploit.)

The TTM authors have been pretty adamant they won't condone any kind of "pointer semantics" being exposed/available to the user (and the "nonsense" it leads to of variables being able to contain variables); all popular GPPLs offer it as the very most fundamental feature of what it's like to be OO.

That's just from half a minute of trying to recollect what TTM was actually all about.  I'm sure I've missed lots of other similar issues.

And still our friend states things like "but you can get pretty close".  Well yeah, if you fail to see 99.99% of the problem, then probably you can.

Author of SIRA_PRISE

It's quite clearly defined in the D&D writings. In DTATRM, "heading" first appears and is defined in Chapter 2 under "TUPLES". In short, "The ordered pair <Ai,Ti> is an attribute of [the tuple], and it is uniquely identified by the attribute name Ai. ... The type Ti is the corresponding attribute type. ... The complete set of attributes is the heading of [the tuple]."

So if your projection operator expects as input a set of attributes, where each attribute is a name/type pair, then it's reasonable to state that your input is a heading. If it's just a set of attribute names, then it's a set of attribute names, not a heading.

Right at the outset I said it's not intended to be strictly TTM compliant. This is about seeing how far you get with generic headings, and inferring types from the values attached to attributes. The answer is: a good long way. Better than expected, good enough to question TTM.

The Andl RA uses a type system based on a base class of DataType and 18 subclasses. It almost never uses object and I don't think there is any runtime type checking, except perhaps in external interfaces.

Sounds like it should be straightforward to replace DataType with an interface, and then define any type you like to implement it and then you've got a nice universal implementation of the relational model that would work in a novel language, an extension to an existing language, or any application you like.

I already implemented generic type interfaces on the client side (different versions for direct API and Thrift). The problem is the library is closely tied to executing Andl code. I did storage interfaces to Postgres and Sqlite, and they cause enormous difficulties implementing open expressions. There is also a heavy dependency on maintaining the catalog, which is where all the compiled code lives.

A good part of the driving force for the current direction is getting away from open expressions. LINQ does an incredibly complex job to translate open expressions into an internal expression tree and thence into SQL: I don't want to do that either. Open expressions are a terrible idea from the implementor's point of view -- they demand full compiler technology to make sense of them.

So generic headings and relcons based on pure functions are the key to a library that actually can be 'universal', using only readily available system types and standard compiler capabilities. With Andl compiled code in the mix, that's not possible.

Obviously, any implementation in host language X is going to be within the type system of X. Of course, that doesn't preclude using the same language to create novel type systems, but you probably won't get the benefit of the compiler's static type checking in your novel type system.

I know how to create a novel type system -- I did that, but you can't expose that as a 'universal' API. The users have to first buy into your type system.

Yes, error checking is a weakness.

Then, once again, you've created exactly the same kind of library that is at the core of Rel, SIRA_PRISE, Duro, RAQUEL, and Andl, and have demonstrated -- though we'd discussed it often enough before -- the limitations of using the usual popular programming languages as a native D. Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

For some weird definition of  'at the core of', and otherwise not even remotely similar. All of those demand code written in a foreign language manipulating objects in a foreign type system. Can you actually use your library in a Java program to do a full range of RA operations (eg WHERE, EXTEND or SUMMARIZE) on Java data types without writing a line of Rel code?

So I've gone domestic. No-one is buying those 5 foreigners, so my real competition is the ORMs of the world, plus other native RA libraries on non-SQL data. Do you know any?


Andl - A New Database Language - andl.org
Quote from dandl on June 9, 2020, 1:07 am

It's quite clearly defined in the D&D writings. In DTATRM, "heading" first appears and is defined in Chapter 2 under "TUPLES". In short, "The ordered pair <Ai,Ti> is an attribute of [the tuple], and it is uniquely identified by the attribute name Ai. ... The type Ti is the corresponding attribute type. ... The complete set of attributes is the heading of [the tuple]."

So if your projection operator expects as input a set of attributes, where each attribute is a name/type pair, then it's reasonable to state that your input is a heading. If it's just a set of attribute names, then it's a set of attribute names, not a heading.

Right at the outset I said it's not intended to be strictly TTM compliant. This is about seeing how far you get with generic headings, and inferring types from the values attached to attributes. The answer is: a good long way. Better than expected, good enough to question TTM.

Not sure what "good enough to question TTM" means.

I was only questioning your use of terminology. It seems rather nuanced to re-use the term "heading" -- which TTM clearly defines -- in a non-TTM way when creating an implementation of TTM ideas. A set of attributes is a heading. A set of attribute names is a set of attribute names.

I suppose you could argue that a set of attribute names denotes a heading, in the same way that (say) a string literal may denote an integer. I guess...

The Andl RA uses a type system based on a base class of DataType and 18 subclasses. It almost never uses object and I don't think there is any runtime type checking, except perhaps in external interfaces.

Sounds like it should be straightforward to replace DataType with an interface, and then define any type you like to implement it and then you've got a nice universal implementation of the relational model that would work in a novel language, an extension to an existing language, or any application you like.

I already implemented generic type interfaces on the client side (different versions for direct API and Thrift). The problem is the library is closely tied to executing Andl code.

Ah, that explains it. If your implementation of the relational model was tightly coupled to your language parser and ancillary mechanisms, then I can imagine that decoupling them might be difficult.

Obviously, any implementation in host language X is going to be within the type system of X. Of course, that doesn't preclude using the same language to create novel type systems, but you probably won't get the benefit of the compiler's static type checking in your novel type system.

I know how to create a novel type system -- I did that, but you can't expose that as a 'universal' API. The users have to first buy into your type system.

Usually, users only have to buy into the host language, like C# or Java. Once there, it's possible to design libraries to be as generic as possible -- i.e., 'universal' -- within the constraints of the host language type system. But, again, that doesn't preclude creating a new type system, though it might be forced to only do its type-checking at run-time.

In other words, a library implementing the relational model should only be restricted by the host language type system and the requirements of the relational model itself.

For example, you might specify attribute types via type parameters -- which allows them to be virtually anything -- but require that they implement IComparable (C#?) or Comparable (Java) interfaces so that you can compare one attribute value to another.

This is the standard modern approach to implementing generic libraries like those for encryption or linear algebra. A good library can then be as universal as a given host language will allow, which means the same library can be equally applicable to a new language, an existing language's extension, or any other type of application.
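Dave's suggestion above is easy to make concrete: a type parameter bounded by Comparable is what lets a library compare attribute values without knowing the concrete type. A sketch (names are illustrative, not any real API):

```java
// Illustrative sketch: an attribute whose value type is constrained to
// be Comparable, so generic relational operators can sort and compare
// values of any conforming type.
class Attribute<T extends Comparable<T>> {
    final String name;

    Attribute(String name) { this.name = name; }

    // Written once, generically; works for Integer, String, any Comparable.
    boolean lessThan(T a, T b) { return a.compareTo(b) < 0; }
}
```

The same pattern works in C# with `where T : IComparable<T>`; in both languages the bound is checked at compile time, so a non-comparable attribute type is rejected before the program runs.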

Different libraries may be created with different capabilities and different performance profiles (which is why there are so many different linear algebra libraries), but I don't know any case where some are particularly suited to, say, new languages whilst others are best for language extensions. I suppose it could happen, but I would consider that to be evidence of abuse of modularity and excessive coupling rather than a benefit.

Yes, error checking is a weakness.

Then, once again, you've created exactly the same kind of library that is at the core of Rel, SIRA_PRISE, Duro, RAQUEL, and Andl, and have demonstrated -- though we'd discussed it often enough before -- the limitations of using the usual popular programming languages as a native D. Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

For some weird definition of  'at the core of', and otherwise not even remotely similar. All of those demand code written in a foreign language manipulating objects in a foreign type system. Can you actually use your library in a Java program to do a full range of RA operations (eg WHERE, EXTEND or SUMMARIZE) on Java data types without writing a line of Rel code?

So I've gone domestic. No-one is buying those 5 foreigners, so my real competition is the ORMs of the world, plus other native RA libraries on non-SQL data. Do you know any?

Why would it be "some weird definition of 'at the core of'"?

Speaking for Rel, the underlying implementation of the relational model is quite generic and can be used stand-alone. It does require that values and types implement Value and Type interfaces, which provide the basic functionality required to support values and types in the relational algebra operators. But there's nothing in it that is specific to Tutorial D. I can indeed do a full range of relational algebra operations on types that implement Type (and values that implement Value) which can and currently do wrap standard Java types. For example, RATIONAL wraps Java's double. INTEGER wraps Java's long. The Rel language parser is dependent on this relational algebra library, but the relational algebra library isn't dependent on the Rel language parser, though I've provided hooks so it can call out to it. This, for example, allows Rel to implement the boolean expression of a WHERE operator using Rel language code.

But WHERE isn't dependent on Rel code; it can work equally well with pure Java code and does so in various places within Rel. Specifically, you pass an implementation of the TupleFilter interface to ValueRelation's select method. Implementations of TupleFilter can be pure Java or invocations of Rel / Tutorial D code, i.e., anything that can implement the TupleFilter interface, which defines a single boolean-returning method with a single tuple parameter.
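The interface described above has a simple shape; here is a hedged reconstruction in plain Java (Rel's actual Value/Tuple types differ, so treat all names as placeholders):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Placeholder reconstruction of the shape described above: a single
// boolean-returning method with a single tuple parameter.
interface TupleFilter {
    boolean filter(Map<String, Object> tuple);
}

class ValueRelation {
    final List<Map<String, Object>> body;

    ValueRelation(List<Map<String, Object>> body) { this.body = body; }

    // select keeps the tuples for which the filter returns true; the
    // filter can be a pure Java lambda or a call-out to Tutorial D code.
    ValueRelation select(TupleFilter f) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (Map<String, Object> t : body)
            if (f.filter(t)) out.add(t);
        return new ValueRelation(out);
    }
}
```

Because the interface has a single abstract method, a pure-Java WHERE is then just `rel.select(t -> (int) t.get("STATUS") >= 20)`.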

As for native RA libraries on non-SQL data, there are various implementations depending on your definition of RA.

If you accept a rather stretched interpretation that is influenced by functional programming, then there's Java Streams and jOOλ and C#'s LINQ.

If you mean strictly implementing TTM, then there's the internals of Rel, SIRA_PRISE, Duro, RAQUEL, etc., though I don't know that any of these have released their internals as downloadable stand-alone libraries.

But the nice thing about LINQ and Java Streams is that these provide notionally equivalent facility to the TTM relational model in a fully statically type-safe manner. Yes, you do give up TTM's JOIN, projection, aggregate operators and the like, but you certainly don't give up joining (and you can create a notionally equivalent JOIN) and aggregation. You simply use their LINQ/Streams equivalents. It's a different model, but it's only different; it's neither better nor worse in terms of overall capability, and you gain full static type safety in a mainstream programming language.
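The "notionally equivalent JOIN" can be made concrete. A sketch in Java Streams (the classes are illustrative, not library code) of a natural join over a shared CITY attribute, fully statically checked end to end:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative statically typed classes standing in for relation tuples.
class S {
    final String sno, city;
    S(String sno, String city) { this.sno = sno; this.city = city; }
}

class P {
    final String pno, city;
    P(String pno, String city) { this.pno = pno; this.city = city; }
}

class SP {
    final String sno, pno, city;
    SP(String sno, String pno, String city) {
        this.sno = sno; this.pno = pno; this.city = city;
    }
}

class StreamsJoin {
    // flatMap + filter over the common CITY attribute is the Streams
    // equivalent of a natural join; the compiler checks every field access.
    static List<SP> join(List<S> ss, List<P> ps) {
        return ss.stream()
                 .flatMap(s -> ps.stream()
                                 .filter(p -> p.city.equals(s.city))
                                 .map(p -> new SP(s.sno, p.pno, s.city)))
                 .collect(Collectors.toList());
    }
}
```

The trade-off matches the text: the join condition and result type are spelled out per use rather than inferred from headings, but a misspelt attribute is a compile-time error, not a run-time one.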

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

I already implemented generic type interfaces on the client side (different versions for direct API and Thrift). The problem is the library is closely tied to executing Andl code.

Ah, that explains it. If your implementation of the relational model was tightly coupled to your language parser and ancillary mechanisms, then I can imagine that decoupling them might be difficult.

It's tightly coupled to the execution of code and to the type system.

Obviously, any implementation in host language X is going to be within the type system of X. Of course, that doesn't preclude using the same language to create novel type systems, but you probably won't get the benefit of the compiler's static type checking in your novel type system.

I know how to create a novel type system -- I did that, but you can't expose that as a 'universal' API. The users have to first buy into your type system.

Usually, users only have to buy into the host language, like C# or Java. Once there, it's possible to design libraries to be as generic as possible -- i.e., 'universal' -- within the constraints of the host language type system. But, again, that doesn't preclude creating a new type system, though it might be forced to only do its type-checking at run-time.

In other words, a library implementing the relational model should only be restricted by the host language type system and the requirements of the relational model itself.

I disagree. Most non-trivial libraries implement, extend or depend on types. Simple example: the Console library in C# implements the Console class. This exposes objects in a variety of classes: TextWriter, TextReader, ConsoleColor, various Exception classes, Encoding classes and so on. You cannot use Console unless you first buy into that entire suite of dependencies and associations. Andl is no more or less: it comes with dependencies and associations.

An RA library has to choose a type for its tuples and relations. Since they don't come with a heading, it has to choose a mechanism for that. You might choose reflection, but that's another class and another dependency. It's classes and dependencies all the way down. The C export namespace is moderately universal; everything else is choices on choices about what you provide, what you use and what you extend.
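The reflection option mentioned above is cheap to demonstrate: a heading (attribute name/type pairs) can be derived from the fields of a plain tuple class. A sketch with illustrative names:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative tuple class; its fields play the role of attributes.
class SupplierTuple {
    String sno;
    String city;
    int status;
}

class Headings {
    // Derive a heading as a map of attribute name to type. This is the
    // extra dependency the text mentions: the library now leans on
    // java.lang.reflect as its heading mechanism.
    static Map<String, Class<?>> headingOf(Class<?> tupleClass) {
        Map<String, Class<?>> h = new LinkedHashMap<>();
        for (Field f : tupleClass.getDeclaredFields())
            h.put(f.getName(), f.getType());
        return h;
    }
}
```

It works, but it illustrates the point being made: reflection is another dependency the library's users must buy into, and the heading is only discoverable at run time.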

For example, you might specify attribute types via type parameters -- which allows them to be virtually anything -- but require that they implement IComparable (C#?) or Comparable (Java) interfaces so that you can compare one attribute value to another.

This is the standard modern approach to implementing generic libraries like those for encryption or linear algebra. A good library can then be as universal as a given host language will allow, which means the same library can be equally applicable to a new language, an existing language's extension, or any other type of application.

Of course I know all that stuff, but generics and type parameters have limitations too (and believe me, I've pushed on them!). Generics are not enough on their own to solve the problem of headings.

Different libraries may be created with different capabilities and different performance profiles (which is why there are so many different linear algebra libraries), but I don't know any case where some are particularly suited to, say, new languages whilst others are best for language extensions. I suppose it could happen, but I would consider that to be evidence of abuse of modularity and excessive coupling rather than a benefit.

Yes, error checking is a weakness.

Then, once again, you've created exactly the same kind of library that is at the core of Rel, SIRA_PRISE, Duro, RAQUEL, and Andl, and have demonstrated -- though we'd discussed it often enough before -- the limitations of using the usual popular programming languages as a native D. Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

For some weird definition of  'at the core of', and otherwise not even remotely similar. All of those demand code written in a foreign language manipulating objects in a foreign type system. Can you actually use your library in a Java program to do a full range of RA operations (eg WHERE, EXTEND or SUMMARIZE) on Java data types without writing a line of Rel code?

So I've gone domestic. No-one is buying those 5 foreigners, so my real competition is the ORMs of the world, plus other native RA libraries on non-SQL data. Do you know any?

Why would it be "some weird definition of 'at the core of'"?

Speaking for Rel, the underlying implementation of the relational model is quite generic and can be used stand-alone. It does require that values and types implement Value and Type interfaces, which provide the basic functionality required to support values and types in the relational algebra operators. But there's nothing in it that is specific to Tutorial D. I can indeed do a full range of relational algebra operations on types that implement Type (and values that implement Value) which can and currently do wrap standard Java types. For example, RATIONAL wraps Java's double. INTEGER wraps Java's long. The Rel language parser is dependent on this relational algebra library, but the relational algebra library isn't dependent on the Rel language parser, though I've provided hooks so it can call out to it. This, for example, allows Rel to implement the boolean expression of a WHERE operator using Rel language code.

But WHERE isn't dependent on Rel code; it can work equally well with pure Java code and does so in various places within Rel. Specifically, you pass an implementation of the TupleFilter interface to ValueRelation's select method. Implementations of TupleFilter can be pure Java or invocations of Rel / Tutorial D code, i.e., anything that can implement the TupleFilter interface, which defines a single boolean-returning method with a single tuple parameter.

So you wrote it to play nicely with its own language or Java, and its own storage engine and nothing else. I wrote Andl for greater portability: to work with any client either through .NET libraries or Thrift and with any storage engine, specifically Sqlite and Postgres. It has no dependence on any particular client types -- it will work with literally anything that compiles. But the price is: heavy internal dependencies on multiple technologies, nothing exposed.

As for native RA libraries on non-SQL data, there are various implementations depending on your definition of RA.

If you accept a rather stretched interpretation that is influenced by functional programming, then there's Java Streams and jOOλ and C#'s LINQ.

My definition is: everything, no omissions, no shortcuts. Everything in TTM (including VSS), DTATRM (GTC), DBE (image relations). Everything in SQL: CTE, outer joins, ordered queries, windows. Everything. I call it the ERA (or even the XRA).

If you mean strictly implementing TTM, then there's the internals of Rel, SIRA_PRISE, Duro, RAQUEL, etc., though I don't know that any of these have released their internals as downloadable stand-alone libraries.

But the nice thing about LINQ and Java Streams is that these provide notionally equivalent facility to the TTM relational model in a fully statically type-safe manner. Yes, you do give up TTM's JOIN, projection, aggregate operators and the like, but you certainly don't give up joining (and you can create a notionally equivalent JOIN) and aggregation. You simply use their LINQ/Streams equivalents. It's a different model, but it's only different; it's neither better nor worse in terms of overall capability, and you gain full static type safety in a mainstream programming language.

It's not the basic RA let alone the ERA, and it's been done to death. I'm interested in breaking new ground.

As a side note, once you get rid of open expressions, it leaves a query language that can be built visually. No coding, just string together relops, headings, functions, data sources and sinks. But that's a topic for another time.


Andl - A New Database Language - andl.org
Quote from dandl on June 9, 2020, 2:28 pm

I already implemented generic type interfaces on the client side (different versions for direct API and Thrift). The problem is the library is closely tied to executing Andl code.

Ah, that explains it. If your implementation of the relational model was tightly coupled to your language parser and ancillary mechanisms, then I can imagine that decoupling them might be difficult.

It's tightly coupled to the execution of code and to the type system.

Obviously, any implementation in host language X is going to be within the type system of X. Of course, that doesn't preclude using the same language to create novel type systems, but you probably won't get the benefit of the compiler's static type checking in your novel type system.

I know how to create a novel type system -- I did that, but you can't expose that as a 'universal' API. The users have to first buy into your type system.

Usually, users only have to buy into the host language, like C# or Java. Once there, it's possible to design libraries to be as generic as possible -- i.e., 'universal' -- within the constraints of the host language type system. But, again, that doesn't preclude creating a new type system, though it might be forced to only do its type-checking at run-time.

In other words, a library implementing the relational model should only be restricted by the host language type system and the requirements of the relational model itself.

I disagree. Most non-trivial libraries implement, extend or depend on types. Simple example: the Console library in C# implements the Console class. This exposes objects in a variety of classes: TextWriter, TextReader, ConsoleColor, various Exception classes, Encoding classes and so on. You cannot use Console unless you first buy into that entire suite of dependencies and associations. Andl is no more or less: it comes with dependencies and associations.

An RA library has to choose a type for its tuples and relations. Since they don't come with a heading, it has to choose a mechanism for that. You might choose reflection, but that's another class and another dependency. It's classes and dependencies all the way down. The C export namespace is moderately universal; everything else is choices on choices about what you provide, what you use and what you extend.

For example, you might specify attribute types via type parameters -- which allows them to be virtually anything -- but require that they implement IComparable (C#?) or Comparable (Java) interfaces so that you can compare one attribute value to another.

This is the standard modern approach to implementing generic libraries like those for encryption or linear algebra. A good library can then be as universal as a given host language will allow, which means the same library can be equally applicable to a new language, an existing language's extension, or any other type of application.

Of course I know all that stuff, but generics and type parameters have limitations too (and believe me, I've pushed on them!). Generics are not enough on their own to solve the problem of headings.

Yes, as you've discovered (and has often been discussed on this forum in the past), the TTM approach to headings -- and some operators themselves -- is incompatible with at least C# and Java's (and probably C++'s) static typing, which are all essentially the same model. But other approaches don't require giving up static typing guarantees, and that's why we have LINQ, Streams, etc. If you're going to code natively in C# or Java, they are preferable, because they don't require giving up static typing guarantees.

Arguably, they don't give up anything else either -- at least in the context of C# or Java programming -- but they are certainly different from the TTM approach, as they are suited to their host languages.

The TTM approach is best suited to a very different kind of language.

Different libraries may be created with different capabilities and different performance profiles (which is why there are so many different linear algebra libraries), but I don't know any case where some are particularly suited to, say, new languages whilst others are best for language extensions. I suppose it could happen, but I would consider that to be evidence of abuse of modularity and excessive coupling rather than a benefit.

Yes, error checking is a weakness.

Then, once again, you've created exactly the same kind of library that is at the core of Rel, SIRA_PRISE, Duro, RAQUEL, and Andl, and have demonstrated -- though we'd discussed it often enough before -- the limitations of using the usual popular programming languages as a native D. Yes, you can meet the letter of the pre-/pro-scriptions, but there are limitations.

For some weird definition of  'at the core of', and otherwise not even remotely similar. All of those demand code written in a foreign language manipulating objects in a foreign type system. Can you actually use your library in a Java program to do a full range of RA operations (eg WHERE, EXTEND or SUMMARIZE) on Java data types without writing a line of Rel code?

So I've gone domestic. No-one is buying those 5 foreigners, so my real competition is the ORMs of the world, plus other native RA libraries on non-SQL data. Do you know any?

Why would it be "some weird definition of 'at the core of'"?

Speaking for Rel, the underlying implementation of the relational model is quite generic and can be used stand-alone. It does require that values and types implement Value and Type interfaces, which provide the basic functionality required to support values and types in the relational algebra operators. But there's nothing in it that is specific to Tutorial D. I can indeed do a full range of relational algebra operations on types that implement Type (and values that implement Value) which can and currently do wrap standard Java types. For example, RATIONAL wraps Java's double. INTEGER wraps Java's long. The Rel language parser is dependent on this relational algebra library, but the relational algebra library isn't dependent on the Rel language parser, though I've provided hooks so it can call out to it. This, for example, allows Rel to implement the boolean expression of a WHERE operator using Rel language code.

But WHERE isn't dependent on Rel code; it can work equally well with pure Java code and does so in various places within Rel. Specifically, you pass an implementation of the TupleFilter interface to ValueRelation's select method. Implementations of TupleFilter can be pure Java or invocations of Rel / Tutorial D code, i.e., anything that can implement the TupleFilter interface, which defines a single boolean-returning method with a single tuple parameter.

So you wrote it to play nicely with its own language or Java, and its own storage engine and nothing else.

No, I wrote it using the standard modern object-oriented approach, which is to code to abstractions -- interfaces and abstract base classes -- and write Liskov-substitutable implementations of these to provide concrete functionality. That makes the concrete functionality replaceable, so for example the core could use a different storage engine but thus far there hasn't been a compelling reason to do so.

Because it's written in Java, it can only realistically be hosted on a JVM, but the same relational core could be used with any language that runs on the JVM.

Obviously, there are always inadvertent dependencies and conversion complexities involved with any software, so I don't want to make it sound like significant adaptations to new use cases would be trivial. They almost certainly wouldn't be. But you appeared to suggest that genericity is generally infeasible and that a relational library designed for purpose X is inherently unsuited to purpose Y. It isn't; indeed, that's why languages have parametric types and inheritance hierarchies, so that with care we can achieve reasonable degrees of genericity.

I wrote Andl for greater portability: to work with any client either through .NET libraries or Thrift and with any storage engine, specifically Sqlite and Postgres. It has no dependence on any particular client types -- it will work with literally anything that compiles. But the price is: heavy internal dependencies on multiple technologies, nothing exposed.

As for native RA libraries on non-SQL data, there are various implementations depending on your definition of RA.

If you accept a rather stretched interpretation that is influenced by functional programming, then there's Java Streams and jOOλ and C#'s LINQ.

My definition is: everything, no omissions, no shortcuts. Everything in TTM (including VSS), DTATRM (GTC), DBE (image relations). Everything in SQL: CTE, outer joins, ordered queries, windows. Everything. I call it the ERA (or even the XRA).

If you mean strictly implementing TTM, then there's the internals of Rel, SIRA_PRISE, Duro, RAQUEL, etc., though I don't know that any of these have released their internals as downloadable stand-alone libraries.

But the nice thing about LINQ and Java Streams is that these provide notionally equivalent facility to the TTM relational model in a fully statically type-safe manner. Yes, you do give up TTM's JOIN, projection, aggregate operators and the like, but you certainly don't give up joining (and you can create a notionally equivalent JOIN) and aggregation. You simply use their LINQ/Streams equivalents. It's a different model, but it's only different; it's neither better nor worse in terms of overall capability, and you gain full static type safety in a mainstream programming language.

It's not the basic RA let alone the ERA, and it's been done to death. I'm interested in breaking new ground.

As a side note, once you get rid of open expressions, it leaves a query language that can be built visually. No coding, just string together relops, headings, functions, data sources and sinks. But that's a topic for another time.

A new query language that involves no coding is, of course, a completely different thing.

Here, we're talking about implementing the relational model in a mainstream programming language. In that context, even if we set aside some of the other abominations that Erwin mentioned (like pointers/references, nulls, etc., in a D), giving up static typing is a fundamental obstacle. If you're going to treat heading manipulation as some sort of dynamic operation, then it's in the same category as another embedded mini-language -- regular expressions -- with all their attendant blecherousness, along with the grandaddy of embedded dynamic language horrors we'd all like to avoid: SQL.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org