The Forum for Discussion about The Third Manifesto and Related Matters


Scope and lifetime

Quote from dandl on September 23, 2019, 2:21 pm
Quote from Dave Voorhis on September 23, 2019, 1:18 pm
Quote from AntC on September 23, 2019, 6:06 am
Quote from Dave Voorhis on September 22, 2019, 12:16 pm
Quote from dandl on September 22, 2019, 4:17 am

...

I'm wrestling with this problem for Andl. Currently the limitations are similar to TD; I would prefer to ease them, but I'm troubled by the consequences.


Edit: I realise what the problem is: I really don't like the idea of a syntactic form controlling something that is not syntax. Persistence does not change the way the code executes, it only changes whether the data it accesses is external, in a database. I would prefer all globals (variables, types, etc) to be global or local according to syntax, but for persistence to be set by some other mechanism (such as metadata). So I don't like the REAL/PRIVATE feature in TD.
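
To illustrate with a minimal Tutorial D sketch (relvar and attribute names invented): one keyword is the entire difference between a variable that persists in the database and one that vanishes with the session:

    VAR SP REAL RELATION { SNO CHAR, QTY INTEGER } KEY { SNO };         // persistent: lives in the database
    VAR SCRATCH PRIVATE RELATION { SNO CHAR, QTY INTEGER } KEY { SNO }; // transient: local to the application

The declarations are otherwise identical, yet only one of them changes the state of the database.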

There's an argument here for having a separate data definition language -- akin to the DD statements of JCL on IBM mainframes -- that defines or declares available relvars and exposes them to the database language as no more than relvar names and lists of attribute names/types.
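
Something like the following, perhaps -- a purely hypothetical declaration file, with invented syntax, kept apart from any application code:

    // declarations.ddl -- hypothetical DDL, owned by the DBA, not the application
    RELVAR S  { SNO CHAR, SNAME CHAR, CITY CHAR } KEY { SNO };
    RELVAR SP { SNO CHAR, PNO CHAR, QTY INTEGER } KEY { SNO, PNO };

The database language would then see S and SP only as names with those attribute lists; creating or dropping them would be outside its vocabulary.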

Yes, I'm keen on that idea. But it cuts against Codd's Rule 5, the comprehensive data sublanguage rule; and I can't help feeling it would cut against the Pre/Proscriptions in TTM, although I can't put my finger on it. How does it go with a program that wants to create a temp relvar whose schema is set at run time from dynamic values?

Codd's Rules are neither regulations nor standards, so I think we can take or leave them as logic dictates. I don't recall any of the Pre/Proscriptions precluding a separate DDL, and indeed the separateness may simply be a difference in privilege rather than a separate language parser recognising distinct syntax and semantics.

For temporary relvars, I see no problem with tuple-valued and relation-valued variables, same as any scalar (or other type-) valued variable. These have transient scope and lifetime like any other program variable. Once variables belong to the database (aka persistent relvars), and have a wider scope and lifetime than an individual program/script, then they need to be defined by the separate DDL.
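
In Tutorial D terms (names invented), everything below stays entirely within the program:

    VAR t TUPLE { PNO CHAR, QTY INTEGER } INIT ( TUPLE { PNO 'P1', QTY 300 } ); // tuple-valued program variable
    VAR temp PRIVATE RELATION { PNO CHAR, QTY INTEGER } KEY { PNO };            // application relvar, transient
    temp := RELATION { TUPLE { PNO 'P1', QTY 300 } };                           // ordinary assignment; gone when the program ends

whereas a REAL (database) relvar would be the separate DDL's business.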

Yes, this puts the finger on it. Creating variables in a database should be highly intentional, not just an accident of syntax. Ditto deleting. It's reasonably benign if a program connects to and consumes relvars in some database, not too troubling if it updates said database, and quite useful if it can connect to different databases at different times. But it's a real nuisance if the same program behaves differently because of the detritus left from a previous run. I've been having trouble writing test suites and useful programs, because of the need to guard against things that might or might not have been created on a previous run, and I think TD will have similar issues. [BTW types and operators actually cause more grief than relvars, because they can't be kept private.]

My Tutorial D (Rel) test scripts tend to have setup, teardown, and multiple test portions for this reason. Setup creates definitions, teardown deletes them, and test assumes the definitions already exist. I suppose setup could invoke teardown as a first step to make sure cruft doesn't remain from previous failed runs, but I normally get around this by creating a new database for a full test run.
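
Roughly like this (a sketch; the names are invented):

    // setup portion -- create everything the tests assume
    VAR TESTRESULTS REAL RELATION { ID INTEGER, PASSED BOOLEAN } KEY { ID };

    // ... test portions read and update TESTRESULTS ...

    // teardown portion -- delete it all, so nothing persists into the next run
    DROP VAR TESTRESULTS;

If teardown is skipped because a run failed part-way, the next setup's VAR ... REAL fails because TESTRESULTS already exists -- hence starting each full run from a fresh database.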

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from johnwcowan on September 23, 2019, 1:47 pm


Quote from AntC on September 23, 2019, 6:06 am

At runtime you could point the application at some arbitrary database; on firing up the application, it checked that each table's hash/timestamp agreed with what was compiled into the application. The check applied whether or not the application ever opened the tables. So if any check failed, the application got booted out, no harm done. (At least not to the database ;-)

I don't understand how that worked.  "Arbitrary database" presumably means any database that conforms to the schema.  How can a mere timestamp check tell you whether a database conforms to a certain version of a schema or not?

No, "arbitrary database" just means some named database. On the S/38 the file/table system was integrated with the operating system, so tables were identifiable elements of a directory: database = directory (but note single-level storage, so no sub-directories).

Then the hash/timestamp was compiled into the table, not the directory. In one directory you could potentially have different tables from different sourcefiles, with different timestamps. (That was not recommended practice.)

And it was not a "mere timestamp": there was also a hash of the table's schema, and the timestamp was that of the sourcefile DDL, not the table's compile time.
