## The Forum for Discussion about The Third Manifesto and Related Matters


# Which Result?

Quote from Paul Vernon on November 17, 2021, 5:44 pm
Quote from dandl on November 17, 2021, 1:24 pm

for business users it's hard to escape the need for integers (counting), decimal (exact) and real (floating point)

How do you exactly represent the number `3⁴¹³⁴⁷⁄₆₂₂₆₅` by your decimal ("exact") type? Or, for that matter `¹⁄₃` ?

Easy: that collection of symbols is not a number in any programming language I know. It's a collection of symbols that presumably denote a value in some type you have in mind, and is not a valid argument for any operator on number. Quite possibly that type has a collection of operators similar to those on numbers, which in turn return other values in that same type (but not numbers).

How do you exactly represent the number `𝜋` in your "real" (floating point)  type?

Same answer. The symbol representing pi is not a number and is not a valid argument to any operator on number. The language may provide a conversion to some number type.

None of this prevents you from manipulating symbols such as these in a symbolical manipulation language, but if you have some business use for the value you better know how to convert it to a number.

Andl - A New Database Language - andl.org
Quote from Paul Vernon on November 17, 2021, 5:54 pm
Quote from dandl on November 17, 2021, 1:13 pm

No operator can take an input value of different types.

Not even `=` ? Oh. I see, if you have N types, you have N*N equals operators all of the same name but different input types. Well, if you like such conceptual complexity fine. I prefer to think that I have just one `=` operator that is defined for all values regardless of "type".

No, there are exactly N equals operators, one for each type. If the type is ordered there is a less-than operator, one for each type. If the type is a number there are the 4 arithmetic operators for each type. And so on. Each operator performs exactly one function and does so as simply as possible. It is, to paraphrase, as simple as possible but no simpler.

Pragmatically most languages have a single equals operator which combines many different ways of comparing values of various types. The single operator is simple on the surface, but there is much hidden complexity underlying that apparent simplicity.
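The hidden complexity behind a single `==` is easy to demonstrate. Here is a sketch in Python (my own illustration, not from the thread), where one operator quietly applies different comparison rules depending on the pair of types:

```python
from decimal import Decimal
from fractions import Fraction

# One "=" on the surface, several comparison rules underneath:
print(1 == 1.0)                  # int vs float: compared numerically -> True
print(Decimal("0.5") == 0.5)     # Decimal vs float: also compared numerically -> True
print(Fraction(1, 3) == 1 / 3)   # exact 1/3 vs binary-float 1/3 -> False
print("1" == 1)                  # str vs int: no coercion at all -> False
```

Each pair of operand types triggers its own behaviour, which is exactly the per-type multiplicity that the single operator's surface simplicity conceals.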

Quote from Paul Vernon on November 17, 2021, 7:22 pm
Quote from dandl on November 17, 2021, 6:22 am
Quote from Paul Vernon on November 16, 2021, 10:48 pm

`2 + 3.54` evaluates to `5²⁷⁄₅₀` and I will explain why that is the only reasonable result.

By following this line of reasoning you are relying on choosing a particular type for those values. Given other types and operators values such as 5.54, 23.54 and 5.539999999 are equally possible.

Well, anything is possible. `"aardvark"` is a possible result, but the only reasonable result is the one that follows from the definition of plus. Ask a mathematician (or my Mum), I am sure they will say that `2 + 3.54` does not equal `5.539999999` . If the nines went on for infinity, then (as I understand) that would be another way of writing `5.54` but without such a notation, what you suggested as the answer is not reasonable. If your operator does not follow the definition of plus (https://en.wikipedia.org/wiki/Addition), calling it plus is (again) not reasonable.

The fact that there are very many unreasonable programming languages already out there is not my point. I'm interested in what should happen, not what does happen in various arbitrary (or not so arbitrary) existing systems. (Well, I am interested to know if there are any reasonable programming languages (in the above sense) out there that I might not be aware of.)

The point you've completely missed is that there are perfectly reasonable programming languages in which every value is of type string (TRAC, TCL) and in which every non-text value is of type floating point number (Excel VBA). The values I gave are reasonable results in those reasonable languages.

I mean, have you ever had to explain to a normal person why almost all programming languages think that `0.1 + 0.2` does not equal `0.3`?  I can even right now click Inspect, Console and type in `0.1 + 0.2 === 0.3` or `0.1 + 0.2 == 0.3` and in both cases I get `false`.  My computer can do billions, nay trillions of calculations in the time it takes me to test that out. What on earth is the excuse for it not following simple definitions of mathematics? Yes, I know the history... I know why this is the case, but still, surely we can do better, can't we?

That is entirely the expected result if you choose a programming language with only a "real" (binary floating-point) number type. Virtually all modern programming languages offer a decimal or rational numeric type (built in or via a standard library) in which 0.1+0.2==0.3. You just need to choose a different calculator/language. And stop trying to explain the unexplainable to normal people.
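Python's standard-library `decimal` module illustrates the point (a sketch of my own, not part of the original post):

```python
from decimal import Decimal

# With a decimal type, the schoolbook result holds exactly:
a = Decimal("0.1") + Decimal("0.2")
print(a)                      # 0.3
print(a == Decimal("0.3"))    # True

# The binary floating-point result that surprises "normal people":
print(0.1 + 0.2 == 0.3)       # False
print(0.1 + 0.2)              # 0.30000000000000004
```

Same expression, different chosen type, different (and here exact) answer.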

Quote from Paul Vernon on November 17, 2021, 7:22 pm
Quote from dandl on November 17, 2021, 6:22 am
Quote from Paul Vernon on November 16, 2021, 10:48 pm
I mean, have you ever had to explain to a normal person why almost all programming languages think that `0.1 + 0.2` does not equal `0.3`.  I can even right now click Inspect, Console and type in `0.1 + 0.2 === 0.3` or `0.1 + 0.2 == 0.3` and in both cases I get `false`.
```haskell
GHCi, version 8.10.2: https://www.haskell.org/ghc/  :? for help
Prelude> :info Rational                     -- request info re the standard-library type `Rational`
type Rational :: *
type Rational = GHC.Real.Ratio Integer
        -- Defined in `GHC.Real'
Prelude> 0.1 :: Rational                    -- `::` gives an explicit type signature; otherwise Haskell defaults the literal to `Double`
1 % 10                                      -- the value displayed using `Rational`'s formatting
Prelude> (0.1 :: Rational) + 0.2            -- `+` requires both arguments to be the same type, so no need for a sig on `0.2`
3 % 10
Prelude> (0.1 :: Rational) + 0.2 == 0.3     -- likewise no need on `0.3`
True
```
I'm pretty sure you'll get similar behaviour from Idris -- since it follows Haskell very closely for 'bread and butter' types. Beware that where Haskell uses `::`, Idris uses `:`, and vice versa.
Haskell has supported this functionality since at least the 1998 standard. (Seems the syntax has changed a little over the years.)

`2 + 3.54` evaluates to `5²⁷⁄₅₀` and I will explain why that is the only reasonable result.

By following this line of reasoning you are relying on choosing a particular type for those values. Given other types and operators values such as 5.54, 23.54 and 5.539999999 are equally possible.

```haskell
Prelude> (2 :: Rational) + 3.54
277 % 50
```

You can of course write your own overloading for displaying `Rational`s, with any amount of fancy super-/sub-scripting, Latex/etc. Haskell standard routines stick to UTF-8 output.

As @dandl points out: the programmer must choose (or declare) the specific type and behaviour for their purposes. `Rational` would be contra-indicated for trigonometric purposes.
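For comparison, Python's standard-library `fractions` module behaves the same way as Haskell's `Rational` here (a sketch of my own, not part of the original post):

```python
from fractions import Fraction

# Exact rational arithmetic, analogous to Haskell's Rational type:
print(Fraction("0.1") + Fraction("0.2"))                       # 3/10
print(Fraction("0.1") + Fraction("0.2") == Fraction("0.3"))    # True
print(Fraction(2) + Fraction("3.54"))                          # 277/50, i.e. 5 27/50
```

As in Haskell, the programmer opts in to the exact type; the literals themselves carry no floating-point baggage once parsed as rationals.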

(Well, I am interested to know if there are any reasonable programming languages (in the above sense) out there that I might not be aware of)

From your earlier remarks, there are large numbers of languages you're not aware of. Even Idris that you claimed to have used. The problem seems to be at the keyboard end.

Quote from dandl on November 17, 2021, 11:19 pm
Quote from Paul Vernon on November 17, 2021, 5:44 pm

How do you exactly represent the number `3⁴¹³⁴⁷⁄₆₂₂₆₅` by your decimal ("exact") type? Or, for that matter `¹⁄₃` ?

Easy: that collection of symbols is not a number in any programming language I know. It's a collection of symbols that presumably denote a value in some type you have in mind,

So I guess I did not explain the above symbols because (orthogonally to my argument) I was interested in whether they would be self-explanatory or not, and indeed, whether anyone knew of some precedent of them being used in a programming language that they know.

Yes, the symbols are Unicode (internally, on this web browser, probably UTF-8, right? but the implementation matters not), and not LaTeX.

The number I denoted by `3⁴¹³⁴⁷⁄₆₂₂₆₅` could also "reasonably" be denoted by `3⁴¹³⁴⁷/₆₂₂₆₅` or `²²⁸¹⁴²⁄₆₂₂₆₅` or `3+⁴¹³⁴⁷/₆₂₂₆₅` or `228142/62265` or `228142%62265`, or by a repeating decimal of 1776 digits, which I claim would not be very reasonable, and I don't show it here for that reason. See it here if you like: https://www.wolframalpha.com/input/?i=228142%2F62265 . Other "unreasonable" (for business users wanting to write and consume numbers) representations are also shown on that page, such as a PNG image, a prime factorisation or a continued fraction.

I like my choice because it does not mix in any symbols typically reserved for operators (notably `/`, `+` and, to a maybe lesser degree, `%`); because it is pretty close to the notation you get taught early in your schooling years; because the fact that it needs more explanation to "programmer types" than it hopefully does to "business users" is a pro, not a con 😉; because Unicode is (without getting into big debates) at least "reasonable" to use in moderation (I would not, I think, use it for operators, so no `σ` for selection etc.); and because a "mixed fraction" representation is easier for a human to sort or compare - i.e. we know the above number is between `3` and `4`, and, noting that there are 5 digits above and below, that it is somewhat close to the number `3²⁄₃`.
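The mixed-fraction reading described above can be mechanised with a short sketch in Python (the helper name `to_mixed` is my own illustration, not anything from the thread):

```python
from fractions import Fraction

def to_mixed(q: Fraction) -> str:
    """Render a positive rational as a mixed fraction, e.g. 228142/62265 -> '3 41347/62265'."""
    whole, rest = divmod(q.numerator, q.denominator)
    if rest == 0:
        return str(whole)
    return f"{whole} {rest}/{q.denominator}"

q = Fraction(228142, 62265)
print(to_mixed(q))   # 3 41347/62265
print(float(q))      # roughly 3.664, between 3 and 4 as the notation makes obvious
```

The whole part falls out of a single integer division, which is what makes the mixed form cheap to produce and easy for a human to bracket between two integers.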

Quote from Paul Vernon on November 18, 2021, 10:03 am
Quote from dandl on November 17, 2021, 11:19 pm
Quote from Paul Vernon on November 17, 2021, 5:44 pm

How do you exactly represent the number `3⁴¹³⁴⁷⁄₆₂₂₆₅` by your decimal ("exact") type? Or, for that matter `¹⁄₃` ?

Easy: that collection of symbols is not a number in any programming language I know. It's a collection of symbols that presumably denote a value in some type you have in mind,

So I guess I did not explain the above symbols because (orthogonally to my argument) I was interested in whether they would be self-explanatory or not, and indeed, whether anyone knew of some precedent of them being used in a programming language that they know.

Yes, the symbols are Unicode (internally, on this web browser, probably UTF-8, right? but the implementation matters not), and not LaTeX.

The number I denoted by `3⁴¹³⁴⁷⁄₆₂₂₆₅` could also "reasonably" be denoted by `3⁴¹³⁴⁷/₆₂₂₆₅` or `²²⁸¹⁴²⁄₆₂₂₆₅` or `3+⁴¹³⁴⁷/₆₂₂₆₅` or `228142/62265` or `228142%62265`, or by a repeating decimal of 1776 digits, which I claim would not be very reasonable, and I don't show it here for that reason. See it here if you like: https://www.wolframalpha.com/input/?i=228142%2F62265 . Other "unreasonable" (for business users wanting to write and consume numbers) representations are also shown on that page, such as a PNG image, a prime factorisation or a continued fraction.

I like my choice because it does not mix in any symbols typically reserved for operators (notably `/`, `+` and, to a maybe lesser degree, `%`); because it is pretty close to the notation you get taught early in your schooling years; because the fact that it needs more explanation to "programmer types" than it hopefully does to "business users" is a pro, not a con; because Unicode is (without getting into big debates) at least "reasonable" to use in moderation (I would not, I think, use it for operators, so no `σ` for selection etc.); and because a "mixed fraction" representation is easier for a human to sort or compare - i.e. we know the above number is between `3` and `4`, and, noting that there are 5 digits above and below, that it is somewhat close to the number `3²⁄₃`.

As a "programmer type" who has spent 35+ years writing code and teaching others to write code for "business users" (and who has been a "business user"), I note that I can count on one hand the number of times I've seen a fractional notation like `3²⁄₃` used in business.

A decimal approximation like 3.67 or 3.667 is far more typical. It may be that in the early days of mechanised bookkeeping decimal literals were grudgingly accepted only because technical limitations precluded fractional notation -- I'm only guessing that might have been the case -- but anyone who grudgingly endured them is long deceased now, so decimal notation is not only technically easier, it's what business users expect (for most uses).

Quote from dandl on November 17, 2021, 11:29 pm

Pragmatically most languages have a single equals operator which combines many different ways of comparing values of various types. The single operator is simple on the surface, but there is much hidden complexity underlying that apparent simplicity.

It is what is on the surface that matters. I am arguing about the model,  the Model of Data, the "Relational Model of Data" for the most part.  I'm not particularly concerned about  the implementation (or about the programming languages one might use to bootstrap an implementation)

to paraphrase, as simple as possible but no simpler.

Exactly. And to repeat, I strongly believe it is simpler - as simple as possible in fact -  to not take as an axiom that values are typed

TTM says it this way

• All scalar values shall be typed—i.e., such values shall always carry with them, at least conceptually, some identification of the type to which they belong.

I know it is possible in the model for values not to be typed. For it not to be a fundamental concept. Yes, it is a concept, but it is built on the foundation; it is not part of the foundation.

So, I guess, we are (as is typical of such matters) arguing at different levels. It is a plain fact, for example, that some set theories do not take "values are typed" as an axiom. I am obviously right at that level.

But at the level of existing programming languages, I am obviously wrong - they are mostly all explicitly typed, and the few odd ones that try to say they have only one type (character strings in some case, or binary strings in others) are not really untyped in practice.

So, if we can agree on the above, what we have left is: where should some "future database system" sit? Close enough to set theory to not have axiomatic types, or close enough to existing programming languages to have to have types (whether it damn well likes it or not)?

Again, I suspect we are arguing about the level - about where the model stops, and where what you build on the model takes over.  I guess I'm saying that the model stops before types come into play; most (but not all) others here are saying, no, the model continues through typing and operators and stops at about the point where it has features that support the creation of "user defined" operators and types.

So, sure, I would want the ability for users to create their own scalar values (or atoms, to use the set-theory term - well, the term used by some set theories anyway). And sure, users would want to create a bunch of values together, give that bunch a name, and then define some operators (either new ones and/or extending or "overloading" existing ones) on the new values - and hence, yes, that looks like a type and quacks like a type. I would want to consider such facilities as "part of the model", but even then, I would say the users first create the new scalar values (using some unique representation that is not shared by any other value), and only then collect them into a set that they can name and nominate as a type. I would still maintain that values come first, types come second.

Quote from Paul Vernon on November 18, 2021, 10:50 am
Quote from dandl on November 17, 2021, 11:29 pm

Pragmatically most languages have a single equals operator which combines many different ways of comparing values of various types. The single operator is simple on the surface, but there is much hidden complexity underlying that apparent simplicity.

It is what is on the surface that matters. I am arguing about the model,  the Model of Data, the "Relational Model of Data" for the most part.  I'm not particularly concerned about  the implementation (or about the programming languages one might use to bootstrap an implementation)

to paraphrase, as simple as possible but no simpler.

Exactly. And to repeat, I strongly believe it is simpler - as simple as possible in fact -  to not take as an axiom that values are typed

That's fine, mathematically or logically or conceptually. It's not much help if your goal is -- as was TTM's -- to direct design of a family of computer languages.

...

I would still maintain that values come first, types come second

Again, that's mathematically, logically, or conceptually fine.

But it doesn't work in real computer languages, though you can pragmatically -- and reasonably -- state that literals come first, and they (second) denote values of types.

In other words, the literal 2 or π is untyped (aside from trivially being type "literal" or "string", if you like) until used in some context that asserts it is a value and then pragmatically it must have a type.

Likewise for an expression like πe, for which (as you noted elsewhere) it is not known whether the value is irrational; any practical and reasonably ergonomic computer language will assign it a type before or when it is evaluated. Until that point, it's trivially of type "expression" or "string".

Quote from Dave Voorhis on November 18, 2021, 11:54 am

In other words, the literal 2 or π is untyped (aside from trivially being type "literal" or "string", if you like) until used in some context that asserts it is a value and then pragmatically it must have a type.

OK. Cool. I agree.

That is, give or take, what I've been trying to say: Values are (fundamentally) untyped until used.

Sure, there is not a lot of point to a value that you ain't going to use, but there is a point in making the point nonetheless. If types are not quite as fundamental as values, that can change your whole conception of things.

As Anthony said

the programmer must choose (or declare) the specific type and behaviour for their purposes

Types are about usage and behaviour. They are about how you use your values.  Values first, how you use them second.

Then all that remains is: how does that viewpoint affect some future real computer language? I'm guessing you think it can't. I'm not quite so sure...

I would say the users first create the new scalar values (using some unique representation that is not shared by any other value), and only then would they collect them together into a set that they can then name and nominate as a type. I would still maintain that values come first, types come second

OK, then let's start at the other end: natural language.

I claim that words referring to things are meaningless unless they also come with a named type, implicitly or explicitly.

If I refer to a dog you infer type animal unless the context conveys type offensive person.

If I refer to a button you infer a control device unless context suggests a component of clothing.

If I refer to a value you will probably have no idea unless I specify what type of value I mean.

And so on. At every level of abstraction references to things come with type. Values might come first, but values are of no value without type.
