Are inclusion dependencies reducible to foreign-key dependencies?
Quote from Dave Voorhis on October 13, 2019, 1:18 pm
Quote from dandl on October 13, 2019, 1:08 pm
My preference is marginally to treat comparisons like that as valid but constant, and issue a warning. By all means compare a whosit to a fizzbug for equality, but it will always be false, the code does nothing, and the compiler might as well pick that up and tell you about it.
At least one recent Java compiler (the one integrated into Eclipse) does that; it issues a warning.
I'd like to go further than just a warning and statically disallow it, unless the operands belong either to an explicitly-defined non-Object class hierarchy (in Java or another object-oriented language) or to an explicit non-ALPHA type inheritance hierarchy (in an implementation of the TTM IM). I'm quite happy for that to make Object or ALPHA exceptional.
Quote from dandl on October 13, 2019, 1:08 pm
But then what should we do if we compare an INT99 (range 0..99) to an INT9 (range 0..9)? It seems reasonable that if they share a common supertype they can be compared safely as values of that supertype. I don't think that rule runs foul of the ALPHA situation in the IM, does it?
Per the above, inheritance from INT is fine and comparisons would be unrestricted. Only ALPHA is "special".
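For readers more comfortable with Java than with the IM, here is a minimal Java sketch of that distinction, using hypothetical Int, Int9 and Int99 classes (not from any real library; the range constraints are only conceptual here):
// Common supertype: comparisons between its subtypes are meaningful.
class Int {
    final int value;
    Int(int value) { this.value = value; }
    @Override
    public boolean equals(Object o) {
        // Compare as values of the common supertype Int.
        return o instanceof Int && ((Int) o).value == value;
    }
    @Override
    public int hashCode() { return Integer.hashCode(value); }
}

class Int9 extends Int  { Int9(int v)  { super(v); } }  // conceptually range 0..9
class Int99 extends Int { Int99(int v) { super(v); } }  // conceptually range 0..99

public class SupertypeComparison {
    public static void main(String[] args) {
        System.out.println(new Int9(7).equals(new Int99(7)));  // true: both are Ints
        System.out.println(new Int9(7).equals("7"));           // false: only Object in common;
                                                               // recent Eclipse compilers may warn here
    }
}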
Quote from Dave Voorhis on October 13, 2019, 1:22 pm
Quote from dandl on October 13, 2019, 1:08 pm
You're in danger of muddying the waters by mentioning Java. In Java (as in C#) strings are reference types except when they aren't. The equals operator by default compares for identity (so objects never compare equal unless they're the 'very same object') except when they're strings. And equals() is not the same as the '==' and '!=' operators, and compare() and compareTo() are different again (or was that C#?). No matter. [C++ is better because it's less 'helpful'.]
I had much debate with myself over whether or not to mention Java (as an example of typical object-oriented behaviour). In the end, I went with it because I thought it might be helpful for those familiar with Java/C# and friends but not as familiar with the IM or vice versa, particularly as comparisons -- even if only kept to oneself -- are inevitable.
Quote from johnwcowan on October 13, 2019, 3:19 pm
Quote from Dave Voorhis on October 13, 2019, 11:22 am
Per my recollection, that's an essay question, though if I recall correctly, Date (at least) felt that it should be allowable to compare, say, INT to CHAR.
But given an expression like 3 = "blah" I'd argue that it should obviously and unconditionally be a static type mismatch under the IM, despite INT and CHAR being subtypes of ALPHA. In Rel, it throws an error.
Static typing is orthogonal to implicit conversion. Ever since about 1961, Fortran has combined static typing with a number of implicit conversions (int to float, for example), and most statically typed languages have followed its lead. By "implicit" I mean implicit at the time of call. Per contra, dynamically typed Scheme has (post-R5RS) a set of arithmetic operators that require their arguments to be small exact integers and another set that require their arguments to be inexact reals, aka floats, thus doing no implicit conversion. (The usual Scheme arithmetic operators do coercion to floats.) C++ has funky and difficult rules for deciding when implicit conversion is available, and so does Algol 68, but they are funky and difficult languages.
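For concreteness, here is the same orthogonality in Java (standing in for the Fortran-lineage languages above): the language is statically typed, yet it still widens the int implicitly in a mixed expression.
public class WideningExample {
    public static void main(String[] args) {
        int i = 3;
        double d = 2.5;
        // i is implicitly widened to 3.0 in this mixed expression --
        // the int-to-float style conversion Fortran introduced and most
        // statically typed languages adopted.
        double sum = d + i;
        System.out.println(sum);  // prints 5.5
    }
}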
However, Java allows you to do value comparisons using the equals method. The comparison new Integer(3).equals(new String("blah")) is statically valid and returns false. The comparison new Integer(3).equals(new Integer(3)) returns true.
The vast majority of non-foundational Java classes, however, immediately check whether the two arguments of equals have the exact same type and return false if they don't. The general appearance of a modern Java equals method of class Foo is this:
@Override
public boolean equals(Object o) {
    // self check
    if (this == o)
        return true;
    // null check
    if (o == null)
        return false;
    // type check and cast
    if (getClass() != o.getClass())
        return false;
    Foo foo = (Foo) o;
    // field comparison
    return <some boolean expression performing the actual equality test>;
}
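Filled in for a hypothetical Foo carrying a single int field (the field and class here are purely illustrative), that template is runnable and shows the exact-class check rejecting both unrelated types and subclasses:
public class Foo {
    private final int value;

    public Foo(int value) { this.value = value; }

    @Override
    public boolean equals(Object o) {
        // self check
        if (this == o) return true;
        // null check
        if (o == null) return false;
        // type check and cast: subclasses of Foo are rejected here too
        if (getClass() != o.getClass()) return false;
        Foo foo = (Foo) o;
        // field comparison
        return value == foo.value;
    }

    @Override
    public int hashCode() { return Integer.hashCode(value); }

    public static void main(String[] args) {
        System.out.println(new Foo(1).equals(new Foo(1)));     // true: same class, same field
        System.out.println(new Foo(1).equals("1"));            // false: different class
        System.out.println(new Foo(1).equals(new Foo(1) {}));  // false: anonymous subclass fails the getClass() check
    }
}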
This effectively splits the Assignment Principle in two, the Formal Assignment Principle and the Substantive Assignment Principle. If SubFoo is a subclass of Foo, then aFoo = aSubFoo; violates the Formal Assignment Principle, because aFoo.equals(aSubFoo) returns false; but it preserves the Substantive Assignment Principle, since the value of aFoo is in fact a SubFoo with the same properties as aSubFoo, except for any methods applicable only to SubFoos.
I'd argue that the same weakness applies to an IM that allows comparisons of INT to CHAR. It weakens type safety, and shouldn't be allowed.
That said, I wouldn't categorically preclude such comparisons, only require that they be made explicit. I.e., if you wish to be able to compare INT to CHAR, you are required to define an implementation of the '=' operator specific to INT and CHAR operands.
What difference does it make if the system predefines the conversion or the user defines it? I don't follow that.
Quote from johnwcowan on October 13, 2019, 3:32 pm
Quote from dandl on October 12, 2019, 11:43 pm
I think you're layering a whole bunch of unnecessary detail. The IM either does or does not allow subtypes to be compared without an explicit conversion. The rest follows. [...] The IM either considers comparisons between values of those types to be valid or not.
As I said above, whether values of different types can be compared with an implicit conversion is orthogonal to the type system proper. Go notoriously has no implicit conversions, not even between integers of separate widths, with two exceptions:
- A numeric literal can be arbitrarily large and precise and is implicitly converted to the width and precision of the variable it is assigned to.
- A value of a type with just one component can be assigned from and compared to a value of the component type provided the component type is built-in.
Both exceptions are concessions to convenient initialization. Ada, on the other hand, allows implicit conversions between types int with range 0..9 and int with range 0..99, throwing a runtime exception at any attempt to stuff a value larger than 9 into a variable of the first type.
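A rough Java analogue of the Ada behaviour described above (just a sketch; RangedInt is a hypothetical wrapper, and the check happens at construction rather than assignment):
public class RangeCheckDemo {
    static final class RangedInt {
        final int value;
        RangedInt(int value, int max) {
            // Runtime range check, standing in for Ada's Constraint_Error.
            if (value < 0 || value > max)
                throw new IllegalArgumentException("value " + value + " outside range 0.." + max);
            this.value = value;
        }
    }

    public static void main(String[] args) {
        RangedInt wide = new RangedInt(42, 99);   // fine: 42 fits in 0..99
        System.out.println(wide.value);
        RangedInt narrow = new RangedInt(42, 9);  // throws: 42 does not fit in 0..9
    }
}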
Quote from Dave Voorhis on October 13, 2019, 4:48 pm
Quote from johnwcowan on October 13, 2019, 3:19 pm
Quote from Dave Voorhis on October 13, 2019, 11:22 am
Per my recollection, that's an essay question, though if I recall correctly, Date (at least) felt that it should be allowable to compare, say, INT to CHAR.
But given an expression like 3 = "blah" I'd argue that it should obviously and unconditionally be a static type mismatch under the IM, despite INT and CHAR being subtypes of ALPHA. In Rel, it throws an error.
Static typing is orthogonal to implicit conversion. Ever since about 1961, Fortran has combined static typing with a number of implicit conversions (int to float, for example), and most statically typed languages have followed its lead. By "implicit" I mean implicit at the time of call. Per contra, dynamically typed Scheme has (post-R5RS) a set of arithmetic operators that require their arguments to be small exact integers and another set that require their arguments to be inexact reals, aka floats, thus doing no implicit conversion. (The usual Scheme arithmetic operators do coercion to floats.) C++ has funky and difficult rules for deciding when implicit conversion is available, and so does Algol 68, but they are funky and difficult languages.
Implicit conversion of any kind is almost invariably bad.
Quote from johnwcowan on October 13, 2019, 3:19 pm
What difference does it make if the system predefines the conversion or the user defines it? I don't follow that.
Those who give up safety for convenience deserve neither?
An explicit conversion expresses the programmer's intent. A predefined conversion might do something that is not the programmer's intent. Comparing, say, an Integer to a String -- which you can do because both are Object -- is almost invariably not the programmer's intent. Arguably, it's always a mistake, but I'll go with "almost invariably" because I can't anticipate all use cases. That's why newer Java compilers warn about it. The warning message in one example is "Unlikely argument type for equals(): Integer seems to be unrelated to String."
Exactly.
If you've constructed your program (badly, probably) so that you need to compare Integer instances to String instances -- which will invariably return false -- then you should be able to explicitly provide such a comparison mechanism, but the compiler shouldn't encourage mistakes and/or bad practice by doing it for you.
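If such a comparison really were wanted, the kind of explicit mechanism being suggested might look like this hypothetical helper (the parse-and-compare semantics is just one possible choice, not an existing API):
public final class CrossTypeEquality {
    // Explicit, opt-in comparison of an Integer to a decimal String,
    // instead of the always-false Integer.equals(String).
    public static boolean intEqualsDecimalString(Integer i, String s) {
        try {
            return i != null && s != null && i == Integer.parseInt(s.trim());
        } catch (NumberFormatException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(intEqualsDecimalString(3, "3"));     // true
        System.out.println(intEqualsDecimalString(3, "blah"));  // false
    }
}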
Quote from johnwcowan on October 13, 2019, 5:40 pm
Quote from Dave Voorhis on October 13, 2019, 4:48 pm
Implicit conversion of any kind is almost invariably bad.
Really? Given int i; double d; long l; double_complex c in some C/Java-type language, you would insist on writing d + (double)i and l - (long)i and c = double_complex((double) i, 0.0) instead of simply d + i and l - i and c = i? You surprise me.
Quote from johnwcowan on October 13, 2019, 3:19 pm
What difference does it make if the system predefines the conversion or the user defines it? I don't follow that.
An explicit conversion expresses the programmer's intent.
I'm not sure we mean the same thing by explicit here. Is it explicit if you explicitly define the conversion function but don't explicitly mention it at the point of use, or is that still considered implicit conversion? In C++ every single-argument constructor is potentially usable for implicit conversion; in Scala you have to mark them; in Go (with the exceptions mentioned) all conversion functions, even the simplest ones like int32→int64, must be mentioned at the point of use.
A predefined conversion might do something that is not the programmer's intent.
This makes me think that you don't object to explicitly written (thus not "predefined") and implicitly applied conversions.
Quote from Dave Voorhis on October 13, 2019, 5:53 pm
Quote from johnwcowan on October 13, 2019, 5:40 pm
Quote from Dave Voorhis on October 13, 2019, 4:48 pm
Implicit conversion of any kind is almost invariably bad.
Really? Given int i; double d; long l; double_complex c in some C/Java-type language, you would insist on writing d + (double)i and l - (long)i and c = double_complex((double) i, 0.0) instead of simply d + i and l - i and c = i? You surprise me.
Rel (and probably Tutorial D, though as I recall it's unspecified) requires the equivalent to d + (double)i and l - (long)i and c = double_complex((double)i, 0.0) and I've grown to love it. It's explicit, it shows me exactly where type conversion is happening, and it makes me think about why I'm mixing types and/or whether I should be mixing types. Slightly greater verbosity for greater clarity seems like a good thing.
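Rendered in Java for concreteness, the explicit style looks like this even where Java itself would happily widen implicitly (double_complex is omitted since Java has no complex type):
public class ExplicitCasts {
    public static void main(String[] args) {
        int i = 3;
        double d = 2.5;
        long l = 10L;
        // Explicit casts at every mixed-type use, in the spirit of the
        // Rel behaviour described above, even though Java would perform
        // these widenings implicitly.
        double sum = d + (double) i;
        long diff = l - (long) i;
        System.out.println(sum + " " + diff);
    }
}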
Quote from johnwcowan on October 13, 2019, 5:40 pm
Quote from johnwcowan on October 13, 2019, 3:19 pm
What difference does it make if the system predefines the conversion or the user defines it? I don't follow that.
An explicit conversion expresses the programmer's intent.
I'm not sure we mean the same thing by explicit here. Is it explicit if you explicitly define the conversion function but don't explicitly mention it at the point of use, or is that still considered implicit conversion? In C++ every single-argument constructor is potentially usable for implicit conversion; in Scala you have to mark them; in Go (with the exceptions mentioned) all conversion functions, even the simplest ones like int32→int64, must be mentioned at the point of use.
I like Go's approach. I prefer that all conversions be explicitly defined -- though the system may provide some for you, like Tutorial D's CAST_AS_CHAR(INT) and so forth -- and explicitly used.
Quote from johnwcowan on October 13, 2019, 5:40 pm
Quote from Dave Voorhis on October 13, 2019, 4:48 pm
A predefined conversion might do something that is not the programmer's intent.
This makes me think that you don't object to explicitly written (thus not "predefined") and implicitly applied conversions.
Sorry, I should have been clearer. I object to implicitly-applied conversions.
Quote from johnwcowan on October 13, 2019, 6:52 pm
Quote from Dave Voorhis on October 13, 2019, 5:53 pm
Sorry, I should have been clearer. I object to implicitly-applied conversions.
Okay, clear enough. In languages with multiple sizes/precisions of numbers, though, I think Go's treatment of numeric literals is good: they pick up their type from the context, so both var i int32 = 30 and var j int64 = 30 and even var k float64 = 30 are all valid. Otherwise you have to have separate literal markers for eight kinds of integers and two kinds of floats.
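For contrast, Java -- the language used elsewhere in this thread -- takes the literal-marker route for some of its numeric types:
public class LiteralMarkers {
    public static void main(String[] args) {
        int i = 30;
        long j = 30L;        // 'L' suffix; required once a value exceeds int range
        float f = 30.0f;     // 'f' suffix; 30.0 alone would be a double literal
        double d = 30;       // int literal implicitly widened to double
        System.out.println(i + " " + j + " " + f + " " + d);
    }
}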
Quote from Dave Voorhis on October 13, 2019, 7:42 pm
Quote from johnwcowan on October 13, 2019, 6:52 pm
Quote from Dave Voorhis on October 13, 2019, 5:53 pm
Sorry, I should have been clearer. I object to implicitly-applied conversions.
Okay, clear enough. In languages with multiple sizes/precisions of numbers, though, I think Go's treatment of numeric literals is good: they pick up their type from the context, so both var i int32 = 30 and var j int64 = 30 and even var k float64 = 30 are all valid. Otherwise you have to have separate literal markers for eight kinds of integers and two kinds of floats.
That seems reasonable, as there's nothing hidden.
Though if the Go designers had instead decided to provide literal markers for eight kinds of integers and two kinds of floats, that would be fine.
Quote from dandl on October 13, 2019, 11:49 pm
Quote from johnwcowan on October 13, 2019, 3:19 pm
Quote from Dave Voorhis on October 13, 2019, 11:22 am
Per my recollection, that's an essay question, though if I recall correctly, Date (at least) felt that it should be allowable to compare, say, INT to CHAR.
But given an expression like 3 = "blah" I'd argue that it should obviously and unconditionally be a static type mismatch under the IM, despite INT and CHAR being subtypes of ALPHA. In Rel, it throws an error.
Static typing is orthogonal to implicit conversion.
Not entirely. Both explicit and implicit conversions may be applied by the compiler before or during evaluating a type and performing static checking. If ints are implicitly widened then it is never a static type error to compare them; if they're not, it might be.
Ever since about 1961, Fortran has combined static typing with a number of implicit conversions (int to float, for example), and most statically typed languages have followed its lead.
Fortran first appeared in 1957. Are you saying that it did something different for the first 4 years?
I don't recall any type declarations in Fortran II/IV. Integers and reals were implicitly declared by usage in context and the first letter of the name. The only type conversions I recall were (a) int to real as needed (b) to match the destination in assignment. I don't think the Fortran type system (or lack of it) can be blamed for what others did.
Yes, the vast majority of languages permit implicit arithmetic widening on the grounds that it's relatively safe and highly convenient. Many languages convert during assignment. I don't think you can blame Fortran for what many language designers thought of as just common sense.
C++ has funky and difficult rules for deciding when implicit conversion is available, and so does Algol 68, but they are funky and difficult languages.
Is funky supposed to be good or bad? C++ has an extraordinary range of choices in its type system and a correspondingly complex set of rules, but mostly quite straightforward once you understand the philosophy. I really miss the sophisticated value types, type aliases and templates in some of the work I do. OTOH I don't miss Algol 68 at all.
However, Java allows you to do value comparisons using the equals method. The comparison new Integer(3).equals(new String("blah")) is statically valid and returns false. The comparison new Integer(3).equals(new Integer(3)) returns true.
The vast majority of non-foundational Java classes, however, immediately check whether the two arguments of equals have the exact same type and return false if they don't. The general appearance of a modern Java equals method of class Foo is this:
<horrible code omitted>
And you don't see this as 'funky and difficult'?