Life after D with Safe Java
Quote from Erwin on April 22, 2021, 11:15 am
Quote from tobega on April 22, 2021, 8:17 am
Quote from Erwin on April 21, 2021, 10:38 pm
Quote from tobega on April 21, 2021, 3:27 pm
I keep waiting for the revolutionary vision of how we should program in a safer-shorter-higher way ...
One thing that always comes to mind when I see such proposals is that we might have a language that, instead of forcing the developer into
am_price_net := int(am_price_gross * pct_reduc * 100 + 0.5) / 100;
allows one to write
COMPUTE AM-PRICE-NET ROUNDED = AM-PRICE-GROSS * PCT-REDUC;
But for some strange reason people always think I'm being facetious when I say that.
Except that it's not necessarily an equivalent statement, since it seems the default ROUNDED mode is NEAREST-TOWARD-ZERO.
And if someone sets the default ROUNDED mode to NEAREST-EVEN, is that when the second example becomes really correct, or when it goes wrong? So is the feature a benefit or a bug-source?
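For concreteness, here is a minimal Java sketch, with made-up figures, of how the choice of rounding mode changes the computed net price (HALF_UP, HALF_EVEN and HALF_DOWN standing in for NEAREST-AWAY-FROM-ZERO, NEAREST-EVEN and NEAREST-TOWARD-ZERO):
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundingDemo {
    public static void main(String[] args) {
        // Made-up values chosen so the product lands exactly half-way
        // between two cents: 10.25 * 0.50 = 5.125
        BigDecimal gross = new BigDecimal("10.25");
        BigDecimal reduc = new BigDecimal("0.50");
        BigDecimal product = gross.multiply(reduc);

        // int(x * 100 + 0.5) / 100 behaves like HALF_UP for positive amounts
        System.out.println(product.setScale(2, RoundingMode.HALF_UP));   // 5.13
        // NEAREST-EVEN (banker's rounding) corresponds to HALF_EVEN
        System.out.println(product.setScale(2, RoundingMode.HALF_EVEN)); // 5.12
        // NEAREST-TOWARD-ZERO corresponds to HALF_DOWN
        System.out.println(product.setScale(2, RoundingMode.HALF_DOWN)); // 5.12
    }
}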
"Settings" and how to set them and how [not] to define the scope of source lines that the setting, once set, applies to (and hell, even whether or not to have them at all in the first place), is a different matter and worth a discussion in its own right.
My point was about how languages (even those that are about to celebrate their 70th birthday) can mark that spot of "just declare _what_ needs to be done and leave the rest to the compiler". Which seems to me like what this whole valueless thread has been about.
Quote from AntC on April 22, 2021, 11:50 am
Quote from Dave Voorhis on April 22, 2021, 8:29 am
Quote from dandl on April 22, 2021, 6:28 am
Quote from Dave Voorhis on April 21, 2021, 11:36 am
Quote from dandl on April 21, 2021, 10:56 am
Quote from Dave Voorhis on April 21, 2021, 8:22 am
Quote from dandl on April 21, 2021, 12:44 am
I never said they were. I want safe unions, not C unions.
There are languages out there which claim to not need annotations: that topic will have to wait until I can take a look.
"There are languages" is the sort of unhelpful talk that's making these threads valueless. A language might deliver one feature precisely because it restricts some other feature.
I claim Haskell does not need type annotations in monomorphic code.
There might well be, but are they statically typed?
Yes, Haskell is rigorously statically typed.
It includes safe unions -- that is, tagged algebraic datatypes.
The important thing is that I would like to retain type annotations for readability.
In Haskell you can optionally put a type annotation on any variable/function/introduced name, or on any expression or sub-expression. The effect is that the compiler carries out its type inference as usual (no annotation needed), then compares the inferred type to the annotation and complains if they differ.
There are exceptions: if you want your introduced name to be less polymorphic than would be inferred, you need to declare that, and the compiler accepts and validates it. If you want your introduced name to be overloaded, you put a non-omittable declaration of its most general type, plus overrides for each type at which it's overloaded. There's a bunch of restrictions, essentially so that auto-inference (for expressions in general, without annotations) stays tractable.
That's an example of delivering one feature precisely by restricting some other feature. There are plenty of questions on StackOverflow from newbies who don't get why annotations are needed in some places and not others. (And the advice to newbies is: there's no harm in putting extra annotations; do so as machine-checked documentation -- IOW exactly the reasons Dave wants annotations.)
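Java's local-variable type inference gives a much weaker flavour of the same trade-off, but the "annotation as machine-checked documentation" idea carries over: an explicit type is checked against the initializer, while var leaves the compiler to work it out. A minimal sketch (hypothetical names, nobody's proposal):
import java.util.ArrayList;
import java.util.List;

public class AnnotationDemo {
    public static void main(String[] args) {
        // Inferred: the compiler works out ArrayList<String> on its own.
        var inferred = new ArrayList<String>();

        // Annotated: the declared type is checked against the initializer
        // and documents intent (here, that only the List interface is used).
        List<String> annotated = new ArrayList<>();

        // A wrong annotation is rejected at compile time:
        // List<Integer> wrong = new ArrayList<String>();  // does not compile

        inferred.add("hello");
        annotated.add("world");
        System.out.println(inferred + " " + annotated);
    }
}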
But again: You do you. Create what you would like to use.
The type theory needed to support Haskell's inference/checking is horribly abstruse. (In Haskell vintage 1998 it's Hindley-Milner, which I can just about grok. In 2021 it's something called System FCΩ-with-overloading, and/or other variants/decorations on the F. The variants are constantly balanced on a knife-edge of coherence/consistency/termination/decidability vs expressivity.) It's unlikely that, if anybody round here creates what they "would like to use", it'll still balance.
Quote from AntC on April 22, 2021, 9:23 pm
Apparently all programming (languages) are equally wrong.
Quote from tobega on April 23, 2021, 7:28 am
Quote from dandl on April 22, 2021, 7:53 am
Quote from tobega on April 21, 2021, 3:27 pm
Quote from dandl on April 21, 2021, 10:56 am
Quote from Dave Voorhis on April 21, 2021, 8:22 am
Quote from dandl on April 21, 2021, 12:44 am
It wasn't meant to be an analysis. Just pointing out that your bullet points are vague enough to mean anything from cutting-edge future advancements to crude legacy languages. E.g., "getting rid of ... type declarations" could mean type inference per modern languages, or literally no type declarations and "type inference" of a handful of primitive types per vintage BASIC.
I think you're trying to read them backwards, as a list of inclusions rather than a list of exclusions. Assume you start with the latest Java/C#/etc: this is what we try to take out. There is another list of things we might want to add, including getting things back from C++ (meta, unions, type aliases, value types) and elsewhere.
I just mean they're vague, and subject to very broad interpretation.
Please, please, please do not bring back C++-style "meta", unions (if you mean what I think you mean), and type aliases. Java not having these is one of its strengths.
C++ lets you do things (unsafely) that you can't do in Java. I want to be able to do them, but safely.
The meta I want is to solve this problem. I have just spent a couple of hours writing code like this for a large number of named variables:
Console.WriteLine($"version={a.Version}");
b.version = a.Version;
Console.WriteLine($"version={b.version}");
This is debugging code, not production, and I don't care if the macro that does this for me offends your sensibilities. Simple text substitution is enough to save a mountain of typing, and we now have the tech that will allow us to do this safely. I want it.
C unions did two things: structural overlays (which is actually undefined behaviour) and simple algebraic types (along with struct). I want the algebraic types: don't you?
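For what it's worth, modern Java already gets close to safe tagged unions with sealed interfaces plus records and an exhaustiveness-checked switch (pattern matching for switch, Java 21); a minimal sketch with made-up Shape/Circle/Rect types:
// A tagged ("safe") union: a sealed interface with record variants. The
// compiler knows the full set of cases, so the switch below is checked for
// exhaustiveness and needs no default branch.
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

public class Shapes {
    static double area(Shape s) {
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0)));
        System.out.println(area(new Rect(2.0, 3.0)));
    }
}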
C typedefs allow you to type check usage of primitive types. We used them extensively in Powerflex to manage the various compiler differences (such as signed/unsigned, short/long) but they're also really useful to distinguish (say) between an integer used as a count, one used as an index and one used as a length. There is no downside.
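Java has no typedef, so the nearest (admittedly heavier-weight) approximation of that kind of primitive-type distinction is a thin wrapper record; a sketch with made-up Count/Index types:
// Wrapping an int in a record makes "a count" and "an index" distinct types
// at compile time, which is the type-checking benefit claimed for typedef.
record Count(int value) {}
record Index(int value) {}

public class Typedefs {
    static Index last(Count count) {
        return new Index(count.value() - 1);
    }

    public static void main(String[] args) {
        Count n = new Count(10);
        Index i = last(n);
        // last(i);  // does not compile: an Index is not a Count
        System.out.println(i.value());
    }
}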
But I mention that not to get into a debate about what should/shouldn't be included, but to point out that it's almost inevitable that everyone you ask will have a different vision of what should/shouldn't be included in some C++/C#/Java successor. What I think should/shouldn't be there will differ from what you think should/shouldn't be there, and likewise for Ant, Erwin, Tobega, etc., etc.
I think the best you can do here is write something that scratches your own personal itches, and hope that enough other developers share the same itches to appreciate your, er, itch-scratcher.
I have very few specific itches: whatever gives me safer-shorter-higher, meta and fits in will be just fine. But it's totally wrong to think about what to add until you know for sure what you're prepared to give up to get it.
I keep waiting for the revolutionary vision of how we should program in a safer-shorter-higher way, but we're still at a laundry list of incremental changes which we can't all agree are improvements.
I'm still working on the laundry list. See post.
My presumption is that you need to write less code to leave room for getting more done. But I have a couple of targets:
- a data structure that embodies structure at a higher level than the types we know and love. I have a data model of 20 tables, with various key and other constraints, and I would like to check at compile time that the operations I code for it will not violate any constraints.
- Ditto for a graph structure, etc.
- A template that generates an HTML page from an SQL query (the query, not the result set).
- Templates to transform SQL <=> Json <=> XML <=> etc.
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
Quote from Dave Voorhis on April 23, 2021, 8:41 am
Quote from tobega on April 23, 2021, 7:28 am
Quote from dandl on April 22, 2021, 7:53 am
Quote from tobega on April 21, 2021, 3:27 pm
Quote from dandl on April 21, 2021, 10:56 am
Quote from Dave Voorhis on April 21, 2021, 8:22 am
Quote from dandl on April 21, 2021, 12:44 am
It wasn't meant to be an analysis. Just pointing out that your bullet points are vague enough to mean anything from cutting-edge future advancements to crude legacy languages. E.g., "getting rid of ... type declarations" could mean type inference per modern languages, or literally no type declarations and "type inference" of a handful of primitive types per vintage BASIC.
I think you're trying to read them backwards, as a list of inclusions rather than a list of exclusions. Assume you start with the latest Java/C#/etc: this is what we try to take out. There is another list of things we might want to add, including getting things back from C++ (meta, unions, type aliases, value types) and elsewhere.
I just mean they're vague, and subject to very broad interpretation.
Please, please, please do not bring back C++-style "meta", unions (if you mean what I think you mean), and type aliases. Java not having these is one of its strengths.
C++ lets you do things (unsafely) that you can't do in Java. I want to be able to do them, but safely.
The meta I want is to solve this problem. I have just spent a couple of hours writing code like this for a large number of named variables:
Console.WriteLine($"version={a.Version}");
b.version = a.Version;
Console.WriteLine($"version={b.version}");
This is debugging code, not production, and I don't care if the macro that does this for me offends your sensibilities. Simple text substitution is enough to save a mountain of typing, and we now have the tech that will allow us to do this safely. I want it.
C unions did two things: structural overlays (which is actually undefined behaviour) and simple algebraic types (along with struct). I want the algebraic types: don't you?
C typedefs allow you to type check usage of primitive types. We used them extensively in Powerflex to manage the various compiler differences (such as signed/unsigned, short/long) but they're also really useful to distinguish (say) between an integer used as a count, one used as an index and one used as a length. There is no downside.
But I mention that not to get into a debate about what should/shouldn't be included, but to point out that it's almost inevitable that everyone you ask will have a different vision of what should/shouldn't be included in some C++/C#/Java successor. What I think should/shouldn't be there will differ from what you think should/shouldn't be there, and likewise for Ant, Erwin, Tobega, etc., etc.
I think the best you can do here is write something that scratches your own personal itches, and hope that enough other developers share the same itches to appreciate your, er, itch-scratcher.
I have very few specific itches: whatever gives me safer-shorter-higher, meta and fits in will be just fine. But it's totally wrong to think about what to add until you know for sure what you're prepared to give up to get it.
I keep waiting for the revolutionary vision of how we should program in a safer-shorter-higher way, but we're still at a laundry list of incremental changes which we can't all agree are improvements.
I'm still working on the laundry list. See post.
My presumption is that you need to write less code to leave room for getting more done. But I have a couple of targets:
- a data structure that embodies structure at a higher level than the types we know and love. I have a data model of 20 tables, with various key and other constraints, and I would like to check at compile time that the operations I code for it will not violate any constraints.
- Ditto for a graph structure, etc.
- A template that generates an HTML page from an SQL query (the query, not the result set).
- Templates to transform SQL <=> Json <=> XML <=> etc.
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
I'd guess at any given moment there are probably 1,000 similar efforts, all roughly equivalent, that we'll never hear about, ever.
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
I'll just note that from #66 onward the list gets weird. Perhaps they should have limited it to general-purpose programming languages.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
I note as well that Excel is #78 (second from last on the list), but see https://www.cassotis.com/insights/88-of-the-excel-spreadsheets-have-errors.
Quote from dandl on April 23, 2021, 9:09 am
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
Sorry, of course I should look at Ada to see how they solved these problems, just not for much else. I'm not looking for solutions so much as pain points.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
Now that is interesting. Thanks!
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
Which rather suggests that the metric may be the problem. I've known about FP for decades, but I've never really figured out if they're useful/usable. This ref didn't help much.
My stated goal is shorter, because my presumption is that doing the same thing in fewer lines of code is always more productive, regardless of whether the FP metric agrees.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
That's why my stated goal is safer. Bugs are easier to fix if you find them earlier: my assumption is that finding errors at compile time beats anything else hands down.
But comparing existing languages isn't the point. I'm trying to find pain points in current languages that are bad enough to trigger new ones.
Quote from dandl on April 23, 2021, 9:16 am
Quote from AntC on April 22, 2021, 11:50 am
Quote from Dave Voorhis on April 22, 2021, 8:29 am
Quote from dandl on April 22, 2021, 6:28 am
Quote from Dave Voorhis on April 21, 2021, 11:36 am
Quote from dandl on April 21, 2021, 10:56 am
Quote from Dave Voorhis on April 21, 2021, 8:22 am
Quote from dandl on April 21, 2021, 12:44 am
I never said they were. I want safe unions, not C unions.
There are languages out there which claim to not need annotations: that topic will have to wait until I can take a look.
"There are languages" is the sort of unhelpful talk that's making these threads valueless. A language might deliver one feature precisely because it restricts some other feature.
I claim Haskell does not need type annotations in monomorphic code.
There might well be, but are they statically typed?
Yes, Haskell is rigorously statically typed.
It includes safe unions -- that is, tagged algebraic datatypes.
The important thing is that I would like to retain type annotations for readability.
In Haskell you can optionally put a type annotation on any variable/function/introduced name, or on any expression or sub-expression. The effect is that the compiler carries out its type inference as usual (no annotation needed), then compares the inferred type to the annotation and complains if they differ.
There are exceptions: if you want your introduced name to be less polymorphic than would be inferred, you need to declare that, and the compiler accepts and validates it. If you want your introduced name to be overloaded, you put a non-omittable declaration of its most general type, plus overrides for each type at which it's overloaded. There's a bunch of restrictions, essentially so that auto-inference (for expressions in general, without annotations) stays tractable.
That's an example of delivering one feature precisely by restricting some other feature. There are plenty of questions on StackOverflow from newbies who don't get why annotations are needed in some places and not others. (And the advice to newbies is: there's no harm in putting extra annotations; do so as machine-checked documentation -- IOW exactly the reasons Dave wants annotations.)
Haskell is brilliant. It was my son's first language at Uni, and it was a fascinating journey for both of us. It's just that it doesn't seem to be the solution to anything. Everyone knows it's out there, and some people use it, but it has a strongly academic flavour and the learning curve is nearly vertical. Adoption rates are low (but see https://earthly.dev/blog/brown-green-language/).
So Haskell tells me it can be done, but it doesn't tell me whether I want to do it that way, or at all, or if I'm willing to pay the price.
But again: You do you. Create what you would like to use.
The type theory needed to support Haskell's inference/checking is horribly abstruse. (In Haskell vintage 1998 it's Hindley-Milner, which I can just about grok. In 2021 it's something called System FCΩ-with-overloading, and/or other variants/decorations on the F. The variants are constantly balanced on a knife-edge of coherence/consistency/termination/decidability vs expressivity.) It's unlikely that, if anybody round here creates what they "would like to use", it'll still balance.
Like I said.
Quote from Dave Voorhis on April 23, 2021, 9:22 am
Quote from dandl on April 23, 2021, 9:09 am
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
Sorry, of course I should look at Ada to see how they solved these problems, just not for much else. I'm not looking for solutions so much as pain points.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
Now that is interesting. Thanks!
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
Which rather suggests that the metric may be the problem. I've known about FP for decades, but I've never really figured out if they're useful/usable. This ref didn't help much.
My stated goal is shorter, because my presumption is that doing the same thing in fewer lines of code is always more productive, regardless of whether the FP metric agrees.
It's not quite that simple, or APL would be the most productive language, hands down.
It isn't, and it's why APL was coined a "write-only language" back in the day. Shooting strictly for "shorter" makes code harder to read, and that harms productivity.
There's a balance to be struck between terseness, brevity, readability, and verbosity. Likewise for composability, expressivity, intuitiveness, ease of learning, or simply simplicity.
And so on.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
That's why my stated goal is safer. Bugs are easier to fix if you find them earlier: my assumption is that finding errors at compile time beats anything else hands down.
It does. See Kotlin, Scala, Haskell, Prolog...
Quote from dandl on April 23, 2021, 10:31 am
Quote from Dave Voorhis on April 23, 2021, 9:22 am
Quote from dandl on April 23, 2021, 9:09 am
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
Sorry, of course I should look at Ada to see how they solved these problems, just not for much else. I'm not looking for solutions so much as pain points.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
Now that is interesting. Thanks!
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
Which rather suggests that the metric may be the problem. I've known about FP for decades, but I've never really figured out if they're useful/usable. This ref didn't help much.
My stated goal is shorter, because my presumption is that doing the same thing in fewer lines of code is always more productive, regardless of whether the FP metric agrees.
It's not quite that simple, or APL would be the most productive language, hands down.
It isn't, and it's why APL was coined a "read-only language" back in the day. Shooting strictly for "shorter" makes code harder to read, and that harms productivity.
APL occupies a unique position because so many of its operators are single characters. If we normalise the language by restricting operators to alphabetic words, using function notation with parentheses, and adding whitespace, does it still stand out? My impression is no. Logically, an APL operator chain is not so different from Linq operations on a collection.
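For instance, a minimal Java sketch (made-up data) of the same chain-of-whole-collection-operators style, written with streams rather than APL glyphs or Linq:
import java.util.List;

public class ChainDemo {
    public static void main(String[] args) {
        // Sum of the squares of the positive elements, written as one
        // pipeline of whole-collection operations rather than a loop.
        List<Integer> v = List.of(3, -1, 4, -1, 5);
        int sumOfSquaresOfPositives = v.stream()
                .filter(x -> x > 0)
                .mapToInt(x -> x * x)
                .sum();
        System.out.println(sumOfSquaresOfPositives); // 9 + 16 + 25 = 50
    }
}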
As it happens, read only code is the dominant strain. See https://earthly.dev/blog/brown-green-language/. Most languages are designed to be written rather than read, and most programmers would prefer a rewrite over maintenance. APL is superbly suited to the problem of writing new code to solve a small problem, but fails badly in maintenance. My contention is that less code also means less to read, provided it's done right.
I'm not a fan of 'agile' but I'm a big fan of XP, which predated it. The idea in XP is 'the simplest thing that can possibly work' followed by 'refactor without mercy'. A good language would be one you can write fast to get it working, and then refactor safely to make it readable and maintainable. Writing to satisfy future readers gets in the way of writing to get things to work.
There's a balance to be struck between terseness, brevity, readability, and verbosity. Likewise for composability, expressivity, intuitiveness, ease of learning, or simply simplicity.
Of course. Current languages are out of balance in being verbose, unreadable and complex. We can do better.
And so on.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
That's why my stated goal is safer. Bugs are easier to fix if you find them earlier: my assumption is that finding errors at compile time beats anything else hands down.
It does. See Kotlin, Scala, Haskell, Prolog...
But you refuse to allow the compiler to do meta-programming, and prefer to have external code generation and runtime reflection.
Quote from Dave Voorhis on April 23, 2021, 10:48 am
Quote from dandl on April 23, 2021, 10:31 am
Quote from Dave Voorhis on April 23, 2021, 9:22 am
Quote from dandl on April 23, 2021, 9:09 am
So far, based on your requirements, I think I would recommend you to take a look at Ada. I think a lot would be won if we all did, but it's not exactly a new revolutionary thing.
I don't think so.
Just saying, Ada has your complete list of C features in a safe way.
Sorry, of course I should look at Ada to see how they solved these problems, just not for much else. I'm not looking for solutions so much as pain points.
BTW, if you're interested in what someone else is busy developing as a "better Java" you can look at ecstasylang
Now that is interesting. Thanks!
Shouldn't you really try to find some metrics to support which languages to look at, though? E.g. productivity: in Table 16 you can see function points per month for various languages. Fascinatingly, Visual Basic seems to be 160% as productive as C#, so according to that metric C# was a complete fiasco.
Which rather suggests that the metric may be the problem. I've known about FP for decades, but I've never really figured out if they're useful/usable. This ref didn't help much.
My stated goal is shorter, because my presumption is that doing the same thing in fewer lines of code is always more productive, regardless of whether the FP metric agrees.
It's not quite that simple, or APL would be the most productive language, hands down.
It isn't, and it's why APL was coined a "read-only language" back in the day. Shooting strictly for "shorter" makes code harder to read, and that harms productivity.
APL occupies a unique position because so many of its operators are single characters. If we normalise the language by restricting operators to alphabetic words, using function notation with parentheses, and adding whitespace, does it still stand out? My impression is no. Logically, an APL operator chain is not so different from Linq operations on a collection.
As it happens, read only code is the dominant strain. See https://earthly.dev/blog/brown-green-language/. Most languages are designed to be written rather than read, and most programmers would prefer a rewrite over maintenance. APL is superbly suited to the problem of writing new code to solve a small problem, but fails badly in maintenance. My contention is that less code also means less to read, provided it's done right.
I'm not a fan of 'agile' but I'm a big fan of XP, which predated it. The idea in XP is 'the simplest thing that can possibly work' followed by 'refactor without mercy'. A good language would be one you can write fast to get it working, and then refactor safely to make it readable and maintainable. Writing to satisfy future readers gets in the way of writing to get things to work.
A common myth is that writing clearly gets in the way of writing productively.
It doesn't. Writing clear code improves productivity, because the first reader is the author.
The "simplest thing that can possibly work" refers to algorithmic and structural simplicity. It doesn't mean the fewest keystrokes, single-letter variable names, monolithic functions, and unreadably-chained eye-watering expressions.
There's a balance to be struck between terseness, brevity, readability, and verbosity. Likewise for composability, expressivity, intuitiveness, ease of learning, or simply simplicity.
Of course. Current languages are out of balance in being verbose, unreadable and complex. We can do better.
And so on.
Or how about defect rate? I read in various sources that for every 100 bugs in a C program you would have 50 bugs in the equivalent Java program and only 4 bugs in the equivalent Ada program.
That's why my stated goal is safer. Bugs are easier to fix if you find them earlier: my assumption is that finding errors at compile time beats anything else hands down.
It does. See Kotlin, Scala, Haskell, Prolog...
But you refuse to allow the compiler to do meta-programming, and prefer to have external code generation and runtime reflection.
I refuse to embrace macros, and metaprogramming almost invariably reflects fundamental language limitations. Fix those, and you don't need it.
I don't prefer code generation or runtime reflection. I endeavour to avoid both. Occasionally, I accept that either or both are preferable to the alternatives, which are usually manually writing code or macros.
Macros are an abomination.
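For comparison, a minimal Java sketch (with a hypothetical Config class) of the runtime-reflection route to the repetitive field-dumping problem described earlier in the thread:
import java.lang.reflect.Field;

public class DumpFields {
    // A hypothetical class standing in for the "large number of named
    // variables" being logged and copied by hand in the earlier example.
    static class Config {
        int version = 3;
        String name = "demo";
    }

    // Print every field of an object by reflection instead of one
    // hand-written output line per field.
    static void dump(Object o) throws IllegalAccessException {
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            System.out.println(f.getName() + "=" + f.get(o));
        }
    }

    public static void main(String[] args) throws IllegalAccessException {
        dump(new Config());
    }
}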