The Forum for Discussion about The Third Manifesto and Related Matters


What do set-based operations buy us?

Quote from dandl on February 20, 2021, 11:54 pm
Quote from tobega on February 20, 2021, 12:51 pm
Quote from dandl on February 20, 2021, 10:22 am

I have always been firmly in the static typing camp, but I have to acknowledge that the research doesn't necessarily support a very strong stance. I mentioned how Uncle Bob has swung over; the basis of the argument seems to be that TDD mandates about the same number of tests whatever your type system, but you avoid having to do all the type declaration stuff in a dynamic system. (On trying to look up solid arguments I found this, which was somewhat interesting: https://labs.ig.com/static-typing-promise )

You still haven't quoted any research of any authority. What I see here is bunk.

The problem with this argument (apart from the close resemblance to a religious war) is that it lumps every kind of programming together. It's like comparing apples to oranges, battleships and the colour yellow. You just can't.

Speaking for myself, I like to have a scripting language for massaging text files or iterating over files. For this purpose I like Ruby, and I hate C#.

Then I like to have a language at the level of bits and bytes, to diddle with ports and memory and use protocols. I like C++; Python is a dog.

Then I write a few thousand lines of code for a language compiler and VM, hack it until it works, refactor until it's right. I want a static type compiler, modules and interfaces, assertions and tests (but not TDD). C# fits the bill. And so on.

When you show me research that reflects that, you have my attention.

You have a point about different languages being suited for different scenarios.

As regards your asking for solid research that supports my claim that the advantages of static typing are a lot less significant than what we generally want to believe, that is understandable and I suppose quite reasonable. Except I have no interest in proving anything to you and it really is your choice whether you take me seriously or not.

I am, however, interested in learning things, so your personal preferences listed above are of some interest given that they reflect your experience. And if you should want to make a claim that static typing is vastly superior in some sense, I would be happy to see what research you can provide to support that claim. It should be easy to find if it were true, certainly a lot easier than for me to provide the dozens of papers I've come across over dozens of years that fail to prove a huge advantage of static typing.

The claim I would make is that for substantial pieces of software under active use, development and change over a prolonged period of time, on platforms that permit it, stronger typing is always the better choice, usually by a good margin. The margin is at its greatest for 'system' software such as compilers and major utilities, and for 'product' software such as games, ERP, POS and the like. The test of that claim would be to find pairs of products, comparable in most respects, at least 100KLOC, but one static and one dynamic. Comparing them would mean tracking the rate of opening and closing issues, new features versus bugs, and the effort involved.

I only know one big dynamic product: VS Code written in JS, but I assume there are others. I don't think there are very many, and that says something.

My impression is that the main advocates of dynamically typed languages for (large-scale, particularly) application development are amateurs, students, academics, junior developers, etc., who have never built and maintained (large) applications. That notable large Web sites are based on PHP, Python, Node/JavaScript etc., suggests other factors at work -- legacy inertia being a big one, I'd guess -- and they run in spite of dynamically typed languages, not because of them.

There's a common view that dynamically typed languages are much faster to code than statically typed languages, which is what you need to get your startup off the ground and keep it flying. But that isn't a reflection of some fundamental truth; it simply demonstrates that startups are often coded by junior developers who don't know any better.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on February 21, 2021, 10:02 am
Quote from dandl on February 20, 2021, 11:54 pm
Quote from tobega on February 20, 2021, 12:51 pm
Quote from dandl on February 20, 2021, 10:22 am

I have always been firmly in the static typing camp, but I have to acknowledge that the research doesn't necessarily support a very strong stance. I mentioned how Uncle Bob has swung over; the basis of the argument seems to be that TDD mandates about the same number of tests whatever your type system, but you avoid having to do all the type declaration stuff in a dynamic system. (On trying to look up solid arguments I found this, which was somewhat interesting: https://labs.ig.com/static-typing-promise )


The claim I would make is that for substantial pieces of software under active use, development and change over a prolonged period of time, on platforms that permit it, stronger typing is always the better choice, usually by a good margin. The margin is at its greatest for 'system' software such as compilers and major utilities, and for 'product' software such as games, ERP, POS and the like. The test of that claim would be to find pairs of products, comparable in most respects, at least 100KLOC, but one static and one dynamic. Comparing them would mean tracking the rate of opening and closing issues, new features versus bugs, and the effort involved.

I only know one big dynamic product: VS Code written in JS, but I assume there are others. I don't think there are very many, and that says something.

My impression is that the main advocates of dynamically typed languages for (large-scale, particularly) application development are amateurs, students, academics, junior developers, etc., who have never built and maintained (large) applications. That notable large Web sites are based on PHP, Python, Node/JavaScript etc., suggests other factors at work -- legacy inertia being a big one, I'd guess -- and they run in spite of dynamically typed languages, not because of them.

There's a common view that dynamically typed languages are much faster to code than statically typed languages, which is what you need to get your startup off the ground and keep it flying. But that isn't a reflection of some fundamental truth; it simply demonstrates that startups are often coded by junior developers who don't know any better.

Are these database-centric applications? Are the columns and tables given types and keys and foreign keys? Or is every application field left as String -- with of course every table having a 64-bit auto-allocated Id column as surrogate key?

I note that Haskellers think Haskell is an excellent scripting language, because even though you don't need to declare types for your variables, the compiler will infer them anyway. And then check your usages. (It's the more advanced polymorphism and overloading that need type decls, but you typically don't do that in scripting.)
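For instance, a throwaway script like this -- a minimal sketch, assuming GHC and a hypothetical input.txt -- declares no types on its locals, yet every usage is inferred and checked at compile time:

    main :: IO ()
    main = do
      contents <- readFile "input.txt"   -- contents :: String (inferred)
      let ws = words contents            -- ws :: [String] (inferred)
      print (length ws)                  -- length ws :: Int (inferred)
      -- putStrLn (length ws)            -- would be rejected: Int isn't a String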

If these allegedly 'dynamic' applications don't use a database or don't manipulate structured data, I'm not seeing that as any sort of evidence round here.

Quote from AntC on February 21, 2021, 10:18 am
Quote from Dave Voorhis on February 21, 2021, 10:02 am
Quote from dandl on February 20, 2021, 11:54 pm
Quote from tobega on February 20, 2021, 12:51 pm
Quote from dandl on February 20, 2021, 10:22 am

I have always been firmly in the static typing camp, but I have to acknowledge that the research doesn't necessarily support a very strong stance. I mentioned how Uncle Bob has swung over; the basis of the argument seems to be that TDD mandates about the same number of tests whatever your type system, but you avoid having to do all the type declaration stuff in a dynamic system. (On trying to look up solid arguments I found this, which was somewhat interesting: https://labs.ig.com/static-typing-promise )


The claim I would make is that for substantial pieces of software under active use, development and change over a prolonged period of time, on platforms that permit it, stronger typing is always the better choice, usually by a good margin. The margin is at its greatest for 'system' software such as compilers and major utilities, and for 'product' software such as games, ERP, POS and the like. The test of that claim would be to find pairs of products, comparable in most respects, at least 100KLOC, but one static and one dynamic. Comparing them would mean tracking the rate of opening and closing issues, new features versus bugs, and the effort involved.

I only know one big dynamic product: VS Code written in JS, but I assume there are others. I don't think there are very many, and that says something.

My impression is that the main advocates of dynamically typed languages for (large-scale, particularly) application development are amateurs, students, academics, junior developers, etc., who have never built and maintained (large) applications. That notable large Web sites are based on PHP, Python, Node/JavaScript etc., suggests other factors at work -- legacy inertia being a big one, I'd guess -- and they run in spite of dynamically typed languages, not because of them.

There's a common view that dynamically typed languages are much faster to code than statically typed languages, which is what you need to get your startup off the ground and keep it flying. But that isn't a reflection of some fundamental truth; it simply demonstrates that startups are often coded by junior developers who don't know any better.

Are these database-centric applications? Are the columns and tables given types and keys and foreign keys? Or is every application field left as String -- with of course every table having a 64-bit auto-allocated Id column as surrogate key?

Most applications are database-centric, but the same folks who argue that only dynamic typing is fast enough to get products to market on time are the ones choosing MongoDB because "relational" requires unacceptable schema definitions, column types, etc.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org

The claim I would make is that for substantial pieces of software under active use, development and change over a prolonged period of time, on platforms that permit it, stronger typing is always the better choice, usually by a good margin. The margin is at its greatest for 'system' software such as compilers and major utilities, and for 'product' software such as games, ERP, POS and the like. The test of that claim would be to find pairs of products, comparable in most respects, at least 100KLOC, but one static and one dynamic. Comparing them would mean tracking the rate of opening and closing issues, new features versus bugs, and the effort involved.

I only know one big dynamic product: VS Code written in JS, but I assume there are others. I don't think there are very many, and that says something.

My impression is that the main advocates of dynamically typed languages for (large-scale, particularly) application development are amateurs, students, academics, junior developers, etc., who have never built and maintained (large) applications. That notable large Web sites are based on PHP, Python, Node/JavaScript etc., suggests other factors at work -- legacy inertia being a big one, I'd guess -- and they run in spite of dynamically typed languages, not because of them.

I think the Web verges on being a special case. PHP for a long time was the only native hosting language that was cheap/free and 'good enough', despite being an abomination (then). Ruby is nice, but it's only prominent because of Rails. And remember ColdFusion?

And JS is the only language that runs in the browser sandbox (although Java and MS Silverlight tried to get a foot in). Node.js is the most impressive way to fill your application with foreign code that you cannot know or trust, but really, it would be nowhere but for the desperate need to write UI code for the browser. Overall, the web can drive a lot of dubious decisions.

But if the heart of your ginormous application is complex domain logic with lots of data and the UI is but a small part, I can't imagine that an early decision to write it all in JS or any dynamic language is going to turn out well.

There's a common view that dynamically typed languages are much faster to code than statically typed languages, which is what you need to get your startup off the ground and keep it flying. But that isn't a reflection of some fundamental truth; it simply demonstrates that startups are often coded by junior developers who don't know any better.

I can assure you that investors are not keen on the idea that the first use of funds is to rewrite the crap prototype. And the technical debt goes on piling up...

Andl - A New Database Language - andl.org
Quote from Erwin on February 21, 2021, 11:15 pm
Quote from tobega on February 21, 2021, 7:52 am

if we were able to specify the way the data relates to each other, as part of the type system

I want to comment on that one because imo it betrays a fundamental lack of understanding of how we ever arrived at making these digital computers do for us what they do.

Computers compute.  Nothing more and nothing less.  They carry out operations of some algebra and to have an algebra in the first place requires to have a "system" of types that the algebra is defined over.  No meaning, no interpretation, just the pure simple fact that 1+1=2, regardless of whether it's humans or furlongs or grains of sand in the desert.

Interpretation and "meaning" and concepts such as "data relating to other data" are ***tacked onto that system of algebraic computation***. In the "constructionist view" of how things are built, that means the algebraic system must exist before any question of "data relating to other data" can be answered; making "data relating to other data" a problem to be solved by the type system therefore leads to circular dependencies in whatever it is that gets set up this way.

Or maybe I'm just too deeply entrenched in my "constructionist view".

According to TTM a type is a set of values. Types may have features to allow types to be derived from other types, but it's still just values all the way down.

Functions compute values (of some type) from other values (of some type). That's computation, not types (TTM calls such functions operators).

'Relates to' is not a value. It might be a function, if it can be computed. I think we have an algebra that does that... you might want to try it.
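A rough sketch of that distinction, in Haskell for brevity (the relation and its contents are hypothetical): 'relates to' becomes an ordinary computed value -- a set of pairs -- and asking whether x relates to y is just a function applied to it.

    import qualified Data.Set as Set

    -- A binary relation as a value: a set of pairs.
    type Relation a b = Set.Set (a, b)

    worksIn :: Relation String String          -- hypothetical data
    worksIn = Set.fromList [("Alice", "Sales"), ("Bob", "Ops")]

    -- 'relates to' as a computable function over that value.
    relatesTo :: (Ord a, Ord b) => Relation a b -> a -> b -> Bool
    relatesTo r x y = (x, y) `Set.member` r

    main :: IO ()
    main = print (relatesTo worksIn "Alice" "Sales")   -- True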

Andl - A New Database Language - andl.org
Quote from Erwin on February 21, 2021, 11:15 pm
Quote from tobega on February 21, 2021, 7:52 am

if we were able to specify the way the data relates to each other, as part of the type system

I want to comment on that one because imo it betrays a fundamental lack of understanding of how we ever arrived at making these digital computers do for us what they do.

Computers compute.  Nothing more and nothing less.  They carry out operations of some algebra and to have an algebra in the first place requires to have a "system" of types that the algebra is defined over.  No meaning, no interpretation, just the pure simple fact that 1+1=2, regardless of whether it's humans or furlongs or grains of sand in the desert.

Interpretation and "meaning" and concepts such as "data relating to other data" are ***tacked onto that system of algebraic computation***. In the "constructionist view" of how things are built, that means the algebraic system must exist before any question of "data relating to other data" can be answered; making "data relating to other data" a problem to be solved by the type system therefore leads to circular dependencies in whatever it is that gets set up this way.

Or maybe I'm just too deeply entrenched in my "constructionist view".

A type system is something that sits outside the calculation/program itself, in an attempt to prove, or partially prove, that the program is correct.

Adding plain numbers, 1 + 1 = 2; with types attached, 1 m + 1 m = 2 m, but 1 m + 1 s = what? And dividing, 1 m / 1 s = 1 m/s, another type derived from the first two. This is types in action.
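To make that concrete, a minimal sketch in Haskell, with newtype wrappers standing in for a real units-of-measure library:

    -- Units as types: the compiler accepts 1 m + 1 m and rejects 1 m + 1 s.
    newtype Metres  = Metres  Double deriving Show
    newtype Seconds = Seconds Double deriving Show
    newtype MetresPerSecond = MetresPerSecond Double deriving Show

    addM :: Metres -> Metres -> Metres
    addM (Metres a) (Metres b) = Metres (a + b)

    -- Division yields a third type, derived from the first two.
    speed :: Metres -> Seconds -> MetresPerSecond
    speed (Metres d) (Seconds t) = MetresPerSecond (d / t)

    main :: IO ()
    main = do
      print (addM (Metres 1) (Metres 1))     -- Metres 2.0
      print (speed (Metres 1) (Seconds 1))   -- MetresPerSecond 1.0
      -- addM (Metres 1) (Seconds 1)         -- rejected at compile time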

Each attribute of a relation can be assigned a type to help us prove that we are doing something sane. Extending this thought, it's at least thinkable that a relation can be viewed as a type in itself, and that such a type can be something more specific than just a collection of attributes. If it were possible to specify such things, they could be used to more completely prove the correctness of a program.
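And a sketch of that second thought, under the same assumptions (the attributes are hypothetical): each attribute gets its own type, and the relation is itself a type -- a set of tuples of exactly that heading.

    import qualified Data.Set as Set

    -- Typed attributes, and the relation as a type of its own.
    data Employee = Employee { empName :: String, dept :: String, salary :: Int }
      deriving (Eq, Ord, Show)

    type Employees = Set.Set Employee

    emps :: Employees
    emps = Set.fromList
      [ Employee "Alice" "Sales" 50000
      , Employee "Bob"   "Ops"   45000 ]

    main :: IO ()
    main = print (Set.size emps)   -- 2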

Then the whole question is whether it is worth doing the extra work of specifying types or not, which gets entangled with personal preferences and, when you bring in economics, a trade-off between the cost of a bug versus the cost of the extra work. Interestingly, I just came across a study that seems to indicate that it takes less time to fix type errors in a dynamic language than it takes to write the type information in the statically typed language, https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf (it also references an experiment comparing Java with Groovy, which is essentially dynamically typed Java, as Dave postulated earlier). It also seems from other sources that type errors generally make up a very small share of all bugs, 1-3%. It must be said that this usually concerns fairly anemic type systems; there are type systems that provide stronger proofs of correctness, but they also require more effort to specify. Of course, if you're writing software where human lives are at stake, I hope you use something like the SPARK version of Ada to formally prove the correctness of your program. Sadly, the vast majority of bugs come from misinterpreted requirements and flawed design, which no formal proof will save you from.

As for what we actually know about software engineering, as opposed to what we believe, this is interesting: https://www.hillelwayne.com/talks/what-we-know-we-dont-know/

IIRC, the only thing in language technology and engineering practice that has been shown to improve software quality is code reviews. And the effects of human factors like stress, overwork and sleep deprivation are orders of magnitude larger than the effects of any technology choice or process.

Quote from tobega on February 22, 2021, 11:59 am
Quote from Erwin on February 21, 2021, 11:15 pm
Quote from tobega on February 21, 2021, 7:52 am

if we were able to specify the way the data relates to each other, as part of the type system

I want to comment on that one because imo it betrays a fundamental lack of understanding of how we ever arrived at making these digital computers do for us what they do.

Computers compute.  Nothing more and nothing less.  They carry out operations of some algebra and to have an algebra in the first place requires to have a "system" of types that the algebra is defined over.  No meaning, no interpretation, just the pure simple fact that 1+1=2, regardless of whether it's humans or furlongs or grains of sand in the desert.

Interpretation and "meaning" and concepts such as "data relating to other data" are ***tacked onto that system of algebraic computation***. In the "constructionist view" of how things are built, that means the algebraic system must exist before any question of "data relating to other data" can be answered; making "data relating to other data" a problem to be solved by the type system therefore leads to circular dependencies in whatever it is that gets set up this way.

Or maybe I'm just too deeply entrenched in my "constructionist view".

A type system is something that sits outside the calculation/program itself, in an attempt to prove, or partially prove, that the program is correct.

Adding plain numbers, 1 + 1 = 2; with types attached, 1 m + 1 m = 2 m, but 1 m + 1 s = what? And dividing, 1 m / 1 s = 1 m/s, another type derived from the first two. This is types in action.

Each attribute of a relation can be assigned a type to help us prove that we are doing something sane. Extending this thought, it's at least thinkable that a relation can be viewed as a type in itself, and that such a type can be something more specific than just a collection of attributes. If it were possible to specify such things, they could be used to more completely prove the correctness of a program.

Then the whole question is whether it is worth doing the extra work of specifying types or not, which gets entangled with personal preferences and, when you bring in economics, a trade-off between the cost of a bug versus the cost of the extra work. Interestingly, I just came across a study that seems to indicate that it takes less time to fix type errors in a dynamic language than it takes to write the type information in the statically typed language, https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf (it also references an experiment comparing Java with Groovy, which is essentially dynamically typed Java, as Dave postulated earlier). It also seems from other sources that type errors generally make up a very small share of all bugs, 1-3%.

Two things:

  1. Type errors aren't really the point. Researchers often seem to think it's all about type safety. It's not, though for many mission-critical applications, if the compiler catches the 1% to 3% of bugs that might otherwise not get caught by unit tests or code reviews or whatever, then that's enough to justify static typing right there -- particularly on systems where any downtime due to run-time bugs (or delays due to development bugfix blockers) may represent significant cost.
  2. The real point is enforced readability. That's more about appropriate use of type annotations vs no type annotations; the former usually being associated with statically typed languages and the latter with dynamically typed languages, though they're notionally orthogonal. What would be more meaningful than comparing fixing type bugs vs writing type annotations would be a comparison between the time it takes to write type annotations vs the time it takes to grok complex code in their absence.
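As a small illustration of the second point, a sketch in Haskell with hypothetical names: the annotated version documents and enforces the shape of the data that a reader of the unannotated version would have to reconstruct in their head.

    import qualified Data.Map as Map

    -- Unannotated: compiles fine, but the reader must infer what goes
    -- in and what comes out:
    --   total k = Map.findWithDefault 0 k . Map.fromListWith (+)

    -- Annotated: one line of type information states the intent -- sum
    -- the amounts recorded against a key -- and the compiler holds
    -- every caller to it.
    total :: String -> [(String, Int)] -> Int
    total k = Map.findWithDefault 0 k . Map.fromListWith (+)

    main :: IO ()
    main = print (total "apples" [("apples", 2), ("pears", 5), ("apples", 3)])   -- 5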

But again, no amount of presented research is likely to shift any individual developer's personal preference, and (I point out somewhat pessimistically) it's not likely to shift management choices until it's so wrapped in vendor marketing efforts as to eliminate any real value.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on February 22, 2021, 1:51 pm
Quote from tobega on February 22, 2021, 11:59 am
Quote from Erwin on February 21, 2021, 11:15 pm
Quote from tobega on February 21, 2021, 7:52 am

if we were able to specify the way the data relates to each other, as part of the type system

I want to comment on that one because imo it betrays a fundamental lack of understanding of how we ever arrived at making these digital computers do for us what they do.

Computers compute.  Nothing more and nothing less.  They carry out operations of some algebra and to have an algebra in the first place requires to have a "system" of types that the algebra is defined over.  No meaning, no interpretation, just the pure simple fact that 1+1=2, regardless of whether it's humans or furlongs or grains of sand in the desert.

Interpretation and "meaning" and concepts such as "data relating to other data" are ***tacked onto that system of algebraic computation***. In the "constructionist view" of how things are built, that means the algebraic system must exist before any question of "data relating to other data" can be answered; making "data relating to other data" a problem to be solved by the type system therefore leads to circular dependencies in whatever it is that gets set up this way.

Or maybe I'm just too deeply entrenched in my "constructionist view".

A type system is something that sits outside the calculation/program itself, in an attempt to prove, or partially prove, that the program is correct.

Adding plain numbers, 1 + 1 = 2; with types attached, 1 m + 1 m = 2 m, but 1 m + 1 s = what? And dividing, 1 m / 1 s = 1 m/s, another type derived from the first two. This is types in action.

Each attribute of a relation can be assigned a type to help us prove that we are doing something sane. Extending this thought, it's at least thinkable that a relation can be viewed as a type in itself, and that such a type can be something more specific than just a collection of attributes. If it were possible to specify such things, they could be used to more completely prove the correctness of a program.

Then the whole question is whether it is worth doing the extra work of specifying types or not, which gets entangled with personal preferences and, when you bring in economics, a trade-off between the cost of a bug versus the cost of the extra work. Interestingly, I just came across a study that seems to indicate that it takes less time to fix type errors in a dynamic language than it takes to write the type information in the statically typed language, https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf (it also references an experiment comparing Java with Groovy, which is essentially dynamically typed Java, as Dave postulated earlier). It also seems from other sources that type errors generally make up a very small share of all bugs, 1-3%.

Two things:

  1. Type errors aren't really the point. Researchers often seem to think it's all about type safety. It's not, though for many mission-critical applications, if the compiler catches the 1% to 3% of bugs that might otherwise not get caught by unit tests or code reviews or whatever, then that's enough to justify static typing right there -- particularly on systems where any downtime due to run-time bugs (or delays due to development bugfix blockers) may represent significant cost.
  2. The real point is enforced readability. That's more about appropriate use of type annotations vs no type annotations; the former usually being associated with statically typed languages and the latter with dynamically typed languages, though they're notionally orthogonal. What would be more meaningful than comparing fixing type bugs vs writing type annotations would be a comparison between the time it takes to write type annotations vs the time it takes to grok complex code in their absence.

But again, no amount of presented research is likely to shift any individual developer's personal preference, and (I point out somewhat pessimistically) it's not likely to shift management choices until it's so wrapped in vendor marketing efforts as to eliminate any real value.

  1. 99% of software isn't mission critical, but you are right, it is a cost function and a trade-off. Hopefully not as cynical as when an automobile manufacturer purportedly decided that it was cheaper to pay punitive damages for deaths from brake failure than it was to recall the line.
  2. And there is support for that:

According to Hanenberg, S., Kleinschmager, S., Robbes, R., et al., "An empirical study on the impact of static typing on software maintainability" (2014):

This paper describes an experiment that tests whether static type systems improve the maintainability of software systems, in terms of understanding undocumented code, fixing type errors, and fixing semantic errors. The results show rigorous empirical evidence that static types are indeed beneficial to these activities, except when fixing semantic errors.

(found that quote, haven't paid for the paper)


Quote from tobega on February 22, 2021, 4:31 pm
Quote from Dave Voorhis on February 22, 2021, 1:51 pm
Quote from tobega on February 22, 2021, 11:59 am
Quote from Erwin on February 21, 2021, 11:15 pm
Quote from tobega on February 21, 2021, 7:52 am

if we were able to specify the way the data relates to each other, as part of the type system

I want to comment on that one because imo it betrays a fundamental lack of understanding of how we ever arrived at making these digital computers do for us what they do.

Computers compute.  Nothing more and nothing less.  They carry out operations of some algebra and to have an algebra in the first place requires to have a "system" of types that the algebra is defined over.  No meaning, no interpretation, just the pure simple fact that 1+1=2, regardless of whether it's humans or furlongs or grains of sand in the desert.

Interpretation and "meaning" and concepts such as "data relating to other data" are ***tacked onto that system of algebraic computation***. In the "constructionist view" of how things are built, that means the algebraic system must exist before any question of "data relating to other data" can be answered; making "data relating to other data" a problem to be solved by the type system therefore leads to circular dependencies in whatever it is that gets set up this way.

Or maybe I'm just too deeply entrenched in my "constructionist view".

A type system is something that sits outside the calculation/program itself, in an attempt to prove, or partially prove, that the program is correct.

Adding plain numbers, 1 + 1 = 2; with types attached, 1 m + 1 m = 2 m, but 1 m + 1 s = what? And dividing, 1 m / 1 s = 1 m/s, another type derived from the first two. This is types in action.

Each attribute of a relation can be assigned a type to help us prove that we are doing something sane. Extending this thought, it's at least thinkable that a relation can be viewed as a type in itself, and that such a type can be something more specific than just a collection of attributes. If it were possible to specify such things, they could be used to more completely prove the correctness of a program.

Then the whole question is whether it is worth doing the extra work of specifying types or not, which gets entangled with personal preferences and, when you bring in economics, a trade-off between the cost of a bug versus the cost of the extra work. Interestingly, I just came across a study that seems to indicate that it takes less time to fix type errors in a dynamic language than it takes to write the type information in the statically typed language, https://courses.cs.washington.edu/courses/cse590n/10au/hanenberg-oopsla2010.pdf (it also references an experiment comparing Java with Groovy, which is essentially dynamically typed Java, as Dave postulated earlier). It also seems from other sources that type errors generally make up a very small share of all bugs, 1-3%.

Two things:

  1. Type errors aren't really the point. Researchers often seem to think it's all about type safety. It's not, though for many mission-critical applications, if the compiler catches the 1% to 3% of bugs that might otherwise not get caught by unit tests or code reviews or whatever, then that's enough to justify static typing right there -- particularly on systems where any downtime due to run-time bugs (or delays due to development bugfix blockers) may represent significant cost.
  2. The real point is enforced readability. That's more about appropriate use of type annotations vs no type annotations; the former usually being associated with statically typed languages and the latter with dynamically typed languages, though they're notionally orthogonal. What would be more meaningful than comparing fixing type bugs vs writing type annotations would be a comparison between the time it takes to write type annotations vs the time it takes to grok complex code in their absence.

But again, no amount of presented research is likely to shift any individual developer's personal preference, and (I point out somewhat pessimistically) it's not likely to shift management choices until it's so wrapped in vendor marketing efforts as to eliminate any real value.

  1. 99% of software isn't mission critical, but you are right, it is a cost function and a trade-off. Hopefully not as cynical as when an automobile manufacturer purportedly decided that it was cheaper to pay punitive damages for deaths from brake failure than it was to recall the line.
  2. And there is support for that:

According to Hanenberg, S., Kleinschmager, S., Robbes, R., et al., "An empirical study on the impact of static typing on software maintainability" (2014):

This paper describes an experiment that tests whether static type systems improve the maintainability of software systems, in terms of understanding undocumented code, fixing type errors, and fixing semantic errors. The results show rigorous empirical evidence that static types are indeed beneficial to these activities, except when fixing semantic errors.

(found that quote, haven't paid for the paper)

I suspect almost any working developer who's had to maintain legacy code in both statically (manifestly) typed languages and dynamically typed languages would say you don't need to buy the paper...

Because it's bleedin' obvious.

I suspect the majority of the same developers would point out that dynamic typing doesn't really gain you any development time, either -- or at least it doesn't save enough keystrokes to make up for the additional mental load and readability effort (unless your keyboard skillz are really sl-o-o-o--o--o---o----w.)

Perhaps it saves time if you're writing wee scripts and toy programs.

Typical applications, no.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org