The Forum for Discussion about The Third Manifesto and Related Matters


Life after D with Safe Java

Quote from dandl on April 24, 2021, 12:08 am

Much of this refactoring happens during testing, as unit tests uncover weaknesses in the original code.

No, most of that refactoring ***does not happen at all***, because fixing the weaknesses is [expected by management to be] done on a schedule that was planned on the same unrealistic and unreasonable assumptions about speed of development that caused those weaknesses to exist in the first place.

Quote from Dave Voorhis on April 24, 2021, 9:41 am
Quote from dandl on April 24, 2021, 12:08 am
Quote from Erwin on April 23, 2021, 7:07 pm
Quote from Dave Voorhis on April 23, 2021, 10:48 am

A common myth is that writing clearly gets in the way of writing productively.

It doesn't. Writing clear code improves productivity, because the first reader is the author.

The "simplest thing that can possibly work" refers to algorithmic and structural simplicity. It doesn't mean the fewest keystrokes, single-letter variable names, monolithic functions, and unreadably-chained eye-watering expressions.

I beg to annotate.

"Writing productively" clearly refers ***exclusively*** to the process of writing, i.e. the process of creating new code.  And yes, in that particular narrow view of "writing", "writing clearly" absolutely and undoubtedly does get in the way of "productivity".  The loss of productivity that comes along with "not writing clearly" is incurred only later.

And so yes, "writing clear code" does indeed "improve productivity", but the "because" you provide is ***TOTALLY OFF***.  The "improved productivity" that comes with "writing clear code" manifests itself only on a timescale of the entire ultimate lifetime of the code being written.  "Writing clear code" "improves productivity" ***once that code is getting subjected to the process of keeping it alive and running, i.e. maintenance, i.e. something the "author" is typically ***very much not concerned with*** (at least personally and within the timeframe of the project wihin which said author is doing the writing).

And besides, what will or will not be "clear" to any subsequent reader other than the original author himself is itself a variable that depends just as much on said subsequent reader's ability to understand.

The entire problem of software creation & maintenance in a nutshell.

 

Fred Brooks said "plan to throw one away; you will, anyhow". I treat all first attempts as a prototype, the 'simplest thing that can possibly work', and I want it super short. It's only the code I leave after refactoring that has to be 'good enough', and that is usually longer.

There's an unfortunate tendency in the industry to never get around to refactoring, or even to openly deprecate it ("I told you before, we don't have time for that 'Agile' crap around here, Voorhis!"), and to push such prototypes into production.

Yeah. Isn't that how the industry ended up with SQL?

IBM hardware-oriented engineer: the simplest thing that can possibly work is a SELECT statement. We'll refactor when we've got one of those PLT guys around.

Ellison: we don't have time for that academic crap; I want that prototype in production before IBM pushes it out.

Quote from Dave Voorhis on April 24, 2021, 9:38 am
Quote from dandl on April 24, 2021, 12:20 am

But you refuse to allow the compiler to do meta-programming, and prefer to have external code generation and runtime reflection.

I refuse to embrace macros, and metaprogramming almost invariably reflects fundamental language limitations. Fix those, and you don't need it.

Fortunately others realise that metaprogramming exists precisely to address those limitations that the techniques you find so comforting cannot address.

What "others realise" doesn't matter to me.

That's just fine, but up to now you've been quoting 'others' as if they were the authority. I'm quite happy for you to stick to your own personal views backed up by specific authority as needed.

Metaprogramming in general refers to treating programs as data, which includes facilities like reflection, self-modification, and various forms of macros, of which only reflection is relatively safe. The other two all-too-often and all-too-easily lead to code that is complicated, obfuscated, obscure, and without appropriate compiler safety-nets, potentially unsafe.

Yes, maybe, but not in this case. I'm only concerned with 'safe' meta-programming, by which I mean done at compile time, in such a way that runtime errors cannot happen. And I'm not concerned with people who write bad code: the solution is easy, we won't do that. And BTW I have already excluded text macros as being irrelevant.

But that's not the crucial point, which is that (in the case of macros, at least) they almost invariably reflect fundamental limitations of the language.

A common limitation that encourages use of macros: being unable to parametrically reference arbitrary variables.

Another: being unable to return/emit some parametrically-defined and statically-checked arbitrary construct like a class or block of structured code.

Fix those, and the apparent need for macros goes away, subsumed into the reasonable (easy-to-reason-about, unlike macros) function/procedure/method mechanism. But it does mean rethinking what is first-class (ideally everything) and what isn't (as little as possible). Smalltalk is a notable example of this.
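
To make the point concrete, a minimal sketch in plain Java (names invented for illustration): the classic C text macro MAX is subsumed by an ordinary generic method, statically checked and free of double-evaluation hazards.

    // C needs a text macro here: #define MAX(a, b) ((a) > (b) ? (a) : (b)),
    // with its double-evaluation and type-safety hazards. A generic method
    // subsumes the macro into an ordinary, statically-checked mechanism.
    public final class MacroFree {
        static <T extends Comparable<T>> T max(T a, T b) {
            return a.compareTo(b) >= 0 ? a : b;
        }

        public static void main(String[] args) {
            System.out.println(max(3, 7));         // 7
            System.out.println(max("ant", "bee")); // bee
        }
    }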

I agree that those are potential use cases, but not with the idea of 'first class everything'. I have used Smalltalk, Lisp and Forth, and they are deeply unsafe. In TTM, Haskell and most other 'safer' languages, values are first class; types and other syntactic elements are not. There are languages with first-class types (Coq, Idris, Agda), but this does not extend to every name and every syntactic element.

So why should compilers be fixed and immutable? We accept that a language comes with a runtime library and we expect to be able to add our own libraries and have them treated on very much the same footing. Why not have compiler plugins which add specific features to a language to deal with specific situations? I propose meta as a means to that end.

The basic idea is to open up the compiler internals in a way that allows writing language extensions, specifically to include:

  • name-mangling, to generate new names from those defined in the program
  • shorthands, of the kind described in TTM (INSERT/UPDATE/DELETE instead of relational assignment)
  • iteration over syntactic elements such as names of members, arguments, functions and so on
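
As a toy, self-contained sketch of the shorthand item above (every name here is invented for illustration; no real compiler is involved), the expansion itself is a small tree rewrite once the compiler exposes its internals:

    // Hypothetical 'meta' expansion of the TTM INSERT shorthand into
    // relational assignment, as a plain compile-time tree rewrite.
    public final class MetaSketch {
        // Stand-in for a parsed syntax node: INSERT <relvar> <relation-expr>.
        record Insert(String relvar, String relationExpr) {}

        // The plugin body: INSERT r e  ==>  r := r UNION e
        static String expand(Insert node) {
            return node.relvar() + " := " + node.relvar() + " UNION " + node.relationExpr();
        }

        public static void main(String[] args) {
            System.out.println(expand(new Insert("S", "RELATION {TUPLE {SNO 'S6'}}")));
            // S := S UNION RELATION {TUPLE {SNO 'S6'}}
        }
    }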

My current project involves an API in 4 formats. Essentially it is a data model of 3 entities, around 30 fields and 10 function calls. The components are:

  • a base in C++ (API calls in a header file)
  • C# interop ([DllImport] functions)
  • C# data model used by REST server (.NET Core)
  • JSON data model emitted by REST server (generated at runtime)
  • C# data model reconstructed from JSON (Newtonsoft.Json)
  • Final C# data model including local state (Unity).

Four of these are in C#, but they are 4 distinct pieces of code. A sufficiently powerful C# compiler could check them for correctness, instead of (as now) getting runtime crashes if I make a mistake. The meta I am proposing could do that.
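
For contrast, the best a conventional language offers in-language today is a runtime reflection check, which is exactly the 'too late' failure mode at issue. A sketch in Java, with two hypothetical classes standing in for two of the four model copies:

    import java.lang.reflect.Field;
    import java.util.Set;
    import java.util.TreeSet;

    // Two copies of the "same" model, as in the four-format problem above.
    class WireSupplier { public String sno;  public String name; }
    class AppSupplier  { public String sNo;  public String name; }

    public class ModelDrift {
        static Set<String> fieldNames(Class<?> c) {
            Set<String> names = new TreeSet<>();
            for (Field f : c.getDeclaredFields()) names.add(f.getName());
            return names;
        }

        public static void main(String[] args) {
            // Prints false: sno vs sNo. Nothing flagged the drift until this ran;
            // a compile-time 'meta' check would have rejected it outright.
            System.out.println(fieldNames(WireSupplier.class).equals(fieldNames(AppSupplier.class)));
        }
    }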

A different approach is to provide such a seamlessly integrated macro facility that the usual macro/language impedance mismatches (which are at the root of making it abominable) are reduced. The Lisp family is a notable example of this, but it's perhaps still a bit too easy to create a morass of undesirably "meta" incomprehensibility.

Agreed.

I don't prefer code generation or runtime reflection. I endeavour to avoid both. Occasionally, I accept that either or both are preferable to the alternatives, which are usually manually writing code or macros.

Macros are an abomination.

The true abomination is hiding stuff from the compiler and leaving the mess to be sorted out at runtime. Things like reflection and annotations targeting runtime code guarantee that bugs will show up later and cost more than anything you could have caught at compile-time.

I agree that runtime failure needs to be minimised. Reflection, like macros, is a mechanism to work around fundamental language limitations. Those limitations are what need to be addressed.

What language are you thinking of where annotations target runtime code?

In Java, annotations are primarily a compile-time mechanism, but a given annotation type can be optionally instructed to set runtime metadata so it can be introspected at runtime.
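
Concretely, that's the @Retention meta-annotation; a minimal sketch:

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;

    // Visible to reflection at runtime (the case complained about above).
    @Retention(RetentionPolicy.RUNTIME)
    @interface Audited {}

    // Discarded after compilation; only compile-time tools ever see it.
    @Retention(RetentionPolicy.SOURCE)
    @interface CompileTimeOnly {}

    public class RetentionDemo {
        @Audited
        @CompileTimeOnly
        void transfer() {}

        public static void main(String[] args) throws Exception {
            var m = RetentionDemo.class.getDeclaredMethod("transfer");
            System.out.println(m.isAnnotationPresent(Audited.class)); // true
            System.out.println(m.getAnnotations().length);            // 1: only @Audited survived
        }
    }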

I don't think much of annotations, either.

I have more tolerance for external compilation/transpilation -- specification in language x goes in, code in language y comes out -- than bodgy in-language mechanisms like macros. That does mean the overhead of a compiler/transpiler to maintain, but at least it's decoupled and standalone. And, again, ideally whatever the external compiler/transpiler does should be something we can do in-language with programming (and have it statically checked) rather than metaprogramming.

However, I do recognise the difficulty in achieving that, particularly when we want to have statically-verified complex constructs appear at runtime as a result of something that's happened at runtime, like retrieving a structured value from some external service where we can't know the structure until we retrieve it.

Transpiling is an implementation strategy for languages, especially those building on a base. It might suit my meta proposal, but the issue is orthogonal.

My current approach to such things (in Java, at least) is (for some purposes) to receive the value, invoke a transpiler that converts it to Java source, invoke the Java compiler to generate Java binaries from the Java source, then use reflection (plus a bit of horror called a custom ClassLoader) to use the generated Java at runtime. Note that this is entirely in code, running within the Java program. No human interaction is involved.
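
A condensed sketch of that pipeline using only the JDK's own javax.tools API (error handling omitted, URLClassLoader standing in for the custom loader, and the Hello source a placeholder):

    import javax.tools.ToolProvider;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class CompileAtRuntime {
        public static void main(String[] args) throws Exception {
            // 1. Source arrives (e.g. from a transpiler) at runtime.
            Path dir = Files.createTempDirectory("gen");
            Path src = dir.resolve("Hello.java");
            Files.writeString(src,
                "public class Hello { public String msg() { return \"hi\"; } }");

            // 2. Invoke the in-process compiler (needs a JDK, not a bare JRE).
            ToolProvider.getSystemJavaCompiler().run(null, null, null, src.toString());

            // 3. Load the fresh .class and drive it reflectively.
            try (URLClassLoader loader =
                     new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
                Object hello = loader.loadClass("Hello")
                                     .getDeclaredConstructor().newInstance();
                System.out.println(hello.getClass().getMethod("msg").invoke(hello)); // hi
            }
        }
    }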

Or (for other purposes), invoke a transpiler that converts the value to Java source, then integrate the generated Java source into the current project. There may be some negligible human involvement to launch the transpiler and invoke 'Refresh' in the IDE afterward.

Neither of these are ideal, but the alternatives (e.g., use a dynamically-typed language!) are worse, and macros wouldn't help.

But meta would solve the problem, and be safer.

Andl - A New Database Language - andl.org

I agree. But I think you missed my point. There are 3 'readers': (1) me while I'm working on the code, (2) me 12 months from now and (3) anyone else anytime. The most productive way to write is short and fast, and only reader (1) needs to read my first attempt at a solution. Reader (2) has forgotten everything but knows my style, so I need to refactor private code to leave it in that condition. Reader (3) is a stranger and perhaps less skilled, so the code has to be top quality. Much of this refactoring happens during testing, as unit tests uncover weaknesses in the original code.

Fred Brooks said "plan to throw one away; you will, anyhow". I treat all first attempts as a prototype, the 'simplest thing that can possibly work', and I want it super short. It's only the code I leave after refactoring that has to be 'good enough', and that is usually longer.

There's an unfortunate tendency in the industry to never get around to refactoring, or even to openly deprecate it ("I told you before, we don't have time for that 'Agile' crap around here, Voorhis!"), and to push such prototypes into production.

You may be right, but you're describing bad management, and it will always outsmart good technology. If you work out that your boss will push the first prototype into release and so spend longer on the prototype, the boss will still find a way to get you to write and ship poorer (cheaper) code. You need to find a new boss, not a new development methodology.

I was describing how I write code, and how programmers who have worked for me have been instructed. I dislike Agile, but I know the code is never right the first time it seems to work, and we usually found and fixed that as we built up the formal test suite. But we're writing product code and bugs down the track are all at our cost, so the time spent on good code paid for itself.

Andl - A New Database Language - andl.org
Quote from dandl on April 25, 2021, 2:22 am

I agree. But I think you missed my point. There are 3 'readers': (1) me while I'm working on the code, (2) me 12 months from now and (3) anyone else anytime. The most productive way to write is short and fast, and only reader (1) needs to read my first attempt at a solution. Reader (2) has forgotten everything but knows my style, so I need to refactor private code to leave it in that condition. Reader (3) is a stranger and perhaps less skilled, so the code has to be top quality. Much of this refactoring happens during testing, as unit tests uncover weaknesses in the original code.

Fred Brooks said "plan to throw one away; you will, anyhow". I treat all first attempts as a prototype, the 'simplest thing that can possibly work', and I want it super short. It's only the code I leave after refactoring that has to be 'good enough', and that is usually longer.

There's an unfortunate tendency in the industry to never get around to refactoring, or even to openly deprecate it ("I told you before, we don't have time for that 'Agile' crap around here, Voorhis!"), and to push such prototypes into production.

You may be right, but you're describing bad management, and it will always outsmart good technology. If you work out that your boss will push the first prototype into release and so spend longer on the prototype, the boss will still find a way to get you to write and ship poorer (cheaper) code. You need to find a new boss, not a new development methodology.

My bosses -- when I've had them -- have been and are great (in case they're reading this ;-)

From decades as a consultant software engineer, I've seen it from client project managers on individual projects -- usually not expressed so bluntly; my summary was only a tad facetious -- but unquestionably there in spirit, driven by looming deadlines and desperation and the apparent fact (whether it is or not) that we don't have enough time (or ability!) to write the code, let alone rewrite it.

I was describing how I write code, and how programmers who have worked for me have been instructed.

Were they junior developers?

I imagine some seniors might take issue with being instructed how to write code.

I dislike Agile, but I know the code is never right the first time it seems to work, and we usually found and fixed that as we built up the formal test suite. But we're writing product code and bugs down the track are all at our cost, so the time spent on good code paid for itself.

I know you dislike Agile, but you might consider trying test driven development as a way to evade "never right the first time", and ensure that the formal test suite not only exists, but exists before the code it needs to test.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on April 25, 2021, 2:11 am
Quote from Dave Voorhis on April 24, 2021, 9:38 am
Quote from dandl on April 24, 2021, 12:20 am

But you refuse to allow the compiler to do meta-programming, and prefer to have external code generation and runtime reflection.

I refuse to embrace macros, and metaprogramming almost invariably reflects fundamental language limitations. Fix those, and you don't need it.

Fortunately others realise that metaprogramming exists precisely to address those limitations that the techniques you find so comforting cannot address.

What "others realise" doesn't matter to me.

That's just fine, but up to now you've been quoting 'others' as if they were the authority. I'm quite happy for you to stick to your own personal views backed up by specific authority as needed.

Metaprogramming in general refers to treating programs as data, which includes facilities like reflection, self-modification, and various forms of macros, of which only reflection is relatively safe. The other two all-too-often and all-too-easily lead to code that is complicated, obfuscated, obscure, and without appropriate compiler safety-nets, potentially unsafe.

Yes, maybe, but not in this case. I'm only concerned with 'safe' meta-programming, by which I mean done at compile time, in such a way that runtime errors cannot happen. And I'm not concerned with people who write bad code: the solution is easy, we won't do that. And BTW I have already excluded text macros as being irrelevant.

But that's not the crucial point, which is that (in the case of macros, at least) they almost invariably reflect fundamental limitations of the language.

A common limitation that encourages use of macros: being unable to parametrically reference arbitrary variables.

Another: being unable to return/emit some parametrically-defined and statically-checked arbitrary construct like a class or block of structured code.

Fix those, and the apparent need for macros goes away, subsumed into the reasonable (easy-to-reason-about, unlike macros) function/procedure/method mechanism. But it does mean rethinking what is first-class (ideally everything) and what isn't (as little as possible). Smalltalk is a notable example of this.

I agree that those are potential use cases, but not with the idea of 'first class everything'. I have used Smalltalk, Lisp and Forth, and they are deeply unsafe. In TTM, Haskell and most other 'safer' languages, values are first class; types and other syntactic elements are not. There are languages with first-class types (Coq, Idris, Agda), but this does not extend to every name and every syntactic element.

So why should compilers be fixed and immutable? We accept that a language comes with a runtime library and we expect to be able to add our own libraries and have them treated on very much the same footing. Why not have compiler plugins which add specific features to a language to deal with specific situations? I propose meta as a means to that end.

The basic idea is to open up the compiler internals in a way that allows writing language extensions, specifically to include:

  • name-mangling, to generate new names from those defined in the program
  • shorthands, of the kind described in TTM (INSERT/UPDATE/DELETE instead of relational assignment)
  • iteration over syntactic elements such as names of members, arguments, functions and so on

That's in essence what the Java annotation processor allows. See https://en.wikipedia.org/wiki/Java_annotation#Processing for a brief summary and https://www.baeldung.com/java-annotation-processing-builder for an example.
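
For flavour, a minimal processor skeleton against the standard javax.annotation.processing API (the com.example.GenerateBuilder annotation is hypothetical, and registration via META-INF/services is omitted):

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    // Runs inside javac: it can iterate over declared names, derive new
    // (mangled) ones, and emit fresh source files -- all before runtime.
    @SupportedAnnotationTypes("com.example.GenerateBuilder")
    @SupportedSourceVersion(SourceVersion.RELEASE_17)
    public class BuilderProcessor extends AbstractProcessor {
        @Override
        public boolean process(Set<? extends TypeElement> annotations, RoundEnvironment roundEnv) {
            for (TypeElement ann : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(ann)) {
                    // A real processor would inspect e's members and write a
                    // companion class via processingEnv.getFiler().createSourceFile(...).
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                        "would generate a builder for " + e.getSimpleName(), e);
                }
            }
            return true;
        }
    }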

My current project involves an API in 4 formats. Essentially it is a data model of 3 entities, around 30 fields and 10 function calls. The components are:

  • a base in C++ (API calls in a header file)
  • C# interop ([DllImport] functions)
  • C# data model used by REST server (.NET Core)
  • JSON data model emitted by REST server (generated at runtime)
  • C# data model reconstructed from JSON (Newtonsoft.Json)
  • Final C# data model including local state (Unity).

Four of these are in C#, but they are 4 distinct pieces of code. A sufficiently powerful C# compiler could check them for correctness, instead of (as now) getting runtime crashes if I make a mistake. The meta I am proposing could do that.

A different approach is to provide such a seamlessly integrated macro facility that the usual macro/language impedance mismatches (which are at the root of making it abominable) are reduced. The Lisp family is a notable example of this, but it's perhaps still a bit too easy to create a morass of undesirably "meta" incomprehensibility.

Agreed.

I don't prefer code generation or runtime reflection. I endeavour to avoid both. Occasionally, I accept that either or both are preferable to the alternatives, which are usually manually writing code or macros.

Macros are an abomination.

The true abomination is hiding stuff from the compiler and leaving the mess to be sorted out at runtime. Things like reflection and annotations targeting runtime code guarantee that bugs will show up later and cost more than anything you could have caught at compile-time.

I agree that runtime failure needs to be minimised. Reflection, like macros, is a mechanism to work around fundamental language limitations. Those limitations are what need to be addressed.

What language are you thinking of where annotations target runtime code?

In Java, annotations are primarily a compile-time mechanism, but a given annotation type can be optionally instructed to set runtime metadata so it can be introspected at runtime.

I don't think much of annotations, either.

I have more tolerance for external compilation/transpilation -- specification in language x goes in, code in language y comes out -- than bodgy in-language mechanisms like macros. That does mean the overhead of a compiler/transpiler to maintain, but at least it's decoupled and standalone. And, again, ideally whatever the external compiler/transpiler does should be something we can do in-language with programming (and have it statically checked) rather than metaprogramming.

However, I do recognise the difficulty in achieving that, particularly when we want to have statically-verified complex constructs appear at runtime as a result of something that's happened at runtime, like retrieving a structured value from some external service where we can't know the structure until we retrieve it.

Transpiling is an implementation strategy for languages, especially those building on a base. It might suit my meta proposal, but the issue is orthogonal.

My current approach to such things (in Java, at least) is (for some purposes) to receive the value, invoke a transpiler that converts it to Java source, invoke the Java compiler to generate Java binaries from the Java source, then use reflection (plus a bit of horror called a custom ClassLoader) to use the generated Java at runtime. Note that this is entirely in code, running within the Java program. No human interaction is involved.

Or (for other purposes), invoke a transpiler that converts the value to Java source, then integrate the generated Java source into the current project. There may be some negligible human involvement to launch the transpiler and invoke 'Refresh' in the IDE afterward.

Neither of these are ideal, but the alternatives (e.g., use a dynamically-typed language!) are worse, and macros wouldn't help.

But meta would solve the problem, and be safer.

I'm not sure it would, at least for the examples I'm thinking of, because they're generating, compiling, and running code at run-time. The generated code doesn't require any particular special handling. It's ordinary Java code; it just happens to have been written, compiled and run by the application at runtime.

The same essential approach is often used for regex processors, which typically compile a regex (often generated at run-time) into some performant representation, either compiled host language code (e.g., Java) or LLVM or whatever.
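
In miniature, with nothing but java.util.regex; the pattern string here is assembled at runtime to make the point:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class RegexAtRuntime {
        public static void main(String[] args) {
            // The pattern string only exists at runtime; compile() turns it
            // into an optimised internal form once, reused across matches.
            String field = "SNO";
            Pattern p = Pattern.compile(Pattern.quote(field) + "\\s*=\\s*'(\\w+)'");
            Matcher m = p.matcher("SNO = 'S1'");
            if (m.matches()) System.out.println(m.group(1)); // S1
        }
    }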

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on April 25, 2021, 9:16 am
Quote from dandl on April 25, 2021, 2:11 am
Quote from Dave Voorhis on April 24, 2021, 9:38 am
Quote from dandl on April 24, 2021, 12:20 am

But you refuse to allow the compiler to do meta-programming, and prefer to have external code generation and runtime reflection.

I refuse to embrace macros, and metaprogramming almost invariably reflects fundamental language limitations. Fix those, and you don't need it.

Fortunately others realise that metaprogramming exists precisely to address those limitations that the techniques you find so comforting cannot address.

What "others realise" doesn't matter to me.

That's just fine, but up to now you've been quoting 'others' as if they were the authority. I'm quite happy for you to stick to your own personal views backed up by specific authority as needed.

Metaprogramming in general refers to treating programs as data, which includes facilities like reflection, self-modification, and various forms of macros, of which only reflection is relatively safe. The other two all-too-often and all-too-easily lead to code that is complicated, obfuscated, obscure, and without appropriate compiler safety-nets, potentially unsafe.

Yes, maybe, but not in this case. I'm only concerned with 'safe' meta-programming, by which I mean done at compile time, in such a way that runtime errors cannot happen. And I'm not concerned with people who write bad code: the solution is easy, we won't do that. And BTW I have already excluded text macros as being irrelevant.

But that's not the crucial point, which is that (in the case of macros, at least) they almost invariably reflect fundamental limitations of the language.

A common limitation that encourages use of macros: being unable to parametrically reference arbitrary variables.

Another: being unable to return/emit some parametrically-defined and statically-checked arbitrary construct like a class or block of structured code.

Fix those, and the apparent need for macros goes away, subsumed into the reasonable (easy-to-reason-about, unlike macros) function/procedure/method mechanism. But it does mean rethinking what is first-class (ideally everything) and what isn't (as little as possible). Smalltalk is a notable example of this.

I agree that those are potential use cases, but not with the idea of 'first class everything'. I have used Smalltalk, Lisp and Forth, and they are deeply unsafe. In TTM, Haskell and most other 'safer' languages, values are first class; types and other syntactic elements are not. There are languages with first-class types (Coq, Idris, Agda), but this does not extend to every name and every syntactic element.

So why should compilers be fixed and immutable? We accept that a language comes with a runtime library and we expect to be able to add our own libraries and have them treated on very much the same footing. Why not have compiler plugins which add specific features to a language to deal with specific situations? I propose meta as a means to that end.

The basic idea is to open up the compiler internals in a way that allows writing language extensions, specifically to include:

  • name-mangling, to generate new names from those defined in the program
  • shorthands, of the kind described in TTM (INSERT/UPDATE/DELETE instead of relational assignment)
  • iteration over syntactic elements such as names of members, arguments, functions and so on

That's in essence what the Java annotation processor allows. See https://en.wikipedia.org/wiki/Java_annotation#Processing for a brief summary and https://www.baeldung.com/java-annotation-processing-builder for an example.

I note, by the way, that despite my dislike for annotations I may grudgingly accept using an annotation or two in my Wrapd "SQL Amplifier" project, as it might be cleaner to have the user-developer annotate certain user-defined methods, rather than require the user-developer to explicitly invoke them from a specified method.

But I need to think more about this.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from Dave Voorhis on April 25, 2021, 8:58 am

I imagine some seniors might take issue with being instructed how to write code.

AMEN.  Especially since the more some given individual ***believes*** himself to be in a position to instruct others on how to write code, the less he typically is.  These gentlemen called Dunning & Kruger have studied a very similar phenomenon quite extensively.  Intensively, too.

From decades as a consultant software engineer, I've seen it from client project managers on individual projects -- usually not expressed so bluntly; my summary was only a tad facetious -- but unquestionably there in spirit, driven by looming deadlines and desperation and the apparent fact (whether it is or not) that we don't have enough time (or ability!) to write the code, let alone rewrite it.

Then you're working for the wrong bosses.

I was describing how I write code, and how programmers who have worked for me have been instructed.

Were they junior developers?

I imagine some seniors might take issue with being instructed how to write code.

I think you read into that something I didn't say.

I have never hired a junior developer. I look for people who are better than I am, and then I assign work to them and hold them responsible for completing it satisfactorily. I choose the mix of the 4 big factors: scope, budget, time and quality, and since I place a high value on quality I make sure they do too. And I review all the code.

I dislike Agile, but I know the code is never right the first time it seems to work, and we usually found and fixed that as we built up the formal test suite. But we're writing product code and bugs down the track are all at our cost, so the time spent on good code paid for itself.

I know you dislike Agile, but you might consider trying test driven development as a way to evade "never right the first time", and ensure that the formal test suite not only exists, but exists before the code it needs to test.

I have, and I don't like it, because it wants me to write tests when I don't yet know what the code will finally do. Almost everything I write starts out experimental, I have an idea rather than a spec, and I want to get something working fast so I can see if I'm on the right track. If I write tests and then write code and then throw it away, that's double the work. I don't write the formal tests until I know what the final code will do, which is often quite late.

Andl vs Rel would be a good comparison. I started writing Andl with no particular language in mind, all experimental. With Rel you had the TD spec as a starting point. Chalk and cheese.

Andl - A New Database Language - andl.org
Quote from dandl on April 26, 2021, 2:00 am

From decades as a consultant software engineer, I've seen it from client project managers on individual projects -- usually not expressed so bluntly; my summary was only a tad facetious -- but unquestionably there in spirit, driven by looming deadlines and desperation and the apparent fact (whether it is or not) that we don't have enough time (or ability!) to write the code, let alone rewrite it.

Then you're working for the wrong bosses.

My bosses have been fine. It's clients that sometimes have issues with refactoring/rewriting.

I was describing how I write code, and how programmers who have worked for me have been instructed.

Were they junior developers?

I imagine some seniors might take issue with being instructed how to write code.

I think you read into that something I didn't say.

You wrote, "I was describing how I write code, and how programmers who have worked for me have been instructed."

It looks like you meant how you instructed the programmers who have worked for you.

I have never hired a junior developer. I look for people who are better than I am, and then I assign work to them and hold them responsible for completing it satisfactorily. I choose the mix of the 4 big factors: scope, budget, time and quality, and since I place a high value on quality I make sure they do too. And I review all the code.

Who reviews your code?

I dislike Agile, but I know the code is never right the first time it seems to work, and we usually found and fixed that as we built up the formal test suite. But we're writing product code and bugs down the track are all at our cost, so the time spent on good code paid for itself.

I know you dislike Agile, but you might consider trying test driven development as a way to evade "never right the first time", and ensure that the formal test suite not only exists, but exists before the code it needs to test.

I have, and I don't like it, because it wants me to write tests when I don't yet know what the code will finally do. Almost everything I write starts out experimental, I have an idea rather than a spec, and I want to get something working fast so I can see if I'm on the right track. If I write tests and then write code and then throw it away, that's double the work. I don't write the formal tests until I know what the final code will do, which is often quite late.

Andl vs Rel would be a good comparison. I started writing Andl with no particular language in mind, all experimental. With Rel you had the TD spec as a starting point. Chalk and cheese.

Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.

That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude through writing code that uses it, before it exists.
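
For instance, a throwaway JUnit 5 sketch along those lines. Csv is a hypothetical class, stubbed in below only so the fragment is self-contained; in practice it wouldn't exist yet, which is the point:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Writing the test first forces the imagined API into concrete shape;
    // if the call reads badly here, that's the cheap moment to change it.
    class CsvTest {
        @Test
        void splitsOnCommasAndTrims() {
            assertEquals(List.of("a", "b", "c"), Csv.fields(" a, b ,c "));
        }
    }

    // Stub of the imagined class, written after the test dictated its shape.
    class Csv {
        static List<String> fields(String line) {
            return java.util.Arrays.stream(line.split(",")).map(String::trim).toList();
        }
    }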

I like it, but if it doesn't work for you, then it doesn't work for you.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org