Life after D with Safe Java
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious place for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD were used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
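To make the middle steps above concrete (the blittable-only "C" API and the C# interop layer on top of it), here is a minimal sketch of what that boundary might look like. The DLL name, functions and field/command ids are invented for illustration; they are not the project's actual API.

```csharp
// Minimal sketch only: "GameCore.dll", GetIntField and DoCommand are invented
// names for illustration, not the project's actual API.
using System.Runtime.InteropServices;

internal static class GameCoreNative
{
    // C side, for reference:
    //   extern "C" __declspec(dllexport) int GetIntField(int fieldId);
    //   extern "C" __declspec(dllexport) int DoCommand(int commandId, int arg);
    // Only blittable types (plain ints here) cross the boundary, so the
    // marshaller never needs to allocate or copy.

    [DllImport("GameCore.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int GetIntField(int fieldId);

    [DllImport("GameCore.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int DoCommand(int commandId, int arg);
}
```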
Quote from Dave Voorhis on April 29, 2021, 8:02 am
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious places for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD was used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
This is what I would probably do. I say "probably" because having not actually seen the code, I don't know if there's something that would make me think, "oh man, I'm going to have to handle this one like this," where "this" is something completely different. Sometimes, that happens.
Anyway:
- Create test(s) of the C++ code to verify it works.
- Expose function(s)/class(es) from DLL.
- Create test(s) of the "C" API.
- Implement "C" API to make test(s) pass.
- Create test(s) of the C# interop layer.
- Implement C# interop layer to make test(s) pass.
- Create test(s) of actual data model.
- Implement data model code to make test(s) pass.
- Create test(s) of JSON-based RESTful interface.
- Implement code to make test(s) pass.
- Create test(s) in Unity scripting.
- Implement code to make test(s) pass.
- GOTO 1.
At each "Create test(s) ..." stage, you might create only one test before moving to the next step, or create several, or create many -- whatever seems appropriate to writing enough to allow moving to the next stage when it seems right, and repeat the whole cycle as needed until the development work is done.
The idea should not be to completely finish a step before moving to the next, but to implement one aspect of functionality before moving to the next, so it's an end-to-end iterative process that gets some minimal functionality working and verified at the Unity scripting stage as soon as possible. That way, if there's some lurking design/implementation/architecture showstopper, you hopefully catch it earlier rather than later.
The result should be an extensive set of tests that you can run at any time during or after development and/or refactoring to verify that changes haven't broken anything, anywhere.
I appreciate that an end-to-end (aka integration) test suite (which is #11, I guess) verifies everything below is working, but what it doesn't tell you is what's broken if it stops working. Having tests up and down the stack should help to quickly identify and address breakage.
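As a small illustration of the "create test(s), then implement" steps in the list above, a test-first check of one data model field might look like the sketch below. Everything in it (IGameCore, FakeGameCore, GameModel, the field id) is hypothetical, and NUnit is assumed as the test framework.

```csharp
using NUnit.Framework;

// Hypothetical seam over the interop layer, so the data model can be tested
// without loading the native DLL.
public interface IGameCore
{
    int GetIntField(int fieldId);
}

public class FakeGameCore : IGameCore
{
    public int PlayerScore { get; set; }
    public int GetIntField(int fieldId) => PlayerScore;   // this fake serves a single field
}

// Written *after* the test below, with just enough code to make it pass.
public class GameModel
{
    private readonly IGameCore core;
    public GameModel(IGameCore core) { this.core = core; }
    public int PlayerScore => core.GetIntField(0);
}

[TestFixture]
public class GameModelTests
{
    // Written first: it pins down how the data model is meant to be used and
    // fails (or doesn't compile) until GameModel is implemented.
    [Test]
    public void PlayerScore_ComesFromTheInteropLayer()
    {
        var model = new GameModel(new FakeGameCore { PlayerScore = 42 });
        Assert.That(model.PlayerScore, Is.EqualTo(42));
    }
}
```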
Quote from dandl on May 1, 2021, 2:59 pm
Quote from Dave Voorhis on April 29, 2021, 8:02 am
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious places for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD was used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
This is what I would probably do. I say "probably" because having not actually seen the code, I don't know if there's something that would make me think, "oh man, I'm going to have to handle this one like this," where "this" is something completely different. Sometimes, that happens.
Anyway:
- Create test(s) of the C++ code to verify it works.
- Expose function(s)/class(es) from DLL.
- Create test(s) of the "C" API.
- Implement "C" API to make test(s) pass.
- Create test(s) of the C# interop layer.
- Implement C# interop layer to make test(s) pass.
- Create test(s) of actual data model.
- Implement data model code to make test(s) pass.
- Create test(s) of JSON-based RESTful interface.
- Implement code to make test(s) pass.
- Create test(s) in Unity scripting.
- Implement code to make test(s) pass.
- GOTO 1.
Just to clarify:
- the C++ code has no API, so it cannot be tested. [Actually, it no longer builds.]
- You cannot expose classes through a "C" API.
- The big challenge is how to pass arrays and strings without using any malloc.
At each "Create test(s) ..." stage, you might create only one test before moving to the next step, or create several, or create many -- whatever seems appropriate to writing enough to allow moving to the next stage when it seems right, and repeat the whole cycle as needed until the development work is done.
The idea should not be to completely finish a step before moving to the next, but to implement one aspect of functionality before moving to the next, so it's an end-to-end iterative process that gets some minimal functionality working and verified at the Unity scripting stage as soon as possible. That way, if there's some lurking design/implementation/architecture showstopper, you hopefully catch it earlier rather than later.
That's the bit I don't understand. 50 data items means at least 50 tests at each level, which is a lot of work. And since the API is volatile, much of that will require rework as the design firms up. What I've seen is one major change break a whole slew of tests, and nobody willing to fix them. Maintaining one set is hard enough, but maintaining all those levels is hard to justify.
The result should be an extensive set of tests that you can run at any time during or after development and/or refactoring to verify that changes haven't broken anything, anywhere.
I agree that the killer app for a test suite is picking up regressions: you add new stuff over here, break something over there, and don't find out until much later. Powerflex has many thousands of tests, and they've paid for themselves many times over. We found that the sweet spot for writing really good tests is alongside writing the documentation. Not everyone gets round to doing that...
I appreciate that an end-to-end (aka integration) test suite (which is #11, I guess) verifies everything below is working, but what it doesn't tell you is what's broken if it stops working. Having tests up and down the stack should help to quickly identify and address breakage.
But that really is only going to work if you test every level and maintain every test, and even then you often don't get enough information.
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
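On the "pass arrays and strings without any malloc" point earlier in this post, one common pattern is to have the caller supply the buffer and have the native side fill it and report the length. A sketch with invented names, assuming an ANSI char* on the C side (not the project's actual API):

```csharp
using System.Runtime.InteropServices;
using System.Text;

internal static class GameCoreNative
{
    // C side, for reference:
    //   extern "C" __declspec(dllexport) int GetStringField(int fieldId, char* buffer, int bufferSize);
    // The caller owns the buffer; the native code fills it and returns the
    // number of bytes written, so the native side never allocates.

    [DllImport("GameCore.dll", CallingConvention = CallingConvention.Cdecl)]
    internal static extern int GetStringField(int fieldId, byte[] buffer, int bufferSize);

    internal static string ReadStringField(int fieldId)
    {
        var buffer = new byte[256];                               // allocation stays on the managed side
        int length = GetStringField(fieldId, buffer, buffer.Length);
        return Encoding.ASCII.GetString(buffer, 0, length);
    }
}
```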
Quote from Dave Voorhis on May 1, 2021, 7:33 pm
Quote from dandl on May 1, 2021, 2:59 pm
Quote from Dave Voorhis on April 29, 2021, 8:02 am
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious places for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD was used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
This is what I would probably do. I say "probably" because having not actually seen the code, I don't know if there's something that would make me think, "oh man, I'm going to have to handle this one like this," where "this" is something completely different. Sometimes, that happens.
Anyway:
- Create test(s) of the C++ code to verify it works.
- Expose function(s)/class(es) from DLL.
- Create test(s) of the "C" API.
- Implement "C" API to make test(s) pass.
- Create test(s) of the C# interop layer.
- Implement C# interop layer to make test(s) pass.
- Create test(s) of actual data model.
- Implement data model code to make test(s) pass.
- Create test(s) of JSON-based RESTful interface.
- Implement code to make test(s) pass.
- Create test(s) in Unity scripting.
- Implement code to make test(s) pass.
- GOTO 1.
Just to clarify:
- the C++ code has no API, it cannot be tested. [Actually, it no longer builds.]
- You cannot expose classes through a "C" API.
- The big challenge is how to pass arrays and strings without using any malloc.
At each "Create test(s) ..." stage, you might create only one test before moving to the next step, or create several, or create many -- whatever seems appropriate to writing enough to allow moving to the next stage when it seems right, and repeat the whole cycle as needed until the development work is done.
The idea should not be to completely finish a step before moving to the next, but to implement one aspect of functionality before moving to the next, so it's an end-to-end iterative process that gets some minimal functionality working and verified at the Unity scripting stage as soon as possible. That way, if there's some lurking design/implementation/architecture showstopper, you hopefully catch it earlier rather than later.
That's the bit I don't understand. 50 data items means at least 50 tests at each level, which is a lot of work.
If the 50 data items are important, aren't they worth 50 tests to make sure they work and keep working as you refactor and make additions or changes?
And since the API is volatile, much of that will require rework as the design firms up. What I've seen is one major change break a whole slew of tests, and nobody willing to fix them. Maintaining one set is hard enough, but maintaining all those levels is hard to justify.
If testing the levels is hard to justify, maybe you've either got too many levels, or level generation should be automated and you test the level generator.
The result should be an extensive set of tests that you can run at any time during or after development and/or refactoring to verify that changes haven't broken anything, anywhere.
I agree that the killer app for a test suite is picking up regressions. You're adding new stuff over here and broke something over there and didn't find out until much later. Powerflex has many thousands of tests, and they've paid for themselves many times over. We found that the sweet spot for writing really good tests is alongside writing the documentation. Not everyone gets round to doing that...
Those are integration or end-to-end tests, which are also important, but they're not really unit tests.
I appreciate that an end-to-end (aka integration) test suite (which is #11, I guess) verifies everything below is working, but what it doesn't tell you is what's broken if it stops working. Having tests up and down the stack should help to quickly identify and address breakage.
But that really is only going to work if you test every level and maintain every test, and even then you often don't get enough information.
In the absence of information, I find creating tests to be a helpful way to find what information I don't have and need, or what information I don't have and don't need. If I can write some minimal code to use an idea before the idea is implemented, then I'm on my way to solving the problem. If I can't even write minimal code to use the idea, then I almost certainly need more information.
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I find it vital for creating new things, because it guides development based on implementing "what is the least code I can write to test if this (idea) works?"
That, in turn, leads to writing the minimal code needed to make the tests pass.
I find it very useful for upgrading/modifying/improving existing things, because it guides development of new features (per the above) whilst helping to ensure that I haven't broken anything.
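Not something raised in the thread, but as an aside on the cost of "50 tests per level" discussed above: with a data-driven test, 50 field checks need not mean 50 hand-written test methods. A sketch with invented field ids and a stubbed core (NUnit assumed):

```csharp
using NUnit.Framework;

// Invented stand-ins for the example only.
public interface IGameCore { int GetIntField(int fieldId); }

public class StubGameCore : IGameCore
{
    public int GetIntField(int fieldId) => fieldId * 10;   // deterministic fake values
}

public class GameModel
{
    private readonly IGameCore core;
    public GameModel(IGameCore core) { this.core = core; }
    public int Read(int fieldId) => core.GetIntField(fieldId);
}

public class FieldTests
{
    // One parameterised test body covers many fields: adding a field means
    // adding a [TestCase] row, not another hand-written test method.
    [TestCase(0, 0)]
    [TestCase(1, 10)]
    [TestCase(5, 50)]
    public void Field_IsReadThroughFromTheCore(int fieldId, int expected)
    {
        var model = new GameModel(new StubGameCore());
        Assert.That(model.Read(fieldId), Is.EqualTo(expected));
    }
}
```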
Quote from tobega on May 2, 2021, 5:03 am
Quote from dandl on May 1, 2021, 2:59 pm
Quote from Dave Voorhis on April 29, 2021, 8:02 am
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious places for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD was used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
This is what I would probably do. I say "probably" because having not actually seen the code, I don't know if there's something that would make me think, "oh man, I'm going to have to handle this one like this," where "this" is something completely different. Sometimes, that happens.
Anyway:
- Create test(s) of the C++ code to verify it works.
- Expose function(s)/class(es) from DLL.
- Create test(s) of the "C" API.
- Implement "C" API to make test(s) pass.
- Create test(s) of the C# interop layer.
- Implement C# interop layer to make test(s) pass.
- Create test(s) of actual data model.
- Implement data model code to make test(s) pass.
- Create test(s) of JSON-based RESTful interface.
- Implement code to make test(s) pass.
- Create test(s) in Unity scripting.
- Implement code to make test(s) pass.
- GOTO 1.
Just to clarify:
- the C++ code has no API, it cannot be tested. [Actually, it no longer builds.]
- You cannot expose classes through a "C" API.
- The big challenge is how to pass arrays and strings without using any malloc.
At each "Create test(s) ..." stage, you might create only one test before moving to the next step, or create several, or create many -- whatever seems appropriate to writing enough to allow moving to the next stage when it seems right, and repeat the whole cycle as needed until the development work is done.
The idea should not be to completely finish a step before moving to the next, but to implement one aspect of functionality before moving to the next, so it's an end-to-end iterative process that gets some minimal functionality working and verified at the Unity scripting stage as soon as possible. That way, if there's some lurking design/implementation/architecture showstopper, you hopefully catch it earlier rather than later.
That's the bit I don't understand. 50 data items means at least 50 tests at each level, which is a lot of work. And since the API is volatile, much of that will require rework as the design firms up. What I've seen is one major change break a whole slew of tests, and nobody willing to fix them. Maintaining one set is hard enough, but maintaining all those levels is hard to justify.
The result should be an extensive set of tests that you can run at any time during or after development and/or refactoring to verify that changes haven't broken anything, anywhere.
I agree that the killer app for a test suite is picking up regressions. You're adding new stuff over here and broke something over there and didn't find out until much later. Powerflex has many thousands of tests, and they've paid for themselves many times over. We found that the sweet spot for writing really good tests is alongside writing the documentation. Not everyone gets round to doing that...
I appreciate that an end-to-end (aka integration) test suite (which is #11, I guess) verifies everything below is working, but what it doesn't tell you is what's broken if it stops working. Having tests up and down the stack should help to quickly identify and address breakage.
But that really is only going to work if you test every level and maintain every test, and even then you often don't get enough information.
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way.
What you term the "time of maximum ignorance" is when you have no idea how the code works. I should hope, though, that you know what you want the code to do, otherwise how do you know when you're finished? That "what" is your test. You should at least know it when you see it. Figuring out how to write the test code for it is actually "designing" and you certainly don't want to know any details of how the code is going to work at this point.
If you write tests like that, they won't break for every change because the code still needs to achieve what the code needs to achieve.
If you write tests for how the code does something (after you know how it does it), that's when you get the fragile tests that can't tolerate any changes.
I have found that one of the ways to divide programmers into two groups is to ask what the code does.
One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.
The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and to maintain, and their tests, written after the fact, will verify that the code did exactly what it did in the way it did it, and they will all break for every single change.
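A concrete, made-up illustration of the contrast: the "what" test below states the result and survives any reimplementation, whereas a "how" test that asserted on the intermediate a/b shuffling would break the moment the loop was rewritten. NUnit syntax is assumed.

```csharp
using NUnit.Framework;

public static class Fibonacci
{
    // One possible implementation; the test doesn't care which.
    public static int Of(int n)
    {
        int a = 0, b = 1;
        for (int i = 0; i < n; i++)
        {
            int next = a + b;
            a = b;
            b = next;
        }
        return a;
    }
}

public class FibonacciTests
{
    // "What" test: it calculates Fibonacci numbers. Iterative, recursive or
    // closed-form implementations all pass it unchanged.
    [TestCase(0, 0)]
    [TestCase(1, 1)]
    [TestCase(10, 55)]
    public void Of_ReturnsTheNthFibonacciNumber(int n, int expected)
    {
        Assert.That(Fibonacci.Of(n), Is.EqualTo(expected));
    }
}
```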
Quote from Dave Voorhis on May 2, 2021, 4:14 pm
Quote from tobega on May 2, 2021, 5:03 am
Quote from dandl on May 1, 2021, 2:59 pm
Quote from Dave Voorhis on April 29, 2021, 8:02 am
Quote from dandl on April 29, 2021, 3:58 am
Whilst pure experimentation perhaps doesn't need tests, I sometimes create tests first anyway as a helpful way to clarify my thinking on a new class, API or language feature, by imagining that it exists and writing code to use it.
That often allows me to avoid writing a non-working or poor quality thing, by discovering its crapitude by writing code to use it, before it exists.
I like it, but if it doesn't work for you, then it doesn't work for you.
I've given this some thought, and I don't see how it can work. The current project is to take a 'big ball of mud' C++ MFC game and port it to Unity. The steps I followed are:
- convert the C++ into a DLL by removing all the UI, adding a few stubs, macros and typedefs to minimise code changes (other than deletions)
- construct a new "C" API with a virtual data model of around 50 queryable fields and 10 commands, using only 'blittable' types (no memory allocation)
- add a C# Interop layer on top of that
- construct an actual data model on top of C# interop calls
- serve the data model as Json via a REST-like interface
- construct an expanded data model (with local state) in Unity scripting.
The obvious places for a test suite is the C# data model and commands, but that can't exist until the lower levels have been built. If TDD was used, there are 5 possible places: C API, C# interop, C# data model, REST API, Unity model. That looks like a lot of extra code to write, none of which has any value in the final build because if the test suite passes, everything down below is OK. Potentially it's 5 tests per value per field instead of just one.
My approach is a lot of test programs, logging and visual checking during early development, which really comes into its own for debugging, and then a formal test suite built late on when the APIs are stable.
It's not that I'm against TDD, it's just that I never seem to have enough certainty around the detailed requirements to use it. What would you do here?
This is what I would probably do. I say "probably" because having not actually seen the code, I don't know if there's something that would make me think, "oh man, I'm going to have to handle this one like this," where "this" is something completely different. Sometimes, that happens.
Anyway:
- Create test(s) of the C++ code to verify it works.
- Expose function(s)/class(es) from DLL.
- Create test(s) of the "C" API.
- Implement "C" API to make test(s) pass.
- Create test(s) of the C# interop layer.
- Implement C# interop layer to make test(s) pass.
- Create test(s) of actual data model.
- Implement data model code to make test(s) pass.
- Create test(s) of JSON-based RESTful interface.
- Implement code to make test(s) pass.
- Create test(s) in Unity scripting.
- Implement code to make test(s) pass.
- GOTO 1.
Just to clarify:
- the C++ code has no API, it cannot be tested. [Actually, it no longer builds.]
- You cannot expose classes through a "C" API.
- The big challenge is how to pass arrays and strings without using any malloc.
At each "Create test(s) ..." stage, you might create only one test before moving to the next step, or create several, or create many -- whatever seems appropriate to writing enough to allow moving to the next stage when it seems right, and repeat the whole cycle as needed until the development work is done.
The idea should not be to completely finish a step before moving to the next, but to implement one aspect of functionality before moving to the next, so it's an end-to-end iterative process that gets some minimal functionality working and verified at the Unity scripting stage as soon as possible. That way, if there's some lurking design/implementation/architecture showstopper, you hopefully catch it earlier rather than later.
That's the bit I don't understand. 50 data items means at least 50 tests at each level, which is a lot of work. And since the API is volatile, much of that will require rework as the design firms up. What I've seen is one major change break a whole slew of tests, and nobody willing to fix them. Maintaining one set is hard enough, but maintaining all those levels is hard to justify.
The result should be an extensive set of tests that you can run at any time during or after development and/or refactoring to verify that changes haven't broken anything, anywhere.
I agree that the killer app for a test suite is picking up regressions. You're adding new stuff over here and broke something over there and didn't find out until much later. Powerflex has many thousands of tests, and they've paid for themselves many times over. We found that the sweet spot for writing really good tests is alongside writing the documentation. Not everyone gets round to doing that...
I appreciate that an end-to-end (aka integration) test suite (which is #11, I guess) verifies everything below is working, but what it doesn't tell you is what's broken if it stops working. Having tests up and down the stack should help to quickly identify and address breakage.
But that really is only going to work if you test every level and maintain every test, and even then you often don't get enough information.
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way. [...]
Excellent way to explain it.
Yes, tests are all about how I wish to (or if it's existing code, how I must) use some code, and verifying that it works once I can use it. They're not about how code is implemented, only whether it's correctly implemented or not. I don't care what's happening inside, as long as it's working from the outside.
If I know how I want to use a facility but I don't know how to implement it, I write some (failing) tests to use the facility that doesn't exist yet. Then I work on making the tests pass.
If I don't know how I want to use a facility then it doesn't matter whether I know how to implement it or not, because before I do anything else, I've got to do more thinking or research or ask questions to determine how it should be used.
Once I know how I want to use it, then I can write some (failing) tests to use the facility that doesn't exist yet, so I can work on making the tests pass.
Knowing how I want to use a facility -- defining a simple API -- is always the starting point.
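As a minimal sketch of what that looks like in practice -- assuming an NUnit-style framework and a hypothetical GameModel facade that doesn't exist yet -- the test is written purely in terms of how I want to use it:

    // None of these types exist yet; the test is the first use of the API I wish I had.
    using NUnit.Framework;

    [TestFixture]
    public class GameModelTests
    {
        [Test]
        public void NewGame_StartsWithZeroGold()
        {
            var model = GameModel.NewGame();
            Assert.That(model.PlayerGold, Is.EqualTo(0));
        }

        [Test]
        public void MoveNorth_MovesThePlayerOneCellUp()
        {
            var model = GameModel.NewGame();
            model.Execute(Command.MoveNorth);
            Assert.That(model.PlayerPosition.Y, Is.EqualTo(1));
        }
    }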
Quote from dandl on May 3, 2021, 2:32 pm
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way.
What you term the "time of maximum ignorance" is when you have no idea how the code works. I should hope, though, that you know what you want the code to do, otherwise how do you know when you're finished? That "what" is your test. You should at least know it when you see it. Figuring out how to write the test code for it is actually "designing" and you certainly don't want to know any details of how the code is going to work at this point.
If you write tests like that, they won't break for every change because the code still needs to achieve what the code needs to achieve.
You seem very sure that you can always reach a point at which you know what your code will do, both the algorithm and the API, before writing or running the code, and that your knowledge at this early stage will be such as to allow you to write tests of sufficient quality that they will remain valid thereafter. If that's your experience I'm not surprised you like TDD; I've never seen it done.
If you write tests for how the code does something (after you know how it does it), that's when you get the fragile tests that can't tolerate any changes.
My experience is the opposite. Tests written early in the life of software give a degree of confidence that it kind of works, but they don't explore the problem space well and they don't find edge conditions. Achieving high levels of code coverage and flushing out things like buffer overruns and race conditions doesn't just happen. You have to poke at the weaknesses.
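By "poke at the weaknesses" I mean tests like the following sketch -- written after the fact, against a hypothetical interop wrapper, with an NUnit-style framework -- that go straight for the boundaries:

    // Hypothetical wrapper and field ids -- the interesting cases are the
    // ones most likely to break a fixed-buffer "C" boundary.
    using System;
    using NUnit.Framework;

    [TestFixture]
    public class InteropEdgeCaseTests
    {
        [Test]
        public void EmptyString_RoundTripsAsEmpty()
        {
            Assert.That(GameApi.GetFieldName(FieldIds.EmptyName), Is.EqualTo(""));
        }

        [Test]
        public void NameLongerThanTheBuffer_IsRejected_NotSilentlyTruncated()
        {
            Assert.Throws<InvalidOperationException>(
                () => GameApi.GetFieldName(FieldIds.OverlongName));
        }

        [Test]
        public void UnknownFieldIndex_Throws_RatherThanReturningGarbage()
        {
            Assert.Throws<InvalidOperationException>(() => GameApi.GetFieldName(-1));
        }
    }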
I have found that one of the ways to divide programmers into two groups is to ask what the code does.
One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.
The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and to maintain, and their tests, written after the fact, will verify that the code did exactly what it did in the way it did it, and they will all break for every single change.
I would be disappointed by both. How would you write a test based on that response?
Quote from Dave Voorhis on May 3, 2021, 3:41 pm
Quote from dandl on May 3, 2021, 2:32 pm
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way.
What you term the "time of maximum ignorance" is when you have no idea how the code works. I should hope, though, that you know what you want the code to do, otherwise how do you know when you're finished? That "what" is your test. You should at least know it when you see it. Figuring out how to write the test code for it is actually "designing" and you certainly don't want to know any details of how the code is going to work at this point.
If you write tests like that, they won't break for every change because the code still needs to achieve what the code needs to achieve.
You seem very sure that you can always reach a point at which you know what your code will do, both the algorithm and the API, before writing or running the code, and that your knowledge at this early stage will be such as to allow you to write tests of sufficient quality that they will remain valid thereafter. If that's your experience I'm not surprised you like TDD; I've never seen it done.
I do it a lot, but note that the goal is to test the implementation via the API, not the specific algorithm or its internals (though components it uses might be unit tested too.) At the point I write the initial tests, I might have no idea what the implementation algorithm will ultimately be, but it shouldn't matter.
Occasionally I scrap a whole API because it turns out to be awkward or inappropriate, but that's pretty rare, and API flaws are more likely to be caught and fixed early in a TDD approach. In a non-TDD approach, you're more likely to discover the API is flawed only after you've implemented it, rather than before or during implementation whilst it's still relatively easy to change direction.
If my initial set of tests turns out to be weak or lacking comprehensiveness, I add more tests later.
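A minimal sketch of what "test via the API, not the algorithm" means here, assuming an NUnit-style framework and a hypothetical Pathfinder whose internals (A*, Dijkstra, whatever) are free to change without touching the test:

    // Hypothetical types -- the assertions only describe observable behaviour,
    // so swapping the search algorithm later shouldn't break them.
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class PathfinderTests
    {
        [Test]
        public void FindsAPathBetweenTwoReachableCells()
        {
            var map = GameMap.Parse("....\n" +
                                    ".##.\n" +
                                    "....");
            var path = Pathfinder.Find(map, start: (0, 0), goal: (3, 2));

            Assert.That(path.First(), Is.EqualTo((0, 0)));
            Assert.That(path.Last(), Is.EqualTo((3, 2)));
        }

        [Test]
        public void ReturnsEmpty_WhenTheGoalIsWalledOff()
        {
            var map = GameMap.Parse("..#.\n" +
                                    "..#.\n" +
                                    "..#.");
            Assert.That(Pathfinder.Find(map, start: (0, 0), goal: (3, 0)), Is.Empty);
        }
    }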
If you write tests for how the code does something (after you know how it does it), that's when you get the fragile tests that can't tolerate any changes.
My experience is the opposite. Tests written early in the life of software give a degree of confidence that it kind of works, but they don't explore the problem space well and they don't find edge conditions. Achieving high levels of code coverage and flushing out things like buffer overruns and race conditions doesn't just happen. You have to poke at the weaknesses.
Often, refactoring is done specifically to add tests of security/robustness/performance/resource use/etc. to a basic initial get-it-working-and-keep-it-working set.
Quote from tobega on May 3, 2021, 5:42 pm
Quote from dandl on May 3, 2021, 2:32 pm
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way.
What you term the "time of maximum ignorance" is when you have no idea how the code works. I should hope, though, that you know what you want the code to do, otherwise how do you know when you're finished? That "what" is your test. You should at least know it when you see it. Figuring out how to write the test code for it is actually "designing" and you certainly don't want to know any details of how the code is going to work at this point.
If you write tests like that, they won't break for every change because the code still needs to achieve what the code needs to achieve.
You seem very sure that you can always reach a point at which you know what your code will do, both the algorithm and the API, before writing or running the code, and that your knowledge at this early stage will be such as to allow you to write tests of sufficient quality that they will remain valid thereafter. If that's your experience I'm not surprised you like TDD; I've never seen it done.
I've heard that a monkey banging away at a keyboard will eventually reproduce the works of Shakespeare. For my part, I prefer not to code that way and only write code when I know what I intend for it to achieve. I may have to experiment to find out how to achieve it, but tests are for "what", not for "how".
If you write tests for how the code does something (after you know how it does it), that's when you get the fragile tests that can't tolerate any changes.
My experience is the opposite. Tests written early in the life of software give a degree of confidence that it kind of works, but they don't explore the problem space well and they don't find edge conditions. Achieving high levels of code coverage and flushing out things like buffer overruns and race conditions doesn't just happen. You have to poke at the weaknesses.
I have found that one of the ways to divide programmers into two groups is to ask what the code does.
One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.
The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and to maintain, and their tests, written after the fact, will verify that the code did exactly what it did in the way it did it, and they will all break for every single change.
I would be disappointed by both. How would you write a test based on that response?
Well, if you can't figure out how to write a test from the first response, how would you ever be able to understand anything I write here?
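For what it's worth, a minimal sketch of the behaviour-level test that the first answer invites, assuming an NUnit-style framework and a hypothetical Fibonacci.Of(n); note that it says nothing about a and b swapping around inside:

    // The test pins down *what* the code does (the fibonacci sequence),
    // not *how* it does it, so the implementation can be rewritten freely.
    using NUnit.Framework;

    public class FibonacciTests
    {
        [TestCase(1, 1)]
        [TestCase(2, 1)]
        [TestCase(3, 2)]
        [TestCase(7, 13)]
        [TestCase(10, 55)]
        public void NthFibonacciNumber(int n, int expected)
        {
            Assert.That(Fibonacci.Of(n), Is.EqualTo(expected));
        }
    }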
Quote from tobega on May 3, 2021, 6:59 pm
Quote from dandl on May 3, 2021, 2:32 pm
But I'm not sure this is the same debate: TDD means you wrote tests at the time of maximum ignorance, but the tests you rely on are the ones written in full knowledge of what you did and how it might break, to pick up edge cases and assumptions.
I'm not against the concept, I just find relatively few situations where it seems to be the right approach.
I'll try to explain it another way.
What you term the "time of maximum ignorance" is when you have no idea how the code works. I should hope, though, that you know what you want the code to do, otherwise how do you know when you're finished? That "what" is your test. You should at least know it when you see it. Figuring out how to write the test code for it is actually "designing" and you certainly don't want to know any details of how the code is going to work at this point.
If you write tests like that, they won't break for every change because the code still needs to achieve what the code needs to achieve.
You seem very sure that you can always reach a point at which you know what your code will do, both the algorithm and the API, before writing or running the code, and that your knowledge at this early stage will be such as to allow you to write tests of sufficient quality that they will remain valid thereafter. If that's your experience I'm not surprised you like TDD; I've never seen it done.
If you write tests for how the code does something (after you know how it does it), that's when you get the fragile tests that can't tolerate any changes.
My experience is the opposite. Tests written early in the life of software give a degree of confidence that it kind of works, but they don't explore the problem space well and they don't find edge conditions. Achieving high levels of code coverage and flushing out things like buffer overruns and race conditions doesn't just happen. You have to poke at the weaknesses.
I have found that one of the ways to divide programmers into two groups is to ask what the code does.
One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.
The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and to maintain, and their tests, written after the fact, will verify that the code did exactly what it did in the way it did it, and they will all break for every single change.
I would be disappointed by both. How would you write a test based on that response?
I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if that would help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/