The Forum for Discussion about The Third Manifesto and Related Matters


Life after D with Safe Java


I have found that one of the ways to divide programmers into two groups is to ask what the code does.

One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.

The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and maintain, and their tests, written after the fact, will verify that the code did exactly what it did, in the way it did it, and they will all break with every single change.

I would be disappointed by both. How would you write a test based on that response?

Well, if you can't figure out how to write a test from the first response how would you ever be able to understand anything I write here?

How exactly would you write a test when you don't know the signature or anything about the requirements? Is it a function? If so what is the signature? Does it return a single number or a sequence? One Fib number or many? 21 is a Fib number, so does a function that always returns 21 satisfy your requirement? Are these just random Fib numbers as test data? Does performance matter? And so on.
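For what it's worth, here is roughly what pinning those questions down might look like, as a hedged sketch in Python. The signature fib(n) -> int, the zero-based indexing, and the function body are all my assumptions; the post deliberately leaves them open.

```python
# Hypothetical: assume the requirement settled on is "fib(n) returns
# the n-th Fibonacci number", with fib(0) == 0 and fib(1) == 1.
# Every name here is invented for illustration.

def fib(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A constant function that always returns 21 passes a single-point
# check, which is exactly the "does a function that always returns 21
# satisfy your requirement?" objection:
assert fib(8) == 21
# ...so the test has to pin down a range of the sequence:
assert [fib(n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```

The point being that until those questions are answered, even this trivial test can't be written.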

I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if it will help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/

Much better. I read this article and the one on TDD it links to. I agree with just about everything in your "Summary of the Process", assuming you're instructing junior programmers or those who are finding it hard to get stuff to work. If they haven't tried TDD or struggled with it, all good advice.

But it's pitched too low to answer my question. It's about taking tiny baby steps and making sure you don't stuff up. It says how much it helps you, but it doesn't say how TDD is actually better or faster or easier once you get past baby steps and into walking and running. So far, I think all those extra low level tests just slow you down and I'd like to be convinced otherwise. Also, the examples you give are tightly bound to internals and would seem to be vulnerable to any changes in those internals. Since I always refactor and rename and even rewrite code, that could be a real problem.

As it happens I know this problem well. I've written 4 separate implementations of a Sudoku solver (C#, C# with Linq, Haskell, Andl). I spent a lot of time choosing a data structure and algorithm, but the code went together quickly. Rightly or wrongly it seems to me a pure 'guess and backtrack' approach would be too slow, and I wanted to check for unique solutions. So I wrote a 4-rule heuristic pass first (which solves most puzzles on its own) and a simple recursive backtracker for those that get stuck. I wrote tests to make sure each rule was applied correctly. The various solvers run 100-300 lines of code, so really just a couple of bites.

[Incidentally, the Andl one is the highest level and the shortest, using RA and while. That surprised me. It was also the hardest to debug, which didn't.]

 

Andl - A New Database Language - andl.org
Quote from dandl on May 4, 2021, 1:43 pm

I have found that one of the ways to divide programmers into two groups is to ask what the code does.

One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.

The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and maintain, and their tests, written after the fact, will verify that the code did exactly what it did, in the way it did it, and they will all break with every single change.

I would be disappointed by both. How would you write a test based on that response?

Well, if you can't figure out how to write a test from the first response how would you ever be able to understand anything I write here?

How exactly would you write a test when you don't know the signature or anything about the requirements? Is it a function? If so what is the signature? Does it return a single number or a sequence? One Fib number or many? 21 is a Fib number, so does a function that always returns 21 satisfy your requirement? Are these just random Fib numbers as test data? Does performance matter? And so on.

If you don't know anything about the requirements you shouldn't be writing any code at all. As for the other questions, that's exactly why you try to write a test so that you get an API that works well. If you find there are options you don't have the mandate to decide, then you've learned that you don't know enough about the requirements and you need to go back and check before you write code.

How many tests do you need? That's something you learn to reason about. Can you change the code to be wrong without any test failing? Then you probably need one more test. The rule in TDD is to not write any code before you have a failing test and not to write more code than is needed to pass the test. Important to try to take baby steps as well, you start with easy cases and progress slowly and steadily. Every time you've passed the previous test you think what else your code needs, what edge cases you might not have covered, what test can break your code.
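As an illustration only (the function, the test names, and the order of steps are my own invention, not part of the post), the red-green loop described above might look like this in Python:

```python
# Illustrative TDD sequence: each test below was written first and
# seen to fail, then just enough code was added to pass it.

def fib(n: int) -> int:
    if n < 2:                        # step 1: just enough for test_base_cases
        return n
    return fib(n - 1) + fib(n - 2)   # step 2: added once test_small_values failed

def test_base_cases():
    assert fib(0) == 0
    assert fib(1) == 1

def test_small_values():
    assert fib(5) == 5
    assert fib(10) == 55

test_base_cases()
test_small_values()
```

The "can I make the code wrong without any test failing?" check is what drives adding test_small_values: `return n` alone passes the base cases.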

Performance tends to be an orthogonal question. It normally shouldn't affect your API.

I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if it will help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/

Much better. I read this article and the one on TDD it links to. I agree with just about everything in your "Summary of the Process", assuming you're instructing junior programmers or those who are finding it hard to get stuff to work. If they haven't tried TDD or struggled with it, all good advice.

But it's pitched too low to answer my question. It's about taking tiny baby steps and making sure you don't stuff up. It says how much it helps you, but it doesn't say how TDD is actually better or faster or easier once you get past baby steps and into walking and running. So far, I think all those extra low level tests just slow you down and I'd like to be convinced otherwise. Also, the examples you give are tightly bound to internals and would seem to be vulnerable to any changes in those internals. Since I always refactor and rename and even rewrite code, that could be a real problem.

The baby steps are exactly what you do all the time. Much faster than running because you get a warning as soon as you muck up and can fix it right away instead of spending time debugging later. And even when something obscure happens, the tests help you know a lot about what isn't wrong. The process itself helps you improve the API and drive out edge cases.

I suppose that if you decide the "placeDigit" function is the wrong approach altogether you would have to discard the tests, but that's rare. Renaming is just automated in the IDE so doesn't affect anything. Refactoring generally doesn't affect the API you're testing, you still need the code to do the same thing. You might come up with a slight signature change, but that's also pretty much automated.

As it happens I know this problem well. I've written 4 separate implementations of a Sudoku solver (C#, C# with Linq, Haskell, Andl). I spent a lot of time choosing a data structure and algorithm, but the code went together quickly. Rightly or wrongly it seems to me a pure 'guess and backtrack' approach would be too slow, and I wanted to check for unique solutions. So I wrote a 4-rule heuristic pass first (which solves most puzzles on its own) and a simple recursive backtracker for those that get stuck. I wrote tests to make sure each rule was applied correctly. The various solvers run 100-300 lines of code, so really just a couple of bites.

[Incidentally, the Andl one is the highest level and the shortest, using RA and while. That surprised me. It was also the hardest to debug, which didn't.]

 

I can guarantee that your heuristic pass is premature optimization (and I'm almost certain it runs slower than a well-made guess and backtrack), but I'm sure it was pretty interesting to code.

My Dart solution runs to 79 lines and the tests are 149. My Tailspin version is 145 lines tests+code, I originally wrote the article in Tailspin but redid the exercise in Dart to make it more palatable to the masses.

I'm a little curious about how hard it would be to debug the Andl version. Is it because there is too much power in each step? What if you had written the baby-step-like tests one at a time? Solve an already solved sudoku. Solve one with one position open. Solve one with the options in the same row, and so on. I should maybe try to redo it with relational values in Tailspin to see if that gives any advantage or extra complications?

Quote from tobega on May 4, 2021, 3:13 pm
Quote from dandl on May 4, 2021, 1:43 pm

I have found that one of the ways to divide programmers into two groups is to ask what the code does.

One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.

The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and maintain, and their tests, written after the fact, will verify that the code did exactly what it did, in the way it did it, and they will all break with every single change.

I would be disappointed by both. How would you write a test based on that response?

Well, if you can't figure out how to write a test from the first response how would you ever be able to understand anything I write here?

How exactly would you write a test when you don't know the signature or anything about the requirements? Is it a function? If so what is the signature? Does it return a single number or a sequence? One Fib number or many? 21 is a Fib number, so does a function that always returns 21 satisfy your requirement? Are these just random Fib numbers as test data? Does performance matter? And so on.

If you don't know anything about the requirements you shouldn't be writing any code at all. As for the other questions, that's exactly why you try to write a test so that you get an API that works well. If you find there are options you don't have the mandate to decide, then you've learned that you don't know enough about the requirements and you need to go back and check before you write code.

How is that relevant? I was giving reasons why the answer was poor, you're giving me advice on programming.

How many tests do you need? That's something you learn to reason about. Can you change the code to be wrong without any test failing? Then you probably need one more test. The rule in TDD is to not write any code before you have a failing test and not to write more code than is needed to pass the test. Important to try to take baby steps as well, you start with easy cases and progress slowly and steadily. Every time you've passed the previous test you think what else your code needs, what edge cases you might not have covered, what test can break your code.

That's circular. If you're going with TDD and baby steps, you need enough tests so you get to write the code you always knew you had to. If you can write the code without using TDD, then you need enough tests to cover the spec, all the assumptions you were forced to make because of the operating environment, all the possible extra paths triggered by exceptions/errors/etc, and IMO a selection of others that just might pick up stuff you missed.

Performance tends to be an orthogonal question. It normally shouldn't affect your API.

If performance is part of the spec it must be tested.

I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if it will help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/

Much better. I read this article and the one on TDD it links to. I agree with just about everything in your "Summary of the Process", assuming you're instructing junior programmers or those who are finding it hard to get stuff to work. If they haven't tried TDD or struggled with it, all good advice.

But it's pitched too low to answer my question. It's about taking tiny baby steps and making sure you don't stuff up. It says how much it helps you, but it doesn't say how TDD is actually better or faster or easier once you get past baby steps and into walking and running. So far, I think all those extra low level tests just slow you down and I'd like to be convinced otherwise. Also, the examples you give are tightly bound to internals and would seem to be vulnerable to any changes in those internals. Since I always refactor and rename and even rewrite code, that could be a real problem.

The baby steps are exactly what you do all the time. Much faster than running because you get a warning as soon as you muck up and can fix it right away instead of spending time debugging later. And even when something obscure happens, the tests help you know a lot about what isn't wrong. The process itself helps you improve the API and drive out edge cases.

Not in my experience. The compiler finds the easy ones, and assertions and tracing print-out find most of the rest. Sometimes I use the debugger to verify code flow or the stack trace for an assertion, but it's a poor way to get rid of bugs. Best not to put them in.

 

I suppose that if you decide the "placeDigit" function is the wrong approach altogether you would have to discard the tests, but that's rare. Renaming is just automated in the IDE so doesn't affect anything. Refactoring generally doesn't affect the API you're testing, you still need the code to do the same thing. You might come up with a slight signature change, but that's also pretty much automated.

This is the code:

group('internal solver', () {
  ...
  test('last digit gets placed', () {
    expect(placeDigit([OpenPosition(Point(0, 0), ['5'])]),
        (result) => result[0][0] == '5');
  });
});

There are internal decisions visible, such as Point(), '5' and result[][], that will cause the test to fail if you make different choices.
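One way to reduce that coupling, sketched here in Python rather than Dart (solve(), the 4x4 grid, and the string representation are all hypothetical, not from the article), is to test the solver's observable output instead of its internal types:

```python
# Hypothetical sketch of a less internals-bound test: feed the whole
# solver a puzzle with one open cell and check only the finished grid,
# so renaming Point/OpenPosition or changing the cell representation
# does not break the test.

def solve(grid):
    # Trivial stand-in solver for illustration: fill the single '.'
    # so each row contains the digits 1..4 exactly once (4x4 grid
    # for brevity; not a real Sudoku solver).
    filled = []
    for row in grid:
        if '.' in row:
            missing = ({'1', '2', '3', '4'} - set(row)).pop()
            row = row.replace('.', missing)
        filled.append(row)
    return filled

puzzle   = ["123.", "3412", "2143", "4321"]
expected = ["1234", "3412", "2143", "4321"]
assert solve(puzzle) == expected
```

The test survives any internal refactoring because it only names the public entry point and the grid format.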

As it happens I know this problem well. I've written 4 separate implementations of a Sudoku solver (C#, C# with Linq, Haskell, Andl). I spent a lot of time choosing a data structure and algorithm, but the code went together quickly. Rightly or wrongly it seems to me a pure 'guess and backtrack' approach would be too slow, and I wanted to check for unique solutions. So I wrote a 4-rule heuristic pass first (which solves most puzzles on its own) and a simple recursive backtracker for those that get stuck. I wrote tests to make sure each rule was applied correctly. The various solvers run 100-300 lines of code, so really just a couple of bites.

[Incidentally, the Andl one is the highest level and the shortest, using RA and while. That surprised me. It was also the hardest to debug, which didn't.]

 

I can guarantee that your heuristic pass is premature optimization (and I'm almost certain it runs slower than a well-made guess and backtrack), but I'm sure it was pretty interesting to code.

Not true. It's not 'heuristic'; it's the application of rules derived directly from the game. It's very fast, and (so far) I don't know any way to do it faster.

  • For each of the 81 locations, keep track of a Known (the digit, if known) and Possibles (a set of digits).
  • Rule 1: if a location is known, its digit is not a possible for any row/col/box containing that location.
  • Rule 2: if only one digit can go in a location (a single Possible), it goes there.
  • Rule 3: if a digit can only go in one location (within any row/col/box), it goes there.

If that doesn't solve it, then use backtracking:

  • Using the smallest set of Possibles, set each digit in turn as known, apply the rules, then unset it. This will find duplicate solutions if there are any.
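To make the rules concrete, here is a sketch in Python of that pass (the post's own implementations are in C#, Haskell and Andl; this translation, the cell representation, and the omission of the duplicate-solution check are all mine):

```python
# Sketch of the rules above. Each cell is a digit string if Known,
# else a set of Possibles. UNITS lists the 27 row/col/box groups.

UNITS = ([[r * 9 + c for c in range(9)] for r in range(9)] +          # rows
         [[r * 9 + c for r in range(9)] for c in range(9)] +          # cols
         [[(br + r) * 9 + bc + c for r in range(3) for c in range(3)]
          for br in (0, 3, 6) for bc in (0, 3, 6)])                   # boxes

def apply_rules(cells):
    """Apply rules 1-3 until nothing changes; False on contradiction."""
    changed = True
    while changed:
        changed = False
        for unit in UNITS:
            known = {cells[i] for i in unit if isinstance(cells[i], str)}
            for i in unit:
                if isinstance(cells[i], set):
                    poss = cells[i] - known                 # rule 1
                    if not poss:
                        return False                        # dead end
                    if len(poss) == 1:                      # rule 2
                        cells[i] = poss.pop()
                        changed = True
                    elif poss != cells[i]:
                        cells[i] = poss
                        changed = True
            for d in "123456789":                           # rule 3
                if any(cells[i] == d for i in unit):
                    continue                                # already placed
                spots = [i for i in unit
                         if isinstance(cells[i], set) and d in cells[i]]
                if len(spots) == 1:
                    cells[spots[0]] = d
                    changed = True
    return True

def search(cells):
    """Backtrack on the cell with the smallest set of Possibles."""
    if not apply_rules(cells):
        return None
    open_cells = [i for i, c in enumerate(cells) if isinstance(c, set)]
    if not open_cells:
        return "".join(cells)
    i = min(open_cells, key=lambda j: len(cells[j]))
    for d in sorted(cells[i]):
        trial = [c.copy() if isinstance(c, set) else c for c in cells]
        trial[i] = d
        solved = search(trial)
        if solved:
            return solved
    return None

def solve(puzzle):
    """puzzle: 81-char string, '.' for blank cells."""
    return search([ch if ch != '.' else set("123456789") for ch in puzzle])
```

The described solver also keeps searching after the first hit so it can detect non-unique solutions; that part is left out of this sketch.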

It just isn't that hard when you understand the problem well enough. Part of the reason I dislike TDD is that I know I won't start with that understanding, but it will come. I would rather write it badly (to gain understanding) and then rewrite the whole thing (to do it right) than pin my hopes on doing it right from the beginning.

My Dart solution runs to 79 lines and the tests are 149. My Tailspin version is 145 lines tests+code, I originally wrote the article in Tailspin but redid the exercise in Dart to make it more palatable to the masses.

I'm a little curious about how hard it would be to debug the Andl version. Is it because there is too much power in each step? What if you had written the baby-step-like tests one at a time? Solve an already solved sudoku. Solve one with one position open. Solve one with the options in the same row, and so on. I should maybe try to redo it with relational values in Tailspin to see if that gives any advantage or extra complications?

Andl is a toy language so it has no debugger and no testing framework, but mostly it's because a single RA expression does so much. The code for rule 1 is one line for Knowns and one line to remove the digit from all the Possibles. Rule 2 is one line; rule 3 is three lines (row/col/box).

TDD might work here, but there is no API to expose any of these, so any testing would be right in the code and for that I would tend to use assertions.

Actually I realise that's probably one of the reasons I don't use TDD: I tend to use assertions and a simple test driver to achieve the same result.
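For comparison, the assertion-plus-driver style mentioned here can be as small as this (a generic Python sketch; the function and cases are placeholders, nothing from the Andl code):

```python
# Minimal test driver in the assert-and-driver style: a table of
# (input, expected) pairs run through the function under test.

def double(x):
    return 2 * x   # placeholder for the code under test

CASES = [(0, 0), (3, 6), (-2, -4)]

def run_cases():
    for arg, want in CASES:
        got = double(arg)
        assert got == want, f"double({arg}) = {got}, expected {want}"
    return len(CASES)

assert run_cases() == 3
```

No framework, but the same property: a change that breaks the code makes the driver fail loudly.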

 

Andl - A New Database Language - andl.org
Quote from dandl on May 5, 2021, 6:31 am
Quote from tobega on May 4, 2021, 3:13 pm
Quote from dandl on May 4, 2021, 1:43 pm

I have found that one of the ways to divide programmers into two groups is to ask what the code does.

One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.

The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and maintain, and their tests, written after the fact, will verify that the code did exactly what it did, in the way it did it, and they will all break with every single change.

I would be disappointed by both. How would you write a test based on that response?

Well, if you can't figure out how to write a test from the first response how would you ever be able to understand anything I write here?

How exactly would you write a test when you don't know the signature or anything about the requirements? Is it a function? If so what is the signature? Does it return a single number or a sequence? One Fib number or many? 21 is a Fib number, so does a function that always returns 21 satisfy your requirement? Are these just random Fib numbers as test data? Does performance matter? And so on.

If you don't know anything about the requirements you shouldn't be writing any code at all. As for the other questions, that's exactly why you try to write a test so that you get an API that works well. If you find there are options you don't have the mandate to decide, then you've learned that you don't know enough about the requirements and you need to go back and check before you write code.

How is that relevant? I was giving reasons why the answer was poor, you're giving me advice on programming.

How many tests do you need? That's something you learn to reason about. Can you change the code to be wrong without any test failing? Then you probably need one more test. The rule in TDD is to not write any code before you have a failing test and not to write more code than is needed to pass the test. Important to try to take baby steps as well, you start with easy cases and progress slowly and steadily. Every time you've passed the previous test you think what else your code needs, what edge cases you might not have covered, what test can break your code.

That's circular. If you're going with TDD and baby steps, you need enough tests so you get to write the code you always knew you had to. If you can write the code without using TDD, then you need enough tests to cover the spec, all the assumptions you were forced to make because of the operating environment, all the possible extra paths triggered by exceptions/errors/etc, and IMO a selection of others that just might pick up stuff you missed.

Performance tends to be an orthogonal question. It normally shouldn't affect your API.

If performance is part of the spec it must be tested.

I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if it will help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/

Much better. I read this article and the one on TDD it links to. I agree with just about everything in your "Summary of the Process", assuming you're instructing junior programmers or those who are finding it hard to get stuff to work. If they haven't tried TDD or struggled with it, all good advice.

But it's pitched too low to answer my question. It's about taking tiny baby steps and making sure you don't stuff up. It says how much it helps you, but it doesn't say how TDD is actually better or faster or easier once you get past baby steps and into walking and running. So far, I think all those extra low level tests just slow you down and I'd like to be convinced otherwise. Also, the examples you give are tightly bound to internals and would seem to be vulnerable to any changes in those internals. Since I always refactor and rename and even rewrite code, that could be a real problem.

The baby steps are exactly what you do all the time. Much faster than running because you get a warning as soon as you muck up and can fix it right away instead of spending time debugging later. And even when something obscure happens, the tests help you know a lot about what isn't wrong. The process itself helps you improve the API and drive out edge cases.

Not in my experience. The compiler finds the easy ones, and assertions and tracing print-out find most of the rest. Sometimes I use the debugger to verify code flow or the stack trace for an assertion, but it's a poor way to get rid of bugs. Best not to put them in.

I suppose that if you decide the "placeDigit" function is the wrong approach altogether you would have to discard the tests, but that's rare. Renaming is just automated in the IDE so doesn't affect anything. Refactoring generally doesn't affect the API you're testing, you still need the code to do the same thing. You might come up with a slight signature change, but that's also pretty much automated.

This is the code:

group('internal solver', () {
  ...
  test('last digit gets placed', () {
    expect(placeDigit([OpenPosition(Point(0, 0), ['5'])]),
        (result) => result[0][0] == '5');
  });
});

There are internal decisions visible, such as Point(), '5' and result[][], that will cause the test to fail if you make different choices.

As it happens I know this problem well. I've written 4 separate implementations of a Sudoku solver (C#, C# with Linq, Haskell, Andl). I spent a lot of time choosing a data structure and algorithm, but the code went together quickly. Rightly or wrongly it seems to me a pure 'guess and backtrack' approach would be too slow, and I wanted to check for unique solutions. So I wrote a 4-rule heuristic pass first (which solves most puzzles on its own) and a simple recursive backtracker for those that get stuck. I wrote tests to make sure each rule was applied correctly. The various solvers run 100-300 lines of code, so really just a couple of bites.

[Incidentally, the Andl one is the highest level and the shortest, using RA and while. That surprised me. It was also the hardest to debug, which didn't.]

I can guarantee that your heuristic pass is premature optimization (and I'm almost certain it runs slower than a well-made guess and backtrack), but I'm sure it was pretty interesting to code.

Not true. It's not 'heuristic'; it's the application of rules derived directly from the game. It's very fast, and (so far) I don't know any way to do it faster.

  • For each of the 81 locations, keep track of a Known (the digit, if known) and Possibles (a set of digits).
  • Rule 1: if a location is known, its digit is not a possible for any row/col/box containing that location.
  • Rule 2: if only one digit can go in a location (a single Possible), it goes there.
  • Rule 3: if a digit can only go in one location (within any row/col/box), it goes there.

If that doesn't solve it, then use backtracking:

  • Using the smallest set of Possibles, set each digit in turn as known, apply the rules, then unset it. This will find duplicate solutions if there are any.

It just isn't that hard when you understand the problem well enough. Part of the reason I dislike TDD is that I know I won't start with that understanding, but it will come. I would rather write it badly (to gain understanding) and then rewrite the whole thing (to do it right) than pin my hopes on doing it right from the beginning.

My Dart solution runs to 79 lines and the tests are 149. My Tailspin version is 145 lines tests+code, I originally wrote the article in Tailspin but redid the exercise in Dart to make it more palatable to the masses.

I'm a little curious about how hard it would be to debug the Andl version. Is it because there is too much power in each step? What if you had written the baby-step-like tests one at a time? Solve an already solved sudoku. Solve one with one position open. Solve one with the options in the same row, and so on. I should maybe try to redo it with relational values in Tailspin to see if that gives any advantage or extra complications?

Andl is a toy language so it has no debugger and no testing framework, but mostly it's because a single RA expression does so much. The code for rule 1 is one line for Knowns and one line to remove the digit from all the Possibles. Rule 2 is one line; rule 3 is three lines (row/col/box).

TDD might work here, but there is no API to expose any of these, so any testing would be right in the code and for that I would tend to use assertions.

Actually I realise that's probably one of the reasons I don't use TDD: I tend to use assertions and a simple test driver to achieve the same result.

In TDD, the initial tests are simply an in-code representation of basic technical requirements -- your own, or someone else's -- and automated means to verify that they're met.  The usual starting point is the API; once that's working, extend the test set to include security, reliability, resource use, time, whatever. If some of them are critical from the outset, then write those tests from the outset.

If you know the requirements, then TDD makes a lot of sense because it guides you to meeting all the requirements in a straightforward manner, and the ever-growing and ever-evolving test set ensures you haven't broken anything when you make changes. For large projects with a gaggle of requirements -- masses of business rules and back-end complexity -- it's virtually indispensable.

But for small, narrow-purpose, experimental projects -- like your Sudoku solver in Andl -- I can see how it might not fit. Your arguments against TDD remind me of those I sometimes see from data analytics folks.

But that makes sense -- rather like your Sudoku solver, most of their code is either conceptually or literally meeting one requirement by writing a pipeline to transform some source data into summary data. There aren't a large number of requirements, there isn't an API, the result is obviously right or wrong, the solution is often effectively a single expression, and often as not it will only be used once.

I'm the forum administrator and lead developer of Rel. Email me at dave@armchair.mb.ca with the Subject 'TTM Forum'. Download Rel from https://reldb.org
Quote from dandl on May 5, 2021, 6:31 am
Quote from tobega on May 4, 2021, 3:13 pm
Quote from dandl on May 4, 2021, 1:43 pm

I have found that one of the ways to divide programmers into two groups is to ask what the code does.

One group will be able to answer "it calculates fibonacci numbers" and they will be able to write nice readable code and write good tests up front.

The other group will answer "it takes two numbers, a and b, and every iteration it sets a to b and b to the sum of a and b". Keep them deep down in your system: their code will hopefully be efficient, but it will be a horror to use and maintain, and their tests, written after the fact, will verify that the code did exactly what it did, in the way it did it, and they will all break with every single change.

I would be disappointed by both. How would you write a test based on that response?

Well, if you can't figure out how to write a test from the first response how would you ever be able to understand anything I write here?

How exactly would you write a test when you don't know the signature or anything about the requirements? Is it a function? If so what is the signature? Does it return a single number or a sequence? One Fib number or many? 21 is a Fib number, so does a function that always returns 21 satisfy your requirement? Are these just random Fib numbers as test data? Does performance matter? And so on.

If you don't know anything about the requirements you shouldn't be writing any code at all. As for the other questions, that's exactly why you try to write a test so that you get an API that works well. If you find there are options you don't have the mandate to decide, then you've learned that you don't know enough about the requirements and you need to go back and check before you write code.

How is that relevant? I was giving reasons why the answer was poor, you're giving me advice on programming.

Oh, it looked like you were explaining why you were incapable of writing the test.

How many tests do you need? That's something you learn to reason about. Can you change the code to be wrong without any test failing? Then you probably need one more test. The rule in TDD is to not write any code before you have a failing test and not to write more code than is needed to pass the test. Important to try to take baby steps as well, you start with easy cases and progress slowly and steadily. Every time you've passed the previous test you think what else your code needs, what edge cases you might not have covered, what test can break your code.

That's circular. If you're going with TDD and baby steps, you need enough tests so you get to write the code you always knew you had to. If you can write the code without using TDD, then you need enough tests to cover the spec, all the assumptions you were forced to make because of the operating environment, all the possible extra paths triggered by exceptions/errors/etc, and IMO a selection of others that just might pick up stuff you missed.

The whole point of TDD is to not write the code you know you have to write until you've proven that you need it to get a correct answer or fulfil some other requirement.

Performance tends to be an orthogonal question. It normally shouldn't affect your API.

If performance is part of the spec it must be tested.

Yes, it's just that it generally has no effect on the tests that verify a correct result. You still need a correct result even when the code goes fast.

I just remembered that I wrote an article about how I went about creating a sudoku solver. I don't know if it will help, but here it is anyway: https://cygnigroup.com/creating-an-algorithm/

Much better. I read this article and the one on TDD it links to. I agree with just about everything in your "Summary of the Process", assuming you're instructing junior programmers or those who are finding it hard to get stuff to work. If they haven't tried TDD or struggled with it, all good advice.

But it's pitched too low to answer my question. It's about taking tiny baby steps and making sure you don't stuff up. It says how much it helps you, but it doesn't say how TDD is actually better or faster or easier once you get past baby steps and into walking and running. So far, I think all those extra low level tests just slow you down and I'd like to be convinced otherwise. Also, the examples you give are tightly bound to internals and would seem to be vulnerable to any changes in those internals. Since I always refactor and rename and even rewrite code , that could be a real problem.

The baby steps are exactly what you do all the time. Much faster than running because you get a warning as soon as you muck up and can fix it right away instead of spending time debugging later. And even when something obscure happens, the tests help you know a lot about what isn't wrong. The process itself helps you improve the API and drive out edge cases.

Not in my experience. The compiler finds the easy ones, and assertions and tracing print-out find most of the rest. Sometimes I use the debugger to verify code flow or the stack trace for an assertion, but it's a poor way to get rid of bugs. Best not to put them in.

Seems like you're all confused here: first explaining how you do it, then arguing that your method is inferior. I have never had to use a debugger on code that was TDD'd.

 

I suppose that if you decide the "placeDigit" function is the wrong approach altogether you would have to discard the tests, but that's rare. Renaming is just automated in the IDE so doesn't affect anything. Refactoring generally doesn't affect the API you're testing, you still need the code to do the same thing. You might come up with a slight signature change, but that's also pretty much automated.

This is the code:

group('internal solver', () {
  ...
  test('last digit gets placed', () {
    expect(placeDigit([OpenPosition(Point(0, 0), ['5'])]),
        (result) => result[0][0] == '5');
  });
});

There are internal decisions visible, such as Point(), '5' and result[][], that will cause the test to fail if you make different choices.

Yes, that is explained in the article as being a trade-off, choosing to expose some knowledge of the algorithm internals in order to make it easier to test. Not a great loss in the unlikely event that I might have to rewrite something that is easy to understand and easy to create.

As it happens I know this problem well. I've written 4 separate implementations of a Sudoku solver (C#, C# with Linq, Haskell, Andl). I spent a lot of time choosing a data structure and algorithm, but the code went together quickly. Rightly or wrongly, it seemed to me a pure 'guess and backtrack' approach would be too slow, and I wanted to check for unique solutions. So I wrote a 4-rule heuristic pass first (which solves most puzzles on its own) and a simple recursive backtracker for those that get stuck. I wrote tests to make sure each rule was applied correctly. The various solvers run to 100-300 lines of code, so really just a couple of bites.

[Incidentally, the Andl one is the highest level and the shortest, using RA and while. That surprised me. It was also the hardest to debug, which didn't.]

 

I can guarantee that your heuristic pass is premature optimization (and I'm almost certain it runs slower than a well-made guess and backtrack), but I'm sure it was pretty interesting to code.

Not true. It's not 'heuristic', it's the application of the rules derived directly from the game. It's very fast, and (so far) I don't know any way to do it faster.

LOL, you said it was heuristic. Seems to be very common that you say things that both are and aren't. I'm just pointing out that guess and backtrack is fast enough, and that given the way computers work it is probably also the fastest (provided it is well-crafted, which means avoiding combinatorial explosion by pruning early, most easily achieved by picking the spot with the fewest available options at each guess and propagating the constraints that the choice induces).

  • For each location (81), keep track of Knowns (digit if known) and Possibles (set of digits).
  • Rule 1: if a location is known, its digit is not a possible for any row/col/box containing that location.
  • Rule 2: if only one digit can go in a location (a single Possible), it goes there.
  • Rule 3: if a digit can go in only one location (within any row/col/box), it goes there.

If that doesn't solve it, then use backtracking:

  • using the smallest set of possibles, set each in turn as known, apply the rules, then unset it. This will find duplicates if there are any.
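For concreteness, here is a condensed sketch of the rules-plus-backtracking scheme above, in Python rather than any of the languages actually used in the thread. The data layout and the exact propagation loop are my own guesses at one workable shaping, not the original code:

```python
# Sketch of the three rules plus backtracking. Each of the 81 cells holds
# a set of candidate digits; a cell is "known" when its set has one member.

def units():
    rows = [[r * 9 + c for c in range(9)] for r in range(9)]
    cols = [[r * 9 + c for r in range(9)] for c in range(9)]
    boxes = [[(br * 3 + r) * 9 + bc * 3 + c
              for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return rows + cols + boxes

UNITS = units()

def solve(possibles):
    # Apply the rules until nothing changes.
    changed = True
    while changed:
        changed = False
        for unit in UNITS:
            for cell in unit:
                if len(possibles[cell]) == 1:
                    d = next(iter(possibles[cell]))
                    # Rule 1: a known digit is not a possible for its peers.
                    for other in unit:
                        if other != cell and d in possibles[other]:
                            possibles[other].discard(d)
                            if not possibles[other]:
                                return None  # contradiction: backtrack
                            changed = True
            # Rule 3: a digit that fits only one cell in the unit goes there.
            for d in '123456789':
                spots = [c for c in unit if d in possibles[c]]
                if len(spots) == 1 and len(possibles[spots[0]]) > 1:
                    possibles[spots[0]] = {d}
                    changed = True
    # Rule 2 falls out of the representation: one possible IS the known value.
    if all(len(p) == 1 for p in possibles):
        return possibles
    # Backtrack on the cell with the smallest set of possibles.
    cell = min((c for c in range(81) if len(possibles[c]) > 1),
               key=lambda c: len(possibles[c]))
    for d in sorted(possibles[cell]):
        trial = [set(p) for p in possibles]
        trial[cell] = {d}
        result = solve(trial)
        if result is not None:
            return result
    return None
```

Note that picking the cell with the fewest possibles for the guess is exactly the pruning strategy mentioned earlier in the thread.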

It just isn't that hard, when you understand the problem well enough. Part of the reason I dislike TDD is that I know I won't start with that understanding, but it will come. I would rather write it badly (to gain understanding) and then rewrite the whole thing (to do it right) than pin my hopes on doing it right from the beginning.

My Dart solution runs to 79 lines and the tests are 149. My Tailspin version is 145 lines, tests plus code. I originally wrote the article in Tailspin but redid the exercise in Dart to make it more palatable to the masses.

I'm a little curious about how hard it would be to debug the Andl version. Is it because there is too much power in each step? What if you had written the baby-step-like tests one at a time? Solve an already solved sudoku. Solve one with one position open. Solve one with the options in the same row, and so on. I should maybe try to redo it with relational values in Tailspin to see if that gives any advantage or extra complications?

Andl is a toy language so it has no debugger and no testing framework, but mostly it's because a single RA expression does so much. The code for rule 1 is one line for knowns, one line to remove it from all the possibles. Rule 2 is one line; rule 3 is three lines (row/col/box).

TDD might work here, but there is no API to expose any of these, so any testing would be right in the code and for that I would tend to use assertions.

Actually I realise that's probably one of the reasons I don't use TDD, I tend to use assertions and a simple test driver to achieve the same result.
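That "assertions plus a simple test driver" style might look like the following sketch. The function and the cases are invented for illustration, not from anyone's actual code:

```python
# Assertions do the checking; the driver just feeds in lots of inputs.

def normalize(scores):
    """Scale a list of non-negative scores so the maximum becomes 1.0."""
    assert scores, "precondition: at least one score"
    assert all(s >= 0 for s in scores), "precondition: scores non-negative"
    top = max(scores)
    assert top > 0, "precondition: at least one positive score"
    result = [s / top for s in scores]
    assert max(result) == 1.0, "postcondition: maximum is exactly 1.0"
    return result

# The "test driver": no per-case expected values, just many exercises
# of the asserted code path.
cases = [[1, 2, 3], [5.0], [0.5, 10, 2.5], list(range(1, 100))]
for case in cases:
    normalize(case)
```

The trade-off versus TDD-style unit tests is visible here: the driver never states what any particular output should be, only that the contracts held on every path exercised.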

 

Fair enough. You may still be doing a form of TDD if you've decided beforehand what to test. Then the discussion rather becomes one of whether to make those tests automatically repeatable in code or not. And possibly a discussion of what granularity to test at. In the end, do what works for you.

Hey guys, if you can manage to keep this thread going for 11 more months you've broken my record.

Author of SIRA_PRISE

Heuristic was your word. The rules are heuristic in the sense of "not guaranteed to be optimal, perfect, or rational, but nevertheless sufficient for reaching an immediate, short-term goal or approximation", but it really doesn't matter. As it happens, the first 2 rules are unavoidable: based on your description you must be applying them. So your claim is that it's faster to guess and backtrack (multiple times) than to apply rule 3 (once). I claim that is not so: rule 3 is always faster than even a single guess and backtrack.

I would be interested for you to run your program and print out how many times you guess and backtrack -- you might be surprised.

This was part of a program to generate new Sudokus with no duplicates and at a range of difficulties, so it had to find duplicates and run at maximum speed.

 

Andl - A New Database Language - andl.org

In TDD, the initial tests are simply an in-code representation of basic technical requirements -- your own, or someone else's -- and automated means to verify that they're met. The usual starting point is the API; once that's working, extend the test set to include security, reliability, resource use, time, whatever. If some of them are critical from the outset, then write those tests from the outset.

If you know the requirements, then TDD makes a lot of sense because it guides you to meeting all the requirements in a straightforward manner, and the ever-growing and ever-evolving test set ensures you haven't broken anything when you make changes. For large projects with a gaggle of requirements -- masses of business rules and back-end complexity -- it's virtually indispensable.

That makes a lot more sense. For a very long time we have used TDD on bug fixing. The rule: you must not fix a bug in released software until after you have a test that reproduces it. Then, while you understand the code, write some more tests to capture that understanding. The only regressions we ever had were UI, where it's really hard to write good tests.

Ditto adding new code to old: add more tests to confirm current behaviour before making changes. I wouldn't agree with baby steps, but the tests do drive the process.
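That "test before fix" rule can be sketched as follows. The bug and the function here are invented, purely to show the shape of the process:

```python
# Hypothetical regression fix. The old code used text.split(" "), which
# miscounted runs of whitespace; this is the fixed version.

def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    # split() with no argument collapses runs of whitespace,
    # which the buggy text.split(" ") did not.
    return len(text.split())

# The regression test, written FIRST, which failed against the old code:
assert word_count("two  spaces") == 2
# Extra tests captured while the code was understood:
assert word_count("") == 0
assert word_count("  leading and trailing  ") == 3
```

The failing test both proves the bug is understood and guards against the same regression later.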

But for small, narrow-purpose, experimental projects -- like your Sudoku solver in Andl -- I can see how it might not fit. Your arguments against TDD remind me of those I sometimes see from data analytics folks.

But that makes sense -- rather like your Sudoku solver, most of their code is either conceptually or literally meeting one requirement by writing a pipeline to transform some source data into summary data. There aren't a large number of requirements, there isn't an API, the result is obviously right or wrong, the solution is often effectively a single expression, and often as not it will only be used once.

Just so.

I should add that (rather than TDD), I am a big fan of design by contract. The idea is to explicitly document every assumption (pre-condition), every return value or state change (post-condition) and core features of any algorithm (often in the form of invariants), preferably in a form the compiler can check but if not, for testing at runtime.

When I've used this well, I've found I needed only a test driver (to feed in lots of test cases) but very few formal tests. DBC does all the heavy lifting. It also helps a lot with documentation. Eiffel had its issues (which I discussed at some length with Bertrand Meyer himself), but DBC has been largely overlooked and is IMO seriously underrated.

Please note: DBC can be computationally expensive. At the very least the compiler should allow it to be disabled in production builds, and the lack of such features in Java, C# and other languages is part of why it is not more widely used.
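As a minimal sketch of that style: Python's plain `assert` statements are stripped when the interpreter runs with `-O`, which gives roughly the "disable in production builds" behaviour described above. The function and its contracts are invented for illustration:

```python
# Design-by-contract sketch: pre-conditions document every assumption,
# the post-condition documents the guaranteed result. Running with
# `python -O` removes all of these checks.

def allocate(budget, shares):
    """Split a budget across fractional shares that sum to 1."""
    # Pre-conditions:
    assert budget >= 0, "pre: budget must be non-negative"
    assert shares and abs(sum(shares) - 1.0) < 1e-9, "pre: shares sum to 1"
    result = [budget * s for s in shares]
    # Post-condition:
    assert abs(sum(result) - budget) < 1e-9, "post: total equals budget"
    return result
```

A full DBC system would also check class invariants and inherit contracts, which bare assertions don't give you, but the documentation benefit is much the same.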
