Comments on the literate programmer: Tests are facts. Code is theory.

antifuchs (2010-11-18):

Hm, you cite science, so I'll disagree on the "tests are facts" point. Tests are experiments. The (huge, IMHO) difference from facts is that they can have bugs themselves.

Anonymous (2010-10-25):

This would be all well and good if science really did progress according to Popper's principles, but it's well accepted in the scientific community that this is not the case: Kuhn's Structure of Scientific Revolutions presents a very different model, one which should also resonate with any experienced developer.

Drawing a direct analogy, we can consider a set of requirements to be a paradigm embodying a world view for the domain under investigation, programs meeting the requirements to be theories within that paradigm, and the processes of coding, testing, debugging and refactoring to be the normal science which sustains the paradigm.

A significant change of requirements represents a revolutionary step which makes the previous paradigm largely irrelevant.

It's also important to bear in mind that, contrary to your statement above, normal science is very much about adding special cases to cope with new information: that is in fact one of the defining traits of normal science.
The prevailing paradigm will have many anomalies, and the work of normal science seeks to explain these in its own terms, no matter how convoluted such explanations may be.

When a revolution in scientific understanding does occur, it is partly because the accumulated special cases have come to outweigh the tolerance of the prevailing paradigm, creating an opportunity for alternative world views which under other circumstances - regardless of how close they come to accurate domain knowledge - would offer too little demonstrable benefit to justify the phase transition.

It is also important to bear in mind that a test is neither an experiment nor a fact. A test is a measurement. When performing experiments in the sciences it is usual to take many measurements and to consider them in aggregate as a means of discovering facts. In this sense the execution of a test suite represents an experiment, as does each execution of a program.

Tests are unfortunately bound by all the same problems that bedevil measurement in any other discipline. Is the right thing being tested? How can we validate that it's the right thing? How do we verify that the test is correctly implemented? Is the test even necessary or informative?

To these considerations a suite of tests adds the complexities of Zeno's paradox, not to mention the fundamental limitations imposed by Gödel.

None of this is to dismiss testing as a useful tool in developing software, or to rag on any particular software development methodology, just to point out that if we really aspire to writing code the way that scientists discover laws of nature, we first have to speak the same language as scientists...

ctford (2010-10-24):

Good point.
Unit testing gets around this somewhat by making individual tests as specific as possible.

But many times I've encountered system tests failing and not been able to tell whether it's the tools, the tests or the code itself that's to blame.

bsfbdfbdfbdf (2010-10-17):

Great Read.

One thought.

It's disheartening to think that the test "tools" themselves are also complex pieces of software that require considerable attention and tuning. To apply the "tests are facts" paradigm, your tools must be as dependable as gravity, the photoelectric effect etc. I haven't found this to be the case.

In the physical sciences your tools are calibrated to dependable standards that are relatively fixed and unchanging. How can we do this with a complex set of software test tools that may also be "theories"?

philip.hartlieb@gmail.com
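The worry running through this thread - that a test is a fallible measurement rather than a fact, and can itself contain bugs - can be made concrete with a minimal sketch. All names here are hypothetical, invented for illustration:

```python
def mean(xs):
    # Hypothetical buggy implementation: off-by-one in the denominator.
    return sum(xs) / (len(xs) - 1)

def buggy_test():
    # The expected value was derived from the same wrong reasoning as the
    # code under test, so this "experiment" passes and tells us nothing:
    # a miscalibrated instrument agreeing with itself.
    return mean([2, 4]) == 6

def calibrated_test():
    # The expected value comes from an independent source (arithmetic by
    # hand: (2 + 4) / 2 == 3), so a failure here points at the code.
    return mean([2, 4]) == 3

# The buggy test passes; the independently calibrated one fails,
# exposing the bug in mean().
assert buggy_test() is True
assert calibrated_test() is False
```

This is one way of reading the "calibration" question in the last comment: a test only functions as a dependable instrument when its expected values are derived independently of the code it measures.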