What instruction manual did this come from? Monday, May 31 2010 

Fisher Custom Electra III Thursday, May 27 2010 

Deck – formal software development without the formality Wednesday, May 26 2010 

I had a wild idea about unit testing yesterday.

So I’m working on the design of Deck and thinking about ways to incorporate the advantages of formal software development without inheriting the overhead.  One of the more common issues I’d like to handle is the situation where a bug is introduced as the side effect of adding an (often unrelated) feature.

In formal software development this risk is mitigated by automated unit testing; these tests often originate from a Test-Driven Development* process.  When changes are made to the app (bug fixes, new features, etc.) the automated tests are run again, and if a change has produced an unexpected side effect, the affected test(s) will fail and indicate where further attention is needed.

While I agree with TDD in theory, in my experience the overhead of writing (and in particular, maintaining) tests exceeds the advantages in practice (most likely a tool limitation, not a flaw in the concept).  I wanted the “validation” that automated unit testing provides, but without putting the burden of creating and maintaining tests on the developer.  I believe I have a solution to this problem; here’s how it works:

After the first “version” of an application is considered complete, the application is fed to an “analyzer” which exercises every possible combination of inputs, captures the resulting outputs, and stores the results for later use.  When the application is later modified, the analyzer is run again (probably automatically, as part of the deployment process), first evaluating all possible code paths (to discover new ones) and then re-applying the tests recorded during the last run.  Finally, a report is generated noting the differences between the two versions as well as any tests which have failed.
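
As a rough illustration (this is not Deck itself, just a minimal sketch of that record-and-compare loop in Python), suppose the analyzer stores each input/output pair in a baseline file and diffs later runs against it.  The app function, the baseline.json path, and the tiny input range are all hypothetical stand-ins for the real analyzer’s exhaustive exploration:

```python
import json
from pathlib import Path

# Hypothetical application under test; a real analyzer would discover
# inputs by exploring the app's code paths rather than using a fixed range.
def app(x: int) -> int:
    return x * 2

BASELINE = Path("baseline.json")

def analyze(inputs):
    """Exercise the app over every input and record the outputs."""
    return {str(i): app(i) for i in inputs}

def compare(old: dict, new: dict):
    """Diff two runs: new inputs, removed ones, and changed outputs."""
    added = {k: new[k] for k in new.keys() - old.keys()}
    removed = {k: old[k] for k in old.keys() - new.keys()}
    changed = {k: (old[k], new[k])
               for k in old.keys() & new.keys() if old[k] != new[k]}
    return added, removed, changed

if __name__ == "__main__":
    current = analyze(range(10))  # stand-in for "every possible input"
    if BASELINE.exists():
        previous = json.loads(BASELINE.read_text())
        added, removed, changed = compare(previous, current)
        print("new:", added)
        print("removed:", removed)
        print("changed:", changed)
    else:
        # First "version": just store the results for later use.
        BASELINE.write_text(json.dumps(current))
```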

The developer then examines the report and acknowledges the differences that are by design, which updates the baseline record for the next run.  The developer then reviews the remaining test failure details and remedies any unexpected changes or defects in the application.

This process is repeated until there are no more differences.
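
Continuing the sketch above, the acknowledgement step might be as simple as folding the approved differences back into the baseline; the acknowledge function and the approved set are, again, illustrative names rather than anything Deck defines:

```python
def acknowledge(previous: dict, current: dict, approved: set) -> dict:
    """Fold by-design differences into the baseline for the next run.

    `approved` holds the inputs whose changed output the developer has
    confirmed is intentional; everything else keeps its old expectation,
    so the remaining failures stay visible on the next run.
    """
    updated = dict(previous)
    for key in approved:
        updated[key] = current[key]
    return updated
```

Each pass through this loop shrinks the report, until acknowledged changes and fixed defects account for every difference.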

The primary result is the ability to detect and remedy defects that are introduced by accident when adding new features to an application.  The secondary result is that a definitive reference of the changes made in each version is produced automatically (i.e., “release notes”), reflecting all changes, not just the ones documented by the developer.  This point bears repeating: not only intentional changes are documented but unexpected ones as well, both beneficial and detrimental.  This type of testing can also provide insight into how an application can be used (based on exploration of all possible inputs) outside of the use cases defined by the original design.  In turn, this may reveal potential bugs that would elude hand-written tests and traditional user acceptance testing, and may even suggest uses of the application unanticipated by the developer.

*This approach is not intended to replace Test-Driven Development.

Any guesses where this is heading? Monday, May 24 2010 

Photo-Op Sunday, May 23 2010 

No, I don’t mind the blank stairs… Tuesday, May 18 2010 

(new) Hammer Time Tuesday, May 18 2010 

The New Watch Friday, May 14 2010 

I’ve been looking for this switch for months… Thursday, May 13 2010 

Encouraging Monday, May 10 2010 
