Monday, January 21, 2008

Unit Tests for Old Code and Crazy Dog Ladies

So my brother knows this crazy dog lady who lives in a 100-plus-year-old farmhouse and has decided to augment her dog collection with 8 or so teenagers. Of course she needs more space to make this happen, so she brought a contractor out to ask about putting a bathroom into the shed to make it living space - because when I say crazy, I mean crazy.

The contractor, who no doubt looked at her with that mix of bemusement and confusion one gets when one wastes half a day scoping a monumentally stupid task, asked her, "Where do you think it's going to drain to?" She answered, "I don't care, a pipe out the back or something."

So here I am, sitting at my gig in front of a mountain of code. Some of it is good, though a lot of it is that overbuilt "there's lots of words, so it's extra double good" code all developers write out of ego.

But the problem, as it were, is a big push from management toward unit testing, and the old "code coverage" buzzword is getting tossed around. Once we hit 80% or higher code coverage, we will have crossed the "bug count" singularity, and our codebase will implode upon its own awesomeness and form a new supermassive black hole. I don't know.

I'm long familiar with unit testing from a functional design: creating the tests first and building the code around them.

But I'm sitting in front of code with epic method signatures, you know the type - "private void DoAllTheTasksInOneCall(string[] EveryParameterPossible, out int ReturnValueBecauseImAMoMo)". Of course, the definitions of the parameters, the return value, and what "AllTheTasks" means are nowhere to be found.

I need to pass or fail this thing, but I have no idea what it does. Like any public school teacher, though, it would behoove me to pass everyone, because a failure would make me look bad, not so much the original coder.

So, my question is: where do you expect me to put the drain? Yeah, I could stick a pipe out the back, but that just makes for a yard full of poop.

To the C# MSDN globetrotter all-star geeks: how do you approach unit testing legacy crap? In particular, how do you deal with private methods? I can think of a few approaches, each with pros and cons. We're on VStudio 2005 and TFS, and my coworkers have downed the Kool-Aid double-fisted: it is the most amazing IDE money can buy.

1) Run the built-in unit test wizard and let it codegen a private accessor. Pros: I get a private accessor, and I'm always thrilled to have MSDN put source code in my projects with a dire warning that I never, ever, ever, ever, ever dare touch it.

Cons: Well, if I refactor my project, my accessor is junk now. Why can't VStudio rebuild this accessor as a pre-build step in the unit test project? Who knows. All my objects will be wrapped in these accessors, so I'm testing the accessors - and not the code itself. If I want to mock objects with RhinoMocks, or its ilk, I have to mock the wrapper. Why should I be unit testing autogenned dog crap? Ultimately, I'm indirectly testing private methods, and that's never preferable.
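For the curious, what the wizard's generated accessor boils down to is plain reflection, and you can do it by hand in ten lines without the codegen. A sketch, with the method signature from above; the class name "LegacyBlob" and its stub body are made up for illustration:

```csharp
using System;
using System.Reflection;

// "LegacyBlob" is a made-up stand-in for the class under test; the
// private method signature is the one quoted earlier in the post.
public class LegacyBlob
{
    private void DoAllTheTasksInOneCall(
        string[] EveryParameterPossible, out int ReturnValueBecauseImAMoMo)
    {
        // Stub body for the sketch - the real one does AllTheTasks.
        ReturnValueBecauseImAMoMo = EveryParameterPossible.Length;
    }
}

public static class AccessorByHand
{
    public static void Main()
    {
        // Roughly what the generated accessor does under the hood:
        // reflect in past the access modifier.
        object target = Activator.CreateInstance(typeof(LegacyBlob), true);
        MethodInfo method = typeof(LegacyBlob).GetMethod(
            "DoAllTheTasksInOneCall",
            BindingFlags.Instance | BindingFlags.NonPublic);

        object[] args = new object[] { new string[] { "one", "two" }, 0 };
        method.Invoke(target, args);

        // Out parameters come back in the args array after Invoke.
        int result = (int)args[1];
        Console.WriteLine(result); // prints 2 with the stub body above
    }
}
```

Same downsides as the wizard, of course - rename the method and the magic string rots silently - but at least there's no autogenned file to not-touch.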

2) The [assembly: System.Runtime.CompilerServices.InternalsVisibleTo] attribute, which allows my test project to call internal classes directly. Cons: well, IntelliSense doesn't seem to work in the test project, but IntelliSense is pretty much a crutch for babies anyway - at least I'm creating and calling the classes directly. There are a few more mouse clicks and copy/pastes to sign the test project's binary and embed its PublicKey in the attribute, or else any joker could be instancing internal classes, and that will not do.
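For reference, the attribute goes in the production assembly's AssemblyInfo.cs and looks something like this. The test assembly name here is hypothetical, and the key is a placeholder - paste the real one, which "sn -Tp MyProject.Tests.dll" will print for a signed assembly:

```csharp
// In the production assembly's AssemblyInfo.cs.
// "MyProject.Tests" is a hypothetical test assembly name; replace the
// PublicKey placeholder with the full key from sn -Tp. If neither
// assembly is strong-named, the bare assembly name alone suffices.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo(
    "MyProject.Tests, PublicKey=<full-public-key-from-sn-Tp>")]
```

Note it wants the full public key, not the short public key token, which is a fun half hour the first time you mix them up.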

The problem with these approaches is that they don't really solve the real problem. Both assume I know what this object does in the first place.

My preferred option, the Holmes on Code approach:

3) Refactor, redesign and rewrite the code. Tear this all out, it simply has to go. We're going to knock down all these classes, come in here with a new MVP model, and simply will not quit until everything is encased in 8 inches or more of spray foam. We'll bring in the experts to properly size and install a new septic system. It will be expensive, it will take a lot of time, but it will be done right the first time.

This takes time. I'm talking about going back to the functional design (creating it if it never existed) and, in many cases, starting over. But how else do you do it? Just throw an accessor onto some void method, call it, and assume that if I didn't trap an exception, everything is hunky-dory?
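There is a middle step before the wrecking ball: what Michael Feathers calls a characterization test - pin down whatever the code does today, not what it should do, so the rewrite has something to diff against. A sketch in MSTest, assuming the method has been exposed to the test project (say, made internal plus InternalsVisibleTo); everything except the method signature from the post is hypothetical:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Characterization test: record current behavior before refactoring.
// "LegacyBlob" is a made-up stand-in for the class under test, and
// assumes DoAllTheTasksInOneCall is now reachable from the tests.
[TestClass]
public class LegacyBlobCharacterizationTests
{
    [TestMethod]
    public void DoAllTheTasksInOneCall_PinsCurrentBehavior()
    {
        LegacyBlob blob = new LegacyBlob();
        int result;
        blob.DoAllTheTasksInOneCall(new string[] { "whatever" }, out result);

        // Run this once, observe what actually comes back, then
        // hard-code that observation. The point is to detect change
        // during the rewrite, not to bless the behavior as correct.
        Assert.AreEqual(0, result); // replace 0 with the observed value
    }
}
```

It's not a real unit test - it's a tripwire - but it's the difference between refactoring and just rewriting blind.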

Of course, refactoring and rewriting carries the largest risk of introducing new bugs, but it leaves the project with better, more testable code, which can then truly become more robust. It's a hard sell, though, when QA finds issues with code that worked before and you try to tell them with a straight face that it "works better now" despite all the crippling new bugs. So I'm either explaining my 20% code coverage, or why my new "100% tested" code doesn't work as well.

I miss the days when unit testing was something only real developers knew about, back before Microsoft decided to borf up a sloppy knockoff of NUnit and hot-glue it to their "one size fits all" development environment.

Now "unit testing" is a buzzword in the "agile XTREMe 2 tha max" canon, and it's in danger of becoming a massive timesink. Good code is testable, but simply making bad code testable doesn't make it good. Folks sure seem to think so, though.
