Wednesday, August 06, 2014
When practicing Test Driven Development, we use three steps: Red, Green, Refactor. The first step is meant to check the test itself, to make sure it will actually fail when it's supposed to. I have found quite a few instances where tests were written without this critical first step. The point of the Red step is to make sure we can trust the test itself; if we can't be sure it fails when it should, then we probably shouldn't include it in our automation. A test that passes when it should fail gives us false information, and that is worse than no information. We stick to the recommended approach:

1. Write a test, it fails (no code to test yet) (Red)
2. Write just enough code to make the test pass (Green)
3. Refactor the code (AND the test code), keeping the test passing
3A. (This is where any simple logic would be written)
When satisfied, start over with a new test.

I find that step 3A is where we lose a lot of folks. "Simple logic" means a straightforward calculation or deterministic algorithm. Any kind of branch statement - an IF, a SWITCH, etc. - should be coded only as the result of a new test.
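
Here's a minimal sketch of the first two steps using NUnit (Calculator and its Add method are hypothetical names for illustration, not from any particular codebase). The test is written first and fails because Calculator.Add doesn't exist yet; then we write just enough code to turn it green:

    using NUnit.Framework;

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void Add_TwoNumbers_ReturnsSum()
        {
            // Red: run this first and watch it fail -
            // proof that the test can actually fail.
            var calculator = new Calculator();

            int result = calculator.Add(2, 3);

            Assert.AreEqual(5, result);
        }
    }

    // Green: just enough code to make the test pass - no extra branches.
    public class Calculator
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }

From here, step 3 is to refactor both the production code and the test code while keeping the test green.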

If we stay with this approach, we will have valuable information about the state of our code and a mechanism to make sure the code actually does what we wanted it to do.

Tuesday, August 20, 2013
"Look. You guys are just taking too long on this code. We need to get this out there ASAP!"
Raise your hand if you have heard this one before. If you've actually said this line, please go find some nice cat pictures on the internet to look at...

A methodical approach to coding is a very good way to ensure the success of a software product. Not everyone in a company might agree, however. Those who manage team budgets are often the first to apply pressure to get the product out to the market, regardless of quality (or even whether it's finished). As a test-driven developer, I am sometimes asked/ordered to ship a product before it is really ready. I am not sure how someone else can have confidence in my software if even I don't... If the guy remodeling my house doesn't have confidence in his own work, I certainly won't have confidence in it either.

A good approach to this dilemma is to prioritize stories and deliver what's most important, at a level of quality we can be confident will deliver value. Even in the most dire circumstances, all but the most stubborn people can be brought to understand the reality of the situation. We can negotiate to put the most important deliverable items in first, and we can discuss the risks of doing so. In some cases, the risks that rigorous testing mitigates aren't worth the cost of that testing. Sometimes it's acceptable to deliver a feature even with the possibility of defects, because it might give the company a competitive advantage or capture a market opportunity that would otherwise be missed.

As developers, we must also be able to step back from the code we are writing (and testing) and see the bigger picture. Nobody wants our code to fail or our customers to have a negative experience, and sometimes there are other things in the works that make certain risks acceptable. It's our job to do the best development work we can in the time given. It's also part of our responsibility to make sure that those who ask for the work understand what they are getting when we deliver it: issues, bugs, and features all.

Monday, April 29, 2013
Here is a presentation I did giving an overview of the three main practices under the Agile umbrella: Scrum, XP, and Lean. PPTX available upon request.

http://testdrivendeveloper.com/content/binary/Agile%20Software%20Development%20Overview.pdf
Friday, July 13, 2012
We've all used teardown methods in our test classes, right? They're a good way to clean up after the tests run. But wait - what do we need to clean up? Data in a database, probably? That would mean the database hasn't been mocked out... making it not really a unit test. Unit tests should run really, really fast. If we haven't mocked out just about all of the dependencies (databases, service calls, etc.), then the tests are not only dependent on external systems, they are going to run slowly, too. Slow tests tend not to get run as often, and they don't provide the rapid feedback we need to have confidence that our code is working correctly.

Every time we encounter a test class with a teardown method, we should take a hard look at what is being cleaned up and why it is necessary. Can we refactor the tests so that teardown isn't needed? Can we mock out that database so there isn't any data to clean up? Can we mock out those expensive objects we had to construct and then deconstruct? Is there a way to make the tests run faster? One approach is sketched below.
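
Here is a minimal sketch of replacing the database with a hand-rolled in-memory fake (IOrderRepository, Order, OrderProcessor, and FakeOrderRepository are all hypothetical names for illustration). Nothing touches a real database, so there is nothing to tear down:

    using System.Collections.Generic;
    using NUnit.Framework;

    // The dependency the code under test needs - the production
    // implementation would talk to the real database.
    public interface IOrderRepository
    {
        void Save(Order order);
    }

    public class Order
    {
        public decimal Total { get; set; }
    }

    // The code under test depends only on the interface.
    public class OrderProcessor
    {
        private readonly IOrderRepository _repository;

        public OrderProcessor(IOrderRepository repository)
        {
            _repository = repository;
        }

        public void Process(Order order)
        {
            _repository.Save(order);
        }
    }

    // In-memory fake: no database, nothing to clean up afterward.
    public class FakeOrderRepository : IOrderRepository
    {
        public List<Order> SavedOrders = new List<Order>();

        public void Save(Order order)
        {
            SavedOrders.Add(order);
        }
    }

    [TestFixture]
    public class OrderProcessorTests
    {
        [Test]
        public void Process_ValidOrder_SavesOrder()
        {
            var fakeRepository = new FakeOrderRepository();
            var processor = new OrderProcessor(fakeRepository);

            processor.Process(new Order { Total = 10m });

            Assert.AreEqual(1, fakeRepository.SavedOrders.Count);
        }
    }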

Remember, refactoring is the last (and most often forgotten) step of TDD... Take a look at the test code too - not just the main-line code. Look at the code as a whole, from a variety of perspectives. If we have to clean up something afterward, then something in the test might just be dirty... The focus of a unit test is to test just the code under test, not other objects or systems, so always try to isolate it and eliminate other variables by mocking as much as possible.

Thursday, June 21, 2012
Ever see a call in code like this?

      connection.Close();

Watch out - explicit Close() or Dispose() calls are a code smell! In .NET, many classes implement the IDisposable interface - most of the ADO.NET connection classes, registry access classes, and plenty of others throughout the framework. Whenever a class implements IDisposable, it must provide a Dispose() method, which is responsible for releasing unmanaged resources and tying up loose ends.

The safest thing we can do with these kinds of objects is to wrap them in a using block. That way the Dispose() method gets called no matter what happens, even if an exception is thrown. It's guaranteed. The using block is equivalent to coding up a try/finally by hand, but it's much cleaner and easier to read.
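
For reference, the compiler expands a using block into roughly the following try/finally (a simplified sketch - the real expansion casts the object to IDisposable):

    SqlConnection conn = new SqlConnection(connectionString);
    try
    {
        conn.Open();
        // ... use the connection ...
    }
    finally
    {
        if (conn != null)
        {
            conn.Dispose(); // runs even if an exception was thrown
        }
    }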

Example:

    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlCommand cmd = conn.CreateCommand())
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "dbo.sp_Sproc";
            cmd.Parameters.Add(new SqlParameter("@parm1", "foo"));
            cmd.Parameters.Add(new SqlParameter("@parm2", "bar"));
            cmd.Parameters.Add(new SqlParameter("@id", id));
            string xml = (string)cmd.ExecuteScalar();

            // ... use the xml result ...
        }
    }
If you can safely refactor the code (it has tests), take out the explicit calls to Close() or Dispose() and wrap the object in a using block instead. Make it a habit to check whether a class implements IDisposable, and use a using block whenever it does.

Tuesday, May 15, 2012
Do you know the three parts of each and every test? Arrange, Act, Assert (also known as setup, test, and validation). Each test should have these three sections, and they should always come in this order. Many non-unit tests need an additional step as well - Cleanup. On many teams I have found that the first step - Arrange - is often either missing or incomplete. The purpose of the Arrange step is to create an environment around the code under test (CUT) and hold it steady so that it is in a known state. We need to control all of the variables except the ones we are actually testing. If we allow multiple things to vary in a test, we can get varied results and intermittent failures.
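
Here's a minimal sketch with the three sections labeled (BankAccount is a hypothetical class, and the test uses NUnit):

    using NUnit.Framework;

    public class BankAccount
    {
        public decimal Balance { get; private set; }

        public BankAccount(decimal startingBalance)
        {
            Balance = startingBalance;
        }

        public void Withdraw(decimal amount)
        {
            Balance -= amount;
        }
    }

    [TestFixture]
    public class BankAccountTests
    {
        [Test]
        public void Withdraw_SufficientFunds_ReducesBalance()
        {
            // Arrange: put every variable into a known state.
            var account = new BankAccount(100m);

            // Act: exercise only the behavior under test.
            account.Withdraw(25m);

            // Assert: verify the outcome we care about.
            Assert.AreEqual(75m, account.Balance);
        }
    }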

We can't trust intermittent tests... so always identify the three sections in your tests, and make sure the Arrange section is sufficient to hold every variable steady except the ones actually under test.

Tuesday, December 27, 2011
As Esther Derby's article discusses, it's in integration testing, or cross-context testing, that we most often discover discrepancies between interfacing systems. Automated acceptance tests, or automated integration tests, are key to winning the bug battle in larger systems. If the teams and/or organizations negotiate a contract on test automation across these integration points, they can collaborate on test frameworks and test automation that ensure proper functionality. These acceptance tests (or end-to-end tests, as some call them) can and should be demonstrated to key stakeholders to make sure the business understands what development is delivering, and that it meets the intended need.

As I often say, if we have automated acceptance tests, we can do ATDD: First, we automate the acceptance test, which fails because there is no code yet to test. Then we write unit tests and code, more unit tests, and more code - striving ONLY to make that acceptance test pass. We should focus on just the one failing acceptance test, and not stray off into other functionality. Once it passes, we refactor to make sure we have the best solution, then start all over again with the next automated acceptance test. That's ATDD! A minimal sketch of the outer loop is below.
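
Here's what that outer-loop acceptance test might look like (OrderService and PlaceOrder are hypothetical names - on day one this won't even compile, which is exactly the failing start ATDD calls for):

    using NUnit.Framework;

    [TestFixture]
    public class PlaceOrderAcceptanceTests
    {
        // Written first, against the system's public entry point rather
        // than a single class. It stays red until the whole feature works.
        [Test]
        public void CustomerCanPlaceAnOrderAndGetAConfirmationNumber()
        {
            var service = new OrderService();

            string confirmation = service.PlaceOrder(customerId: 42, productCode: "WIDGET", quantity: 3);

            Assert.IsNotNull(confirmation);
            Assert.IsNotEmpty(confirmation);
        }
    }

Everything underneath - unit tests, classes, wiring - gets written only in service of turning this one test green.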

Test automation is such a great thing to have on a project. It makes me smile each time an automated test catches a bug that would have otherwise gone unnoticed and perhaps even shipped to production.

This will be my last article for 2011, hope you had a great year! Best wishes for a fantastic 2012!

© Copyright 2014, John E. Boal