Wednesday, November 30, 2011

"Models" at Agile Testing Days 2011

There are numerous brilliant articles out there on Agile Testing Days 2011. Most of them focus on a single talk or a single conference day. I would like to take another approach: I am going to pick a topic that connects multiple talks and look at the conference from this specific point of view.
The topic that seems most prominent is simply "models".

Definition
I would like to start by giving my "favorite" (academic) definition of a model. According to Herbert Stachowiak, a model has at least these three attributes:
  • transforming: a model is a transformation of a natural or artificial original (which can be a model by itself),
  • reducing: a model does not include all attributes of the original, but only those that seem to be relevant to the model creator or model user, and
  • pragmatic: an original does not have a single, unique model. Models are designed to replace the original
    • for certain subjects (who?),
    • within a certain time interval (when?), and
    • in respect to certain mental or real operations (what for?).
Building Models: Learning
According to Liz Keogh, learning is the process of building (mental) models. When we build models, we delete, distort and/or add information. Based on observed data, we filter the data, generate assumptions, draw conclusions and finally build beliefs. Unfortunately, our beliefs influence our mental filters and thus might constrain our ability to learn. In order to improve our filters we have to adjust our beliefs.
As Esther Derby pointed out, humans are good at spatial reasoning, but bad at temporal reasoning. When we have to analyze temporal problems, we should try to convert them into spatial problems. One option is a graphical model of temporal data, e.g. time lines.
Michael Bolton gave a very good demonstration of the model building process. In the TestLab, he built a model of an application under test through Exploratory Testing. Based on James Bach's "Heuristic Test Strategy Model", he developed a rather comprehensive mind map describing the product and came up with lots of ideas for further testing activities. It became obvious that "Exploratory Testing is simultaneous test design, test execution and learning."

Using Models
David Evans emphasized that "[test] coverage is [implicitly] measured by reference to models". That is why "you can have complete coverage of a model, but you can never have a complete model". This is due to the reducing nature of any model. And finally David pointed out that "the more complete the model is, the less you can economically cover."
When it comes to complex systems, "there are no clear cause and effect relationships. Accept Uncertainty and complexity. Experiment and measure what really makes things better" (Liz Keogh). You might visualize some relationships using Diagrams of Effects, a.k.a. Causal Loop Diagrams (Esther Derby).
Gojko Adzic presented two specific models during the conference: effect maps are a way to model the relationships between software features and a business goal. They can be used to deduce possible activities from a business goal, select the most promising activities and measure their impact on the business.
Gojko started the initiative visualisingquality.org to gather models that facilitate the communication of quality attributes to stakeholders and management. He presented the ACC Matrix, developed by James Whittaker at Google, as one such model.

And What's the Tester's Job?
In a nutshell, testers challenge / break models. They help developers to make more accurate models (Liz Keogh). Thus, model creation is one of the main responsibilities of effective testers.

Thursday, November 3, 2011

On the Status of SoCraMOB (Software Craftsmanship in Münster, Osnabrück, Bielefeld)

From September 1 through September 2 I participated in SoCraTes 2011, a gathering of German software developers who are interested in Software Craftsmanship. It became the spark for a German Software Craftsmanship community that is now organized under the label "Softwerkskammer". Three other participants and I volunteered to build up a regional community in Münster/Bielefeld/Osnabrück.
On October 19 we set up a kick-off meeting in the .space Osnabrück. Fortunately there are quite a few other people in the area who are deeply interested in the matter. It was easy to mobilize them for the kick-off. There were 11 people in the meeting; only four of them had been to SoCraTes!
In my opinion we had a great discussion on different aspects of Software Craftsmanship - agile processes, domain-driven design, test-driven development, educating junior developers. All the attendees were very energized.
This was the beginning of the regional community; we named it "SoCraMOB" ("Software Craftsmanship in Münster, Osnabrück & Bielefeld").
It turned out that a significant number of members live or work in Münster, and people did not feel up to traveling to one of the other cities for (short) evening events in the future. So we agreed on a different form of meeting: we are going to organize whole-day workshops on the second Saturday of each quarter. The program will be a mixture of open space sessions and coding dojos. The first one will be on January 14 in Münster. The members in each of the cities are free to organize local events.
Besides, Martin Klose and Jaroslaw Klose are going to stage a code retreat in Bielefeld in the context of the Global Day of Code Retreat.
The next step is to spread the word in the area. We already talked to the local .NET User Group on October 26. The people there were all energized and interested in the matter, too; developers across all technology boundaries seem to have similar problems in terms of organization, development process, and team communication. The .NET User Group immediately set up a coding dojo for November 30.
The local Münster community is going to meet for a round table on December 21. One of the topics will be the organization of the first SoCraMOB workshop on January 14.

Monday, October 17, 2011

Kick-Off: Software Craftsmanship in Münster/Osnabrück/Bielefeld

During the kick-off meeting for the German Software Craftsmanship Community (SoCraTes 2011) from September 1st through September 3rd I joined a group of participants who want to organize a regional Software Craftsmanship Community in the area of Münster/Osnabrück/Bielefeld.
I am happy to announce the first community meeting for next Wednesday, October 19, in the .space Osnabrück. We hope to bring several people from the area together and will discuss what sensible, modern software development should look like. Whatever language or frameworks you use, everybody is warmly welcome.
For more information and registration please see the Xing event page.

Sunday, September 18, 2011

SFD Münster: PDF reports with Cucumber

With this post, I want to round off my talk on "Executable Specifications using Cucumber" at Software Freedom Day Münster.

Yesterday the JUG Münster and the produktivhaus | Coworking Space organized a local event associated with the world-wide Software Freedom Day. In my opinion, the event was quite a success.
In the course of the afternoon talks, I also took the opportunity to speak and chose the topic "Executable Specifications using Cucumber". All in all, my talk was not too bad, I suppose. But one thing bugged me: after delivering my talk (and a small, live-coded example), one participant asked me what the special advantage of using Cucumber for writing acceptance tests was. Generally, I think the advantage is the understandability and readability of the feature files. Gherkin (the language in which the features are described) makes only minimal syntactical demands. Most of the content is simple prose (and some tables). This is easy for business users to read and understand, which facilitates discussions, because both they and the developers are almost instantly able to write new feature specifications, change (and maintain) existing ones, and so on.
Another advantage is the ability to create PDF reports automatically, which is useful for management reporting, printing, and archiving. I wanted to demonstrate this feature, but unfortunately there was some kind of problem with the PDF generation mechanism. I was not able to resolve the issue on the spot, so I could not prove my point. I want to make up for that with this post.
When the event was over, I analysed Cucumber's log output. I found out that Cucumber relies on the Prawn gem to create PDFs, and that Cucumber is built against an old version of Prawn, 0.8.4. I had not pinned the gem versions in my Gemfile, so Bundler downloaded the most recent versions, 0.11.1 and 0.12 respectively. Because the API had obviously changed in the meantime, Cucumber terminated prematurely. The correct content of the Gemfile is:

source :rubygems

gem "cucumber"
gem "capybara", ">=1.1.0"
gem "selenium-webdriver"
gem "rspec", ">=2.6.0"
gem "prawn", "~> 0.8.4"
gem "prawn-layout", "~> 0.8.4"

Executing
bundle install
and
bundle exec cucumber features --format pdf --out report.pdf
finally creates a proper report:


Monday, June 20, 2011

My (unauthoritative) list of book recommendations

After my workshop on Ruby on Rails on Saturday, I was asked to recommend some books on programming, Agile, and similar topics. I don't feel like an authority, because I haven't read at least half of the books I assume to be important. But these are my current recommendations.
(I have allowed myself to link the titles to Amazon using a personalized link; when you order one of the books by following a link within this post, I will be rewarded with a small amount of money. You might reward my gratuitous engagement for the SkillShare workshop this way.)

Ruby on Rails books

I simply recommend the classics on Ruby and Rails:

My personal top 3

  1. Martin Fowler: "Refactoring" (also available as a Ruby edition). This book was a real eye-opener for me. Afterwards, software felt way "softer" to me. I became able to knead my code in the direction I wanted.
    And besides, refactoring is the basic skill for becoming agile. It enables you to evolve and extend your code, and it is one of the three phases of TDD, which is the next level in becoming agile.
  2. Robert "Uncle Bob" Martin: "Clean Code". This book boosted the quality of my code. It gives lots of tips on common programming principles, like good naming, refactoring, unit testing, concurrency, and so on.
  3. Lasse Koskela: "Test Driven". This book was a key source for my diploma thesis in 2010. It gives a nice introduction to TDD, ATDD and an agile development process. A nice book to get you started.
This list might change with the next Amazon shipping ;-)

    Books I'm about to read / was advised to read

    When you are interested in more books, just leave a comment and I will post further tips. You might also name a specific topic.

    Monday, June 6, 2011

    Introducing New Features - Done Right

    One of my customers offers articles in campaigns to his customers and allows orders for these campaigns. Once upon a time, I was involved in creating this web application with some fellow students, and I am still maintaining and extending the system.
    So far there was the restriction that there were no temporally parallel campaigns; they had to run in strict chronological order. Now my client wants to offer several campaigns at a time. Up to now, the concept of a "campaign" was hidden inside a GUI component, namely a JavaServer Faces backing bean. So the following requirements had to be fulfilled:
    • Present multiple, parallel campaigns to the customer
    • Select one of those campaigns
    • Present the order process for the selected campaign
    So first of all, as an apprentice of the Acceptance Test-Driven Development community, I wrote an acceptance test for the new feature (slightly simplified):
    Given there is an active campaign C with product P
    And there is an active campaign D with product Q
    When I order 17 pieces of P
    And I order 42 pieces of Q
    Then I want to receive 17 pieces of P
    And 42 pieces of Q
    
    Afterwards I automated the acceptance test as an end-to-end test using Selenium WebDriver 2. Good boy!
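The real end-to-end test drove the web UI. As a self-contained illustration of what the scenario checks, here is a purely hypothetical domain sketch; the class Shop and all method names are invented for this post and are not the project's real classes:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical shop that can run several campaigns in parallel.
class Shop {
    private final Map<String, String> productToCampaign = new HashMap<>();
    private final Map<String, Integer> delivered = new HashMap<>();

    // Activate a campaign offering the given product.
    void activateCampaign(String campaign, String product) {
        productToCampaign.put(product, campaign);
    }

    // Ordering is only possible while some active campaign offers the product.
    void order(String product, int pieces) {
        if (!productToCampaign.containsKey(product))
            throw new IllegalArgumentException("no active campaign offers " + product);
        delivered.merge(product, pieces, Integer::sum);
    }

    int deliveredPieces(String product) {
        return delivered.getOrDefault(product, 0);
    }
}
```

With two campaigns C and D active at the same time, ordering 17 pieces of P and 42 pieces of Q should yield exactly those deliveries, mirroring the Gherkin scenario above.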

    The Mistake

    I carried out the necessary implementation steps in the wrong order.
    1. First of all I blew up all of my unit / integration tests by introducing a list of the formerly unknown concept "Campaign". Afterwards it took me more than a week to move the functionality from the backing bean into the new Campaign class and refit the existing tests.
    2. Bad enough, but I managed to do worse. I spotted several dirty parts in the existing nested objects of Campaign and the backing bean (e.g. an Order or an OrderLine). And I started to fix them immediately: specializing and dividing some functionality into subclasses, generalizing other parts, and so on. But I didn't reformulate and rearrange my unit tests first; I just changed the production code. So while my tests were already failing, I changed the production code even further, breaking more and more tests. What should have been a small change ended up as more than a week of work for a single commit...

    So what went wrong?

    While succeeding in ATDD, I simply missed basic concepts of TDD:

    1. The test comes first; afterwards refactor the code or write new code to make the test pass.
    2. When a change is hard to do, refactor first to enable the change.

    I didn't evolve the new functionality from the existing one; in fact, I did a rewrite (and reused parts of the existing code).

    How to do it better

    At first, extract the concept of a Campaign from the backing bean (Extract Class Refactoring) and reference just one Campaign from the backing bean. By forwarding the backing bean's method calls to the new Campaign object, all of the tests stay green while you move attributes and methods. When all of the functionality has been moved bit by bit to the new Campaign class, you can inline the delegating methods in the backing bean (Inline Method Refactoring). These methods disappear, and the access to the newly created Campaign field becomes explicit in the tests. Finally, move the existing tests concerning the functionality of Campaign from the existing BackingBeanTest to a new CampaignTest. All the tests stay green all the time. And that gives you
    • little risk
    • more confidence
    • and a whole lot of fun. Commit.
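The delegation step above can be sketched as follows. This is a minimal, hypothetical illustration: the names CampaignBean, order and orderedPieces are mine, and the real backing bean is of course a JavaServer Faces component with far more members.

```java
import java.util.HashMap;
import java.util.Map;

// Step 1: the new Campaign class receives the moved state and behaviour.
class Campaign {
    private final Map<String, Integer> orderLines = new HashMap<>();

    void order(String product, int pieces) {
        orderLines.merge(product, pieces, Integer::sum);
    }

    int orderedPieces(String product) {
        return orderLines.getOrDefault(product, 0);
    }
}

// Step 2: the backing bean forwards its calls to the single Campaign,
// so every existing test keeps passing while members are moved over.
class CampaignBean {
    private final Campaign campaign = new Campaign();

    void order(String product, int pieces) {
        campaign.order(product, pieces); // pure delegation, no logic left here
    }

    int orderedPieces(String product) {
        return campaign.orderedPieces(product);
    }

    // Step 3 (later): inline the delegating methods and let callers
    // (and tests) access the Campaign field explicitly.
    Campaign campaign() {
        return campaign;
    }
}
```

Because the bean only delegates, tests written against CampaignBean and tests written against Campaign see identical behaviour throughout the move.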
    While browsing the code you might spot some ugly parts. But don't touch them while changing another part of the system; simply write them down on your to-do list. When your first change is done, you can turn to the next item. When you want to rename some of your classes for the sake of expressiveness, you can search for the corresponding name in your build path. Then rename the class, all references to it, the corresponding test class, and the variables referencing objects of the class, consistently (Rename Class Refactoring). Commit.
    One way to clean up a class violating the Single Responsibility Principle is to create subclasses addressing the different responsibilities. First split the existing test class into two separate test classes: pick the test cases / methods from the former test class and assign them to the new test classes according to the formerly mixed-up responsibilities (similar to the Replace Type Code With Subclasses Refactoring, see also the "Anti-If" school). You can pull common custom assertions up into the superclass. After reorganizing the test cases, you can split the production code into two subclasses, move the methods and fields specific to just one subclass into the corresponding file, and keep common functionality in the superclass. This process takes a little more time, but it's a task of reasonable size. Commit.
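The subclass split can be sketched like this, in the spirit of Replace Type Code With Subclasses. OrderLine and the discount rule are invented for illustration and are not the real domain classes; each subclass would be covered by its own test class after the split.

```java
// Common state and the shared contract stay in the superclass.
abstract class OrderLine {
    protected final int pieces;
    protected final double unitPrice;

    OrderLine(int pieces, double unitPrice) {
        this.pieces = pieces;
        this.unitPrice = unitPrice;
    }

    abstract double total();
}

// One former responsibility per subclass: plain pricing...
class RegularOrderLine extends OrderLine {
    RegularOrderLine(int pieces, double unitPrice) {
        super(pieces, unitPrice);
    }

    @Override
    double total() {
        return pieces * unitPrice;
    }
}

// ...and discounted pricing, instead of an if on a type flag.
class DiscountedOrderLine extends OrderLine {
    DiscountedOrderLine(int pieces, double unitPrice) {
        super(pieces, unitPrice);
    }

    @Override
    double total() {
        return pieces * unitPrice * 0.9; // the 10% discount rule lives here
    }
}
```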
    Ah, and finally you can easily introduce the list of Campaigns and determine the selected one in the backing bean. Change done. Acceptance test passes. Mission accomplished.

    What I learned from my mistake

    • Refactoring and evolving new functionality are fundamentally different. The sequence of steps is nearly inverse.
    • Refactoring demands more discipline.
    • And the top priority is the following: don't ever break your tests!

    Monday, May 23, 2011

    Simplifying JMock Expectations

    I like to use JMock to mock dependent objects when test-driving my code. It allows a very readable description of mock object behaviour. Readability is my first priority when writing tests, so that they are easy to understand later and can be used as valuable documentation of the intended behaviour. In order to lift the readability of JMock expectations, I used a generalized implementation of the "Object Mother" pattern.

    The "Object Mother"-Pattern

    The "Object Mother"-Pattern is known to express in a succinct form what makes an object special.
    Let's assume we have a domain object Person with the two attributes name and surname.
    class Person {
      String _name, _surname;
      
      public Person(String name, String surname) {
        _name = name;
        _surname = surname;
      }
    }
    

    When we want to ensure that a Person object with an empty surname is rejected by a PersonRepository when trying to save it, the simple solution would be the following:
    @Test(expected = ConstraintViolationException.class)
    public void shouldRejectPersonWithEmptySurname() {
      Person personWithEmptySurname = new Person("Name", "");
    
      personRepository.save(personWithEmptySurname);
    }
    

    Though this is quite good (when intent-revealing names are used), it becomes tedious when you want to test the behaviour for all possible attribute combinations. That's where the "Object Mother" pattern comes in handy. A possible ObjectMother for Person is:

    class PersonMother {
      Person _child = new Person("Name", "Surname");
    
      PersonMother withEmptyName() {
        _child._name = "";
        return this;
      }
      
      PersonMother withEmptySurname() {
        _child._surname = "";
        return this;
      }
      
      Person build() {
        return _child;
      }
    }
    

    Using the PersonMother and a simple factory method we can simplify the test code:

    @Test(expected = ConstraintViolationException.class)
    public void shouldRejectPersonWithEmptySurname() {
      personRepository.save(aPerson().withEmptySurname().build());
    }
    
    PersonMother aPerson() {
      return new PersonMother();
    }
    

    And the PersonMother simplifies the combination of attributes, too:

    @Test(expected = ConstraintViolationException.class)
    public void shouldRejectPersonWithEmptyNameAndSurname() {
      personRepository.save(
          aPerson()
            .withEmptyName()
            .withEmptySurname().build());
    }
    

    Using the Object Mother for JMock expectations

    After this excursus into the world of patterns, I am finally getting to my point. Assume we want to test the behaviour of a service that depends on a PersonRepository to access persisted persons. In order to test the service in isolation, I mock the PersonRepository. I want to test the behaviour of the service depending on the contents of a returned Person, so we want to return different persons from the mocked repository. With "plain" JMock this results in the following setup code for our mock:

    PersonRepository mockPersonRepository = context.mock(PersonRepository.class);
    context.checking(new Expectations() {{
      oneOf(mockPersonRepository).get(1); will(returnValue(aPerson().withEmptySurname().build()));
    }});
    [...]
    

    Not bad, actually. Very expressive and readable. But since I am quite a perfectionist, I am bothered by the required call to build(). So, let's see if we can simplify this further.

    At first, I extract a generic interface from PersonMother:
    interface ObjectMother<T> {
      T build();
    }
    
    class PersonMother implements ObjectMother<Person> {
      [...]
    }
    

    Afterwards, I create a new method to use in JMock expectations:

    static Action returns(ObjectMother<?> objectMother) {
      // ReturnValueAction is provided by org.jmock.lib.action
      return new ReturnValueAction(objectMother.build());
    }
    

    With these two ingredients I am able to reduce the mock setup code as I wanted:

    PersonRepository mockPersonRepository = context.mock(PersonRepository.class);
    context.checking(new Expectations() {{
      oneOf(mockPersonRepository).get(1); will(returns(aPerson().withEmptySurname()));
    }});
    [...]
    

    Monday, February 21, 2011

    Visual Kanban Principles

    In order to prepare a little introduction to Kanban that I will give later this week, I created a visual mind map of its principles. I want to share it with you; maybe it's helpful to some of you. The labels are in German. If you are interested in the XMind file, drop me a line.

    Best regards!

    Saturday, February 19, 2011

    Photographic Testing: Lessons learned from Belgium Testing Days 2011

    During Belgium Testing Days 2011, I learned a lot about teaching and - surprise, surprise - about software testing. When I took a walk around the city of Brussels with my camera the day after the conference, I reflected on what I had learned while taking some photos, and a little parable came to my mind that I would like to share with you.

    Almost everybody has taken a photo at some point. Lots of people nowadays think taking a photo is easy. The same applies to software testing: a lot of people have tested some software, and a lot of them think testing is easy. Well, that depends.¹

    It is relatively easy to do simple regression testing (which should better be called regression checking). And it's relatively easy to take a simple photo. The advantage is that these things can easily be automated. This is state of the art in photography; take a look at autofocus or automatic exposure. And automation in testing is becoming more and more popular (see TDD, BDD, ATDD and so on). The disadvantage is that both of them seldom convey interesting information. A simple picture is almost always boring.² And regression testing has at most a 30% chance of finding a bug. To find interesting information, you need (human) skill.² The first skill for a photographer and a tester alike is knowing their tools. There are lots of them in photography, often paralleled in testing.

    Change your aperture
    You can change the aperture of your lens, resulting in more or less depth of field. In software testing, you can change your scope. You can choose to take a broader or a more narrow view. See the application as a whole or pick out specific functionality or specific qualities.
    (c) http://www.digicamguides.com/




    Change your shutter speed
    Changes in shutter speed result in "frozen" or "flowing" images. In testing, you can take a static or a dynamic look at your product: watch it with a constant number of users or ramp up the load. You can spend less or more time testing your area of choice. But beware! Sometimes a longer exposure yields blurrier results ;-)
    (c) Flagstaffotos 2007 licensed under GNU Free Documentation License

    Change your focal point
    Different focal points make near or far objects sharp. Likewise, you can focus on different levels of depth of your product. Just walk the happy paths, or explore the whole functionality in depth. Test your system through the UI, or "crawl under its skin" and test the API.
    (c) Brien Szabo - http://www.natureimages321.com

    Change your sensitivity
    With modern cameras, you can easily adjust the sensitivity of your sensor, too, resulting in brighter or darker images with more or less visible detail. As a tester, you can work at different levels of sensitivity, too. You can be open to any problem you see, or you can focus on specific types of bugs in specific areas, such as usability problems in the order function of your web shop.
    (c) Andreas Simon 2011

    Change your colour
    With modern cameras you can easily adjust the white balance of the sensor, which in turn influences the colour of your image. Your "testing balance" influences your results, too. More exploratory testing will show you more bugs than simple regression testing.
    (c) Spiritia licensed under GNU Free Documentation License
    Change your perspective
    One of the most important means for a photographer - and a tester - is perspective. You can move nearer to the subject or step back. You can go to the left or to the right. You can kneel down or climb on a wall. Every step influences your picture, and eventually you arrive at an interesting or even awesome photo.
    In testing, you should change your perspective from time to time, too. Look at the application from the point of view of different user personas. "What would Homer Simpson do?" (Lisa Crispin). What would Mark Zuckerberg do? What would the kind hacker from your neighbourhood do? These questions lead you to interesting results.
    (c) Andreas Simon 2011
    Use special tools for special purposes
    Extraordinary situations require extraordinary means. These are very specialized, mostly very expensive, and sometimes heavyweight. But you only need them from time to time. Be it large lenses like the one shown below, a special GUI testing library, a cloud application for generating load on your servers, or training, coaching, and services.
    (c) Ryan Foong 2011

    Interaction
    You see, there are lots of instruments. Some influence a single property of your image, some influence several aspects. Sometimes you have several means to influence the same attribute. The same is true for testing tools and methods.

    Due to the scarcity of resources, e.g. light in photography and time in testing, the tools inevitably interact. When you want a "frozen" image, you probably have to make other parts more blurry in order to get a well-exposed photo. When you cannot increase your manpower and want to do more exploratory testing, you will have to do less (manual) regression checking, for example. (Hm, maybe automating the dumb checks could also be an alternative. But that's just a thought ;-)

    Back and forth
    Take a "big picture" first, to get an overview and a first impression of your subject. Afterwards, select interesting details and try to work out their characteristics. Zoom in, approach them and watch them from different angles. Search for the pictures within the picture. Focus on a special area of your subject. Get an overview of your system under test at first. After having more information about the intended use of your customer, you can enter the areas of interest in detail.

    Build on your social skills
    Watch the people in interaction with their environment. It's often very interesting and insightful, and sometimes even funny.
    (c) Wladyslaw Sojka 2007, licensed under GNU Free Documentation License
    Ask for help. Sometimes other people can help you reach an angle of view that you cannot reach on your own. Try to get into a flat high up in the opposite building for a bird's-eye view.
    Talk to the best-informed people in your domain of interest. To find interesting places in a city, a local resident might help. To find interesting areas of your application, talk to your customer. He can tell you which areas might be risky (from the business point of view) and which areas are hard to use or buggy.

    Be economic
    Sometimes the best picture is not affordable. Renting (or even buying) an elevating work platform just to get to the right point of view is not worth the cost. And asking for another thousand users to test your product might not yield any relevant insight.
    (c) Michał Derela 2007, licensed under GNU Free Documentation License

    Explore
    Be it the city you are visiting or a software product: you might find interesting places that are not in your guide book / specification / user manual. Sometimes you will come to a dead end. Then take a step back and explore another direction.
    (c) David Stowell 2007, licensed under Creative Commons Attribution-ShareAlike 2.0
    Look out for interesting differences (Belgians cross the street on a red light; this is quite extraordinary for me as a German). Maybe the user experience in your application is not consistent either.
    (c) Andreas Simon 2011

    Test the boundaries
    Sometimes, good pictures are only possible under unusual conditions. A bathing beach is much more attractive at sunset than in bright noon light. Wild animals are very shy; you have to wait hours or days for the special single moment when they cross your path. Likewise, try out the edge cases in testing. These are the regions where most of the defects are found.
    (c) RonAlmog 2006, licensed under Creative Commons Attribution 2.0 Generic
    Be creative

    Play around with your tools to achieve better results, both images and test reports. Sometimes you are surprised by reality. I was surprised, for example, that Manneken Pis is actually relatively small; I had supposed it to be life-size.
    (c) Andreas Simon 2011


    Finally, give serious photography a try. "Photography is learning to see [properly and precisely]" (Andreas Feininger). And that is in itself a very valuable skill for testers.

    This article was inspired by Belgium Testing Days 2011, especially:
    Another source of inspiration was the interview with Michael Bolton and James Bach (Agile Record 05, January 2011): bringing knowledge from other disciplines into testing.




    ¹ This is probably the most important sentence I was told at university and during BTD2011.
    ² "I believe that photography at its best is an Art, and photo-technique is but a means to an end: the creation of the picture. Today, even a fool can learn to operate any of our modern foolproof cameras, and produce technically perfect pictures -- but is this knowledge really all he needs for taking purposeful and pictorially exciting photographs? Naturally, as in any other art, there are artists and there are dabblers. If photography really were nothing but the simple and purely mechanical reproduction process the majority of people still think it is, why are there so many dull and meaningless photographs around?" - Andreas Feininger