A roadmap to structured user interface testing


Introduction

In order to prevent bugs from manifesting in production we need to bring structure to our user interface testing process. A structured testing process ensures that we all test features in the same manner and, even better, that anyone is able to do it. Another advantage is that we can improve the process over time.

This document explains how the documentation can be made and how we can advance towards structured testing of our applications.

Coverage

Ideally we would start making plans and scenarios for testing the user interface around the same time we build the feature. That way we can test the feature as soon as it is finished and verify that our scenarios (based on assumptions) are correct. If they aren't, we can easily modify them and ask questions about them while the designer(s) and developer(s) still have the ideas behind the feature fresh in their memory.

In some cases the application, or at least part of it, already exists. This means we cannot make the plans and scenarios before or during development. This does not have to be a big problem.
Sometimes it is easy to figure out how a feature is meant to work, for example when it is explained in a manual for end users or when it is very intuitive by design.

In other cases it can be hard to understand how a feature is supposed to work. In these cases it can be a good idea to consult a developer, product owner or even an experienced user.
Our goal should be to cover all features of the application.

Manual testing vs automated testing

While the title of this section implies that one might be better than the other, this is most certainly not the case. There are of course differences, and both methods have advantages and disadvantages. The key is to combine the two and find the right balance between them.

Manual testing

Manual testing is the most reliable way of quality assurance, depending on how structured the process is. One reason for this is that the user interface is meant to be used by a real person. Users tend to come up with the most creative ways to use a feature, ways that may differ completely from the initial idea behind it. Are these users wrong in how they use a certain feature? As a developer, designer or product owner your answer might be "yes". However, if the user interface allows it and users are using it like that, they will tell you otherwise.

Manual testing is also the most expensive way of testing. Testing all scenarios for every feature of an application can take up a lot of time and can therefore be very costly. It is only really worth it when it is done properly.

Points to take into consideration are:
  • Is the documentation or knowledge of the tester good enough to test the application?
  • How do we keep track of the results and process them?
  • Is there enough time to test what is needed?

Automated testing

Automated testing is a great way to overcome some of the problems posed by manual testing. Compared to manual testing it is very structured by default and will execute every test scenario in exactly the same way, over and over again. While this sounds like a big advantage, it is not always one. Automated testing is, however, a cheaper way to test the user interface, and it is also quite a bit faster.

When a user interface changes, the test will not always be able to tell, because it tests one very specific thing. For example, an automated test might check whether a certain text is showing in a dialog, but it will not check whether any error messages are being shown in the same dialog.
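As a sketch of how specific such a check is, consider a Mink step like the following (both the text and the "#dialog" selector are made up for illustration):

    Then I should see "Your account was created" in the "#dialog" element

This step passes as long as that one text is present in that one element; an error message rendered elsewhere in the same dialog goes completely unnoticed.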

The key here is to be very specific and to realize that this way of testing cannot possibly replace a real tester.

Points to take into consideration are:
  • Automated tests are very specific
  • The tests can be fragile, as they are based on finding elements using CSS selectors (see the sketch below)
  • Testing some parts of a user interface can be nearly impossible (think of a captcha)
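As a minimal sketch of that fragility, this is roughly what such a lookup looks like through Mink's API ($session is a \Behat\Mink\Session and the selector is hypothetical):

// If a designer renames the class or restructures the markup,
// find() silently returns null and the test breaks.
$button = $session->getPage()->find('css', '#signup .btn-create-account');
if (null === $button) {
    throw new \Exception('Button "#signup .btn-create-account" not found.');
}
$button->press();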

Finding the balance

The trick to finding the right balance between manual testing and automated testing is to know the system and its features. In some cases parts of the user interface may be re-used in several places, for example. This is important to know when making a test plan.

The balance might differ from release to release, but a good balance usually is:
  • Manually test the feature(s) that have changed (update the automated tests where necessary)
  • Run the automated tests on the entire application

Writing test scenarios for manual testing

A test scenario for manual testing should be clear about the steps to take to test a certain feature. A good way of doing this is to create a document and start by adding screenshots with a short explanation of what needs to be done. For example, testing the "create a new user account" feature of the jobright application requires (at least) four steps.

In its simplest form we will verify that:
  1. The homepage has a button we can press to go to the sign up page
  2. We have fields we can fill in on the sign up page
  3. We are able to fill in the fields with test data
  4. When we submit our test values, something will happen that matches our expectation
A good way to document this is to add screenshots that show what we can expect after each step. With the last step this might be a little difficult, because the outcome will differ based on the data we use. That is good, because we want to test all possible outcomes; we call these "test cases".

Test cases

Most scenarios will have multiple test cases, including the "happy case" and the "worst case". In the happy case a user does everything right and achieves exactly what he or she wanted. In the worst case the user does everything wrong. These are the extremes, and with most features there will be more possible ways to use them, so more cases need to be added. When a bug is reported we will usually receive information on how to reproduce it. This means the bug slipped through our testing process, and we can add the scenario and/or case to our test plan.

The scenario will remain in place, but we’ll add test cases and document the outcome of each case.

Example

The following is a basic example of what the scenario and basic cases for testing the "create a new user account" feature of jobright look like.

Creating a new user account


In order for people to use the application they must be able to register an account.
The following steps describe what a user needs to do to register a new account.

Scenario
  1. Go to the homepage
  2. Click the button with the text “Create an account”
  3. Insert data according to test cases
Test cases

Happy case
Prerequisites: none

Step | Data | Pass/Fail | Expectation
1. Go to the homepage | — | | The homepage will show and there is a button with the text "Create an account".
2. Click the button with the text "Create an account" | — | | The sign up page will load and it has fields for "email", "password", "company name" and "your name".
3. Insert data | Email: "test@example.com", Password: "password", Company name: "Example", Your name: "T. Ester" | | We are allowed to fill in the fields without anything changing in the user interface.
4. Click the button with the text "Create an account" | — | | A dialog will show with the text "We sent the email to verify account registration. Please check the email and complete the registration." and an email is sent.

Worst case
Prerequisites: none

Step | Data | Pass/Fail | Expectation
1. Go to the homepage | — | | The homepage will show and there is a button with the text "Create an account".
2. Click the button with the text "Create an account" | — | | The sign up page will load and it has fields for "email", "password", "company name" and "your name".
3. Do not insert data | — | | —
4. Click the button with the text "Create an account" | — | | The labels of the fields turn red and the fields get a red border. Below the fields, messages in red appear.

Automating the user interface testing

In order to automate the user interface testing process we need certain tools. In our case we chose to automate the tests using Behat, with the Mink extension to control Selenium.
This way we are bound to Behat, but not necessarily to Selenium. Behat is a tool designed for "Behavior Driven Development" and allows us to specify features and user stories using a syntax called Gherkin.

With the Mink extension we are able to perform user interface testing as well, and a lot of the work is already done for us. For example, looking up elements using CSS selectors, checking their values and following links are already available in the extension.
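To give an impression of what the extension already provides, a few of those calls look roughly like this (the selectors and values are made up for illustration):

// $session is a \Behat\Mink\Session obtained from the Behat context.
$page = $session->getPage();
$page->clickLink('Create an account');              // follow a link by its text
$page->fillField('Email', 'test@example.com');      // fill a field by its label
$email = $page->find('css', 'input[name="email"]'); // look up an element via CSS
$value = $email ? $email->getValue() : null;        // and check its value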

Example

The example shown is a feature file based on the Gherkin syntax. Feature descriptions are not processed into test code, but give a good description of why a feature exists. A description should follow this structure:

Feature: {some short description of the feature}
In order to {description of business value}
As an {actor role within the application}
I want to {goal that needs to be achieved}
For the “create a new user account” feature this looks like:


Feature: Create a new user account
    In order to be able to use the application
    As a new user
    I need to be able to create a new user account

    Scenario: Accessing the account creation page from the homepage
        Given I am on the homepage
        When I follow "Create an account"
        Then I should be on "/signup"

    @javascript @mail
    Scenario: Creating a new user account (happy case)
        Given I am on "/signup"
        When I fill in "Email" with "test@example.com"
        And I fill in "Password" with "password"
        And I fill in "Company name" with "Example"
        And I fill in "Your name" with "T. Ester"
        And I press "Create an account"
        Then I wait for the confirmation dialog to appear
        And I should see "We sent the email to verify account registration. Please check the email and complete the registration." in the "#signup_confirm_text" element

In the example above you can see lines such as "Given I am on the homepage", "When I follow "Create an account"" and "When I fill in "Email" with "test@example.com"". The test context (code) behind these lines is already available in the Mink extension, so we do not have to write any code to test them.
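The only thing our own context class has to do to get these steps for free is extend the context provided by the extension; a minimal sketch:

use Behat\MinkExtension\Context\MinkContext;

class FeatureContext extends MinkContext
{
    // Steps such as "Given I am on the homepage" and "When I fill in ... with ..."
    // are inherited from MinkContext; custom steps are added as methods here.
}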

Sometimes there are cases that cannot be tested by using the default contexts of the Mink extension. In the example above there is a line stating: “Then I wait for the confirmation dialog to appear”. The Mink extension can’t determine that this happened, because it is very specific to our implementation. In cases like this we have to write some code:

/**
 * @Then I wait for the confirmation dialog to appear
 */
public function iWaitForTheConfirmationDialogToAppear()
{
    // Session::wait() polls the page for up to 1000 ms and returns the
    // boolean result of the JavaScript condition; we assert it became true.
    PHPUnit_Framework_Assert::assertTrue(
        $this->getSession()->wait(
            1000,
            "$('#modal').attr('aria-hidden') == 'false'"
        ),
        "The confirmation dialog did not show!"
    );
}

The code above asserts that the return value of the wait() method is true. The wait() method itself takes two parameters: the number of milliseconds to wait at most, and a line of JavaScript that is evaluated in the page, which allows us to look up a certain element and check its attributes.
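To round out the example, the worst case from the manual test plan can be sketched in Gherkin as well. Assuming the validation messages are rendered in elements with a (hypothetical) "field-error" class, it could look like this:

    @javascript
    Scenario: Creating a new user account (worst case)
        Given I am on "/signup"
        When I press "Create an account"
        Then I should see a ".field-error" element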

Conclusion

In order for us to achieve as much coverage as possible we have to start implementing a process which supports us in making the test scenarios, test cases and automated tests.

Because we are just starting with documenting and automating the user interface testing, and already have an application with a user interface, we have some catching up to do.

The first steps have already been taken: we have some structure. The next step is to make a list of all features of the application and start making the scenarios, plus the test cases for each scenario. When we lack the time to do so, we can at least make them for the features we change. This way we will accumulate the required information over time.

Once one or more test scenarios have been produced for manual testing, we can automate them. This will always be an ongoing process.