“Crowd testing and software testing done by test experts complement each other perfectly”

Testbirds chief executive Georg Hansbauer and imbus chairman Tilo Linz in conversation

Munich/Moehrendorf, 22 August 2013 – Two specialists in the field of software testing interview each other: Together with Philipp Benkler and Markus Steinhauser, Georg Hansbauer founded the successful crowd testing platform Testbirds in 2011. Tilo Linz is co-founder and executive board member of imbus – for more than 20 years one of the most renowned and most specialised service providers for software quality assurance. Their topics: What are the strengths of the two test outsourcing approaches? How can they be combined? And what developments can be expected in the field of software testing?

Georg Hansbauer: Let’s try a little experiment: I start a sentence and you complete it. “Tilo Linz and imbus AG think crowd-based software testing is useful because…”

Tilo Linz: … it perfectly complements the systematic and independent testing offered by a consulting house.

Hansbauer: In what way?

Linz: Software testing by an off-site specialist like imbus builds on the technical know-how of professional, qualified software testers. And as a process it starts much earlier than the testing itself, namely with consulting for the software development team. The test specialists have the competence to assess how the test should be organised and to what extent it should be implemented. They know the relevant statutory requirements and standards that apply in the respective sector – there are tough targets in medical, automotive and railway technology, for example. The test specialists then plan the test schedule, define the necessary test cases and provide tools, test environment and test data. Next, they automate the test cases. After that, executing the prepared tests is largely routine and – thanks to automation – often the smaller part of the work. What follows is the evaluation. Beyond that, expert-based testing also means advising the software manufacturer on what he and his development team could do even better, so that the product contains fewer defects from the start.

In crowd testing, by contrast, you deliberately choose non-professionals for a large part of the tester pool. Their task: they test and assess the software from the end user’s point of view. Can the software be operated the way an everyday user would operate it? These are highly subjective assessments, but with a representative group of potential users they can be ascertained to a good extent. Crowd testing makes this possible, especially for end-user-oriented software and smartphone apps.
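The automation step Linz mentions can be pictured with a minimal sketch. Everything below is illustrative – the toy `FitnessTracker` class merely stands in for some system under test and is not an imbus or Testbirds artefact – but it shows the principle: a manually defined test case becomes a repeatable automated check.

```python
import unittest


class FitnessTracker:
    """Toy stand-in for the system under test (purely illustrative)."""

    def __init__(self):
        self._sessions = []

    def record_session(self, discipline, minutes):
        # A defined test case might require invalid input to be rejected.
        if minutes <= 0:
            raise ValueError("session length must be positive")
        self._sessions.append((discipline, minutes))

    def total_minutes(self):
        return sum(minutes for _, minutes in self._sessions)


class TestFitnessTracker(unittest.TestCase):
    """Two manually defined test cases, automated so they can run routinely."""

    def test_recorded_sessions_are_summed(self):
        tracker = FitnessTracker()
        tracker.record_session("cycling", 45)
        tracker.record_session("gym", 30)
        self.assertEqual(tracker.total_minutes(), 75)

    def test_invalid_session_length_is_rejected(self):
        tracker = FitnessTracker()
        with self.assertRaises(ValueError):
            tracker.record_session("cycling", 0)
```

Once written, such tests can be executed on every build (e.g. with `python -m unittest`), which is why the execution phase becomes the smaller part of the work.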

Hansbauer: Crowd testing is best explained by using an example.

Linz: Let’s take the example of a sporting goods manufacturer who develops a new app that helps users record and monitor their fitness data. There are already a few such apps, so there is a risk the new app will get lost in the Apple App Store or the Google Play Store if it doesn’t convince users right away. To prevent this, the app must be tested – e.g. with an exhaustive system test covering functionality as well as data protection requirements, performance aspects and compatibility – complemented by a crowd test. Matching the intended user group, we choose as the crowd test team a group of athletes from different disciplines and performance levels: from beginners, via returning recreational athletes, through to performance-oriented athletes, male and female, using different mobile devices. Everyone in the crowd installs the new app on their own mobile device and then works with it for a limited period of time. Within just a few days, the sporting goods manufacturer receives feedback about his app – and it comes exactly from the target group that is intended to use the app later on. I’m sure you can say more about what a crowd test script looks like.

Hansbauer: The script is individually defined for every customer, either by a specialist from imbus or a test manager from Testbirds. The test can capture quality management aspects as well as user experience aspects: executing given test cases and finding functional defects (so-called bugs) complements the technically oriented quality assurance of an application, whereas beta tests count as marketing-oriented quality assurance – keyword: usability studies. Depending on the customer’s requirements, either a pure bug test, a usability test or a combined procedure is chosen. In addition, you can add bug verification: other testers reproduce identified defects on their own systems to find out whether a bug occurs in general or only in certain system constellations, on certain devices or in certain test environments.

For the fitness app from our example it might look like this: you define the test type – bug test, usability test or combined test – along with the issues to focus on and the device combinations, and you choose the target group. Besides quantitative questions like “Rate the fitness app on a scale of 1 to 6”, you can pose qualitative questions to the testers, such as “What do you like most about the fitness app?”. These are the ingredients of a test set-up and the rough general conditions of a crowd-based project. You draft a test description like “Download the fitness app, familiarise yourself with it, use the app for bicycling / in the mountains / at the gym, minimise the app, etc.”, and the complete execution then follows within a few days: the sporty crowd puts the fitness app through its paces on their own devices and records observations and impressions. These results are subsequently checked for completeness and level of detail; the testers have to attest every step with screenshots or screencasts.
Our project managers check whether all test requirements have been met. They assess the feedback and formulate specific guidance on how the fitness app can be optimised.
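The set-up Hansbauer describes – test type, device combinations, a quantitative and a qualitative question, step-by-step instructions, and a completeness check on each tester’s feedback – can be sketched as plain data plus one validation routine. The sketch below is a hypothetical illustration of that structure, not Testbirds’ actual tooling; all names and device entries are invented for the example.

```python
# Hypothetical crowd test configuration, modelled on the fitness-app example.
CROWD_TEST = {
    "test_type": "combined",  # "bug", "usability", or "combined"
    "devices": ["illustrative iOS phone", "illustrative Android phone"],
    "quantitative": "Rate the fitness app on a scale of 1 to 6",
    "qualitative": "What do you like most about the fitness app?",
    "steps": [
        "Download the fitness app",
        "Familiarise yourself with the fitness app",
        "Use the app for bicycling / in the mountains / at the gym",
        "Minimise the app",
    ],
}


def feedback_is_complete(config, feedback):
    """Check a tester's feedback the way a project manager would:
    every step must be attested (e.g. by a screenshot) and both the
    rating and the free-text question must be answered."""
    attested = feedback.get("attested_steps", [])
    if any(step not in attested for step in config["steps"]):
        return False
    return bool(feedback.get("rating")) and bool(feedback.get("comment"))
```

A submission that attests all steps and answers both questions passes the check; one with a missing screenshot or an empty comment would be sent back to the tester.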

Linz: In my estimation, the two approaches – crowd testing and expert-based software testing – will continue to converge. In crowd testing, the infrastructure and the test object are available on the internet: you work online, and the crowd can be scattered around the globe. In the future, test objects will also be deployed on the net for expert-based testing, allowing rapid and flexible access. As a consequence, testers in expert-based software testing will also be able to work further apart geographically. At the same time, it is foreseeable that qualified test specialists will play an increasing part in crowd testing tester pools.


Tilo Linz                                                   Georg Hansbauer
