Table of Contents
2. Software Test
3. The approach for the creation of acceptance test cases
A complete in-house development of vehicle software by a manufacturer (OEM) is practically impossible today because of the high demands involved. Development requires a variety of specialist know-how carriers, which complicates the task of coordinating and integrating the software modules on the control unit (ECU) in the vehicle. In addition, the high demands on the correctness of the software must be met.
This is confirmed by the HAWK study ("Challenges of the Automotive Value Chain"). The study shows far-reaching changes in the value-creation architecture of the automotive industry: suppliers, as know-how carriers, are increasing their share, while the manufacturers' share was projected to fall by almost a third, to 25 percent, by 2015.
The dependency of OEMs on multiple suppliers increases the complexity of testing. Before the OEMs integrate the SW modules of different suppliers at the software-in-the-loop (SiL) level or test them on hardware-in-the-loop (HiL) test benches, the individual modules must be accepted.
The OEMs therefore concentrate on the one hand on the acceptance and system testing of the software modules of various suppliers and their integration, and on the other hand on the integration of the delivered ECUs into the entire vehicle. Embedded systems are increasingly becoming an integral part of safety-critical applications.
They control and regulate complex internal processes and/or interact with the environment. In addition, these devices take on the task of networking distributed, concurrently running processes between sender and receiver, which can lead to new types of errors.
The error rate of software in the medical and automotive sectors today is approximately 20 percent, which makes quality assurance all the more important.
Testing is one of the most important pillars of quality assurance. A test is used to check whether the system has been implemented according to the requirements. Different test strategies and concepts are necessary to achieve acceptable test coverage with regard to customer requirements, standards and legal requirements.
2. Software Test
The prerequisite for the term "software test" is the term "error in software systems". An error is the non-fulfilment of a specified requirement on the software system: there is a deviation between the actual behavior and the target behavior. The actual behavior is observed during operation of the software system; the target behavior is anchored in the specification of the test tasks. Errors in software systems, in contrast to those in physical systems, do not arise from wear or ageing but are introduced during the development of the software. They remain in the software and take effect when the system is executed. The task of testing is to reduce the risks of using the software. This is done by detecting as many errors as possible during testing.
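The comparison of actual behavior against target behavior is exactly what an automated test case encodes. A minimal sketch in Python (the `speed_limiter` function and its threshold are invented for illustration, not taken from a real specification):

```python
def speed_limiter(requested_speed: float, limit: float = 130.0) -> float:
    """Hypothetical ECU function: clamp the requested speed to a limit."""
    return min(requested_speed, limit)

def test_speed_limiter():
    # Target behavior (from the specification): speeds above the limit
    # must be clamped; speeds below it pass through unchanged.
    # A deviation of the actual return value from these expected values
    # would be exactly the kind of error the text describes.
    assert speed_limiter(150.0) == 130.0
    assert speed_limiter(100.0) == 100.0

test_speed_limiter()
print("actual behavior matches target behavior")
```

Each assertion pairs one input with the target behavior; a failing assertion documents a deviation between actual and target behavior.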
The aim of testing is to improve quality by eliminating error conditions. Testing is performed by executing a program on the control unit. This requires test cases and a test plan. A test case specifies which inputs are to be supplied to the program and what the result should be. Many test cases are needed to cover the entire program in the ECU. Test plans or test concepts are usually drawn up in accordance with the IEEE 829 standard. A test concept according to IEEE 829 contains the following points:
Test concept name
The test concept must be identified so that the document can be clearly distinguished from other documents. In each company, document naming follows defined rules and conventions. The document must be stored in the document management system and contain at least the document name, version, and release status.
Introduction
The introduction briefly describes the project and its background. The project participants (customer, tester, test manager, and other roles) are listed. References to other documents, norms, standards, and company rules must be included.
Test items
The software components to be tested and the parts of the system that are not tested should be briefly described here.
Performance characteristics to be tested
This subchapter of the test concept covers the functions and properties that are to be tested. References to the test specification and the test levels must be included.
Performance features that are not tested
The aspects of the software that are not to be tested must be listed here in order to prevent unjustified expectations.
Test strategy
The test objectives are defined on the basis of a risk analysis. Test methods and the necessary automation of test tasks are defined, the selected strategies are checked against the available resources, and the identified risks are weighed up.
Acceptance criteria
On the basis of the tests performed, the development team must decide whether the implemented software should be delivered to the customer. A brief statement that the software is error-free is not sufficient. Criteria such as the number of errors found, the severity of the errors found, and the costs incurred are used here.
Criteria for test termination and test continuation
It can happen that the software performs so poorly during the first tests that further testing seems pointless. At this point, it must be decided which criteria are used to suspend the test procedure and under which conditions it is resumed.
Test deliverables
Here it is decided when, and to whom, which test documents and results must be communicated.
Test tasks and responsibilities
The allocation of responsibilities as well as the planning and execution of test activities are defined.
Test environment
Test tools, working equipment, and the test infrastructure must be listed. If special tools and resources are required for test automation, these must be listed as well.
Test organization
The organizational integration of the test personnel into the project and the division into different test groups and test levels must be planned.
Personnel, induction, training
The resources are planned. Staffing levels and qualifications are determined and administrative staff are instructed.
Schedule
The test activities are planned with milestones and communicated to the project manager.
Approvals
The organizational units that must approve the test concept are listed. Approval is given by the signature of the management level of an organizational unit.
Glossary
Many different terms are used in the testing domain. These should be listed in a glossary. The defined technical terms should be used consistently throughout the project in order to avoid different interpretations of a term.
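The chapter list above can be captured as a simple document skeleton; a minimal sketch, assuming the field names below (which paraphrase this text and are not a normative IEEE 829 schema):

```python
from dataclasses import dataclass, field

@dataclass
class TestConcept:
    """Skeleton of a test concept following the chapter list above.

    The attribute names paraphrase this text; they are not taken
    verbatim from the IEEE 829 standard.
    """
    name: str                 # unique document identifier
    version: str
    release_status: str
    features_to_test: list = field(default_factory=list)
    features_not_to_test: list = field(default_factory=list)
    suspension_criteria: list = field(default_factory=list)
    approvals: list = field(default_factory=list)

    def is_releasable(self) -> bool:
        # The concept is only usable once it is clearly identifiable
        # and has been signed off by the listed organizational units.
        return bool(self.name and self.version and self.approvals)

concept = TestConcept(name="TP-ECU-042", version="1.0",
                      release_status="draft",
                      approvals=["Head of QA"])
print(concept.is_releasable())  # True
```

Such a skeleton makes the mandatory identification and approval fields machine-checkable before the document enters the document management system.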
The classification of test activities in the software development process is depicted using the German federal government's V-Model (see Figure 1). The basic idea of the V-Model is that test activities and development activities are corresponding activities of equal weight. The left branch illustrates the development steps.
The requirements are designed, specified, and then implemented according to the customer's wishes. The right branch stands for the integration and test work: the components are integrated and tested successively until the overall system has grown together. After each completed step, the results are verified against the corresponding specifications of the left branch. At the same time, the tester validates whether a partial product actually solves the specified task.
[Figure not included in this excerpt]
Figure 1: V-model of the federal government and test activities
The core task of the acceptance test cases for the SW modules of different suppliers is to check whether or not they are mature enough for the next test stages in the right branch of the V-Model, i.e., SiL integration and HiL tests.
Since SiL integration, HiL tests, and even system tests involve considerable effort and cost, the maturity of the SW modules is of great importance. At this point, the SW modules must be accepted in order to check their suitability for the further test stages in the right branch of the V-Model. Approaches are therefore needed that, on the one hand, take into account the special requirements of ECU software in the automotive industry and, on the other hand, facilitate the acceptance of the software modules. This paper presents such an approach for the acceptance of software modules. The approach uses manual test case creation to derive acceptance test cases for OEMs from the requirements.
3. The approach for the creation of acceptance test cases
The approach is based on integrating the test case creation method known as the classification tree method with the design-by-contract concept (DBC). DBC is based on defining assumptions and promises for each software module: each software module concludes a contract with the other modules. In this contract, the module assumes something and promises to make something available to the other modules if its assumptions are met. The classification tree method, in contrast, starts with an analysis of the functional specification. The aim is to identify the so-called test-relevant aspects of the requirements. By selecting, in a meaningful way, the values that these aspects can assume, one expects to find test cases of differing relevance. This division into different aspects, which can then be refined separately, significantly reduces the complexity of the original test problem.
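A contract in the DBC sense can be sketched with plain assertions; the brake-torque module, its value ranges, and its transfer function below are invented for illustration:

```python
def compute_brake_torque(pedal_position: float, vehicle_speed: float) -> float:
    """Hypothetical SW module with an explicit contract.

    Assumptions (preconditions) - what the module accepts from callers:
      0.0 <= pedal_position <= 1.0 and vehicle_speed >= 0.0.
    Promises (postconditions) - what it guarantees to other modules:
      a non-negative torque, and zero torque when the pedal is released.
    """
    # Assumptions: reject callers that violate the contract.
    assert 0.0 <= pedal_position <= 1.0, "pedal_position out of range"
    assert vehicle_speed >= 0.0, "vehicle_speed must be non-negative"

    torque = 500.0 * pedal_position  # invented transfer function

    # Promises: checked before the result leaves the module.
    assert torque >= 0.0
    assert pedal_position > 0.0 or torque == 0.0
    return torque

print(compute_brake_torque(0.5, 80.0))  # 250.0
```

The assertion messages make a contract violation visible at the module boundary, which is exactly the information an acceptance test needs.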
In the approach, DBC is used to select the test-relevant aspects and categories in the classification tree method. The classification tree no longer contains functional aspects but the contracts defined by DBC: the division of the classification tree into aspects is replaced by DBC contracts, which are in turn divided into assumptions and promises. The test case designer must always think in terms of the contract that a software module concludes with the other modules in the overall architecture of the ECU software.
This division of the tree into different contracts, which can be refined separately into equivalence classes, considerably reduces the complexity of the original test problem. A contract provides the required test data through its assumptions, while the promise represents the target behavior.
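Refining each assumption of a contract into equivalence classes and then combining one representative per assumption yields the acceptance test cases; a minimal sketch, with invented class names and values for a hypothetical pedal/speed contract:

```python
from itertools import product

# Equivalence classes per contract assumption (invented example values).
# Each assumption is refined separately, mirroring the split of a
# classification tree into independently refined branches.
assumptions = {
    "pedal_position": {"released": 0.0, "partial": 0.5, "full": 1.0},
    "vehicle_speed":  {"standstill": 0.0, "city": 50.0, "highway": 130.0},
}

# Combine one representative value per assumption into a test case.
test_cases = [dict(zip(assumptions, values))
              for values in product(*(c.values() for c in assumptions.values()))]

print(len(test_cases))   # 9 combinations instead of the full input space
print(test_cases[0])     # {'pedal_position': 0.0, 'vehicle_speed': 0.0}
```

The reduction the text describes is visible here: two continuous inputs collapse into nine representative test cases, and the contract's promises supply the expected result for each of them.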
The classification tree method prescribes the division of the tree into aspects without proposing a concrete procedure for selecting and deriving them. The approach presented here allows the creation of all test cases needed to accept a correct, integrable, and functioning software module, provided that all assumptions of the software module and their promises are considered. It is independent of the test level and can be used, for example, for acceptance tests at the SiL (software in the loop), HiL (hardware in the loop), or overall system level. For the evaluation of the approach, the classification tree editor CTE/XL can be used and applied in the context of a project.
The approach presented belongs to the category of function-oriented testing. By using the formal DBC concept together with the classification tree method, the approach meets the requirements of the quality assurance norms and standards that apply to testing software in safety-critical domains such as the automotive industry.
The function-oriented approach used is suitable for the creation of acceptance test cases. The approach can be smoothly integrated into the acceptance phase of software modules in the development process.
The approach not only supports OEMs in the acceptance of software modules, but also offers the tester a simple procedure for creating acceptance test cases from specifications.
The advantages that distinguish this approach from other industry testing methods and approaches are as follows:
- Simple method and intuitive test case creation approach
- Concentration on acceptance test cases
- Early detection of inconsistent and incomplete requirements
- Reduction of the test problem
- Use of formal methods in test case creation
- Simplification of the derivation of test cases from the specification sheet
- Use of test tools already established on the market
- Early detection of architecture and specification errors
- Reuse of created test cases
: Schneider, Kurt (2007): Abenteuer Softwarequalität, dpunkt.verlag, Heidelberg
: Spillner, Andreas; Linz, Tilo (2006): Basiswissen Softwaretest, dpunkt.verlag
 Cf. http://www.automobil-produktion.de/themen/02430/index.php
 Cf. Spillner/Linz (2006), p. 7.
 Cf. Schneider (2007), p. 83.
 Cf. Spillner/Linz (2006), p. 223.
 Cf. Spillner/Linz (2006), p. 223.
 Cf. Spillner/Linz (2006), p. 39.
 Cf. Spillner/Linz (2006), p. 40.
 Cf. http://de.wikipedia.org/wiki/Design_by_contract
 Cf. Schneider, Kurt (2007)
 Cf. http://www.systematic-testing.com