An Evaluation of Model Testability Approaches

Design for testability is an important issue in software engineering. It becomes crucial in the case of model-based testing, where models are generally not tested before being used as its input. The quality of design models (e.g., UML models), which are the main artifacts of any software design, has received comparatively little attention. Testability tends to make the validation phase more efficient at exposing faults during testing, and consequently increases the quality of the end product so that it meets the required specifications. Testability modeling has been researched for many years. Unfortunately, the modeling of a design for testability is often performed after the design is complete. This limits the functional use of the testability model to determining what level of test coverage is available in the design. This information may be useful for assessing whether a product meets the target requirement for a desired level of test coverage, but it has little proactive effect on making the design more testable.

Council for Innovative Research, Peer Review Research Publishing System. Journal: INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY, Vol 9, No 1, July 15, 2013. editor@cirworld.com, www.cirworld.com, member.cirworld.com


Introduction
Software testing is a significant part of both the software lifecycle and quality assurance activities. In its simplest form, testing is the dynamic execution of the software product in order to reveal the presence of errors. Software testing is costly, and it is considered the final barrier to the release of a software product. Model-based testing (MBT) is an automation of the black-box testing technique; its major difference from conventional black-box testing methods is that the test cases are produced automatically by software tools from models of the expected behavior of the software under test (SUT). MBT brings many benefits, including the early detection of errors, which is considerably cheaper than late detection. The major liability of MBT is knowing how to model the SUT (Reza, Ogaard, & Malge, 2008).
Model-based testing is a rising trend in test automation. In MBT, the SUT is modeled at a level of abstraction appropriate for testing, and tools are used to create test cases automatically from this model. Given a set of appropriate methods and tools, MBT has been demonstrated to be a helpful and efficient means of high-level testing in diverse domains. Its benefits include decreased maintenance costs, by concentrating on a single high-level model, and increased test coverage of the features expressed in the test model, by means of automated test generation (Puolitaival & Kanstrén, 2010).
Model-based testing advocates the systematic use of software models, in particular for testing activities such as the creation of test oracles, test execution environments, and test cases. If a software model is well suited to testing activities, we speak of a testable model. Thus, the testability of software models is a significant quality, representing the degree to which a software model supports testing activities in a given test context. The testability of software models should be analyzed and improved at modeling time, before MBT activities start. This helps save the cost of detecting and correcting testability defects in later stages.
For models used in model-based testing, evaluating their testability is a significant problem. Existing approaches lack features needed for a systematic and complete evaluation. They either (1) do not consider the context of software models, (2) do not propose a systematic process for selecting and developing suitable dimensions, (3) do not describe a consistent and general understanding of quality, or (4) do not distinguish between subjective and objective measurements (Voigt & Engels, 2008).
Testability is a property of a program introduced with the purpose of forecasting the effort needed to test the program. To quantify a program's testability is to associate the program with numbers that express how easily its testable properties can be tested. High testability means that errors can be discovered more easily during testing (if the software contains errors); otherwise, the software is difficult to test and likely to be less reliable. The number of tests a program needs may therefore be taken as a measure of its testability, although there has not been an intuitively acceptable measure of program testability. Software testability is the degree to which a software artifact (i.e., a software module, software system, design document, or requirements document) supports testing in a given test context. Testability is not an intrinsic property of a software artifact and cannot be measured directly (unlike, say, software size). Instead, testability is an extrinsic property that results from the interdependency between the software to be tested and the test goals, test methods, and test resources (i.e., the test context). A lower degree of testability results in increased test effort; in extreme cases, a lack of testability may prevent the testing of parts of the software, or of some software requirements, altogether (Yeh & Lin, 1998).

An Empirical Analysis of a Testability Model for Object-Oriented Programs
The authors presented a metric-based testability model for object-oriented programs. The paper empirically investigated the relationship between the model and the testability of classes at the code level, from the perspective of unit testing. They designed an empirical study using data collected from two open-source Java software systems for which JUnit test cases exist. To capture the testability of classes, they used various metrics quantifying different characteristics of the corresponding JUnit test cases. To evaluate the model's ability to predict the testability of classes, they applied statistical correlation tests. The results support the idea that there is a statistically significant relationship between the model and the test case metrics used (Kout et al., 2011).
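The statistical analysis in such a study can be illustrated with a small sketch: correlating a class-level design metric with a metric of the corresponding JUnit test cases. The metric values below are invented illustration data, not the paper's data.

```python
# Sketch: correlating a per-class design metric with a test-suite metric.
# The values below are hypothetical illustration data.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-class values: a complexity metric vs. lines of JUnit test code.
complexity = [3, 7, 2, 9, 5, 12]
test_loc   = [40, 95, 25, 130, 70, 160]

r = pearson(complexity, test_loc)
print(f"correlation: {r:.3f}")  # a value near 1 suggests a strong relationship
```

A study of this kind would compute such coefficients for many metric pairs and then test them for statistical significance.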

Analysis of Object Oriented Complexity and Testability Using Object Oriented Design Metrics
The approach proposed in this paper revolves around the complexity and testability of object-oriented design. Predicting the complexity of a design at the class level helps to simplify the design as much as possible. The object-oriented design metrics extended by this approach help to obtain quantifiable results so that design complexity can be predicted accurately. The approach is based on a literature survey, and the concepts developed are validated by mapping the metrics model to actual software projects; the literature survey concepts and the estimated results obtained from actual projects corroborate each other. Improvements can be made to the proposed metrics. As more detailed designs are produced during later design phases, the proposed metrics can be applied to other types of diagrams produced as part of the detailed design, either in their existing form or modified according to requirements. Complexity analysis at the coding stage can also be performed with the object-oriented design metrics proposed in this paper (Khalid et al., 2010).

Metric Based Testability Model for Object Oriented Design (MTMOOD)
The proposed model for the assessment of testability in object-oriented design has been validated using structural and functional information from object-oriented software. The model's ability to estimate overall testability from design information has also been demonstrated on several functionally equivalent projects, where the overall testability estimate computed by the model had a statistically significant correlation with the assessment of overall project characteristics determined by independent evaluators. The proposed model is practical in nature: quantitative data on testability is of immediate use in the software development process. The software developer can use such data to plan and monitor testing activities, the tester can use testability information to decide which modules to focus on during testing, and the developer can use testability metrics during code review to find refactorings that would improve the testability of the code (Khan & Mustafa, 2009).

Automating regression test selection based on UML designs
The authors propose a methodology, supported by a prototype tool, to tackle the regression test selection problem at the architecture/design level in the context of UML-based development. Their main motivation is to enable regression test selection based on design change information, early in the change process. They also present three case studies that were used as an initial feasibility and benefit assessment. These case studies are varied in the sense that they cover very different systems and changes, in both industrial and academic settings. The results show that design changes can have a complex impact on regression test selection and that, in many cases, automation is likely to help avoid human errors. Their objective has been to ensure that regression testing is safe while minimizing regression testing effort. They have shown, however, that certain changes may not be visible in the design and may require additional attention during coding, or a special way of documenting them during design. Another limitation is that test selection based on UML design information may not be as precise as selection based on detailed code analysis; improving precision by analyzing guard conditions and OCL contracts in more detail is the focus of their current research. Even so, their case studies showed only one case of imprecision in classifying a test case as retestable. Despite the above limitations, by providing a methodology and tool to perform impact analysis and regression test selection based on UML designs, the achievements are:
- Higher efficiency in test selection, based on the automation of design change analysis and traceability between UML designs and regression test cases.
- Better support for assessing and planning regression test effort earlier in the change process, that is, once design changes have been determined (Briand et al., 2009).

Automatic generation of test specifications for coverage of system state transitions
The authors present a novel strategy for automatic state-based system testing aimed at transition path coverage. A severe handicap in system test generation for transition coverage arises from the fact that system state models are not developed during any pragmatic development process; this may be attributed to the fact that state models for practical problems tend to be extremely large and complex. To overcome this problem, they synthesize a system state model of an object-oriented system from the relevant UML models. The synthesized state model is then used to generate test specifications for transition coverage. Their approach to system testing does not replace traditional testing approaches such as method coverage or message path coverage; rather, testing based on transition path coverage is intended to be used in a complementary manner with traditional test techniques. The test specifications generated by their approach can detect bugs not detected by traditional testing, since the fault models of the two are different (Sarma & Mall, 2009).
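The generation of test specifications for transition coverage can be sketched as a bounded traversal of the synthesized state model, where each enumerated event sequence becomes one test specification. The coffee-machine states and events below are invented for illustration, not taken from the paper.

```python
# Sketch: deriving test specifications for transition coverage from a
# state model. States and events here are invented for illustration.

# State model as adjacency list: state -> list of (event, next_state).
state_model = {
    "Idle":    [("insertCoin", "Ready")],
    "Ready":   [("selectDrink", "Brewing"), ("refund", "Idle")],
    "Brewing": [("done", "Idle")],
}

def transition_paths(model, start, max_len=4):
    """Enumerate event sequences (test specifications) from the start state,
    bounded in length to keep the path explosion manageable."""
    paths = []

    def walk(state, path):
        if path:                       # record every non-empty prefix path
            paths.append(list(path))
        if len(path) == max_len:       # bound the traversal depth
            return
        for event, nxt in model.get(state, []):
            path.append(event)
            walk(nxt, path)
            path.pop()

    walk(start, [])
    return paths

for p in transition_paths(state_model, "Idle", max_len=3):
    print(" -> ".join(p))
```

Each printed event sequence covers a distinct transition path and would be turned into a concrete test case against the SUT.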

Improving the Testability of Object Oriented Software through Software Contracts
The main goal of the study was to establish the relationship between software testability and software contracts. Software contracts reduce the testing effort and hence improve the testability of an object-oriented class. This is demonstrated through the example of a queue class written in C++. The results show that software contracts reduce the number of test cases needed to test a class by 50%. Hence, software practitioners can make use of software contracts to reduce the testing effort and thereby improve the testability of their software (Singh & Saha, 2010).
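The idea can be sketched with a bounded queue whose contracts (pre- and postconditions) are expressed as assertions. The paper's example is a C++ queue; this Python sketch is analogous, not the authors' code.

```python
# Sketch: a bounded queue with software contracts expressed as assertions.
# Analogous to, but not the same as, the paper's C++ queue example.

class BoundedQueue:
    def __init__(self, capacity: int):
        assert capacity > 0, "precondition: capacity must be positive"
        self._items = []
        self._capacity = capacity

    def enqueue(self, item):
        assert len(self._items) < self._capacity, "precondition: queue not full"
        self._items.append(item)
        assert self._items[-1] is item, "postcondition: item is at the tail"

    def dequeue(self):
        assert self._items, "precondition: queue not empty"
        return self._items.pop(0)

q = BoundedQueue(2)
q.enqueue("a")
q.enqueue("b")
print(q.dequeue())  # a
```

Because invalid calls (dequeuing an empty queue, overfilling) are rejected by the contracts themselves, the tester does not need separate test cases for every misuse scenario, which is how contracts reduce the number of test cases.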

Construction of a Systemic Quality Model for Evaluating a Software Product
In referring to the well-known expression "build quality into software," Dromey points out that high-level quality attributes, such as reliability and maintainability, cannot be built into the software. What can be done though is to identify a set of properties (such as modules without side effects) and build them up consistently, harmoniously and fully to provide reliability and maintainability. Links must be forged between the tangible properties of the product and the high-level quality attributes. Dromey proposes three models, depending on the products resulting from each stage of the development process: requirements model, design model, and implementation quality model (programming). In comparing this model to ISO 9126, additional characteristics like process maturity and reusability are noticeable. It is important to point out the weighting Dromey gives to process maturity, an aspect not considered in previous models. This model seeks to increase understanding of the relationship between the attributes (characteristics) and the subattributes (subcharacteristics) of quality. It also attempts to pinpoint the properties of the software product that affect the attributes of quality. After analyzing various product quality models, the different quality attributes or characteristics found in each of them must be compared. Table 2.9 shows that the quality characteristics found in the majority of the models are: efficiency, reliability, maintainability, portability, usability and functionality, which have been present in more recent models. Because they are present in all the models studied, they can be considered essential and worthy of study (Ortega & Rojas, 2003).

An empirical study into class testability
The ISO defines testability as "attributes of software that bear on the effort needed to validate the software product". Binder offers an analysis of the various factors that contribute to a system's testability, which he visualizes using the fishbone diagram shown in the figure below. The major factors determining test effort that Binder distinguishes include the required testing criterion, the usefulness of the documentation, the quality of the implementation, the reusability and structure of the test suite, the suitability of the test tools used, and the process capabilities. Of these factors, this study is concerned with the structure of the implementation, and with source code factors in particular. Two categories of source code factors are distinguished: factors that influence the number of test cases required to test the system, and factors that influence the effort required to develop each individual test case. The former are referred to as test case generation factors and the latter as test case construction factors; both are discussed below (Bruntink & Van Deursen, 2006).

Contract-Based Software Component Testing with UML Models
This research takes a different approach by incorporating testing-support artefacts (called test contracts) at the model-based specification level to improve model-based component testability. The authors believe that model-based testability stands at a testing level above traditional code-based testability, and thus is consistent with and supports model-based approaches to software component testing (SCT) (Zheng & Bundell, 2008).

Quality Assessment and Quality Improvement for UML Models
The ISO/IEC 9126 model does not define uses, but distinguishes between internal quality, external quality, and quality in use. It is a generic quality model that covers internal and external quality in one abstract model. The model for quality in use is similar to the operation use of the McCall model. However, quality in use and external quality are out of the scope of this paper and are therefore not discussed any further (Jalbani, 2011).

Comparative evaluation
In this section we evaluate the model testability approaches described above, stating the achievements and main issues of each approach.

Conclusion
This paper has aimed to provide an overview of and compare recent progress in model testability. We have studied and surveyed several approaches, but we cannot claim that this survey is comprehensive and exhaustive. Finally, we have provided a comparison table whose columns compare all the approaches. The main problem with most of these approaches is their treatment of behavioral architecture in model testability. It should be noted that although the above comparison is based on some prominent approaches, the outcome of this research is not restricted to those approaches. In other words, the criteria we considered can either serve as features to be included in a newly developed system or be applied more generally to help evaluate or select model testability approaches. The results of our comparison show that there is no single model testability approach that is superior in all cases; deciding which approach to use in a certain scenario should therefore be based on its specifications. Combining these approaches and presenting a new approach that is more comprehensive and mature is our future work.

Acknowledgement
This project is sponsored by the Ministry of Higher Education (MOHE) in cooperation with Universiti Teknologi Malaysia, Research Management Centre (UTM-RMC) under Vote No. 03H74. The authors would also like to thank those who are directly or indirectly involved in this project.