A Comparative Evaluation of Approaches for Model Testability

Design for testability is a very important issue in software engineering. It becomes crucial in model-based testing, where models are generally not tested before being used as its input. The quality of design models (e.g., UML models), which are the main artifacts of any software design, has received less attention. Testability tends to make the validation phase more efficient at exposing faults during testing, and consequently to increase the quality of the end product so that it meets the required specifications. Testability modeling has been researched for many years. Unfortunately, the modeling of a design for testability is often performed after the design is complete. This limits the functional use of the testability model to determining what level of test coverage is available in the design. That information may be useful for assessing whether a product meets the target requirement for a desired level of test coverage, but it has little proactive effect on making the design more testable.


Introduction
Software testing is a significant part of both the software lifecycle and quality assurance activities. In its simplest form, testing is the dynamic execution of a software product in order to reveal the presence of errors. Software testing is costly, and it is considered the final barrier to software product release. Model-based testing (MBT) is an automated black-box testing technique; its major difference from conventional black-box testing methods is that test cases are automatically produced by software tools from models of the expected behavior of the software under test (SUT). MBT brings many benefits, including early error detection, which is comparatively cheap. The major MBT liability is knowing how to model the SUT (Reza, Ogaard, & Malge, 2008).
Model-based testing is a rising trend in test automation. In MBT, the SUT is modeled at a level of abstraction appropriate for testing, and tools are used to automatically create test cases from this model. Given a set of appropriate methods and tools, MBT has been demonstrated to be a helpful and efficient means of high-level testing in diverse domains. Its benefits include reduced maintenance costs, from concentrating on a single high-level model, and increased test coverage of the features expressed in the test model, by means of automated test generation (Puolitaival & Kanstrén, 2010).
Model-based testing advocates the systematic use of software models in testing activities, e.g., the creation of test oracles, test execution environments, and test cases. If a software model is well suited to testing activities, we speak of a testable model. Thus, the testability of software models is a significant quality, representing the degree to which a software model supports testing activities in a given test context. The testability of software models should be analyzed and enhanced at modeling time, before MBT activities start. This helps save the cost of detecting and correcting testability defects in later stages.
For models used in model-based testing, evaluating testability is a significant problem. Existing approaches lack features needed for a systematic and complete evaluation: they either (1) do not consider the context of software models, (2) do not propose a systematic process for choosing and developing appropriate quality dimensions, (3) do not provide a consistent and general understanding of quality, or (4) do not distinguish between subjective and objective measurements.
Testability is a program property introduced with the purpose of forecasting the effort needed to test the program. To quantify a program's testability is to associate the program with numbers expressing how easily its testable properties can be tested. Highly testable software means that errors can be discovered more easily during testing (if the software contains errors); otherwise, the software is complicated to test and is likely to be less reliable. The number of tests a program needs may therefore be taken as a measure of its testability, although there has been no intuitively acceptable measure of program testability. Software testability is the degree to which a software artifact (i.e., a software module, software system, design document, or requirements) supports testing in a given test context. Testability is not an inherent property of a software artifact and cannot be measured directly (as, for example, software size can). Instead, testability is an extrinsic property that results from the interdependency of the software to be tested with the test goals, test methods used, and test resources (i.e., the test context). A lower degree of testability results in increased test effort. In extreme cases, a lack of testability may prevent the testing of parts of the software, or of some software requirements, at all (Yeh & Lin, 1998).

Measuring design testability of a UML class diagram
In this paper, the authors identified two configurations in a UML class diagram that can lead to code that is difficult to test. These configurations are called testability antipatterns and can be of two types: class interaction or self-usage interaction. When such antipatterns between classes are implemented as interactions between objects, the final software may be very difficult to test. The paper proposes a test criterion that forces coverage of all object interactions. It also defines a model that can be derived from a class diagram and from which it is possible to detect, in an unambiguous way, all the antipatterns. From this model, it is also possible to compute the complexity of the antipatterns, which is the maximum number of object interactions that could exist (and should be tested). The testability measurement corresponds to the number and complexity of the antipatterns (Baudry & Traon, 2005).
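The detection step can be illustrated on a small class-dependency graph. The sketch below is not the authors' tool and simplifies their model: it treats a class diagram as a directed graph of dependencies and flags two hypothetical antipattern shapes in that spirit, pairs of classes linked by two or more distinct dependency paths (class interaction) and classes that can reach themselves through their dependencies (self-usage). The graph encoding and function names are our assumptions.

```python
from itertools import permutations

def count_paths(graph, src, dst, visited=frozenset()):
    """Count the distinct simple dependency paths from src to dst."""
    if src == dst:
        return 1
    return sum(count_paths(graph, nxt, dst, visited | {src})
               for nxt in graph.get(src, ()) if nxt not in visited)

def find_antipatterns(graph):
    """Flag class-interaction pairs and self-usage classes in a dependency graph."""
    classes = set(graph) | {c for deps in graph.values() for c in deps}
    # Class interaction: a pair of classes linked by two or more distinct paths.
    interactions = {(a, b): n for a, b in permutations(classes, 2)
                    if (n := count_paths(graph, a, b)) >= 2}
    # Self-usage: a class that can reach itself through its own dependencies.
    self_usage = {c for c in classes
                  if any(count_paths(graph, nxt, c) for nxt in graph.get(c, ()))}
    return interactions, self_usage
```

For a diagram where Order depends on both Customer and Invoice, and Invoice also depends on Customer, the pair (Order, Customer) is flagged with two paths; the path count gives a rough stand-in for the interaction complexity the paper measures.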

Quality Plans for Measuring Testability of Models
In this paper, the authors focused on the evaluation of software models used in testing activities. They introduced the Model Quality Plan (MQP) approach for measuring the quality of software models and demonstrated by example how an MQP can be systematically developed for measuring the testability of UML statecharts. The MQP approach is based on a combination of the Goal Question Metric (GQM) paradigm and quality models. It consists of a top-down process and a related metamodel: the process guides the development of an MQP, and the metamodel states what an MQP may look like. The most important differences of their approach with respect to GQM are (1) the adoption of a context characterization, (2) the integration of quality models, (3) the refinement of the measurement level, and (4) the formalization of the approach in an integrated UML model. Because a set of intermediate products must be documented in a high degree of detail, the initial effort of applying the approach remains high. Nevertheless, the authors are convinced that the approach is cost-effective in the long term: involving context factors taps the full potential of reusing existing quality plans, and the systematic use of context factors for identifying information needs makes the approach more efficient. Last but not least, the development effort can be considerably reduced by dedicated tool support.
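The top-down GQM refinement that an MQP builds on can be pictured as a small data structure. The sketch below is illustrative only; the class and field names are our assumptions, not the paper's metamodel, but it shows the shape: a goal carrying a context characterization, refined into questions, refined into metrics that are marked as objective or subjective.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    objective: bool          # objective measurement vs. subjective judgment

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    quality_focus: str
    context: dict            # context characterization, e.g. model type
    questions: list = field(default_factory=list)

# Hypothetical plan fragment for measuring statechart testability.
goal = Goal(
    purpose="evaluate",
    quality_focus="testability",
    context={"model_type": "UML statechart", "test_approach": "MBT"},
    questions=[Question(
        text="How well do the statechart's transitions support test generation?",
        metrics=[Metric("transitions_per_state", objective=True),
                 Metric("perceived_guard_clarity", objective=False)])],
)
```

Separating the `objective` flag at the metric level mirrors the paper's insistence on distinguishing subjective from objective measurements.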

Using UML Models and Formal Verification in Model-Based Testing
The authors presented an approach that combines UML modeling with formal verification in order to improve the quality of the models used for automated test derivation. Incorporating formal verification in the overall process gave a number of advantages. Firstly, the quality of the models was improved, as it allowed the authors to detect inconsistencies in the models that were not detected by their custom OCL validation rules. Secondly, while modeling with UML-B, they observed several ambiguities, or poorly explained details, in the specifications that might have been difficult to observe using UML only. In a way, by using formal specifications, they better understood the functionality of the SUT. However, this also increased the complexity of the model, as more details had to be added just to prove the system correct. At the moment, traceability of requirements from test cases to the formal models is not supported; however, this could be done by attaching textual requirements, taken from the requirements model, to the generated Event-B specifications. This would certainly help to determine which part of the system has passed or failed a test, and it constitutes a topic for their future work (Malik et al., 2010).

A Novel Quality Model for UML Models
In this paper, a continuous quality assessment and improvement methodology for UML has been proposed, based on a quality model. The quality model uses an inclusion relationship over three types of model completeness: incomplete, complete, and executable UML models. The quality attributes of each completeness type are adopted from the generic quality model ISO/IEC 9126. The three types of model completeness take into account the different levels of completeness, or levels of abstraction, of the UML models developed during the software development phases. The purpose of the quality model is to give the modeler a way to select appropriate methods for continuous quality assessment and improvement of UML models (Jalbani et al., 2012).

Method for Improving Design Testability through Modeling
In this paper, the authors state that testability modeling is part of an overall strategy. In order to utilize testability modeling, it is important to understand what role it will take in the product's test strategy. With this understanding, a definition of what needs to be modeled, and to what extent, can be produced, and the testability modeling can then be performed. The information yielded by the testability modeling can be used in the design process to investigate alternatives, analyze them, and act on them to yield the optimum balance of test within a product design. Documenting the testability decisions and actions, and the reasons behind them, provides information that is useful throughout the product's lifecycle, potentially leading to reduced lifecycle costs (Emmert, 2010).

Contract-Based Software Component Testing with UML Models
The authors in this research extended the Design by Contract (DbC) concept to the software component testing (SCT) domain and developed a new TbC technique, with the key aim of bridging the gap between ordinary (non-testable) UML models and target (testable) test models and of improving model-based component testability for effective UML-based SCT. They introduced the new concept of the test contract as the key testing-support mechanism, and the new concept of Contract for Testability (CfT) as the principal goal of the TbC technique. They described test contracts based on basic component contracts, classified them into internal and external test contracts for effective testing based on the new concept of effectual contract scope, and developed a set of TbC test criteria to realise testability improvement toward the CfT goals. Then, following the developed TbC working process, they put the technique into practice, conducting UML-based CIT with the CPS case study, and described and discussed contract-based component test model construction, component test design, and component test generation (Zheng & Bundell, 2008).
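As a generic illustration of the contract idea that such techniques build on (this is plain Design by Contract in Python, not the authors' TbC notation or their test-contract classification), a component operation can carry executable pre- and postconditions that a test harness checks on every call. The decorator and operation names below are hypothetical.

```python
def contract(pre=None, post=None):
    """Attach precondition/postcondition checks to a component operation."""
    def decorate(func):
        def wrapper(*args, **kwargs):
            if pre is not None:
                # Precondition: the caller's obligation before the call.
                assert pre(*args, **kwargs), f"precondition violated in {func.__name__}"
            result = func(*args, **kwargs)
            if post is not None:
                # Postcondition: the component's guarantee about the result.
                assert post(result, *args, **kwargs), f"postcondition violated in {func.__name__}"
            return result
        return wrapper
    return decorate

# Hypothetical component operation with a contract attached.
@contract(pre=lambda items: len(items) > 0,
          post=lambda result, items: result in items)
def pick_smallest(items):
    return min(items)
```

Because the checks run at every call, every test exercising the operation also exercises the contract, which is one way a contract can make a component model more testable.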

Research and Practice of the IVF Software Testability Model
The authors of this paper first analyze and review the research and practice of existing software testability models, along with the various factors that affect software testability in the software development process. They then propose an IVF (Iteration of Vector Factorization) software testability model, based on years of statistics and analysis of test data from the Software Test and Evolution Center, and validate it in practice. The IVF testability model is compact and practical and can be applied directly in software engineering practice (Wri et al., 2010).

A measurement framework for object-oriented software testability
The framework presented in this paper provides practical and operational guidelines to help assess the testability of designs modelled with the UML. These guidelines are presented in such a way that the evaluation procedure can be tailored to the design and test strategies employed in a given environment. From a research viewpoint, the paper presents a number of precise hypotheses that can be investigated empirically; in other words, it presents a starting-point theory that can be verified and refined by experimental means (Mouchawrab et al., 2005).

Measuring Testability of Aspect Oriented Programs
The proposed model for assessing testability in aspect-oriented design has been validated, and its ability to estimate overall testability from design information has been demonstrated. The model is practical in nature: quantitative data on testability is of immediate use in the software development process. Software developers can use such data to plan and monitor testing activities, testers can use testability information to determine which modules to focus on during testing, and developers can use testability metrics to review the design (Kumar, Sharma, & Sadawarti, 2010).

Comparative evaluation
In this section we evaluate the model testability approaches described above, stating the achievements and main issues of each approach.

Conclusion
This paper has aimed to provide an overview of recent progress in model testability and to compare several approaches that we have studied and surveyed, although we cannot claim that this set of approaches is comprehensive or exhaustive. Finally, we provide a comparison table whose columns compare all the approaches. The main open problem in most of these approaches is the consideration of behavioral architecture in model testability. It should be noted that although the comparison is conducted on some prominent approaches, the outcome of this research is not restricted to them; in other words, the criteria we considered can either serve as features to be included in a newly developed system or be applied to help evaluate or select model testability approaches in general. The results of our comparison show that there is no single model testability approach that is superior in all cases; therefore, deciding which approach to use in a certain scenario should be based on its specifications. Combining these approaches into a new, more comprehensive and mature approach is our future work.