This paper tackles the problems of generating concrete test cases for testing whether an application is vulnerable to attacks, and of checking whether security solutions are correctly implemented. The approach proposed in the paper aims at guiding developers towards the implementation of secure applications, from the threat modelling stage up to the testing one. This approach relies on a knowledge base integrating varied security data, e.g., attacks, attack steps, and security patterns, which are generic and reusable solutions for designing secure applications. The first stage of the approach consists in assisting developers in the design of Attack Defense Trees (ADTrees) expressing the ways an attacker might compromise an application and the defenses that may be implemented. These defenses are given in the form of security pattern combinations. In the second stage, these trees are used to guide developers in the test case generation. After the test case execution, test verdicts show whether an application is vulnerable to the threats modelled by an ADTree. The last stage of the approach checks whether the behavioural properties of security patterns hold in the application traces collected during test case execution. These properties are formalised in LTL and generated from the knowledge base, so developers neither have to write LTL properties nor need to be experts in formal models. We evaluated the approach on 10 Web applications to assess its testing effectiveness and performance.

The evaluation of various scenarios from industrial case studies demonstrates that the proposed approach efficiently translates the behaviour models into formal specifications and properties. Moreover, our approach can provide developers with more informative and comprehensive feedback regarding inconsistency issues, and therefore help them efficiently identify and resolve the problems.
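As a rough illustration of the last stage described above (checking whether behavioural properties hold over the collected execution traces), the sketch below evaluates a response-style LTL property, G(request -> F(authorize)), on finite traces. The event names and the property itself are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of checking a response-style LTL property,
# G(request -> F(authorize)), over finite execution traces.
# Event names ("request", "authorize") are illustrative assumptions.

def holds_response(trace, trigger, response):
    """True if every `trigger` event is eventually followed by `response`."""
    for i, event in enumerate(trace):
        if event == trigger and response not in trace[i + 1:]:
            return False
    return True

def verdict(trace):
    """Map the property check onto a test verdict."""
    return "pass" if holds_response(trace, "request", "authorize") else "fail"

if __name__ == "__main__":
    good = ["start", "request", "authorize", "serve"]
    bad = ["start", "request", "serve"]  # authorization never happens
    print(verdict(good))  # -> pass
    print(verdict(bad))   # -> fail
```

A real implementation would check full LTL (e.g., via an automaton-based monitor) rather than this single property shape, but the verdict logic follows the same idea.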
Models are extensively used in many areas of software engineering to represent the behaviour of software systems at different levels of abstraction. Because of the involvement of different stakeholders in constructing these models, and because of their independent evolution, inconsistencies might occur between the models. It is thus crucial to detect these inconsistencies at early phases of the software development process, especially as soon as refined models deviate from their abstract counterparts. In this article, we introduce a containment checking approach to verify whether a certain low-level behaviour model, typically created by refining and enhancing a high-level model, is still consistent with the specification provided in its high-level counterpart. We interpret the containment checking problem as a model checking problem, which has not received special treatment in the literature so far. Because containment checking is based on model checking, it requires both formal consistency constraints and specifications of these models. Unfortunately, creating formal consistency constraints and specifications is currently done manually, and is therefore labour-intensive and error-prone. To alleviate this issue, we define and develop a fully automated transformation of behaviour models into formal specifications and properties. The generated formal specifications and properties can directly be used by existing model checkers to detect any discrepancy between the input models and to yield corresponding counterexamples.

Business analysts and domain experts often sketch the behaviors of a software system using high-level models that are technology- and platform-independent. The developers will then refine and enrich these high-level models. As a consequence, the refined models can deviate from the original models over time, especially when the two kinds of models evolve independently. In this context, we focus on behavior models; that is, we aim to ensure that the refined, low-level behavior models conform to the corresponding high-level behavior models. Based on model checking techniques, we propose containment checking as a means to assess whether the system's behaviors described by the low-level models satisfy what has been specified in the high-level counterparts. A major challenge is to lessen the burden of creating formal specifications of the behavior models as well as consistency constraints, which is a tedious and error-prone task when done manually. Our approach presented in this paper aims at alleviating the aforementioned challenges by considering the behavior models as verification inputs and devising automated mappings of behavior models onto formal properties and descriptions that can be directly used by model checkers. We discuss various challenges in our approach and show the applicability of our approach.
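Both abstracts rely on automatically deriving formal properties from behaviour models. As a much-simplified, hypothetical sketch of that idea, the code below takes a behaviour "model" that is merely an ordered list of activities and emits response-style LTL properties; real approaches operate on richer models such as UML activity diagrams, and the property shapes and activity names here are assumptions for illustration only.

```python
# Much-simplified sketch of automatically deriving temporal properties
# from a behaviour model. Here the "model" is just an ordered list of
# activities; property shapes and names are illustrative assumptions.

def to_ltl(activities):
    """Derive response properties: each activity is eventually
    followed by its successor in the model."""
    props = []
    for cur, nxt in zip(activities, activities[1:]):
        props.append(f"G ({cur} -> F {nxt})")
    return props

if __name__ == "__main__":
    model = ["receive_order", "check_stock", "ship_order"]
    for prop in to_ltl(model):
        print(prop)
    # G (receive_order -> F check_stock)
    # G (check_stock -> F ship_order)
```

The generated strings could then be handed to a model checker as the consistency constraints the high-level model imposes on its refinements.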
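The containment check itself is framed above as a model checking problem. The toy sketch below illustrates the underlying intuition directly, as a trace-containment (simulation) check between a refined model and a deterministic high-level model; the transition-system encoding and all state and action names are assumptions for illustration, not the approach's actual formalism.

```python
# Sketch of containment checking: every behaviour of the refined
# (low-level) model must be allowed by the high-level model.
# Models are deterministic labelled transition systems given as
# {state: {action: next_state}}; all names are illustrative.
from collections import deque

def contained(low, high, low_init, high_init):
    """Return (True, None) if every action sequence of `low` can be
    mimicked by `high`; else (False, counterexample_step)."""
    seen = {(low_init, high_init)}
    queue = deque(seen)
    while queue:
        ls, hs = queue.popleft()
        for action, nxt in low.get(ls, {}).items():
            if action not in high.get(hs, {}):
                return False, (hs, action)  # counterexample step
            pair = (nxt, high[hs][action])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True, None

if __name__ == "__main__":
    high = {"H0": {"order": "H1"}, "H1": {"ship": "H2"}}
    low_ok = {"L0": {"order": "L1"}, "L1": {"ship": "L2"}}
    low_bad = {"L0": {"ship": "L1"}}  # ships before any order
    print(contained(low_ok, high, "L0", "H0"))   # -> (True, None)
    print(contained(low_bad, high, "L0", "H0"))  # -> (False, ('H0', 'ship'))
```

Returning the offending state/action pair mirrors the counterexamples a model checker would yield when the low-level model deviates from its high-level counterpart.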