Technical choices & Project expected results

The main concepts of the project are:

  • Concept#1: A Domain-Specific Modeling Framework for Model-Based Testing, designed to capture SUT observations from a wide range of connected systems. It will include features related to communication packets, as well as the sequencing and timing of communications (a minimal illustration of these abstractions is sketched after this list).
  • Concept#2: Data-driven Test Model inference. By abstracting the 'normal' behavioral data of the SUT in the DSML notation, test models will be inferred automatically. In brief, an inferred test model can be regarded as expressing a set of communication traces, with probability distributions expressing the likelihood of various features of those traces (see the inference sketch after this list). This gives a sound basis for using model-based testing to generate realistic 'normal' test inputs for the SUT, as well as oracles for the expected outputs.
  • Concept#3: Security testing at the business-logic level for simulating false data injection attacks. By relieving them of the cumbersome task of writing generic test models, validation engineers can focus on designing attacks based on their domain knowledge and expertise (a small injection example follows this list).
  • Concept#4: Robustness testing using behavioral fuzzing, boundary testing, and exception testing. Model-based testing will be used to generate a wide variety of robustness tests to automatically stress the SUT and evaluate its reliability (see the boundary/fuzzing sketch after this list).
  • Concept#5: Selection and prioritization of test scenarios using online learning models. Modern agile development practices, e.g. continuous integration, mean that the SUT changes frequently, which calls for intelligent selection and prioritization of test cases at each build-test-deploy step. By intelligent, we mean automated (online) guidance of test selection and prioritization based on the test results of previous runs. Reinforcement learning methods combined with memory models based on (multi-layer) neural networks perform well in this context [Spieker17] (a simplified prioritization loop is sketched after this list). An advantage of this approach is the perpetual adaptation of the test focus towards the more error-prone parts of the SUT.
  • Concept#6: Smart analytics of test execution results. By monitoring the coverage of test models and test results, in addition to other test metrics such as SUT coverage, our tools will perform smart analytics using unsupervised learning methods, e.g. clustering (a minimal clustering sketch follows this list). Automated classification of test results becomes crucial to provide high-level views of the test process and to help validation engineers focus on the design of complex test scenarios. Furthermore, traceability from failed tests back to the higher-abstraction models that generated them will facilitate the analysis of failed tests by validation engineers.
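
For Concept#1, the following is a minimal sketch, in Python, of the kind of abstractions the DSML could expose for capturing SUT observations: packets with decoded payload fields, and the ordering and timing of exchanges on a channel. All class and field names are hypothetical illustrations, not the project's actual notation.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PacketObservation:
    """One observed communication packet exchanged with the SUT."""
    timestamp: float          # seconds since the start of the capture
    direction: str            # "in" (to SUT) or "out" (from SUT)
    message_type: str         # e.g. "CONNECT", "DATA", "ACK"
    fields: Dict[str, object] = field(default_factory=dict)  # decoded payload


@dataclass
class CommunicationTrace:
    """A timed, ordered sequence of packets observed on one channel."""
    channel: str
    packets: List[PacketObservation] = field(default_factory=list)

    def inter_arrival_times(self) -> List[float]:
        """Timing feature of the trace, usable for model inference (Concept#2)."""
        stamps = [p.timestamp for p in self.packets]
        return [b - a for a, b in zip(stamps, stamps[1:])]


# Example capture of a short exchange with the SUT.
trace = CommunicationTrace(channel="sensor-uplink")
trace.packets.append(PacketObservation(0.00, "in", "CONNECT", {"id": 42}))
trace.packets.append(PacketObservation(0.05, "out", "ACK", {}))
trace.packets.append(PacketObservation(1.05, "in", "DATA", {"temperature": 21.5}))
print(trace.inter_arrival_times())   # [0.05, 1.0]
```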
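For Concept#2, the sketch below illustrates, under simplifying assumptions, how a probabilistic test model could be inferred from 'normal' traces: here a first-order Markov chain over message types, usable both to generate realistic 'normal' inputs and as a basis for judging how likely an observed trace is. It is an illustration of the idea, not the project's actual inference algorithm.

```python
import random
from collections import Counter, defaultdict
from typing import Dict, List


def infer_model(traces: List[List[str]]) -> Dict[str, Dict[str, float]]:
    """Estimate P(next message type | current message type) from observed traces."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    return {
        state: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
        for state, nexts in counts.items()
    }


def generate(model: Dict[str, Dict[str, float]], start: str, length: int) -> List[str]:
    """Sample a 'normal' message sequence from the inferred model."""
    seq, state = [start], start
    for _ in range(length - 1):
        if state not in model:
            break
        nexts, probs = zip(*model[state].items())
        state = random.choices(nexts, weights=probs)[0]
        seq.append(state)
    return seq


observed = [
    ["CONNECT", "ACK", "DATA", "DATA", "CLOSE"],
    ["CONNECT", "ACK", "DATA", "CLOSE"],
]
model = infer_model(observed)
print(model["DATA"])                  # {'DATA': 0.33..., 'CLOSE': 0.66...}
print(generate(model, "CONNECT", 6))  # a plausible 'normal' test sequence
```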
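For Concept#3, a small hedged example of a false data injection scenario: starting from a 'normal' trace (for instance generated from the inferred model above), the validation engineer only specifies which business-level field to falsify and how. The packet layout, field names, and attack rule below are hypothetical.

```python
import copy
from typing import Callable, Dict, List


def inject_false_data(trace: List[Dict], field_name: str,
                      attack: Callable) -> List[Dict]:
    """Return a copy of the trace where `field_name` is falsified in every DATA packet."""
    attacked = copy.deepcopy(trace)
    for packet in attacked:
        if packet["type"] == "DATA" and field_name in packet["fields"]:
            packet["fields"][field_name] = attack(packet["fields"][field_name])
    return attacked


normal_trace = [
    {"type": "CONNECT", "fields": {"id": 42}},
    {"type": "DATA", "fields": {"temperature": 21.5}},
    {"type": "DATA", "fields": {"temperature": 21.7}},
]

# Attack rule expressing domain knowledge: report plausible but false readings
# that stay inside syntactic bounds, so only business-logic checks can detect them.
attacked_trace = inject_false_data(normal_trace, "temperature", lambda v: v - 15.0)
print(attacked_trace[1]["fields"])   # {'temperature': 6.5}
```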
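For Concept#4, an illustrative sketch of how robustness test inputs could be derived from a field specification: boundary values at and around the declared range, plus randomly fuzzed values far outside it for exception testing. The specification format is an assumption made for the example.

```python
import random
from typing import Dict, List


def boundary_values(spec: Dict) -> List[float]:
    """Classic boundary testing: values at, just inside, and just outside the range."""
    lo, hi, step = spec["min"], spec["max"], spec.get("step", 1)
    return [lo - step, lo, lo + step, hi - step, hi, hi + step]


def fuzz_values(spec: Dict, count: int, seed: int = 0) -> List[float]:
    """Naive fuzzing: random values far outside the declared range."""
    rng = random.Random(seed)
    span = spec["max"] - spec["min"]
    return [rng.uniform(spec["min"] - 10 * span, spec["max"] + 10 * span)
            for _ in range(count)]


temperature_spec = {"min": -40.0, "max": 85.0, "step": 0.5}
print(boundary_values(temperature_spec))   # [-40.5, -40.0, -39.5, 84.5, 85.0, 85.5]
print(fuzz_values(temperature_spec, 3))    # stress values for exception testing
```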
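For Concept#5, the sketch below shows the build-test-learn loop behind online test prioritization in its simplest form: each test case keeps a learned priority that is reinforced when the test fails (high reward) and decayed when it passes, and each CI cycle runs the top-ranked tests first. This is deliberately far simpler than the neural-network agents of [Spieker17]; the test names and learning rate are illustrative.

```python
from typing import Dict, List


class TestPrioritizer:
    def __init__(self, learning_rate: float = 0.3):
        self.lr = learning_rate
        self.priority: Dict[str, float] = {}   # learned value per test case

    def rank(self, tests: List[str]) -> List[str]:
        """Order tests for the next CI run; unknown tests get a neutral 0.5."""
        return sorted(tests, key=lambda t: self.priority.get(t, 0.5), reverse=True)

    def observe(self, results: Dict[str, bool]) -> None:
        """Online update from the last run: reward = 1 if the test failed."""
        for test, failed in results.items():
            reward = 1.0 if failed else 0.0
            old = self.priority.get(test, 0.5)
            self.priority[test] = old + self.lr * (reward - old)


prioritizer = TestPrioritizer()
suite = ["t_login", "t_upload", "t_timeout"]
for cycle_results in [{"t_login": False, "t_upload": True, "t_timeout": False},
                      {"t_upload": True, "t_timeout": False, "t_login": False}]:
    order = prioritizer.rank(suite)       # run order for this cycle
    prioritizer.observe(cycle_results)    # learn from what actually failed
print(prioritizer.rank(suite))            # ['t_upload', 't_login', 't_timeout']
```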
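For Concept#6, a minimal sketch of the clustering idea behind the smart analytics: each test execution is summarized as a small feature vector (here: duration, number of messages exchanged, step at which the failure occurred) and grouped with k-means, so that engineers review one representative failure per cluster instead of every failed test. The feature choice, data, and cluster count are illustrative assumptions, not project results.

```python
from sklearn.cluster import KMeans

# One row per failed test: [duration_s, messages_exchanged, step_of_failure]
failed_test_features = [
    [0.8, 12, 3],
    [0.9, 11, 3],
    [5.1, 40, 17],
    [4.8, 42, 16],
    [0.7, 13, 3],
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(failed_test_features)
for test_id, cluster in enumerate(kmeans.labels_):
    print(f"failed test #{test_id} -> failure family {cluster}")
```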

Project expected results:

The targeted results and research prototypes of the SARCoS project aim for Technology Readiness Level (TRL) 4 or 5 by the end of the project.