Towards Benchmark Optimization by Automated Equivalence Detection


Especially in the case of Cyber-Physical Systems (CPSs), testbed validations and benchmarks, while necessary, incur significant setup and operation costs. Optimized benchmark sets reduce the number of tests that need to be performed, which ultimately reduces costs. In this paper, we propose a new methodology that provides automated assistance for optimizing existing benchmarks or for creating new ones from scratch. The proposed methodology is based on complete Symbolic Execution of a single control loop iteration, optionally expanded with a Nondeterministic Finite Automaton (NFA) model that represents possible changes in the environment or the system between control loop iterations. This enables us to compute a stress number that represents the computational burden placed on the controller by a given benchmark. By iteratively searching for benchmarks with high stress numbers and automatically detecting and pruning benchmarks that induce the same path through the controller code, we can ultimately create a minimal set of relevant benchmarks for a CPS.
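The core pruning idea above can be sketched as follows. This is a hypothetical toy example, not the paper's implementation: the controller, its branch structure, and the stress scores are invented stand-ins, and in the actual methodology the path through the controller code would be obtained by Symbolic Execution rather than by concrete instrumentation.

```python
def controller_path(setpoint, measurement):
    """Record the branch decisions one control-loop iteration takes.

    Hypothetical toy controller standing in for real CPS code; the
    returned tuple of branch labels identifies the execution path.
    """
    path = []
    error = setpoint - measurement
    if error > 1.0:          # large positive error
        path.append("err>1")
    elif error < -1.0:       # large negative error
        path.append("err<-1")
    else:                    # small error, fine adjustment
        path.append("small")
    if abs(error) > 5.0:     # saturation guard
        path.append("saturate")
    return tuple(path)

def prune_equivalent(benchmarks):
    """Keep, per controller path, only the benchmark with the highest
    stress number; benchmarks inducing the same path are pruned."""
    best = {}
    for bench in benchmarks:
        path = controller_path(bench["setpoint"], bench["measurement"])
        # "stress" here is an illustrative score, not the paper's metric
        if path not in best or bench["stress"] > best[path]["stress"]:
            best[path] = bench
    return list(best.values())

benchmarks = [
    {"name": "b1", "setpoint": 10.0, "measurement": 2.0, "stress": 7},
    {"name": "b2", "setpoint": 10.0, "measurement": 1.0, "stress": 9},
    {"name": "b3", "setpoint": 10.0, "measurement": 9.8, "stress": 2},
]
minimal = prune_equivalent(benchmarks)  # b1 and b2 share a path; b2 wins
```

Here b1 and b2 drive the controller down the same branch sequence, so only the higher-stress b2 survives alongside b3, yielding a smaller benchmark set that still covers both paths.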

IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench 2018)
PhD Student

I am a researcher and head of Systems Analysis at the Chair of Communication and Distributed Systems at RWTH Aachen University, where I research the testability of distributed systems. My specific focus is on the applicability of Symbolic Execution to real-world software.