Scalable Modeling for Rigorous Software Specification and Testing
Selected case studies contain a range of embedded software from car door mirror electronic control units to satellite operations software.
Modern software development processes for safety- and mission-critical systems rely on rigorous coding and testing to support dependability claims and assurance cases. The goal of researchers at Ball State University’s Security and Software Engineering Research Center (S2ERC) is to economically achieve higher software quality and dependability for large, complex software-intensive systems.
The two foundational methods, namely sequence-based software specification and Markov chain usage-based statistical testing, were developed in the 1990s by Jesse Poore and his colleagues at the University of Tennessee’s Software Quality Research Laboratory. Since their inception they have been successfully combined and applied to a variety of industry and government projects. Field applications of these methods had earlier identified the need to address complexity and increase scalability, both theoretically and practically, for larger and more complex applications.
When these two demonstrably rigorous methods are combined and used across the complete software development cycle, the result is a formal system model that can serve as a first-pass Markov chain usage model for statistical testing. However, to reduce the specification to a manageable size, the human specifier usually must clarify the extent to which the system's inputs can be partitioned into subsets that do not interact or communicate with each other.
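To illustrate the idea, a Markov chain usage model can be walked randomly to generate statistically representative test cases, each test case being the sequence of stimuli along one path from model entry to model exit. The sketch below uses a hypothetical, greatly simplified usage model loosely inspired by a car door mirror controller; the states, stimuli, and probabilities are illustrative assumptions, not taken from the S2ERC tooling.

```python
import random

# Hypothetical usage model: each state maps to outgoing transitions
# (next_state, stimulus, probability). All names and probabilities
# are illustrative only.
USAGE_MODEL = {
    "Idle":      [("Adjusting", "press_adjust", 0.7), ("Exit", "power_off", 0.3)],
    "Adjusting": [("Idle", "release", 0.6), ("Folded", "press_fold", 0.4)],
    "Folded":    [("Idle", "press_unfold", 1.0)],
}

def generate_test_case(start="Idle", end="Exit", rng=random):
    """Random walk from start to end; the stimuli visited form one test case."""
    state, test_case = start, []
    while state != end:
        r = rng.random()
        cumulative = 0.0
        for nxt, stimulus, p in USAGE_MODEL[state]:
            cumulative += p
            if r <= cumulative:
                test_case.append(stimulus)
                state = nxt
                break
    return test_case
```

Because transitions are sampled according to the usage probabilities, frequent scenarios of use are tested proportionally more often, which is what makes the resulting test statistics meaningful.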
This S2ERC breakthrough methodology overcomes this limitation by composing larger system models from smaller ones built on either disjoint or non-disjoint subsets of inputs. The constructed larger models form a basis for more rigorous automated statistical testing and software certification.
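One simple way to picture composing a larger model from smaller ones over disjoint stimulus subsets is an interleaving product: the composed state pairs the sub-model states, and at each step exactly one sub-model advances. The sketch below is a minimal illustration of that idea under assumed semantics (including the weighting parameter), not the S2ERC composition algorithm itself.

```python
def compose_disjoint(model_a, model_b, weight_a=0.5):
    """Interleave two usage models with disjoint stimulus sets.

    Each model maps state -> [(next_state, stimulus, probability)].
    The composed model runs over product states (state_a, state_b);
    with probability weight_a the first sub-model fires, otherwise
    the second. Probabilities out of each composed state still sum to 1.
    """
    composed = {}
    for sa, trans_a in model_a.items():
        for sb, trans_b in model_b.items():
            out = []
            for nxt, stim, p in trans_a:
                out.append(((nxt, sb), stim, weight_a * p))
            for nxt, stim, p in trans_b:
                out.append(((sa, nxt), stim, (1 - weight_a) * p))
            composed[(sa, sb)] = out
    return composed
```

Because the stimulus sets are disjoint, every transition of the composed model is unambiguously attributable to one sub-model, which is what makes this style of composition clean.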
The Ball State research at S2ERC follows a theory-practice-tools approach. As a first step, the researchers develop a theoretical foundation for the proposed approach. They then implement the theory and algorithms in a tool that supports sequence-based specification, and define the engineering process for carrying it out systematically with tool support. Finally, they conduct case studies to evaluate the approach's applicability and effectiveness.
While previous work in this area focused on a clean partitioning of system inputs that yields a clean system decomposition to manage complexity and scalability, the new approach relaxes this constraint by examining two ways to control the size of the specification and testing model: limiting either the number of stimuli being considered or the number of states being explored. The new modeling techniques improve on previous strategies by working out a formal, systematic process for explicitly merging partial models toward a complete system model. This makes analyses for system-level software specification, testing, and certification tractable.
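The second control strategy, bounding the number of states explored, can be sketched as a breadth-first exploration of a system model with a state budget. The `step` transition function and the `max_states` budget below are hypothetical illustrations of the idea, not the published S2ERC process; restricting the `stimuli` list passed in corresponds to the first strategy.

```python
from collections import deque

def explore(step, initial, stimuli, max_states=None):
    """Bounded breadth-first exploration of a system model.

    step(state, stimulus) is a (hypothetical) transition function.
    Passing a reduced `stimuli` list limits the stimuli considered;
    `max_states` caps the number of distinct states explored.
    """
    seen = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for s in stimuli:
            nxt = step(state, s)
            if nxt not in seen:
                if max_states is not None and len(seen) >= max_states:
                    return seen  # state budget exhausted
                seen.add(nxt)
                frontier.append(nxt)
    return seen
```

Either knob yields a smaller partial model; the merging process described above is then what combines such partial models into a complete system model.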
The completed work involves merging sub-models that focus on selected system boundaries. Researchers continue to work on combining partial work products that emphasize different applications. The selected case studies cover a range of embedded software, from car door mirror electronic control units to satellite operations software. Results of the case studies have shed light on when and how the theory applies in different application contexts to handle and manage specification complexity.

Economic Impact:
An error or bug in a new embedded system or product can lead to product recalls, class action lawsuits, or wrongful death claims. Special efforts are in place to create products that more reliably perform as intended; not failing in the field is an essential quality. Though it is impossible to predict the full extent of the economic impact this development will enable, curbing the aforementioned consequences will save significant resources. This breakthrough demonstrates that scalable modeling for rigorous software specification and testing is not only feasible but practical. With a sound theoretical foundation and effective tool support, large and complex models of software-intensive systems can be more systematically constructed and tested statistically based on an operational usage profile. This work has provided two distinct benefits: 1) more exhaustive analyses of a system's behavior in all possible scenarios of use prior to design and implementation; and 2) more rigorous quantitative analysis after testing and validation, prior to system deployment. Results can be used to more clearly demonstrate, document, and certify that systems are ready for their intended uses. The resulting scaled methods will yield better-engineered, more dependable software systems, complete with audit trails of evidence to support dependability claims for such systems.
For more information, contact Lan Lin at Ball State University, email@example.com, Bio http://cms.bsu.edu/academics/collegesanddepartments/computerscience/facultyandstaff/faculty/linlan, 765.285.8641.