The following roundtables are scheduled, but please check the board outside the room for additional roundtables added during the event.
C++ and the Code Coverage Ecosystem
Organizers: Remi and Nicola
Coverage analysis requires complex cooperation between methodologies, tools, and the language specification. Since 2011, the C++ language has grown continuously in power, moving computation to compile time as much as possible. Most founding methodologies of code coverage analysis were developed before that time and are unaware of this characteristic.
Those methodologies have driven the development of the tools for decades; are they still valid?
Can they be improved to align with newer C++ versions and this compile-time computation?
Examples:
- How should exception handling be treated in the context of branch coverage without exploding the number of possible paths to test?
- How should statement coverage consider SFINAE… for a library?
- "Defaulted" constructors are not managed by some compiler-based tools; should they be?
- What to do with constexpr-specified functions?
- If constraints and concepts should be tested, how should their coverage be analyzed?
Targeting CPUs from ML frameworks/compilers
Organizer: Andrzej Warzynski
Over the years there has been a lot of emphasis on perfecting the Clang-LLVM integration. What about other frontends targeting CPUs via LLVM, in particular the various Machine Learning frontends and frameworks? How do people target various CPU backends and their SIMD/Matrix extensions from these frameworks? What are the gaps and what could be improved (e.g. vectorisation)? Some potential frontends that we could discuss: MLIR, OpenXLA, IREE and TVM.
Terminology to avoid confusion (these terms can mean different things depending on context):
* frontend - consumes a high-level representation and generates LLVM IR,
* backend - consumes LLVM IR and generates machine code for the specified CPU.