Session Type
Lightning Talks
Date & Time
Wednesday, November 9, 2022, 11:00 AM - 12:20 PM
Name
Lightning Talks
Description

1) LLVM Office Hours: addressing LLVM engagement and contribution barriers - Kristof Beyls
2) Improved Fuzzing of Backend Code Generation in LLVM - Peter Rong
3) Interactive Programming for LLVM TableGen - David Spickett
4) Efficient JIT-based remote execution - Anubhab Ghosh
5) FFTc: An MLIR Dialect for Developing HPC Fast Fourier Transform Libraries - Yifei He
6) Recovering from Errors in Clang-Repl and Code Undo - Purva Chaudhari
7) 10 commits towards GlobalISel for PowerPC - Kai Nacke, Amy Kwan, Nemanja Ivanovic
8) Nonstandard reductions with SPRAY - Jan Hueckelheim
9) Type Resugaring in Clang for Better Diagnostics and Beyond - Matheus Izvekov
10) Swift Bindings for LLVM - Egor Zhdan
11) Min-sized Function Coverage with IRPGO - Ellis Hoag, Kyungwoo Lee
12) High-Performance GPU-to-CPU Transpilation and Optimization via High-Level Parallel Constructs in Polygeist/MLIR - Ivan R. Ivanov
13) Tools for checking and writing non-trivial DWARF programs - Chris Jackson
14) Analysis of RISC-V Vector Performance Using MCA Tools - Michael Maitland
15) Optimizing Clang with BOLT using CMake - Amir Ayupov
16) Exploring OpenMP target offloading for the GraphCore architecture - Jose M Monsalve Daiz

Abstract/s
1) LLVM Office Hours: addressing LLVM engagement and contribution barriers - Kristof Beyls
As part of registering for the 2021 LLVM dev meeting, participants were asked a few questions about how the LLVM community could increase engagement and contributions. Of the 450 people who replied, the top three issues mentioned were: "sometimes people aren't receiving detailed enough feedback on their proposals"; "people are worried to come across as an idiot when asking a question on the mailing list/on record"; and "people cannot find where to start, where to find documentation, etc." These were discussed in the Community.o workshop at the 2021 LLVM dev meeting, and a summary of that discussion was presented by Adelina Chalmers as a keynote session; see 2021 LLVM Dev Mtg "Deconstructing the Myth: Only real coders contribute to LLVM!? - Takeaways". One of the solutions suggested to help address these top identified barriers was introducing the concept of "office hours". We have taken steps since then to make office hours a reality. In this lightning talk, I will cover which issues office hours aim to address; how both newcomers and experienced contributors can get a lot of value out of them; and where we are in implementing the concept and how you can help make it as effective as possible.

2) Improved Fuzzing of Backend Code Generation in LLVM - Peter Rong
Fuzzing has been an effective method for testing software. However, even with libFuzzer, the LLVM backend is not sufficiently fuzzed today. The difficulties are twofold. First, we lack a good way to monitor program behavior: edge coverage is ineffective when the backend relies heavily on target descriptors, where data flow matters more than control flow. Second, the existing mutation methods are naive and ineffective. We designed a new tool to better fuzz the LLVM backend, and we have found numerous missing features inside AMD.
We also found many bugs in LLVM upstream, eight of which have been confirmed and two of which have been fixed.

3) Interactive Programming for LLVM TableGen - David Spickett
Interactive programming with Jupyter is a game changer for learning: it lets you keep your code and documentation in one place, always up to date and extendable. See how this is being applied to a core part of LLVM, TableGen, and why we should embrace the concept.

4) Efficient JIT-based remote execution - Anubhab Ghosh
In this talk we demonstrate a shared-memory implementation and its performance improvements for most use cases of JITLink. We demonstrate the benefits of a separate executor process on top of the same underlying physical memory, and we elaborate on how this work will be useful to larger projects such as clang-repl and Cling.

5) FFTc: An MLIR Dialect for Developing HPC Fast Fourier Transform Libraries - Yifei He
Discrete Fourier Transform (DFT) libraries are among the most critical software components for scientific computing. Inspired by FFTW, a widely used library for DFT HPC calculations, we apply compiler technologies to the development of HPC Fourier transform libraries. In this work, we introduce FFTc, a domain-specific language based on the Multi-Level Intermediate Representation (MLIR) for expressing Fourier transform algorithms. FFTc is composed of a domain-specific abstraction level (an FFT MLIR dialect), a domain-specific compilation pipeline, and a domain-specific runtime (work in progress). We present the initial design, implementation, and preliminary results of FFTc.

6) Recovering from Errors in Clang-Repl and Code Undo - Purva Chaudhari
In this talk we outline the PTU-based error recovery capability implemented in Clang and available in Clang-Repl. We explain the challenges in error recovery of templated code, and we demonstrate how to extend the error recovery facility to restore the Clang infrastructure to a previous state.
We demonstrate the `undo` command available in Clang-Repl and the changes required for its reliability.

7) 10 commits towards GlobalISel for PowerPC - Kai Nacke, Amy Kwan, Nemanja Ivanovic
We share our experiences with the first steps to implement GlobalISel for the PowerPC target.

8) Nonstandard reductions with SPRAY - Jan Hueckelheim
We present a framework that allows nonstandard floating-point reductions in OpenMP, for example to ensure reproducibility, compute roundoff estimates, or exploit sparsity in array reductions.

9) Type Resugaring in Clang for Better Diagnostics and Beyond - Matheus Izvekov
In this presentation, we talk about the effort to implement type resugaring in Clang. This is an economical way to solve, for the majority of cases, diagnostic issues related to the canonicalization of template arguments during instantiation. The infamous 'std::basic_string' appearing in diagnostics when the user wrote 'std::string' is the classic example.

10) Swift Bindings for LLVM - Egor Zhdan
Using LLVM APIs from a language other than C++ has often been necessary for developing compilers and program analysis tools. However, the LLVM headers rely on many C++ features, and most languages do not provide interoperability with C++. As part of the ongoing Swift/C++ interoperability effort, we have been creating Swift bindings for LLVM APIs that feel convenient and natural in Swift, with the purpose of using the bindings to implement parts of the Swift compiler in Swift. In this talk, I will present our current status and what we have been able to accomplish so far.

11) Min-sized Function Coverage with IRPGO - Ellis Hoag, Kyungwoo Lee
IRPGO has a mode to collect function entry coverage, which can be used for dead code detection. When combined with Lightweight Instrumentation, the binary size and performance overhead should be small enough to be used in a production setting.
Unfortunately, when building an instrumented binary with -Oz, the “.text” size overhead is much larger than what we’d expect from the injected instrumentation instructions alone. In fact, even if we block instrumentation for all functions, we still get a 15% “.text” size overhead from extra passes added by IRPGO. This talk explores the flags we can use to create a function-entry-coverage instrumented binary with a “.text” size overhead of 4% or smaller.

12) High-Performance GPU-to-CPU Transpilation and Optimization via High-Level Parallel Constructs in Polygeist/MLIR - Ivan R. Ivanov
We extend Polygeist/MLIR to succinctly represent, optimize, and transpile CPU and GPU parallel programs. Through the use of our new operations (e.g. a memory-effects-based barrier) and transformations, we can successfully transpile GPU Rodinia and PyTorch benchmarks to run efficiently on the CPU _faster_ than their existing CPU parallel versions.

13) Tools for checking and writing non-trivial DWARF programs - Chris Jackson
DWARF expressions describe how to recover the location or value of a variable that has been optimized away. They are expressed in terms of postfix operations that operate on a stack machine. A DWARF program is encoded as a stream of operations, each consisting of an opcode followed by a variable number of literal operands. Some DWARF programs are difficult to interpret and check for correctness in their assembly-language format.

14) Analysis of RISC-V Vector Performance Using MCA Tools - Michael Maitland
The llvm-mca tool performs static performance analysis on basic blocks, and the llvm-mcad tool performs dynamic performance analysis on program traces. These tools allow us to gain insight into how sequences of instructions run on different subtargets.
In this talk, I will discuss the shortcomings of these tools when they are tasked with reporting on RISC-V programs containing vector instructions, how we have extended them to generate more accurate reports for RISC-V vector programs, and how these improved reports can be used to make meaningful improvements to scheduler models and to assist performance analysis.

15) Optimizing Clang with BOLT using CMake - Amir Ayupov
Advanced build configuration with BOLT for a faster Clang.

16) Exploring OpenMP target offloading for the GraphCore architecture - Jose M Monsalve Daiz
GraphCore is a mature and well-documented architecture that features a MIMD execution model. Unlike other players in the market, GraphCore systems are currently available, its compiler infrastructure is based on LLVM, and it allows direct compilation to the device. Furthermore, the Poplar SDK is a C++ library that can be used directly with the current OpenMP offloading runtime (i.e. libomptarget). In this short presentation, we describe the strategy we are currently using to explore compilation of OpenMP offloading support for the GraphCore architecture.
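The stack-machine model described in the DWARF talk (13) can be sketched as follows. This is an illustrative sketch only, not the tooling presented in the talk: the operation names mirror real DWARF opcodes, but the program is a pre-decoded list of (opcode, operand) tuples rather than the real byte-encoded stream.

```python
def evaluate(program, registers):
    """Evaluate a simplified DWARF-style expression.

    Each operation pops and/or pushes values on a stack; the result
    (a location or value) is whatever ends up on top of the stack.
    """
    stack = []
    for op, operand in program:
        if op == "DW_OP_lit":        # push a small literal constant
            stack.append(operand)
        elif op == "DW_OP_breg":     # push register value + signed offset
            reg, offset = operand
            stack.append(registers[reg] + offset)
        elif op == "DW_OP_plus":     # pop two values, push their sum
            stack.append(stack.pop() + stack.pop())
        elif op == "DW_OP_dup":      # duplicate the top of the stack
            stack.append(stack[-1])
        else:
            raise ValueError(f"unhandled opcode: {op}")
    return stack[-1]

# Example: "value of register 7 plus 8" (register 7 is RSP on x86-64),
# which real DWARF would encode as DW_OP_breg7 with operand 8.
loc = evaluate([("DW_OP_breg", (7, 8))], registers={7: 0x7FFE0000})
print(hex(loc))  # 0x7ffe0008
```

A real evaluator must also decode variable-length (LEB128) operands from the byte stream and support the full opcode set; checking such programs by hand is exactly the difficulty the talk's tools address.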
Location Name
Hayes Ballroom - Main Level