MLIR is an extensible compiler framework that we use to generate highly performant code for compute-intensive and rapidly evolving Machine Learning and Computer Vision workloads. As in any optimising compiler, vectorisation is a key optimisation for achieving this performance. In this presentation, we will give an overview of a high-level vectorisation approach that leverages one of the main abstractions available in MLIR: the Linalg Dialect. We will also discuss how it can be used to support Arm's Scalable Vector and Scalable Matrix Extensions (SVE and SME, respectively) in MLIR.
The Linalg Vectoriser combines a simple tiling + basic-block vectorisation approach with advanced vectorisation concepts such as scalable vectors, vector masking and multi-dimensional vectorisation. This presentation will provide an overview of the design and how it differs from traditional vectorisers. You will also see how the Linalg Vectoriser can be used to generate highly optimised kernels for ubiquitous operations like matrix-matrix multiplication and convolutions.
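As a brief illustration of the starting point for this approach (a sketch, not code taken from the talk itself), a matrix-matrix multiplication is expressed as a single named operation at the Linalg level, which the vectoriser can then tile and lower to the Vector dialect:

```mlir
// A matmul at the Linalg abstraction level: the whole computation is one
// named op on tensors, so the vectoriser sees its structure directly
// instead of having to rediscover it from loops.
func.func @matmul(%A: tensor<?x?xf32>, %B: tensor<?x?xf32>,
                  %C: tensor<?x?xf32>) -> tensor<?x?xf32> {
  %0 = linalg.matmul ins(%A, %B : tensor<?x?xf32>, tensor<?x?xf32>)
                     outs(%C : tensor<?x?xf32>) -> tensor<?x?xf32>
  return %0 : tensor<?x?xf32>
}
```

Because the op carries its iteration-space structure, tiling and vectorisation become local rewrites of this one operation rather than a loop-analysis problem, which is the key way this design differs from traditional loop vectorisers.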
The extensibility of MLIR has allowed us to target less established, yet very promising, vector architectures, such as those offering scalable vectors. In this presentation, we will give an overview of the key building blocks of scalable vectorisation and provide a status update on the implementation. Specifically, we will talk about the ongoing effort to support SVE and SME as real-world end-to-end examples that leverage Linalg vectorisation and target-specific dialects in MLIR.
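Two of the building blocks mentioned above, scalable vector types and masking, can be sketched in MLIR's Vector dialect as follows (a hypothetical example, assuming the function and argument names; `vector<[4]xf32>` denotes a vector holding `vscale * 4` elements, where `vscale` is only known at run time, e.g. on an SVE machine):

```mlir
// A masked read of a scalable vector. The mask disables the lanes past
// %n, so the same IR handles loop tails regardless of the hardware
// vector length.
func.func @scalable_read(%A: memref<?xf32>, %n: index) -> vector<[4]xf32> {
  %c0 = arith.constant 0 : index
  %pad = arith.constant 0.0 : f32
  // Active lanes: [0, %n); the total lane count is vscale-dependent.
  %mask = vector.create_mask %n : vector<[4]xi1>
  %v = vector.transfer_read %A[%c0], %pad, %mask
         : memref<?xf32>, vector<[4]xf32>
  return %v : vector<[4]xf32>
}
```

Keeping the vector length symbolic in the type, rather than baked in as a constant, is what lets a single vectorised kernel serve every SVE implementation width.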