2024 European LLVM Developers' Meeting: Job Postings

If you are interested in having your job openings posted, details on becoming a sponsor may be found here.

Company Description: Apple’s investment in developer tools, security technologies, and performance optimizations powered by LLVM helps deliver products that impact billions of users worldwide. This work enhances applications used by everyday people and gives rise to new technologies. We actively participate in the LLVM open source project and are dedicated to supporting and mentoring our employees. We emphasize diversity, collaboration, and creativity!

Company Contact: Anna Zaks - ganna@apple.com

Job Title: Compiler Backend Security Engineer (Cupertino, USA)

Job Description: The compiler team at Apple is looking for individuals who are passionate about security and compilers to design and develop new mitigations for the most challenging security problems, such as side-channel attacks.

Job Title: Security Tools Manager (Seattle, USA)

Job Description: Apple’s Security Tools team is looking for a manager with a background in systems-level developer tooling to lead a team responsible for developing new dynamic bug-finding tools.

Job Title: Security Tools Manager - Static Analysis (Cupertino, USA)

Job Description: Apple’s Security Tools team is looking for a manager with a background in compilers, refactoring tools, or static analysis to lead a team responsible for building tools and compiler features to eliminate entire classes of security vulnerabilities.

Job Title: Clang Compiler Engineer (London, UK)

Job Description: Apple’s C Languages and Libraries team in London, UK is looking for a compiler software engineer to develop new language and compiler features to advance Clang's support for C++ language standards, as well as to improve the stability and performance of Apple platforms.

Job Title: Compiler Performance Engineer (London, UK)

Job Description: The CPU and Accelerator Compilers Team is looking for experienced engineers passionate about advancing compiler performance and optimization technology. We are responsible for optimizations and code generation for CPUs and accelerators on all Apple platforms. Our goal is to optimize important workloads for C/C++ and Swift, advancing LLVM's leading-edge technology along the way.

Job Title: CPU Compiler Performance Team Lead (Haifa or Herzliya, Israel)

Job Description: The CPU and Accelerator Compilers Team is looking for experienced engineers passionate about advancing compiler performance and optimization technology. We are responsible for optimizations and code generation for CPUs and accelerators on all Apple platforms. Our goal is to optimize important workloads for C/C++ and Swift, advancing LLVM's leading-edge technology along the way.

Job Title: CPU Performance Engineer - Backend (Haifa or Herzliya, Israel)

Job Description: Apple is building a new compiler team in Israel! If you are passionate about advancing compiler-based optimization technologies for ARM and RISC-V, we’d love to hear from you. In this role, you will have the opportunity to delve deep into the realms of optimizations, code generation, and CPU architecture exploration. Our primary responsibility is to optimize CPUs and Accelerators across all Apple platforms.

Job Title: CPU Performance Engineer (Haifa or Herzliya, Israel)

Job Description: Apple is building a new compiler team in Israel! If you are passionate about optimization technologies, this role is for you. Our goal is to optimize important workloads for C/C++ and Swift, advancing LLVM's leading-edge technology along the way.

Job Title: Compiler Frontend Engineer (Cupertino, USA)

Job Description: Apple’s C Languages and Libraries team in Cupertino, USA is looking for a compiler software engineer to develop and enhance the Clang compiler and advance the interoperability between the C++ and Swift languages.

Company Description: Arm’s processors are shipped in billions of products, across a huge range of markets, each with unique code generation challenges. LLVM is a foundational code generator for our CPUs, GPUs, and machine learning accelerators. For example, in the past year, about 100 Arm engineers contributed to LLVM, in areas such as performance optimization, security hardening, and support for new instructions.

Company Contact: Kristof Beyls - Kristof.Beyls@arm.com

Many LLVM-related jobs at Arm

Your skills and knowledge of compiler fundamentals, and your passion to learn from and contribute to the LLVM community will help us develop innovative technologies that improve the performance and security of the entire field of computing.

Arm always has lots of LLVM-related job vacancies open.

Company Description: The Programming Languages and Runtime (PL&R) team is part of the Infra organization at Meta, which is responsible for driving Meta's products: Facebook, Instagram, WhatsApp, AR/VR, etc. The PL&R team specializes in compilers, AI, virtual machines, programming language design, and developer experience. Our work on AI allows the company to train state-of-the-art models and make efficient use of our hardware. Our work on compilers and virtual machines helps make our data centers run more efficiently and our apps smaller and faster. Our work on programming languages makes our developers more efficient in their work. We contribute to and collaborate with the open-source community on many of our projects, including LLVM, PyTorch, Triton, and others.

Company Contact: Davide Italiano - davidino@meta.com

Job Title: Software Engineer, AI Compiler

Job Description: AI workloads are ubiquitous at Meta and drive many important services. Be part of the team that focuses on the development of AI compilers, starting with Triton, to utilize our hardware platforms more effectively.

Responsibilities
* Be a subject matter expert in one or more of: systems, performance, compilers, virtual machines, programming languages, and developer experience
* Be both highly technical and a collaborative team player
* Work with your teammates and influence strategy
* Work independently and deliver business impact
* Work effectively with cross functional partners and stakeholders to set and achieve optimal outcomes

Minimum Qualifications
* Experience with compiler architecture and development, particularly ML compilers, DSLs, or compilers for static/dynamic languages.
* Experience with cross-functional collaboration with hardware or AI framework teams.

Preferred Qualifications
* Experience with compiler optimizations such as loop optimizations, vectorization, parallelization, and hardware-architecture-specific optimizations.
* Experience in compiling and code generation targeting ML accelerators or custom hardware, GPUs, or CPUs.
* Experience with different programming models for high-performance computation, e.g. CUDA, OpenCL, or OpenMP programming.
* Experience with MLIR, LLVM, Glow, XLA, Triton, TVM, or Halide.
* Knowledge of ML frameworks like PyTorch, TensorFlow, ONNX, MXNet, etc.

Company Description: Modular AI is the next-generation AI developer platform unifying the development and deployment of AI for the world.

Company Contact: Kiran Deen - kdeen@modular.com

Job Title: AI Runtime Tech Lead

Job Description: ML developers today face significant friction in taking trained models into deployment. They work in a highly fragmented space, with incomplete and patchwork solutions that require significant performance tuning and non-generalizable, model-specific enhancements. At Modular, we are building the next-generation AI platform that will radically improve the way developers build and deploy AI models.

A core part of this offering is providing a platform that allows customers to achieve state-of-the-art performance across model families and frameworks. As the AI Runtime tech lead, you will lead a cross-team effort to develop and optimize the MAX runtime to reach state-of-the-art performance on a wide variety of CPU and GPU hardware, automating performance optimizations to achieve the best performance of AI models.

What you will do:

  • Develop and optimize Modular’s state-of-the-art multithreaded CPU+GPU runtime.
  • Collaborate with graph compiler and kernels teams to build tightly integrated systems that deliver world-class performance.
  • Design and develop runtime optimizations to improve CPU and GPU efficiency and address issues such as CPU overhead, caching, and data locality across multiple devices.
  • Automate performance tuning of AI models to adapt to different user scenarios and cloud deployment conditions, reaching the best performance and cost on a variety of CPU and GPU hardware platforms.
  • Generalize the Modular runtime as we expand support for exotic and cutting-edge AI hardware.
  • Optimize important AI pipelines across AI domains while collaborating with passionate, motivated and highly skilled individuals across teams at Modular.
  • Collaborate with the product teams and engage with customers to understand their requirements and use-cases.
  • Collaborate with Tooling & Infrastructure teams to design systems for automated performance analysis and benchmarking.

What you bring to the table:

  • 8+ years of experience working on high-performance computing systems or relevant domains in industry or research.
  • Proven experience in technical leadership in relevant domains.
  • Experience in C++ programming and complex software systems.
  • Experience with CPU or GPU runtime optimizations and performance analysis on CPUs, GPUs, or AI accelerators.
  • Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.

Job Title: AI Compiler Engineer

Job Description: Modular is looking for Staff AI Compiler engineers to work on multi-framework, multi-hardware and user-extensible compiler infrastructure. We are looking for candidates with strong communication and teamwork skills who excel in collaborating across team boundaries. In this role you will own the design and implementation of compiler types, operations and passes related to machine learning models, and work across teams to integrate them with the rest of Modular’s product.

What you will do:

  • Build an MLIR-based machine learning compiler with scalable, high-quality infrastructure
  • Implement generic and extensible optimizations for machine learning models from multiple frameworks and on multiple devices
  • Work with other teams to provide a balanced stack that fully utilizes today’s complex server and mobile systems.
  • Collaborate with machine learning researchers and engineers to guide compiler development for future ML trends.

What you bring to the table:

  • 8+ years of compiler engineering experience.
  • Experience working with compilers for machine learning, such as XLA, Glow, TVM, nGraph, TensorRT, IREE, etc.
  • Experience with machine learning graph optimizations.
  • Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.
  • Strong knowledge of core compiler algorithms and data structures.
  • In-depth knowledge of C++, as well as knowledge of basic GitHub workflows like pull requests.

Helpful but not required:

  • Strong knowledge of and experience working with MLIR and LLVM.
  • Experience with 8-bit and lower model quantization.
  • Advanced degree in Computer Science or a related area.

Job Title: AI CPU Performance Engineer

Job Description: ML developers today face significant friction in taking trained models into deployment. They work in a highly fragmented space, with incomplete and patchwork solutions that require significant performance tuning and non-generalizable, model-specific enhancements. At Modular, we are building the next-generation AI platform that will radically improve the way developers build and deploy AI models.

A core part of this offering is providing a platform that allows customers to achieve state-of-the-art performance across model families and frameworks. As an AI GPU Performance Engineer, you will own architecting performance libraries for GPUs. You will develop kernels and algorithms to increase kernel performance, reduce activation volumes, speed up data pre- and post-processing, and improve the end-to-end performance of AI workloads.

LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.

What you will do:

  • Design, develop, and optimize high-performance AI numeric and data manipulation kernels/operators for GPUs.
  • Achieve state-of-the-art performance by leveraging software and micro-architectural features of GPUs. 
  • Work with compiler, framework, and runtime teams to deliver end-to-end performance that fully utilizes GPU workstations and servers.
  • Collaborate with machine learning researchers to guide system development for future ML trends.

What you bring to the table:

  • 10+ years of relevant experience working on complex code and systems.
  • Experience with GPU programming languages such as CUDA or OpenCL.
  • Experience with GPU assembly such as PTX and SASS.
  • Strong understanding of GPU architectures and performance optimizations.
  • Experience with AI workloads and performance-tuning considerations such as fusion strategies.
  • Experience with core AI kernels such as matrix multiply and convolution.
  • Strong familiarity with GPU profilers, debuggers, etc.
  • Deep interest in machine learning technologies and use cases.
  • Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.

Job Title: Senior AI GPU Performance Engineer

Job Description: ML developers today face significant friction in taking trained models into deployment. They work in a highly fragmented space, with incomplete and patchwork solutions that require significant performance tuning and non-generalizable, model-specific enhancements. At Modular, we are building the next-generation AI platform that will radically improve the way developers build and deploy AI models.

A core part of this offering is providing a platform that allows customers to achieve state-of-the-art performance across model families and frameworks. As an AI GPU Performance Engineer, you will own architecting performance libraries for GPUs. You will develop kernels and algorithms to increase kernel performance, reduce activation volumes, speed up data pre- and post-processing, and improve the end-to-end performance of AI workloads.

LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.

What you will do:

  • Design, develop, and optimize high-performance AI numeric and data manipulation kernels/operators for GPUs.
  • Achieve state-of-the-art performance by leveraging software and micro-architectural features of GPUs. 
  • Work with compiler, framework, and runtime teams to deliver end-to-end performance that fully utilizes GPU workstations and servers.
  • Collaborate with machine learning researchers to guide system development for future ML trends.

What you bring to the table:

  • 10+ years of relevant experience working on complex code and systems.
  • Experience with GPU programming languages such as CUDA or OpenCL.
  • Experience with GPU assembly such as PTX and SASS.
  • Strong understanding of GPU architectures and performance optimizations.
  • Experience with AI workloads and performance-tuning considerations such as fusion strategies.
  • Experience with core AI kernels such as matrix multiply and convolution.
  • Strong familiarity with GPU profilers, debuggers, etc.
  • Deep interest in machine learning technologies and use cases.
  • Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.

Company Description: At AMD, we push the limits of innovation to solve the world’s most important challenges. We strive for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives.

Company Contact: Mark Searles - Mark.Searles@amd.com

Job Title: Senior Staff Software Development Engineer - LLVM Compiler

Job Description: We are building first-class compilation technology for AMD GPUs.

The successful candidate will work on language implementation and optimization in the open-source LLVM compiler framework. In addition to HPC applications, our compilers are used in the development of AMD's machine learning frameworks and libraries.

The successful candidate will have a phenomenal opportunity to work closely with AMD's first-class machine learning, HPC, and libraries developers to get the best performance from the compiler.

Job Title: Principal AI/ML Compiler Engineer

Job Description: AMD is looking for an experienced Principal Software Engineer to join our growing team. 

In this position, you will lead, design, architect, and build the compiler and software stack for optimizing deep learning workloads on our next-gen devices. You will help lead the evolution of AMD's compiler by proposing and creating advanced optimizations and transformations targeting our XDNA devices, as well as planning and designing the architecture runway of our product. You will be using C++ and Python, as well as open-source technologies like LLVM, TVM, and MLIR.

As a principal member of the technical staff, you will lead and provide guidance to junior architects and developers. You will work with the very latest hardware and software technology. You will help us drive and enhance AMD’s ability to deliver the highest-quality, industry-leading technologies to market.

Job Title: AMD Compiler Engineer open roles

Job Description: AMD is looking for talented software engineers who are passionate about improving the performance of key applications and benchmarks. 

Joining AMD, you will be part of a team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

We have many open positions that cover a broad range of the compiler toolchain: LLVM, MLIR, Fortran, OpenMP, Clang, AMDGPU backend, compiler performance engineering, and more.

Huge impact at unprecedented scale, amazing teams, hybrid or remote work, and lots of benefits await you.

Check out our open positions and reach out to us for more info.

Company Description: NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society.

Company Contact: Linda Lim - LiLim@nvidia.com

Job Title: Manager, Compiler Engineering

Job Description: We are now seeking a SW Engineering Manager with strong leadership and mentoring skills to join our DPU Compilers Team. We craft outstanding compilers that realize the potential of NVIDIA's DPUs, designed for the world's largest data centers. These compilers are key to the performance of AI, HPC, and other performance-critical software deployed in NVIDIA data centers, in the cloud, and at supercomputing centers around the world. In this role you will solve critical problems alongside an outstanding engineering team with a vision for compiler technology and systems software, doing what you enjoy! Our compiler organization makes its mark on every CPU, GPU, DPU, and SoC product that NVIDIA builds. Would you like to be part of this outstanding organization?

What you'll be doing:
Lead a distributed team of compiler engineers to improve code generation for key networking and virtualization applications accelerated by the BlueField DPU.

Establish team objectives to meet schedules and goals, establish and evolve policies and procedures that affect the immediate organization, and communicate with senior management on team vision and development.

Collaborate with members of various networking SW teams and the hardware architecture teams to accelerate the next generation of DPU software.

Ensure the scope of your team's efforts includes helping drive performance tuning and analysis, crafting and implementing compiler and optimization techniques for workloads, and other general software engineering work.

Mentor and coach engineers, encouraging a consistently excellent line management experience for all the engineers in the team.

Craft the team culture and norms in accordance with NVIDIA's values, identify gaps in the team's effectiveness, and act on them. In addition, improve development practices and collaboration within the team and with others outside it.

Job Title: Senior Compiler Engineer- Technical Lead

Job Description: We are looking for an experienced Senior Compiler Engineer with technical leadership experience for an exciting role on our Compute Compiler Team. We deliver features and improvements to CUDA and other compute compilers to better realize the potential of NVIDIA GPUs for a growing range of computational workloads, ranging from deep learning and scientific computation to self-driving cars. Our compiler organization makes its mark on every GPU NVIDIA produces. We need you as a key member of a small team working on a core compiler component for accelerating general-purpose computation on the GPU. You will be solving critical problems alongside some of the most talented, diverse minds in GPU computing and systems software, doing what you enjoy. See your efforts in action as HPC and DL developers use your features and optimizations to achieve the best performance of their applications. If this sounds like a fun challenge, we want to hear from you!

What you will be doing:
Provide technical leadership to a small team of engineers working on compiler middle-end optimizations.

Analyze the performance of application code running on NVIDIA GPUs with the aid of profiling tools.

Identify opportunities for performance improvements in the LLVM based compiler optimizer.

Design and develop new compiler passes and optimizations to produce a best-in-class, robust, and supportable compiler and tools.

Interact with the open-source LLVM community to ensure tighter integration.

Work with geographically distributed compiler, architecture and application teams to oversee improvements and problem resolutions.

Be part of a team that is at the center of deep-learning compiler technology spanning architecture design and support through higher level languages.

What we need to see:
M.S or Ph.D. in Computer Science, Computer Engineering, or related fields (or equivalent experience).

12+ years of experience in compiler optimizations such as loop optimizations, interprocedural optimizations, global optimizations, and program analysis.

3+ years technical leadership experience.

Excellent hands-on C++ programming skills.

Experience writing significant analysis or transformation passes in LLVM framework.

Understanding of processor ISAs (GPU ISA experience would be a plus).

Strong background in software engineering principles with a focus on crafting robust and maintainable solutions to challenging problems.

Good communication and documentation skills; self-motivated.

Ways for you to stand out from the crowd:

Experience developing applications in CUDA or other parallel programming languages.

Deep understanding of parallel programming concepts.

Experience working on compile-time performance in JIT compilation.