If you are interested in having your job openings posted, details on becoming a sponsor may be found here.
Company Description: Arm’s processors are shipped in billions of products across a huge range of markets, each with unique code generation challenges. LLVM is a foundational code generator for our CPUs, GPUs, and machine learning accelerators. For example, in the past year about 100 Arm engineers contributed to LLVM and MLIR in areas such as performance optimization, security hardening, support for new instructions, and many more.
Company Contact: Kristof Beyls - Kristof.Beyls@arm.com
Your skills, your knowledge of compiler fundamentals, and your passion to learn from and contribute to the LLVM community will help us develop innovative technologies that improve the performance and security of the entire field of computing.
Arm always has lots of LLVM and MLIR job vacancies open.
Company Description: We believe that AI is a net positive force in the world. Our vision and mission are to help rebuild AI infrastructure to advance humanity and our environment. We will do whatever it takes to empower our customers, team, and company to benefit from that pursuit. You can read about our culture and careers here to understand how we work and what we value.
We are owners and advocates for the underlying technologies, developer platforms, product components, and infrastructure. These essential building blocks form the high-quality and coherent experiences our users expect. We aim to drive the pace of innovation for every AI/ML developer.
Company Contact: Kiran Deen - kdeen@modular.com
Job Title: Mojo Compiler Engineer
Job Description: Modular is building a next-generation AI infrastructure platform that unifies the many application frameworks and hardware backends, simplifying deployment for AI production teams and accelerating innovation for AI researchers and hardware developers. A key component of this platform is Mojo, a new programming language that combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.
We are looking for talented engineers to help us design and implement new features for the Mojo programming language and assist in the development of its novel compiler toolchain. As a Mojo compiler engineer, you will work with world-leading experts to build a novel programming language for the full spectrum of machine learning and heterogeneous compute applications. You will be responsible for implementing new language features, exploring novel optimization techniques, and partnering with a growing open source community to foster a strong Mojo ecosystem.
Join our world-leading team and help us redefine how AI systems are built.
LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.
What you will do:
- Design and implement new and innovative Mojo language features using our next-generation compiler architecture built on MLIR.
- Explore new Mojo and MLIR-specific optimization opportunities.
- Collaborate with other teams within Modular, machine learning practitioners, and the open source community to foster a new language ecosystem.
What you bring to the table:
- 3+ years of compiler development experience.
- Experience working with MLIR or LLVM.
- In-depth knowledge of C++; knowledge of Python is a strong plus.
- Familiarity with Clang, CPython, Swift, Rust or another major programming language implementation.
- Strong knowledge of programming language design principles, code generation, or compiler optimization techniques.
- Desire to work with a growing community of open source contributors.
- Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, an enthusiasm for programming technology, and alignment with our culture.
Job Title: Mojo Compiler Engineering Manager
Job Description: Modular is building a next-generation AI infrastructure platform that unifies the many application frameworks and hardware backends, simplifying deployment for AI production teams and accelerating innovation for AI researchers and hardware developers. A key component of this platform is Mojo, a new programming language that combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models.
We are looking for a technical engineering manager to lead the development of the Mojo compiler. In this role you will drive design and implementation of new Mojo language features, align the team on product deliverables, set team execution strategy and engineering practices, collaborate across teams to deliver Mojo to internal and external customers, and work closely with a growing community of Mojo developers.
We are looking for candidates with a background in compiler and programming language design, who possess strong communication and teamwork skills, and excel in collaborating across team boundaries.
Join our world-leading team and help us redefine how AI systems are built.
LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.
What you will do:
- Lead the Mojo compiler team, building high-quality compiler infrastructure with state-of-the-art optimizations.
- Contribute to and drive the design and development of new Mojo language features, taking feedback from internal stakeholders and the external user community.
- Provide technical leadership and career guidance to the team's engineers, helping them realize their full potential and set and achieve their career goals.
What you bring to the table:
- 5+ years’ experience in compiler development, including 2+ years in a management role, and a track record of delivering complex projects on time and on budget.
- A passion for language design, and experience designing and implementing new language features.
- Experience working with C++, Python, and either MLIR or LLVM.
- Proven experience mentoring and growing a high-performance engineering team.
- Ability to work in a fast-paced, dynamic environment and manage multiple priorities.
- Creativity and passion for building high-quality user-facing products.
Helpful but not required:
- Experience working with the open source developer community.
- Exposure to modern ML infrastructure.
- Experience with the modern Python ecosystem.
Job Title: AI GPU Performance Engineer
Job Description: ML developers today face significant friction in taking trained models into deployment. They work in a highly fragmented space, with incomplete and patchwork solutions that require significant performance tuning and non-generalizable, model-specific enhancements. At Modular, we are building the next-generation AI platform that will radically improve the way developers build and deploy AI models.
A core part of this offering is providing a platform that allows customers to achieve state-of-the-art performance across model families and frameworks. As an AI GPU Performance Engineer, you will work with leading experts on performance libraries for GPUs. You will understand, analyze, profile, and optimize AI workloads on state-of-the-art hardware and software platforms. You will also identify bottlenecks and inefficiencies in application code and propose optimizations to enhance GPU utilization.
LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.
What you will do:
- Design, develop, and optimize high-performance AI numeric and data manipulation kernels/operators for GPUs.
- Achieve state-of-the-art performance by leveraging software and micro-architectural features of GPUs.
- Work with compiler, framework, runtime, and serving teams to deliver end-to-end performance that fully utilizes GPU workstations and servers.
- Collaborate with machine learning researchers to guide system development for future ML trends.
What you bring to the table:
- Deep understanding of computer architecture (memory hierarchies, caching, etc.) and its impact on algorithm design.
- 3+ years of relevant experience working on complex code and software systems.
- Self-motivated and independent, with the ability to execute on agreed-upon specifications.
- Experience with GPU programming languages such as CUDA or OpenCL.
- Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.
Job Title: Senior AI GPU Performance Engineer
Job Description: ML developers today face significant friction in taking trained models into deployment. They work in a highly fragmented space, with incomplete and patchwork solutions that require significant performance tuning and non-generalizable, model-specific enhancements. At Modular, we are building the next-generation AI platform that will radically improve the way developers build and deploy AI models.
A core part of this offering is providing a platform that allows customers to achieve state-of-the-art performance across model families and frameworks. As a Senior AI GPU Performance Engineer, you will be responsible for architecting performance libraries for GPUs. You will develop kernels and algorithms that increase kernel performance, reduce activation volumes, speed up data pre- and post-processing, and improve the end-to-end performance of AI workloads. You will collaborate with cross-functional teams to analyze and optimize AI performance on diverse GPU architectures across the stack.
LOCATION: Candidates based in the US or Canada are welcome to apply. You can work remotely from home.
What you will do:
- Design, develop, and optimize high-performance AI numeric and data manipulation kernels/operators for GPUs.
- Achieve state-of-the-art performance by leveraging software and micro-architectural features of GPUs.
- Work with compiler, framework, runtime, and serving teams to deliver end-to-end performance that fully utilizes GPU workstations and servers.
- Collaborate with machine learning researchers to guide system development for future ML trends.
- Engage in prototyping exercises to quantify the value proposition and develop execution plans.
What you bring to the table:
- Deep understanding of GPU architecture (memory hierarchies, tensor cores, etc.) and its impact on algorithm design.
- 10+ years of relevant experience working with modern software systems and the use of design patterns.
- Self-motivated, with the ability to design, communicate, and lead new projects.
- Experience with GPU programming languages such as CUDA or OpenCL.
- Familiarity with GPU assembly such as PTX and SASS.
- Knowledge of state-of-the-art frameworks such as CUTLASS, TensorRT-LLM, Triton, etc.
- Ability to identify industry trends and analyze emerging technologies and disruptive paradigms.
- Creativity and curiosity for solving complex problems, a team-oriented attitude that enables you to work well with others, and alignment with our culture.