The workshop aims to bring together LLVM contributors and researchers who apply machine learning techniques to LLVM (including compiler optimizations, latency estimation, IR comprehension, code generation, and data set curation, to name a few) to discuss their current and ongoing projects, explore ways to collaborate more closely, and plot pathways for integrating this work into community LLVM.
Yegor Denisov-Blanch: Does AI Boost Compiler Engineers’ Productivity?
Jonas Devlieghere: AI & LLDB
Yifan Zhang: ProfGen: LLM-Powered Generation of LLVM IR Benchmarks for Profile-Guided Optimization
Jaden Angella: EmitC for MLGO
Aiden Grossman: Large Scale Training for RL Guided Register Allocation
Peter Rong: Careful What You Wish For - What we learned using MLGO for size optimization
– BREAK –
Hongzheng Chen: Magellan: Autonomous Discovery of Novel Compiler Optimization Heuristics with AlphaEvolve
S. VenkataKeerthy: IR2Vec Embeddings: Lessons Learned and The Road Ahead
Viraj Shah: Towards Learning Latency Impact of Cache Misses: Retrofitting Learned BB Latency Models to Process LBR Traces