Evolving Abstract Transformers for Gradient-Guided, Adaptable Abstract Interpretation
This program is tentative and subject to change.
Current numerical abstract interpretation relies on fixed, hand-crafted, instruction-specific transformers tailored to each domain, giving rise to three significant limitations. First, extensibility is limited: transformers cannot be reused across domains, and new transformers must be designed for each new domain or operator. Second, precise compositional reasoning over instruction sequences is difficult, as transformers are defined only at the instruction level. Third, all downstream tasks are forced to use the same fixed transformer, irrespective of their precision, efficiency, or task-specific requirements. To address these limitations, we propose the Evolving Abstract Transformer, a general transformer that replaces the fixed single-output design of traditional transformers with an adaptable search over a parametric space of sound outputs. This is achieved through two underlying algorithms we develop. First, the Universal Parametric Output Space Encoder (UPOSE) constructs a compact parametric space of sound outputs for any polyhedral numerical domain and any operator in the Quadratic-Bounded Guarded Operators (QGO) class, which includes both individual instructions and structured sequences. Next, the Adaptive Gradient Guidance (AGG) algorithm leverages the differentiable structure of the space generated by UPOSE and uses gradient-based updates to efficiently search it according to downstream analysis objectives and available runtime, continually evolving the output as more time is provided. We implement these ideas in the AbsEvolve framework and evaluate their effectiveness across three numerical abstract domains: Zones, Octagons, and Polyhedra.
Our results demonstrate that the evolving transformer works across domains and handles diverse instructions and sequences, allowing efficient adaptability in the precision–efficiency tradeoff by adjusting the number of gradient steps in the search, while also reaching the most precise invariants up to 3.2× faster than existing baselines.
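To make the gradient-guided idea concrete, here is a minimal toy sketch (not the paper's UPOSE/AGG algorithms, and all function names are illustrative): we search a one-parameter family of sound linear upper bounds for the quadratic operator y = x² over an interval [l, u], using numeric gradient descent on a precision loss (the area between the bound and the curve). Soundness is maintained by construction, since the intercept is always chosen to dominate the curve; the number of gradient steps controls the precision–efficiency tradeoff, mirroring the abstract's "evolving" behavior.

```python
# Toy sketch of gradient-guided search over a parametric space of sound
# outputs. Family: upper bounds y <= a*x + b(a) for y = x^2 on [l, u].
# The slope `a` is the search parameter; the intercept b(a) is the
# smallest value keeping the bound sound over the whole interval.

def sound_intercept(a, l, u):
    # x^2 - a*x is convex, so its maximum over [l, u] is at an endpoint;
    # taking that maximum as b guarantees a*x + b >= x^2 on all of [l, u].
    return max(l * l - a * l, u * u - a * u)

def loss(a, l, u):
    # Precision loss: integral of (a*x + b - x^2) over [l, u].
    # Smaller area means a tighter (more precise) sound bound.
    b = sound_intercept(a, l, u)
    return a * (u * u - l * l) / 2 + b * (u - l) - (u ** 3 - l ** 3) / 3

def evolve(l, u, steps=200, lr=0.05, eps=1e-6):
    a = 0.0  # start from a flat bound: sound but imprecise
    for _ in range(steps):
        # Central-difference numeric gradient of the precision loss.
        grad = (loss(a + eps, l, u) - loss(a - eps, l, u)) / (2 * eps)
        a -= lr * grad
    return a, sound_intercept(a, l, u)

a, b = evolve(0.0, 1.0)
# On [0, 1] the tightest sound linear upper bound of x^2 is the chord
# y = x, so the search should drive (a, b) toward (1, 0).
print(a, b)
```

Every intermediate iterate is itself a usable sound output, so the search can be stopped whenever the available runtime budget is exhausted; the actual AGG algorithm searches a much richer differentiable space produced by UPOSE rather than this single-slope family.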
Wed 17 Jun (displayed time zone: Mountain Time, US & Canada)
15:50 - 17:30
15:50 (20m) Talk | Path-Sensitive Abstract Interpretation for WCET Estimation | PLDI Research Papers | Shangshang Xiao (Shandong University), Mengxia Sun (Shandong University), Wei Zhang (Shandong University), Naijun Zhan (Peking University; Zhongguancun Laboratory), Lei Ju (Shandong University) | DOI
16:10 (20m) Talk | SAIL: Sound Abstract Interpreters with LLMs | PLDI Research Papers | Qiuhan Gu (University of Illinois Urbana-Champaign), Avaljot Singh (University of Illinois Urbana-Champaign), Gagandeep Singh (University of Illinois Urbana-Champaign) | DOI
16:30 (20m) Talk | Evolving Abstract Transformers for Gradient-Guided, Adaptable Abstract Interpretation | PLDI Research Papers | Shaurya Gomber (University of Illinois Urbana-Champaign), Debangshu Banerjee (University of Illinois Urbana-Champaign), Gagandeep Singh (University of Illinois Urbana-Champaign) | DOI | Pre-print
16:50 (20m) Talk | Abstract Interpretation with Confidence | PLDI Research Papers | DOI
17:10 (20m) Talk | Optimism in Equality Saturation | PLDI Research Papers | Russel Arbore (University of California at Berkeley), Alvin Cheung (University of California at Berkeley), Max Willsey (University of California at Berkeley) | DOI | Pre-print