[SIGPLAN] Active Learning for Neurosymbolic Program Synthesis
This program is tentative and subject to change.
The goal of active learning for program synthesis is to synthesize the desired program by asking targeted questions that minimize user interaction. While prior work has explored active learning in the purely symbolic setting, such techniques are inadequate for the increasingly popular paradigm of neurosymbolic program synthesis, where the synthesized program incorporates neural components. When applied to the neurosymbolic setting, such techniques can – and, in practice, do – return an unintended program due to mispredictions of neural components. This paper proposes a new active learning technique that can handle the unique challenges posed by neural network mispredictions. Our approach is based upon a new evaluation strategy called constrained conformal evaluation (CCE), which accounts for neural mispredictions while taking into account user-provided feedback. Our proposed method iteratively makes CCE more precise until all remaining programs are guaranteed to be observationally equivalent. We have implemented this method in a tool called SmartLabel and experimentally evaluated it on three neurosymbolic domains. Our results demonstrate that SmartLabel identifies the ground truth program for 98% of the benchmarks, requiring under 5 rounds of user interaction on average. In contrast, prior techniques for active learning are only able to converge to the ground truth program for at most 65% of the benchmarks.
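The abstract's core loop (evaluate candidate programs under conformal prediction sets from the neural component, query the user on inputs where candidates disagree, prune candidates inconsistent with the feedback, stop once the survivors are observationally equivalent) can be illustrated with a toy sketch. Everything below is hypothetical and far simpler than the paper's CCE algorithm: the names `neural_digit`, `user_oracle`, and the three candidate programs are invented for illustration only.

```python
# Toy sketch of conformal-set evaluation driving an active-learning loop.
# This is NOT SmartLabel; all names and programs here are illustrative.

def neural_digit(x):
    # Stand-in for a neural classifier: returns a conformal prediction set
    # (a set of plausible labels) instead of a single point prediction.
    return {x % 10, (x + 1) % 10}  # toy: the true label plus one spurious one

# Candidate symbolic programs built on top of the neural component;
# each maps a prediction set to the set of outputs it could produce.
candidates = {
    "double":   lambda s: {2 * d for d in s},
    "square":   lambda s: {d * d for d in s},
    "plus_two": lambda s: {d + 2 for d in s},
}

def user_oracle(x):
    # Ground-truth program the user has in mind (hidden from the learner).
    return 2 * (x % 10)

def disagreement_input(progs, inputs, labeled):
    # Find an unlabeled input on which surviving candidates' output sets differ.
    for x in inputs:
        if x in labeled:
            continue
        outs = {frozenset(p(neural_digit(x))) for p in progs.values()}
        if len(outs) > 1:
            return x
    return None  # remaining programs are observationally indistinguishable

inputs = range(20)
labeled = {}
rounds = 0
while True:
    x = disagreement_input(candidates, inputs, labeled)
    if x is None:
        break
    y = user_oracle(x)          # one round of user interaction
    labeled[x] = y
    rounds += 1
    # Feedback constrains the search: keep only programs whose conformal
    # output set still covers the user-provided label.
    candidates = {n: p for n, p in candidates.items()
                  if y in p(neural_digit(x))}

print(sorted(candidates), rounds)  # → ['double'] 2
```

In this toy run, two rounds of labeling suffice: the first label prunes `plus_two`, the second prunes `square`, after which the single survivor is trivially observationally equivalent to itself. The paper's actual contribution, constrained conformal evaluation, additionally tightens the prediction sets themselves using the feedback; this sketch only prunes programs.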
Fri 19 Jun. Displayed time zone: Mountain Time (US & Canada).
15:50 - 17:10

15:50 (20m) Talk: Simplifying Safety Proofs with Forward-Backward Reasoning and Prophecy (PLDI Research Papers). Eden Frenkel (Tel Aviv University), Kenneth L. McMillan (University of Texas at Austin), Oded Padon (Weizmann Institute of Science), Sharon Shoham (Tel Aviv University).

16:10 (20m) Talk: TreeCoder: Systematic Exploration and Optimisation of Decoding and Constraints for LLM Code Generation (PLDI Research Papers). Henrijs Princis (University of Bristol), Arindam Sharma (University of Bristol), Cristina David (University of Bristol).

16:30 (20m) Talk: [TOPLAS] Guiding LLM-based Loop Invariant Synthesis via Feedback on Local Reasoning Errors (PLDI Research Papers). Tianchi Li (Peking University, China), Zhenyu Yan (Peking University), Junhao Liu (Peking University), Peng Di (Ant Group & UNSW Sydney), Xin Zhang (Peking University).

16:50 (20m) Talk: [SIGPLAN] Active Learning for Neurosymbolic Program Synthesis (PLDI Research Papers). Celeste Barnaby (University of Texas at Austin), Jocelyn Qiaochu Chen (New York University, University of Alberta), Ramya Ramalingam, Osbert Bastani (University of Pennsylvania), Işıl Dillig (University of Texas at Austin).