AI-powered agents are increasingly being deployed in production settings: they plan, call tools, maintain state, and act over live services and data. Yet the formal foundations needed to make these systems safe, reliable, and trustworthy remain underdeveloped. Ensuring correctness in these settings demands a new discipline, which we term agentic engineering: a research agenda that brings together ideas from programming languages, formal verification, software engineering, and neuro-symbolic reasoning to specify, analyze, test, monitor, and repair agents at scale.
This workshop aims to bring together researchers and practitioners interested in the foundations and practice of safe agentic systems. Our goal is to foster a community around principled approaches for building agents with precise specifications, verifiable behaviors, and runtime safeguards that hold up in real-world deployments. More broadly, the workshop seeks to strengthen connections across the programming languages, formal methods, software engineering, and machine learning communities in order to advance the principles of safe and reliable agentic engineering.
The workshop will feature invited talks and peer-reviewed contributions spanning theory, systems, tools, and practical experience.
Call for Papers
PAgE (Principles of Agentic Engineering) aims to bring together the formal methods and AI communities to advance the principles of safe agentic engineering. It will feature peer-reviewed papers and invited talks from experts in the field. We welcome submissions describing research results, artifacts, datasets, case studies, and experience reports. Topics of interest include, but are not limited to:
- Specifications and type-safe interfaces for tool use
- Memory and state management for agents
- Reliability & groundedness in planning and tool use
- Static & dynamic analysis and verification of agentic plans
- Safe integration with software engineering workflows
- Testing and debugging of agentic workflows
- Runtimes, compilers and virtual machines for agentic programs
- Evaluation methods and benchmarks for trustworthy agents, including safety, reliability, and cost/latency tradeoffs
- Safety, security, and privacy of multi-agent systems
- Neuro-symbolic and formal methods approaches for agent reasoning and control
We also welcome submissions that explore the use of AI agents in the research and development process itself, including agent-assisted ideation, implementation, experimentation, artifact generation, and writing. Such submissions should clearly describe the workflow used, the extent of human oversight, and any lessons learned regarding reliability, reproducibility, and safety.
Submission Tracks
1. Archived Papers:
Archived submissions should present completed research on topics related to the focus of the workshop. Full papers may be up to 10 pages, excluding bibliography, and must use the standard SIGPLAN conference two-column format. Submissions will be reviewed according to the usual criteria of relevance, soundness, novelty, and significance. Accepted archived papers will be published in the ACM Digital Library. Formatting instructions and templates can be found on the SIGPLAN Author Information page.
2. Non-archival Submissions:
We also invite non-archival submissions describing work in progress, position papers, experience reports, negative results, demos, artifact or dataset descriptions, and talk proposals. These submissions may be up to 6 pages, excluding bibliography, and should also use the standard SIGPLAN conference two-column format. Accepted non-archival submissions will be presented at the workshop but will not appear in the ACM Digital Library.
Please note the extended deadline.
Contributions should be submitted via HotCRP: https://page2026.hotcrp.com/u/1/
For any questions, please contact the workshop chairs at sbarke@microsoft.com.