Game Theory and Mechanism Design with Large Language Models

Workshop at ACM EC 2026

Rome, Italy · July 6, 2026

Workshop Theme

Large Language Models (LLMs) are increasingly deployed as economic agents: they set prices, negotiate deals, aggregate information from diverse sources, and interact strategically with one another. These language-based agents introduce qualitatively new challenges for economic theory. Unlike classical algorithmic agents whose strategies are fully specified by their designers, LLM-based agents exhibit emergent strategic behavior shaped by pre-training corpora, prompting, and post-training procedures—raising questions that sit at the intersection of microeconomic theory and modern AI.

This workshop focuses on the economics of language-based AI agents. When LLMs are placed in competitive environments, do they collude? How do information design and strategic communication change when the sender, receiver, or both are language models? What happens to welfare when LLM agents mediate economic transactions on behalf of human principals? And how should we evaluate and benchmark the economic behavior of these systems?

We bring together researchers working on these questions from game theory, mechanism design, information economics, industrial organization, and machine learning. Our goal is to provide a venue where formal economic modeling and empirical or computational work on LLM agents can inform each other directly.

Topics of Interest

Misalignment and Heterogeneity of LLMs

One important factor in agentic systems is heterogeneity, whether between LLMs or between human users and LLMs. Heterogeneity can be beneficial: agents that "make different mistakes" form a more robust team. It can also be harmful, however, when mistakes compound or when it reflects inherent misalignment between agents.

Benchmark Design

Once benchmarks are used to guide decision-making, evaluation no longer simply measures behavior but shapes behavior. Participants may adapt strategically to the metric itself, improving measured performance while drifting away from the broader objective the benchmark was meant to capture. Thus, benchmark design is not only a statistical problem of measurement, but also a mechanism design problem of incentives.

Robust Mechanism Design for Opaque AI Agents

While classical mechanism design assumes agents optimize well-defined utility functions, the transition to ecosystems of interacting LLMs fractures this foundation. These agents possess complex, latent objectives, derived from high-dimensional training data, that are often opaque to their users and internally inconsistent. We seek research into mechanisms that are robust to variation in agents' objectives, algorithmic sophistication, and communication abilities.

Information Aggregation with Crowds and LLMs

Crowdsourced information aggregation is a natural and increasingly important setting in which language-based agents interact strategically. LLM-based participants further complicate this setting: they can generate persuasive but misleading contributions at scale, adapt strategically to platform incentives, and potentially coordinate to game aggregation rules.

Additional Areas of Interest

  • Algorithmic collusion and emergent strategic behavior of LLM-based agents
  • Information acquisition, aggregation, and design with or by language models
  • Delegation to AI agents: principal-agent problems, incentive alignment, and monitoring
  • Preference elicitation, RLHF, and the microeconomics of LLM post-training

Invited Speakers

Brendan Lucier

Microsoft Research New England

A leading scholar at the intersection of economic theory and computer science, with contributions to understanding how algorithms embedded in online platforms and socio-technical systems influence user behavior. Recent research explores how introducing AI into workplaces and markets affects outcomes in those settings.

Siddarth Srinivasan

Anthropic Fellow; Harvard University

His research connects mechanism design, information elicitation, and AI. Recent work on incentivizing explanations from agents and on self-resolving prediction markets addresses how to aggregate beliefs when agents communicate in natural language.

Emilio Calvano

LUISS University, Rome; Toulouse School of Economics

A leading figure in the study of algorithmic collusion, with foundational experimental work showing that Q-learning pricing algorithms autonomously learn to sustain supracompetitive prices. Recent research investigates AI's impact on recommender systems, consumption patterns, and news diets.

Schedule

Half-day workshop. Schedule subject to change.

09:00 – 09:05 Opening
09:05 – 09:40 Invited Talk 1 — Brendan Lucier 30' + 5' Q&A
09:40 – 10:00 Two Spotlight Talks 7' + 2' Q&A each
10:00 – 10:35 Invited Talk 2 — Siddarth Srinivasan 30' + 5' Q&A
10:35 – 10:55 Coffee Break
10:55 – 11:30 Invited Talk 3 — Emilio Calvano 30' + 5' Q&A
11:30 – 11:55 Open Problems — Small-group discussion with a final presentation
11:55 – 12:00 Closing Remarks

Contributed Talks and Posters

The workshop will feature both contributed talks and posters. We will solicit extended abstracts and working papers (non-archival). A light review process will select papers for spotlight talks; additional accepted papers will be presented as posters.

Call for Papers

We invite extended abstracts and working papers on topics at the intersection of game theory, mechanism design, and large language models. Submissions are non-archival.

Submission Deadline: May 25, 2026

Topics of interest include (but are not limited to):

  • Algorithmic collusion and emergent strategic behavior of LLM-based agents
  • Information acquisition, aggregation, and design with or by language models
  • Delegation to AI agents: principal-agent problems, incentive alignment, and monitoring
  • Preference elicitation, RLHF, and the microeconomics of LLM post-training
  • Robust mechanism design for opaque AI agents
  • Benchmark design as mechanism design
  • Information aggregation with crowds and LLMs
  • Misalignment and heterogeneity of LLMs in strategic settings

Organizers

Yeganeh Alimohammadi

USC Marshall

Assistant Professor of Data Sciences and Operations. Her research develops principled methods for learning and decision-making in complex systems with incomplete data, strategic behavior, and network structure.

Kate Donahue

MIT & UIUC

METEOR postdoc at MIT, joining UIUC as Assistant Professor of Computer Science in Summer 2026. Her research addresses algorithmic fairness, human-AI collaboration, and game-theoretic models of federated learning.

Sara Fish

Harvard University

PhD student at Harvard. Her research spans EconCS and ML, including work on algorithmic collusion by LLMs, generative social choice, and economic benchmarks for LLM agents.

AH

Andreas Haupt

Stanford University

Human-Centered AI Postdoctoral Fellow at Stanford, jointly appointed in Economics and Computer Science. He studies the elicitation and aggregation of human preferences in ML systems using methods from microeconomic theory.

CL

Ce Li

Boston University

PhD candidate in Economics at Boston University. Her research develops microeconomic theory at the AI-economics interface, with a focus on information design and learning algorithms for human-AI interactions.

Giacomo Mantegazza

USC Marshall

Assistant Professor of Data Sciences and Operations. His research studies competition in online and decentralized markets, with recent work on algorithmic collusion and platform information design.

Contact

For questions about the workshop, please reach out to us at:

h4upt@cs.stanford.edu