CivicChats - Building Tools to Support Voting
Democracy depends on an informed electorate. But political issues and ballot measures can be confusing, obscuring the effects of one outcome versus another. Moreover, politics is personal. Once we make an initial decision about an issue, it can be hard to change our mind or see things from “the other side.” And talking about issues with those with whom we disagree can be challenging, especially when the conversation feels more like a debate than a discussion.
Technology offers ways to alleviate these difficulties, but not without introducing problems of its own. The Internet and social media promised new ways for people to connect, discuss issues, and learn from each other. But in practice, both often inflame passions, solidify echo chambers, and spread misinformation. More recently, LLM chat interfaces may help people stay informed through personalized access to information, but mainstream chatbots tend to match user beliefs rather than clarify or challenge them [1, 2]. Without the kind of pushback you’d encounter in a discussion between disagreeing friends, chatbots are ill-suited for helping people think through political issues in a balanced way.
The goal of CivicChats (https://civicchats.org) is to address these shortcomings. Starting with ballot measures, CivicChats helps people better understand political issues through three different modes of discussion: a Q&A mode for understanding what a measure does and what’s at stake, an argumentative mode that presents competing views to your own, and a reflective mode that helps you examine and develop your own thinking.
Why We Need Better Tools
Ballot measures are where some of the most consequential policy questions in the country are now being decided [3]. Questions about housing, criminal justice, labor rights, and changes to state constitutions are put directly to voters. Yet the measures themselves can be confusing, obscuring the effects of one outcome versus another, and most voters aren't especially familiar with the measures they encounter [4].
People come to ballot measures with intuitions shaped by experience, identity, and deeply held values. Deliberation doesn’t ask them to set those aside. It asks them to understand what those convictions rest on, and to articulate them in terms that can engage with someone who sees things differently. Two voters might look at the same sentencing reform measure and weigh public safety against rehabilitation, or near-term costs against long-term outcomes. Deliberation helps make those tradeoffs explicit, so that disagreement can be about reasons and priorities rather than reflexive positions.
The challenge is that voters rarely encounter tools or environments that support this kind of reasoning. Endorsements can be convenient shortcuts, but they often replace, rather than facilitate, the process of facing competing considerations. Campaign messaging is designed to persuade, not clarify. And the digital tools that might seem like natural aids usually fail in predictable ways. Social media rewards reactive content and superficial engagement, while AI assistants tend to accept a user’s frame, downplay points of conflict, and move too quickly toward a tidy answer.
How CivicChats Works
Explore ballot measures. You can browse measures on your state’s ballot or search across the full national picture by topic, state, or status. The home screen also surfaces what’s trending nationally, so you can see which issues are coming to voters across the country.
Discuss with the AI. Click any measure and a chat opens about it [5]. Rather than a single mode of engagement, CivicChats offers three conversation styles:
Q&A mode helps you understand what a measure does and what its main considerations are.
Argumentative mode presents strong arguments opposing your position, helping you consider alternative perspectives and challenge your views.
Reflective mode asks questions about what values are driving your reaction, what your position depends on, and what it would take to change your view.
As you discuss a measure, you can record your position — yes, no, or undecided — and update it as your thinking develops.
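The three modes and the position tracking above can be sketched as a small data model. This is a minimal, illustrative sketch: the prompt text, class names, and structure are assumptions for exposition, not the actual CivicChats implementation.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from conversation mode to a system prompt.
# The prompt wording below paraphrases the mode descriptions above
# and is an assumption, not CivicChats' real prompts.
MODE_PROMPTS = {
    "qa": (
        "Explain what the measure does and lay out the main considerations "
        "on each side without favoring one."
    ),
    "argumentative": (
        "Present the strongest arguments against the user's stated position, "
        "forthrightly and without manipulative or leading language."
    ),
    "reflective": (
        "Ask questions about the values driving the user's reaction, what "
        "their position depends on, and what would change their view."
    ),
}


@dataclass
class MeasureChat:
    """One conversation about a single ballot measure (illustrative)."""
    measure_id: str
    mode: str = "qa"                 # default mode when a chat opens
    position: str = "undecided"      # "yes", "no", or "undecided"
    history: list = field(default_factory=list)

    def set_mode(self, mode: str) -> None:
        if mode not in MODE_PROMPTS:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode

    def record_position(self, position: str) -> None:
        # Positions can be updated as the user's thinking develops.
        if position not in {"yes", "no", "undecided"}:
            raise ValueError(f"unknown position: {position}")
        self.position = position

    def system_prompt(self) -> str:
        return MODE_PROMPTS[self.mode]
```

Keeping the mode as per-chat state (rather than a global setting) matches the description above: each measure gets its own conversation, and the user can switch styles or revise their position within it.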
While CivicChats can never replace deliberation and discussion with other people, it can help make those conversations more productive.
Evaluation and Research Roadmap
In designing CivicChats, and specifically its discussion interface, we focused on the qualities that shape a productive civic conversation. First, we wanted the system's responses to support a comfortable conversational rhythm. This meant attending to response length, latency, and tone, and ensuring that exchanges build on each other rather than trying to convey all relevant information at once. Evenhandedness is a priority in Q&A mode, where the system should present relevant considerations without favoring one side. The standard in the more discursive modes is different: the chatbot is designed to push back on user positions and probe reasoning, but it should be forthright when it disagrees and avoid manipulative or leading language. To evaluate these qualities systematically, we built an evaluation framework and interface, called CivicEval, for learning about and tracking LLM behavior over time.
We evaluate the chatbot’s conversational behavior using quantitative metrics and structured, LLM-based rubric scoring. A separate model reviews full conversations against checklists that capture the behaviors described above. Whether good chatbot behavior translates into better outcomes for voters is a separate empirical question. To test it, we are running a preregistered user study comparing our three chatbot modes against non-chatbot baselines across several measures of civic reasoning: voter understanding of the measure and its tradeoffs, decision confidence, the quality of participants’ justifications for their positions, and the stability of these outcomes at a one-month follow-up.
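As a rough illustration of the rubric-scoring step, here is a minimal sketch in which the judge model is assumed to return JSON pass/fail verdicts for each checklist item. The checklist wording, function names, and output format are assumptions for exposition, not CivicEval's actual rubric or interface.

```python
import json

# Hypothetical checklist for Q&A mode, paraphrasing the qualities
# described above (evenhandedness, rhythm, tone).
QA_CHECKLIST = [
    "presents relevant considerations on both sides without favoring one",
    "keeps responses concise rather than exhaustive",
    "maintains a conversational, non-lecturing tone",
]


def score_conversation(judge_output: str, checklist: list) -> dict:
    """Aggregate a judge model's per-item verdicts into a rubric score.

    `judge_output` is assumed to be a JSON object mapping each checklist
    item to "pass" or "fail"; anything missing or malformed counts as fail.
    """
    verdicts = json.loads(judge_output)
    per_item = {item: verdicts.get(item) == "pass" for item in checklist}
    return {
        "per_item": per_item,
        "score": sum(per_item.values()) / len(checklist),  # fraction passed
    }


# Example with a canned judge response (no model call needed here):
judge_json = json.dumps({
    "presents relevant considerations on both sides without favoring one": "pass",
    "keeps responses concise rather than exhaustive": "pass",
    "maintains a conversational, non-lecturing tone": "fail",
})
report = score_conversation(judge_json, QA_CHECKLIST)
```

Scoring each checklist item separately, rather than asking the judge for a single holistic grade, makes it possible to track which specific behaviors drift as the underlying model changes over time.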
Looking ahead, we’re developing personas and discourse styles that more reliably support productive discussion across topics and users. Prompting can induce these behaviors, but it is often fragile. We want to understand what data and training procedures make those behaviors dependable. More broadly, we are interested in how to elicit coherent attitudes and perspectives in models beyond surface style. Our work connects human-AI interaction, evaluation, and model development.
CivicChats is a research project from academics at the University of Chicago and the Australian National University studying how AI can support more informed civic participation.
Thanks to Kyle MacMillan, Dang Nguyen, Sanghee Kim, Skye Scofield, and members of the Chicago Human+AI Lab and Computational Media Lab for helpful discussions and suggestions.
[1] Mrinank Sharma et al., “Towards Understanding Sycophancy in Language Models,” ICLR 2024. https://arxiv.org/abs/2310.13548
[2] Myra Cheng et al., “ELEPHANT: Measuring and Understanding Social Sycophancy in LLMs,” 2025. https://arxiv.org/abs/2505.13995
[3] Grace Connolly, “Why Ballot Measures Are Booming in Some States,” University of Oregon College of Arts and Sciences, June 18, 2025. https://socialsciences.uoregon.edu/news/why-ballot-measures-are-booming-some-states
[4] Janine Parry, Jay Barth, and Craig Burnett, “Study Finds Majority of Public Is Unfamiliar With Ballot Measures,” University of Arkansas News, February 27, 2019. https://news.uark.edu/articles/46345/study-finds-majority-of-public-is-unfamiliar-with-ballot-measures
[5] Each chat has, by default, access to vetted ballot measure information from https://ballotpedia.org as well as the ability to search the internet for additional data or context.



