How AI Agents Will Improve the Consultation Process

Bruno Marnette

Since the release of GPT-4 about a year ago, the AI Objectives Institute has run a series of socio-technical experiments to explore how modern AI tools could improve consultation and deliberation processes. For instance, we found that modern LLMs, when used in the right way (as part of a carefully crafted data transformation pipeline), can reliably summarize the diverse opinions collected from a large set of participants.

In this post, I would like to share some of our early thinking about another application of AI in the same space: how AI agents could make consultation processes more interactive and more proactive. By agent, I'm referring here to the kind of AI-powered bot that has recently become easy to build using OpenAI's Custom GPT feature. Such agents could be instructed to help people figure out what they think and communicate it more effectively to others. This, in turn, could make consultation processes better on both sides:

  • from the organizer's perspective, the consultation can elicit more accurate and informative answers from large numbers of participants, at reasonable costs;

  • from the participants' perspective, the process can paradoxically feel more human (despite using more AI) because it is more open, more engaging, and more empowering than responding to a simple survey.

The AI agents that we can build with today's technology are not going to be as good as a well-trained human facilitator, and it will often remain preferable to have face-to-face conversations with participants. However, paying people to facilitate conversations can be too expensive for many use cases, and conducting consultations online will often be logistically preferable. This is where there could be an opportunity to leverage different types of AI agents.

Examples

The friendly domain expert

An agent could prove very useful in helping participants before they even start formulating an answer. Such an agent could explain the question if necessary, provide additional context, or supply hard facts and figures. It could be configured to act like a scientist, ready to answer any technical question the participant might have. It could also act as a more informal coach, helping the participant organize their thoughts: for instance, it could help them maintain lists of pros and cons, think through different scenarios and their consequences, or weigh the different moral considerations that may affect their position.

Such an agent could be relatively passive and only provide help when explicitly asked, but in some situations it could be appropriate for this agent to be more active. This could be particularly helpful when a participant is too confused by a question and doesn't know what to ask. This does, however, come with additional risks, since a proactive agent is more likely to hallucinate or introduce bias, as we will discuss later in this post.
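To make this more concrete, here is a minimal sketch of how such a domain-expert agent might be configured using the OpenAI Python SDK. The model name, prompt wording, and example topic are illustrative assumptions rather than a reference implementation.

```python
# Minimal sketch of a "friendly domain expert" agent using the OpenAI Python SDK.
# The model name, prompt wording, and consultation topic are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

EXPERT_SYSTEM_PROMPT = """\
You are a neutral domain expert supporting a participant in a public consultation
(example topic: urban transport policy). Your role is to:
- explain the consultation question and provide factual context when asked;
- help the participant organize pros, cons, and possible scenarios;
- never advocate for a particular answer or position."""

def ask_expert(history: list[dict], participant_message: str) -> str:
    """Send the conversation so far, plus the latest participant message, to the agent."""
    messages = [{"role": "system", "content": EXPERT_SYSTEM_PROMPT}]
    messages += history
    messages.append({"role": "user", "content": participant_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example usage:
# reply = ask_expert([], "Can you explain what 'congestion pricing' means here?")
```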

The professional interviewer

Another example of an AI-powered agent could be one that responds to the participant's first answer and tries to improve it. It could act like a professional interviewer (think: a skilled journalist or podcast host) and focus on eliciting contributions that will be appreciated by the audience. For instance, this agent could recommend ways to improve the linguistic style or the logical consistency of the original answer. It could encourage participants to share interesting sentiments or personal stories that may resonate with others. It may also ask them to explain why they hold a particular position.

Such an agent would need to follow standard principles of journalistic ethics. For instance, it would need to strike a balance between:

  1. being too complacent with the participants, thus missing some opportunities to help them produce better responses, and

  2. trying too hard to control or manipulate the participant, thus making their responses less genuine and authentic.

As much as possible, the interactions should be driven by questions from the AI, rather than directives. Importantly, the agency should always remain with the participant, who will always have the final say in what is shared.

Using an AI agent instead of a human interviewer will have other drawbacks and benefits. On one hand, it seems hard to imagine AI agents establishing the same level of rapport and human connection as a skilled journalist would. Most people are too agreeable to simply brush off the questions a journalist asks them, because it would feel rude, but it might be a lot easier to dismiss questions coming from an AI bot. On the other hand, participants may sometimes feel more comfortable sharing their thoughts anonymously with a computer, because it can feel safer and more private. They may be less concerned about what the interviewer might think of them and thus more likely to give honest answers.
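As a rough illustration, the sketch below shows how an interviewer agent might be prompted to respond to a participant's draft answer with open-ended follow-up questions, leaving the final wording to the participant. The model name and prompt wording are assumptions made for the purpose of the example.

```python
# Minimal sketch of a "professional interviewer" agent. The model name and prompt
# wording are illustrative assumptions; the design point is that the agent asks
# follow-up questions and leaves the final wording to the participant.
from openai import OpenAI

client = OpenAI()

INTERVIEWER_SYSTEM_PROMPT = """\
You are a respectful interviewer in a public consultation. Given a participant's
draft answer, ask at most three open-ended follow-up questions that could help
them clarify their reasoning or add a personal story. Ask questions only; do not
rewrite their answer or suggest what they should think."""

def follow_up_questions(consultation_question: str, draft_answer: str) -> str:
    """Return follow-up questions about a participant's draft answer."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": INTERVIEWER_SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Consultation question: {consultation_question}\n"
                f"Draft answer: {draft_answer}"
            )},
        ],
    )
    return response.choices[0].message.content
```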

Cross-pollination

A third example of a valuable agent could be one that focuses exclusively on helping participants exchange ideas and perspectives among themselves. Such cross-pollination is usually considered a primary tenet of citizens’ assemblies and other deliberative practices, as it's a powerful way for participants to open their minds to a broader range of ideas, while helping the group as a whole identify possible areas of consensus.

Whilst consultation platforms with cross-pollination features already exist (for instance Pol.is or Make.org), the common approach asks people to make multiple suggestions and then vote on suggestions made by others (often using a sophisticated recommendation engine under the hood to decide when to show which suggestion to each user).

Using modern AI agents could drastically improve on this state of the art, because platforms would no longer be restricted to presenting (exactly) the response that another user gave. Instead, AI agents could produce a comprehensive summary of all the points made by others. They could even generate new ideas, inspired by what other people have said, further expanding the space of options brought to a participant's attention. Such agents could also let participants interact directly with these views in their own ‘sandbox’, with the AI perhaps ‘straw-manning’ the opinions or prompting users to question why they feel the way they do about this other point of view.
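To illustrate one possible shape of such a cross-pollination step, here is a hedged sketch in which an agent summarizes other participants' (anonymized) responses and poses a single question inviting the participant to engage with a different perspective. The prompts and model name are again illustrative assumptions.

```python
# Minimal sketch of a cross-pollination step: summarize other participants'
# (anonymized) responses and surface one contrasting perspective for the current
# participant to react to. Prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def cross_pollinate(other_responses: list[str], participant_answer: str) -> str:
    """Summarize what others said and pose one question about a differing view."""
    corpus = "\n".join(f"- {r}" for r in other_responses)
    prompt = (
        "Anonymized responses from other participants:\n"
        f"{corpus}\n\n"
        "Current participant's answer:\n"
        f"{participant_answer}\n\n"
        "Summarize the main points made by others, then pose one question that "
        "invites the participant to engage with a perspective different from their own."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```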

A safe path towards mainstream adoption

As noted in the introduction, I believe that it will take some time before the use of AI agents becomes standard practice in these engagements. There are several good reasons for this, but I believe that the three main issues to address are the following:

  1. The people organizing consultations will need to fully appreciate the risks and benefits of using AI agents. This will require more research and advocacy.  We plan to do our part at the AI Objectives Institute by running a series of socio-technical experiments exploring these possibilities, and supporting other organizations who may wish to run their own experiments.

  2. We will need to build safe AI agents which do not hallucinate too often, do not introduce significant biases, and follow the organizer's instructions diligently. Thankfully, the AI community has already identified some good practices to mitigate such concerns. This is a very active research area and we will not attempt to do a literature review in this post, but just to give a couple of examples:

    1. we can reduce hallucinations by giving agents very clear and specific instructions that make them generally less creative and less likely to make things up (via prompt-tuning etc.);

    2. we can prevent AI agents from searching the open web and instruct them instead to pull facts and figures from a provided list of trusted sources (via Retrieval Augmented Generation etc.; see the sketch after this list); and

    3. we can apply various de-biasing strategies at the post-processing stage to identify and flag potential political or ideological biases.

  3. Participants of consultation processes will need a way to verify the integrity of the AI agents. In particular, they will want to know whether the organizer has corrupted the process by explicitly asking the AI (when configuring the prompts) to steer the discussions in particular directions. This will likely require designing new platforms with a strong focus on trust and transparency, but nothing that seems technically impossible. 
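To make point (2) above more tangible, here is a minimal sketch of the retrieval-augmented approach: the agent is restricted to a small, provided list of trusted sources instead of the open web. The naive keyword-overlap retrieval, source texts, and prompts below are stand-ins for a real RAG pipeline and should be read as assumptions, not a recommended implementation.

```python
# Minimal sketch of restricting an agent to a provided list of trusted sources
# instead of the open web. The keyword-overlap retrieval below is a naive stand-in
# for a real Retrieval Augmented Generation pipeline; source texts, model name,
# and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

TRUSTED_SOURCES = {
    "city-transport-report-2023": "Example passage from a trusted report...",
    "national-statistics-office": "Example figures from an official publication...",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Rank trusted passages by word overlap with the question (naive retrieval)."""
    q_words = set(question.lower().split())
    ranked = sorted(
        TRUSTED_SOURCES.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def grounded_answer(question: str) -> str:
    """Answer a participant's question using only the retrieved trusted passages."""
    context = "\n\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Answer using only the provided sources. If the sources do not "
                "contain the answer, say so instead of guessing."
            )},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```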

I'm optimistic that these three challenges can be overcome in a matter of months, not years. It might then take more time to see AI agents added to existing platforms, because innovator's dilemmas and bureaucracy can still slow down innovation. But solving these issues will at least make it possible for novel AI platforms to emerge that will truly transform the way humanity shares opinions at scale.
