What should AI be aligned to?
An incubator for the beneficial deployment of transformative technologies
We are a nonprofit research lab for human flourishing, societal alignment, and safe AI
-
We bring together experts in a variety of relevant fields – such as AI, politics, economics, and neuroscience – and provide them with resources, coordination, and leadership to foster collaboration and help them find new avenues for advancing human flourishing.
Our programs focus on specific research areas, such as collective decision-making. We build products that are not on the default path to creation, in order to support humanity’s successful coordination. In particular, we create platform technologies that serve as epistemic infrastructure for future AI, research, and for-profit systems.
-
We investigate intersections of society and technology:
Alignment of Markets, AI, and Other Optimizers - How do we align these large-scale coordination systems with the needs and values of their constituents?
Scaling Cooperation with AI Assistance - How can recent AI advancements help us better coordinate large groups of people?
Human Attention and Epistemic Security - How can we help people take actions in line with their values in an increasingly confusing information ecology?
-
The rapid emergence of advanced AI offers us an unprecedented opportunity to reach widespread human flourishing, but our current systems do not put us on that path.
• Societal damage from the last technological wave: In 2000, we expected the miracles of real-time connectivity to bring new joys. We didn’t expect the freefall into fake news, echo chambers, online bullying, political partisanship, mental-health feedback loops, and privacy threats. Social media has infiltrated our lives at every turn.
• AI risk is exponentially larger than that of communication technology: The impact of self-improving AI systems in the coming years will be far more drastic than what we’ve experienced with social media. The technology is more powerful and more pervasive in our lives – from persuasion tools dialed to your psychology, to deepfakes, to independent AI economic actors that put the environment and financial markets at risk.
• Existing misalignments will scale if not solved: At AOI, we believe that the ways in which human systems will fail at managing advanced AI will not be wholly unexpected: they will take the form of familiar institutional, financial, and environmental failures, which we have experienced over the last decade at unprecedented rates. The core of every existential risk is the risk that we fail to collaborate effectively, even when each of us has everything to lose. Let’s learn to coordinate in service of a future that will be better for us all.
Current programs
TALK TO THE CITY: COLLECTIVE INTENT ALIGNMENT
Talk to the City is a new LLM-powered survey tool that improves collective discourse and decision-making by analyzing detailed qualitative responses to a question instead of limited quantitative data. The project enables automatic analysis of these qualitative responses at a scale and speed not previously possible, helping policymakers discover unknown unknowns and specific cruxes of disagreement. We are incubating it to responsibly serve a variety of use cases – from democratic and union decision-making to understanding the needs of recipients in refugee camps and of conflicting groups in a peace-mediation setting.
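The underlying technique – grouping free-text responses into themes so that shared concerns and cruxes surface – can be illustrated with a minimal sketch. This is not Talk to the City’s actual pipeline: a production LLM-based system would use semantic embeddings and model-written summaries, and the responses and similarity threshold below are invented for illustration.

```python
# Minimal sketch: grouping free-text survey responses into rough themes
# by word overlap. A real pipeline would use LLM embeddings and
# summarization rather than this bag-of-words heuristic.

def tokens(text):
    """Lowercased word set, ignoring very short (stopword-like) tokens."""
    return {w for w in text.lower().split() if len(w) > 3}

def jaccard(a, b):
    """Set similarity: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def cluster_responses(responses, threshold=0.2):
    """Greedily assign each response to the first cluster whose seed
    response is similar enough; otherwise start a new cluster."""
    clusters = []  # each cluster: {"seed": token_set, "members": [str]}
    for r in responses:
        t = tokens(r)
        for c in clusters:
            if jaccard(t, c["seed"]) >= threshold:
                c["members"].append(r)
                break
        else:
            clusters.append({"seed": t, "members": [r]})
    return [c["members"] for c in clusters]

# Invented example responses to a hypothetical city-policy question.
responses = [
    "The city should fund more public transit routes",
    "Please fund additional public transit routes downtown",
    "Housing costs are the biggest problem in this city",
]
themes = cluster_responses(responses)
# The two transit responses group together; the housing response
# forms its own theme.
```

At scale, each resulting theme would then be summarized (for example, by an LLM) so that policymakers read a handful of representative positions rather than thousands of raw responses.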
DEFENDING INDIVIDUAL AUTONOMY
Lucid Lens seeks to equip individuals to maintain their autonomy and see through the distortions around them as they engage with an ever-evolving technological landscape. It enables people to understand the broader context and orient more coherently toward the systems and information around them. As people come to understand which motives or psychological buttons are being leveraged, they can orient accordingly. Recognizing when an agent, article, or system has an underlying goal that isn’t transparent is a first step toward creating better agents.
How we work
AOI runs coordinated research programs to build advanced tools to solve misalignment within AI and human systems. Our aim is to avoid large-scale economic, institutional, and environmental catastrophes while creating new tools to improve human flourishing.
To achieve this goal, we identify misalignments and the tools that can help resolve them. We then build these tools within collaborative research groups – led by experts from a range of disciplines. Each program has a research sub-group that builds frontier knowledge within its domain.
After developing and validating a tool’s impact, we release it. It can then grow its impact within other contexts – non-profits, for-profits, open-source libraries, or independent research programs. Our work is to turn world-changing ideas into real-world tools.