Today, artificial intelligence — and the computing systems that underlie it — are more than just matters of technology; they are matters of state and society, of governance and the public interest. The choices that technologists, policymakers, and communities make in the next few years will shape the relationship between machines and humans for decades to come.
The rapidly increasing applicability of AI has prompted a number of organizations to develop high-level principles on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. Building on those broader principles, the AI Policy Forum, a global effort convened by the MIT Stephen A. Schwarzman College of Computing, will provide an overarching policy framework and tools for governments and companies to implement in concrete ways.
“Our goal is to help policymakers in making practical decisions about AI policy,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “We are not trying to develop another set of principles around AI, several of which already exist, but rather provide context and guidelines specific to a field of use of AI to help policymakers around the world with implementation.”
“Moving beyond principles means understanding trade-offs and identifying the technical tools and the policy levers to address them. We created the college to examine and address these types of issues, but this can’t be a siloed effort. We need this to be a global collaboration that engages scientists, technologists, policymakers, and business leaders,” says MIT Provost Martin Schmidt. “This is a challenging and complex process for which we need all hands on deck.”
The AI Policy Forum is designed as a yearlong process. Activities associated with this effort will be distinguished by their focus on tangible outcomes, their engagement with key government officials at the local, national, and international levels charged with designing those public policies, and their deep technical grounding in the latest advances in the science of AI. The measure of success will be whether these efforts have bridged the gap between these communities, translated principled agreement into actionable outcomes, and helped create the conditions for deeper trust between humans and machines.
The global collaboration will begin in late 2020 and early 2021 with a series of AI Policy Forum Task Forces, chaired by MIT researchers and bringing together the world’s leading technical and policy experts on some of the most pressing issues of AI policy, starting with AI in finance and mobility. Further task forces throughout 2021 will convene more communities of practice with the shared aim of designing the next chapter of AI: one that both delivers on AI’s innovative potential and responds to society’s needs.
Each task force will produce results that inform concrete public policies and frameworks for the next chapter of AI, and help define the roles that the academic and business communities, civil society, and governments will need to play in making it a reality. Research from the task forces will feed into the development of the AI Policy Framework, a dynamic assessment tool that will help governments gauge their own progress on AI policy-making goals and guide application of best practices appropriate to their own national priorities.
On May 6–7, 2021, MIT will host — most likely online — the first AI Policy Forum Summit, a two-day collaborative gathering to discuss the task forces’ progress toward equipping high-level decision-makers with a deeper understanding of the tools at their disposal — and the trade-offs to be made — to produce better public policy around AI, and better AI systems with concern for public policy. Then, in fall 2021, a follow-on event at MIT will bring together leaders from across sectors and countries. Built atop the leading research from the task forces, the forum will provide a focal point for the work of moving from AI principles to AI practice, and serve as a springboard for global efforts to design the future of AI.