The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global discussion on the impact of artificial intelligence from principles to policy implementation. Founded at the end of 2020, AIPF brings together leaders from government, business, and academia to develop approaches to address the societal challenges arising from the rapid advances and increasing applicability of AI.
The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at the MIT Sloan School of Management and director of the MIT AI Policy for the World Project. Here they discuss some of the key issues facing the AI policy landscape today and the challenges related to the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on September 28, which will further explore the issues discussed here.
Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape in general?
Ozdaglar: There is no shortage of discussions about AI in various venues, but the conversations are often high-level, focused on questions of ethics and principles or on purely political issues. The approach AIPF takes is to address specific issues with actionable policy solutions and to engage the stakeholders who work directly in those areas. We work ‘behind the scenes’ with smaller focus groups to tackle these challenges, and then, through larger gatherings, we aim to unveil potential solutions together with the stakeholders working directly on them.
Q: AI is affecting many sectors, which of course makes us concerned about its trustworthiness. Are there new best practices for developing and deploying trustworthy AI?
Madry: The most important thing to understand about deploying trustworthy AI is that AI technology is not a natural, predetermined phenomenon. It is something built by people, people who make specific design decisions.
We must therefore advance the research that can guide these decisions and provide more desirable solutions. But we also need to be aware of, and think carefully about, the incentives driving those decisions.
These incentives stem largely, but not exclusively, from business considerations. We should also recognize that sound laws and regulations, along with the adoption of solid industry standards, play a major role here.
Indeed, governments can set rules that prioritize the value of deploying AI while being mindful of the associated downsides, pitfalls, and impossibilities. Designing such rules will be an ongoing and evolving process as the technology continues to improve and change, and as we adapt to shifting socio-political realities.
Q: The financial sector is perhaps one of the fastest developing areas in the use of AI. From a policy perspective, how should governments, regulators and legislators ensure AI works best for consumers in finance?
Videgaray: A number of trends in the financial sector pose policy challenges at the intersection with AI systems. First, there is the question of explainability. By law (in the United States and many other countries), lenders are required to provide explanations to customers when they take actions detrimental to a customer’s interest, such as denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, banks’ ability to unpack the machine learning “black box” and provide that level of mandated explanation is becoming tenuous. So how should the financial industry and its regulators adapt to these technological advances? Perhaps we need new standards and expectations, as well as new tools, to meet these legal requirements.
Meanwhile, economies of scale and data network effects are driving increased outsourcing of AI, and AI-as-a-service more broadly is becoming more prevalent in the financial industry. In particular, we are seeing fintech companies provide underwriting tools to other financial institutions, whether large banks or small local credit unions. What does this segmentation of the supply chain mean for the industry? Who is responsible for potential problems in AI systems deployed across multiple layers of outsourcing? How can regulators adapt to ensure that their mandates regarding financial stability, fairness, and other societal standards are met?
Q: Social media is one of the most contentious industries, leading to many societal changes and disruptions around the world. What policies or reforms might be needed to best ensure that social media is a force for the public good, not public harm?
Ozdaglar: The role of social media in society is of growing concern to many, but the nature of that concern can vary widely. Some see social media platforms as not doing enough to prevent misinformation and extremism, for example, while others see them as improperly silencing certain viewpoints. This lack of a unified view of the problem hampers the ability to enact change. Compounding all of this is the complexity of the legal framework in the US, which includes the First Amendment, Section 230 of the Communications Decency Act, and commercial laws.
However, these difficulties in regulating social media do not mean that nothing should be done. Indeed, regulators have begun tightening their scrutiny of social media companies both in the United States and abroad, whether through antitrust proceedings or other means. Ofcom in the UK and the European Union, in particular, are already introducing new layers of oversight for platforms. In addition, some have proposed taxes on online advertising to address the negative externalities created by the current social media business model. So the policy tools are there, if there is the political will and proper guidance to implement them.