Heike Schweitzer, one of the panelists invited to speak at the conference, died unexpectedly on June 11, just a few days before the event. The academic world has lost a brilliant legal scholar and intellectual leader in competition law. Our thoughts are with her family and friends.
Imagine there is a conference in Berlin on June 14, 2004, and you have been invited. The invitation arrives in the mail. When it’s time to leave for the conference, you rely on a large paper map and your sense of orientation to find your way there. Twenty years later, this scenario is quite different. While in some ways it is less complicated, it nevertheless has its own pitfalls thanks to the integration of artificial intelligence (AI) into our daily lives. Today, the invitation would arrive by email; perhaps an AI algorithm would sort it into the spam folder. A navigation app would guide you to the venue, and the journey could be pleasantly spent with digital activities suggested to you based on your preferences and online behavior.
It was with this little comparison that Maja Adena, Vice Director of the “Economics of Change” research unit at WZB Berlin Social Science Center, opened this year’s BCCP Conference and Policy Forum, “AI: Prospects, Challenges, and Regulation,” on June 14, 2024, illustrating not only how much we already rely on AI to guide us through life, but also how quickly it has become one of our most trusted companions.
In 2017, transformers, a new way of making connections in artificial neural networks, began to enable AI systems to keep track of patterns in their input and grasp context in a more sophisticated way. In a relatively rapid development, anyone reading this text on a digital device now also has access to generative language models such as GPT-4 (GPT stands for generative pre-trained transformer). While AI tools are propagating rapidly across many applications, much uncertainty persists regarding their capabilities, limitations, and policy implications for competition and regulation; resolving it will require further technological innovation, experimentation, and careful policy design. As Tomaso Duso, Head of the Firms and Markets Department at DIW Berlin, pointed out in his introduction: An interdisciplinary discussion that combines lessons from economics, social sciences, law, computer science, and ethics is what we need right now to manage this almost ubiquitous and pioneering technology.
With three sessions and a number of great experts, the 2024 BCCP conference contributed to a better understanding of AI.
The first part of the conference explored the current and future design and governance of AI. It is clear that task-based predictive systems and new generative tools are already transforming, and will continue to transform, economic and societal outcomes. They have the power to further improve the allocation and use of resources, enable innovation, and increase welfare. However, uncontrolled deployment and concentration of this power could lead to disruption and societal harm, and not only in high-risk use cases of AI. Things could get out of hand, whether through unintentional mistakes or the calculated abuse of these systems. The possibilities range from manipulating elections to spreading medical misinformation. This makes it all the more important to create a safe and adaptable legal environment, preferably globally standardized, to avoid loopholes and fragmented oversight. The EU AI Act is a step in the right direction, but its ability to foster beneficial innovation and sustainable growth remains to be seen.

Flavio Calvino of the OECD’s Directorate for Science, Technology and Innovation shed light on how AI fits into the current universe of research, based on the limited evidence that has been generated so far. Maximilian Kasy of Oxford University’s Department of Economics then looked at the political economy of AI, including ethics, the choice of objectives, and democratic control. Joshua Gans, Professor of Strategic Management at the University of Toronto, gave an idea of the currently somewhat dramatic tone of the discussion on AI regulation, arguing that before we talk about P(doom), we should look at the precautionary principle and address actual, identifiable harms such as deep fakes, cybersecurity, and weapons. The final presentation of the first session was given by Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance. She looked at the benefits of EU regulations, such as the General Data Protection Regulation (GDPR), and their advantages over national go-it-alone approaches when it comes to API security. The misuse of information through data breaches is not a new concern, but the dimensions it can take on are.
This first session was moderated by Hannes Ullrich, Deputy Head of DIW Berlin’s Firms and Markets Department.
BCCP Conference and Policy Forum 2024 AI: Prospects, Challenges, and Regulation (Part I): Session I
The second session of the conference picked up on the previously discussed dangers of the extreme concentration of market power in a few digital ecosystems, which makes competition policy and regulation in digital markets all the more important. The perceived failure of ex-post antitrust enforcement to curb the growing abuse of such concentrated power by technology companies has in part led to the introduction of ex-ante regulation such as the DSA (Digital Services Act), the DMA (Digital Markets Act), and the AIA (Artificial Intelligence Act).
The moderator of this session, Hans W. Friederiszick, Director and Founder of E.CA Economics, opened by suggesting that AI is a disruptive innovation with the potential to level the playing field and challenge tech giants such as Google, Microsoft, and Amazon. Oren Bar-Gill, Professor of Law and Economics at Harvard University, talked about the different categories of harm, such as price discrimination, that can arise in consumer markets when the decision-makers are AI-powered algorithms. Using two different scenarios, Emilio Calvano, Professor of Economics at Luiss University, illustrated why the traditional approach in competition policy won't work when it comes to "taming" AI. He focused on the dynamic between theory building and empirical research on AI. To some extent, AI tools are still black boxes: not even those who develop them fully understand their limitations. Moritz Hardt, Director of the Max Planck Institute for Intelligent Systems, provided important insights into machine learning in a social context. In markets in particular, machine learning differs from human strategic behavior: its predictions are not mere pattern recognition but can themselves cause the effects they forecast. Economists have long assumed that forecasts lose their empirical basis as soon as they are published; in machine learning, this phenomenon is called performative prediction. Joanna Bryson, a returning face from the first session, began her presentation with remarks on the previous talks: so far, little had been said about the accountability of the companies and individuals producing or using these AI tools. Contrary to the black-box argument, even if not everything is known about each individual artificial neuron, responsibility must be clearly assigned. Bryson also explored the question of whether large corporations are better for the infrastructure.

In the ensuing discussion, the first thirty minutes were again devoted to the panelists, focusing on theories of harm and efficiencies, detection issues, remedies, and market power. The last thirty minutes were devoted to questions from the audience.
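To make the idea of performative prediction more concrete, here is a minimal, purely illustrative simulation; it is not drawn from Hardt's talk, and the numbers and the feedback mechanism are invented for the example. A predictor is repeatedly refit on data whose distribution shifts in response to the prediction that has been deployed, and it settles at a "performatively stable" value that differs from what would be optimal if publishing the prediction had no effect on the data.

```python
# Illustrative toy example of performative prediction (assumed setup, not
# conference material): the deployed prediction shifts the distribution
# of the outcomes it is trying to predict.

import random

def sample_outcome(theta, base=10.0, epsilon=0.5):
    """Draw an outcome whose mean depends on the deployed prediction theta.
    base: the 'natural' mean absent any feedback; epsilon: feedback strength."""
    return random.gauss(base + epsilon * theta, 1.0)

def retrain(theta, n=10_000):
    """One retraining step: refit the predictor (here simply the sample mean)
    on data generated while theta is the deployed prediction."""
    return sum(sample_outcome(theta) for _ in range(n)) / n

random.seed(0)
theta = 0.0                      # initial deployed prediction
for t in range(15):              # deploy, observe shifted data, refit, repeat
    theta = retrain(theta)
    print(f"round {t:2d}: deployed prediction = {theta:.2f}")

# With feedback strength 0.5, the iterates approach roughly 20
# (base / (1 - epsilon)) rather than the 'naive' value 10: the stable
# prediction is shaped by the reaction it provokes.
```

In this toy setting the feedback loop happens to be stabilizing; the broader point raised in the session is that deployed predictions in markets can reshape the very behavior they were trained on.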
BCCP Conference and Policy Forum 2024 AI: Prospects, Challenges, and Regulation (Part II): Session II
In the final part of the conference, the Policy Roundtable, a panel of political and academic experts discussed the state of current and future AI policy initiatives.
Moderator Anna Sauerbrey, Foreign Policy Coordinator at Die Zeit, began with an anecdote from her daily life involving a bakery, an AI management system, a woman upset that her favorite bread had not been ordered, and a shop assistant defending the AI system as if it were a fourth grader not quite up to the task. Like the customer in the story, we want to understand the decisions that AI makes on our behalf. The EU's AI Act is an attempt to guide the application of AI in areas of varying risk, but its ability to drive beneficial innovation and sustainable growth remains to be proven. Is it killing AI innovation through its regulatory approach, or is it setting the global regulatory agenda?
Issues of data quality, data bias, fairness, and transparency were discussed by Brando Benifei, Member of the European Parliament; Francesca Bria, Honorary Professor at the Institute for Innovation and Public Purpose at University College London; Alena Buyx, Professor of Ethics in Medicine and Health Technology at TU Munich; and Amba Kak, Executive Director of the AI Now Institute.
BCCP Conference and Policy Forum 2024 AI: Prospects, Challenges, and Regulation (Part III): Session III
Topics: Competition and Regulation, Digitalization