AI and machine learning are moving so quickly that it is hard to keep up, so it is unsurprising that the legal implications for artists and companies using these new technologies are especially hard to understand. In this fxpodcast, we explain both the unfolding US and European legal situations in depth.
To do this, we spoke with an expert in AI legal policy, Dr Brandie Nonnecke.
Brandie Nonnecke, PhD, is Director of the CITRIS Policy Lab at CITRIS and the Banatao Institute, and Co-Director of the AI Policy Hub on Artificial Intelligence, Platforms, and Society at the Berkeley Center for Law and Technology, Berkeley Law. (Her website is here.)
In this episode, we discuss artists’ rights and how several key lawsuits in the US around generative AI may be resolved. We also discuss the various key US initiatives as well as the EU’s upcoming AI Act. When discussing the AI Act, Mike refers to “Challenges and Critiques of the European AI Act,” by Dr Karin Väyrynen and Arto Lanamäki of the University of Oulu, published at the 42nd International Conference on Information Systems, Hyderabad, 2023.
Brandie also hosts the TecHype podcast.
Last week, the Biden Administration issued an executive order on all things AI. The order contains several key areas of focus, such as how to protect Americans’ privacy, aspects aimed at standing up for consumer rights and supporting workers, and how to promote innovation and competition while ensuring responsible and effective use in both the private and government sectors.
The Fact Sheet issued by the White House states, “Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”
There are clear implications for firms large and small, especially large firms with government or military contracts. To gauge the significance of the order for smaller AI startups, we spoke to Brian Sathianathan, founder of Iterate.ai. His company has just over 100 employees and about $10M in turnover. Iterate.ai provides a platform for companies to develop AI projects, including coding and the modeling of custom generative AI projects, with secure and private training data.
“The White House’s new Executive Order on Artificial Intelligence is a significant step toward addressing the potential risks and benefits of AI technologies. By establishing a comprehensive framework for the development and deployment of AI systems, this order aims to protect Americans’ privacy and advance equity and civil rights. This focus on safety, security, and trustworthiness will help to ensure that AI is used responsibly and effectively in the years to come,” he explains.
Biden claimed his order was the most significant action globally on the topic, but it could be argued that the administration is not leading the worldwide debate, and there is still a divide between those who want governments to act urgently on AI and those who warn that politicians risk putting a handbrake on innovation and future prosperity. Sathianathan agreed that there is a risk of slowing innovation, a point Nonnecke discusses explicitly in this week’s episode of the fxpodcast.
Brandie Nonnecke discussed the current political difficulty of passing new laws in the US, but as Sathianathan pointed out to us, a smaller growth company such as Iterate.ai is still very aware of wanting to comply with the intent of the Executive Order or face being shut out of future contracts. He also pointed out how much his company is already paying attention to the EU AI Act, even before it is passed into law. Most growth startups see their market as international, so the EU AI Act and events such as the Bletchley Park summit are monitored very closely by US companies.
Bletchley Park Declaration
Also last week, world leaders, including American Vice President Kamala Harris, British Prime Minister Rishi Sunak, Italy’s PM Giorgia Meloni, and Australia’s Deputy PM Richard Marles, met at Bletchley Park in the UK for a landmark International AI Safety Summit.
Bletchley Park is a historic site regarded as one of the birthplaces of computer science. Alan Turing, the pioneering mathematician often considered “the father of computer science,” was among the leading minds who worked there. Bletchley Park was the center of Britain’s WW2 code-breaking efforts.
AI did not begin with the launch of ChatGPT last November, but the flood of AI solutions, especially in the generative AI space, and the number of software firms and companies building AI capabilities into their services, has caught many governments by surprise. Australian Industry and Science Minister Ed Husic claimed last week that the summit was “seismic,” as it hosted the US, China, and EU, who all signed up to a common agenda.
The summit was less about Terminator-style AI destruction and more about the immediate challenges of AI-generated disinformation, discrimination, and copyright infringement. All 29 governments present signed on to the Bletchley Declaration, which commits them to collaborate on the science of AI, seek a common understanding of the challenges, and continue the process moving forward.