Open sourcing AI models can be the way forward to accelerate innovation and democratize access to artificial intelligence. However, we need to distinguish between today's large language models, which primarily rely on transformer architectures and techniques such as autoregression, and frontier models, which could pose greater safety risks.
A study published this week by Demos in partnership with PwC maps out some of the divisions within the AI community over open source, and how frontier AI might be regulated in the context of open source ways of working. To conduct the research, Demos convened representatives from across the AI community, including CEOs and public policy leads from leading technology companies, AI investors, civil society specialists, government officials and senior advisors. Demos then organized a structured debate in which participants were asked to nominate the level of regulatory control they considered necessary at each step in a series of increasing AI capabilities.
Here are four themes that emerged from these conversations:
1. Generative AI is a highly specialised form of software, and open sourcing it may not bring the same benefits that it does for most other forms of software.
2. Neither closed nor open AI models are unalloyed goods or unalloyed evils, so any regulatory position, including a fully laissez-faire one, involves trade-offs; this debate is no exception to that norm.
3. There is broad consensus that some level of AI capability would merit restrictions on openness, but not on what that level is or how soon it might arise.
4. Because it is currently impractical to curb the use of a model that has been made fully open, regulation of AI models at a given capability level would need to be in place before that capability is reached.
As someone who has been involved in the open source community for more than a decade, I believe that open sourcing AI models can be a force for good, but it is important to do so carefully and responsibly.