Earlier this month, the EU Parliament voted to adopt the draft text of its AI Act. The Act represents the first comprehensive regulation of AI in the world, and it could come into force within the next year. The EU will be first, but governments everywhere are rapidly developing their own frameworks to find a balance between securing the benefits of AI and protecting against its harms.
Wherever the guardrails end up, however, this won’t be the end of the conversation; it is the beginning. For chemistry and chemists, AI presents many opportunities and risks, and it poses ethical and even existential questions that we must address.
AI has already made some notable scientific achievements. AlphaFold solved the problem of protein structure determination (if not the protein folding problem itself), and AI has been used to ‘discover’ new antibiotics. Machine learning algorithms have produced the most accurate solutions to the Schrödinger equation and devised potentials for computational chemistry that rival the accuracy of the best force fields. Even allowing for the considerable hype around AI, its potential benefits are as inarguable as its progress is inevitable.
Among the various forms of AI, it is the large language models (LLMs) that power the likes of ChatGPT that have spurred so much of the recent conversation on (and with) AI. A new preprint demonstrates LLMs’ impressive range of applications: over one and a half days, a group of researchers used GPT to build tools that predict chemical properties, visualise molecules, devise exam questions and more. Yet it can also answer exam questions, or write your coursework or your research paper. (It has even been used by the editors of popular chemistry magazines.) So how should we use it responsibly?
Publishers have banned listing ChatGPT as a coauthor on research papers, on the grounds that AI cannot meet the criteria of accountability and responsibility expected of a human author. That seems like a reasonable restriction. Whether AI should be banned entirely as a writing aid is less clear, however. It could, for example, help to remove barriers created by the hegemony of English in science communication.
Yet that ability to make knowledge more accessible raises further issues. Chemists on a ‘red team’ (a group tasked with deliberately finding ways to misuse the technology) for ChatGPT found it could devise routes to illicit drugs and chemical weapons.
As the race to regulate AI continues, it’s worth remembering that using technology responsibly means much more than just following the rules. Innovation increasingly outpaces regulation, so chemists must engage with these discussions and equip themselves to make sound ethical decisions. AI is hardly the first technology to transform society, and it won’t be the last.