A team of researchers has suggested that the best way to combat the malicious use of artificial intelligence (AI) is for governments to develop more powerful AI and control access to it. Because advanced AI systems require specialized hardware to train and run, the researchers argue that controlling who can use such systems in the future comes down to controlling who can obtain that hardware. In practice, policymakers could use compute, the foundational hardware required for AI development, as a lever to regulate and monitor how AI is built and used.
Governments are already exercising some control over hardware access; the US, for example, restricts the sale of certain GPU models used for AI training to particular countries. According to the research, however, truly limiting malicious actors’ ability to use AI for harm would require integrating “kill switches” into the hardware itself, giving governments the ability to shut down illegal AI training centers remotely. The researchers caution that naive or poorly scoped approaches to compute governance could erode privacy, impose economic costs, and further centralize power.
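The research does not spell out how such a kill switch would work, but one commonly discussed design is a remotely issued, expiring “operating permit” that firmware checks before allowing compute jobs. The sketch below is a hypothetical simulation of that idea only: the permit format, the device names, and the shared-key signature are all assumptions made for illustration (a real scheme would rely on asymmetric cryptography and a hardware root of trust, not a shared secret).

```python
import hashlib
import hmac
import json
import time

# Hypothetical simulation only: the permit format, the shared key, and the
# enforcement flow are illustrative assumptions, not a documented proposal.
REGULATOR_KEY = b"simulation-only-shared-secret"

def issue_permit(device_id: str, ttl_seconds: float) -> dict:
    """Regulator side: sign a permit that expires after ttl_seconds."""
    payload = {"device_id": device_id, "expires_at": time.time() + ttl_seconds}
    blob = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def permit_is_valid(permit: dict, device_id: str) -> bool:
    """Firmware side: check the signature, device binding, and expiry."""
    blob = json.dumps(permit["payload"], sort_keys=True).encode()
    expected = hmac.new(REGULATOR_KEY, blob, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, permit["tag"])
            and permit["payload"]["device_id"] == device_id
            and permit["payload"]["expires_at"] > time.time())

def run_training_step(permit: dict, device_id: str) -> None:
    """Refuse to execute any compute unless a valid permit is held."""
    if not permit_is_valid(permit, device_id):
        raise RuntimeError("no valid operating permit: compute disabled")
    print("permit OK, executing training step")

fresh = issue_permit("gpu-cluster-042", ttl_seconds=3600)
run_training_step(fresh, "gpu-cluster-042")              # runs normally

stale = issue_permit("gpu-cluster-042", ttl_seconds=-1)  # already expired
try:
    run_training_step(stale, "gpu-cluster-042")
except RuntimeError as err:
    print(err)                                           # hardware refuses to run
```

One appeal of this expiring-permit pattern is that the regulator never needs to send an explicit shutdown command: if it simply stops issuing fresh permits, the hardware goes dark on its own once the last permit lapses.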
One challenge governments may face is the use of decentralized compute for training and running AI models. Recent advances in “communication-efficient” training reduce how much data must flow between machines, allowing a model to be trained across many small, geographically dispersed clusters rather than one conspicuous data center, which makes it harder for governments to locate and monitor hardware associated with illegal training efforts. The researchers argue that this could lead to an arms race against the illicit use of AI, in which societies must use more powerful, governable compute to defend against emerging risks posed by ungovernable compute.
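As a rough illustration (a toy simulation, not any specific system from the research), the sketch below shows the core idea behind communication-efficient methods such as local SGD: workers take many independent gradient steps and synchronize parameters only occasionally, cutting cross-site traffic by roughly the length of the sync interval. All numbers here are arbitrary.

```python
import numpy as np

# Toy simulation of communication-efficient distributed training:
# each worker runs many local gradient steps on its own data and the
# group averages parameters only every `sync_every` steps.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_grad(w: np.ndarray, n: int = 64) -> np.ndarray:
    """One minibatch gradient of squared error on synthetic linear data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return 2 * X.T @ (X @ w - y) / n

n_workers, total_steps, sync_every, lr = 4, 200, 20, 0.05
workers = [np.zeros(2) for _ in range(n_workers)]
exchanges = 0

for step in range(1, total_steps + 1):
    for i in range(n_workers):            # independent local updates, no traffic
        workers[i] -= lr * local_grad(workers[i])
    if step % sync_every == 0:            # infrequent parameter averaging
        avg = np.mean(workers, axis=0)
        workers = [avg.copy() for _ in range(n_workers)]
        exchanges += n_workers            # one exchange per worker per sync

print("recovered weights:", np.round(np.mean(workers, axis=0), 2))
print(f"parameter exchanges: {exchanges} "
      f"(vs {total_steps * n_workers} for per-step synchronization)")
```

The last line is the point for governance: per-step synchronization produces steady, high-volume traffic between sites, while infrequent averaging produces only sporadic exchanges that are far harder to spot from the outside.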
While putting more powerful AI, and the hardware behind it, in the hands of governments may help combat malicious AI use, it also raises concerns about potential infringement on privacy and the concentration of power. Governments would have to strike a delicate balance between regulating AI for the greater good and respecting individual rights. Safeguards such as a “blueprint for an AI bill of rights” to protect citizens’ data would need to be part of any such regime.
In sum, the researchers propose that developing more powerful AI and controlling access to it may be the most effective way to combat the malicious use of AI. This would involve governments regulating the hardware necessary for AI development and implementing measures such as kill switches to prevent illegal AI training. Careful attention to privacy and individual rights is crucial to striking the right balance in AI governance.
By proactively regulating AI hardware and implementing kill switches, governments can play a central role in protecting against malicious AI use. Done carefully, these measures would be a step toward the responsible deployment of AI technology.