From Sci-Fi to Security
Artificial intelligence was once the subject of cinema and novels, part of an imaginary world. Today it is an active force in society. Algorithms determine what we read, watch and purchase. They steer automobiles, guide aircraft and manage road traffic. AI is no longer fiction. It is reality, with real risks. Such a transformation demands serious governance.
The Existential Question
The central issue is not automation alone. It is survival. Some experts consider the risk existential: AI could become something humans can no longer control. It may jeopardize even the right to life. Under international law, states are obliged to protect life. That duty may extend to AI. Should states regulate AI the way they regulate dangerous biological experiments? The question is urgent.
Tech Governance under Pressure
AI remains under-governed. Technology advances at breakneck speed; legal norms move slowly. States struggle to keep up, and the existing structures are weak. Notably, no binding international treaty governs AI safety. Most efforts are voluntary. The gap between technology and law is perilous. The longer the delay, the higher the risk.
Precautionary Principle at Play
The precautionary principle is well established in law. States should act when the evidence is uncertain but the potential harm is severe. Waiting for full proof may mean acting too late. With AI, the stakes are deadly: even low-probability outcomes carry enormous consequences. Regulating AI at an early stage follows this principle. It mirrors how nuclear technology and genetic engineering were treated. Prevention is better than cure.
The Genetic Analogy
The analogy with genetic science is powerful. It places AI oversight in the same frame as the tight regulation of genetic research. Genetic engineering is governed by limits on experiments and by international norms. The same could be true of AI. States might build systems that track designs from the start. Just as safety codes bind laboratories, AI safety codes could bind developers. The comparison makes governance tangible.
Risks of Delay
Uncontrolled risks compound. Artificial intelligence can amplify disinformation, fuel cyber warfare and destabilize economies. In more advanced forms, it could design weapons, manipulate power grids or shut down financial networks. The leap from assistive to destructive is short, and once unleashed, it may be irreversible. Delaying regulation is a gamble with human security.
Global Governance Challenge
No single state can regulate AI alone, because AI systems cross borders in seconds. Through clouds and networks, they are international by nature. Governance must therefore be international too. Yet agreement is hard. Competition among leading nations blocks progress. Some view AI as an instrument of control; others see it as a public good. The result is fragmentation. Without cooperation, global AI regulation risks collapse.
Lessons from History
History offers lessons. Nuclear governance produced treaties and inspections. The world agreed to ban biological weapons. Climate change pushed states toward greater cooperation. In each case, agreement was difficult yet possible. AI demands similar courage. The task is to adapt old models. An AI treaty may be the next horizon of international law.
Evolving Legal Norms
Legal norms are changing, gradually. Courts and scholars debate AI liability. UN agencies issue ethical guidelines. The EU has advanced its AI Act. These are steps in the right direction, but they remain fragmented. Without a global spine, AI law has a weak backbone. States must look beyond national boundaries and shape norms commensurate with the technology's power.
The Security Imperative
AI is not merely an instrument of innovation; it is an instrument of power. Militaries are already testing autonomous drones and cyber systems. This places AI in a gray zone between civilian and military use, and the pull toward arms races is constant. Ethics demands regulation, and so does security. Controlling this technology is a condition of peace.
AI has left the realm of science fiction and entered the mainstream of global security. The risks are real and existential. States' obligation to protect the right to life is clear. They must regulate AI early, strictly and worldwide. Genetic-style regulation offers a model. Waiting for disaster is not an option. The world should act before AI acts against the world.
Disclaimer: The views and opinions expressed in this article are exclusively those of the author and do not reflect the official stance, policies, or perspectives of the Platform.