
Artificial Intelligence and threats to the world

Emergency management must adapt to address emerging safety risks posed by artificial intelligence alongside natural and human-made hazards.
AI and hazard classification

AI hazards can be categorized as intentional (used deliberately to cause harm or compromise systems) or unintentional (resulting from human errors or technological failures). In addition, an emerging class of threats involves AI taking over human control and decision-making, which has prompted experts to call for a moratorium on its development.

Public safety risks
AI hazards and risks should be integrated into risk assessment matrices at local, national, and global levels. Immediate action is needed to address high-risk AI scenarios, as failure to do so may lead to significant human and property losses.

AI risk assessment
AI hazards are gaining attention, leading to the development of risk assessment frameworks. KPMG’s “AI Risk and Controls Matrix” emphasizes the need for businesses to address emerging risks. Governments, including those of Canada and the US, have issued directives and guidelines to mitigate AI-related risks, focusing on minimizing harm, ensuring governance, and conducting risk assessments.

Threats and competition

National-level policy focus on AI has centered primarily on national security and global competition, highlighting the risks of falling behind in AI technology. The US National Security Commission on Artificial Intelligence emphasized the national security risks of lagging in AI development compared to other countries, particularly China. While the World Economic Forum’s 2017 Global Risk Report acknowledged the potential risks of AI, the latest 2023 report does not mention AI, suggesting that global leaders did not perceive it as an immediate risk.

Faster than policy
AI development is progressing much faster than governments and corporations can craft policies to understand, foresee and manage the risks. Current global conditions, combined with market competition for AI technologies, make it difficult to imagine an opportunity for governments to pause and develop risk governance mechanisms. While we should collectively and proactively work toward such mechanisms, we also need to brace for AI’s potentially catastrophic impacts on our systems and societies.
