Supervision policies can shape long-term risk management in general-purpose AI models

Authors: Manuel Cebrian, Emilia Gomez, David Fernandez Llorca

Abstract: The rapid proliferation and deployment of General-Purpose AI (GPAI) models,
including large language models (LLMs), present unprecedented challenges for AI
supervisory entities. We hypothesize that these entities will need to navigate
an emergent ecosystem of risk and incident reporting, likely to exceed their
supervision capacity. To investigate this, we develop a simulation framework
parameterized by features extracted from the diverse landscape of risk,
incident, or hazard reporting ecosystems, including community-driven platforms,
crowdsourcing initiatives, and expert assessments. We evaluate four supervision
policies: non-prioritized (first-come, first-served), random selection,
priority-based (addressing the highest-priority risks first), and
diversity-prioritized (balancing high-priority risks with comprehensive
coverage across risk types). Our results indicate that while priority-based and
diversity-prioritized policies are more effective at mitigating high-impact
risks, particularly those identified by experts, they may inadvertently neglect
systemic issues reported by the broader community. This oversight can create
feedback loops that amplify certain types of reporting while discouraging
others, leading to a skewed perception of the overall risk landscape. We
validate our simulation results with several real-world datasets, including one
with over a million ChatGPT interactions, more than 150,000 of which were
identified as risky conversations. This validation underscores the complex
trade-offs inherent in AI risk supervision and highlights how the choice of
risk management policies can shape the future landscape of AI risks across
diverse GPAI models used in society.
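
As a concrete reading of the four policies, the sketch below (Python, not taken from the paper) shows how a capacity-limited supervisor might select reports from a backlog at each step. The `Report` fields, the `select_reports` helper, and the round-robin tie-breaking used for the diversity-prioritized policy are illustrative assumptions; the paper's actual simulation framework is not reproduced here.

```python
import random
from dataclasses import dataclass

@dataclass
class Report:
    risk_type: str   # e.g. "privacy", "bias", "misuse" (illustrative labels)
    priority: float  # severity score assigned at intake (assumed field)
    arrival: int     # position in the arrival order

def select_reports(queue, capacity, policy):
    """Pick up to `capacity` reports from `queue` under one of the four
    supervision policies named in the abstract."""
    if policy == "non_prioritized":
        # First-come, first-served: take the earliest arrivals.
        return sorted(queue, key=lambda r: r.arrival)[:capacity]
    if policy == "random":
        # Uniform random selection from the backlog.
        return random.sample(queue, min(capacity, len(queue)))
    if policy == "priority":
        # Address the highest-priority risks first.
        return sorted(queue, key=lambda r: -r.priority)[:capacity]
    if policy == "diversity_prioritized":
        # Round-robin over risk types, taking each type's highest-priority
        # report in turn, so coverage is balanced against raw severity.
        by_type = {}
        for r in sorted(queue, key=lambda r: -r.priority):
            by_type.setdefault(r.risk_type, []).append(r)
        chosen, pools = [], list(by_type.values())
        while len(chosen) < capacity and any(pools):
            for pool in pools:
                if pool and len(chosen) < capacity:
                    chosen.append(pool.pop(0))
        return chosen
    raise ValueError(f"unknown policy: {policy}")

# Tiny usage example: a hypothetical backlog of four reports and
# supervision capacity for only two of them per step.
queue = [Report("privacy", 0.90, 0), Report("bias", 0.70, 1),
         Report("misuse", 0.95, 2), Report("privacy", 0.40, 3)]
print([r.risk_type for r in select_reports(queue, 2, "priority")])
# -> ['misuse', 'privacy']  (severity only; 'bias' is never reviewed)
print([r.risk_type for r in select_reports(queue, 2, "diversity_prioritized")])
# -> ['misuse', 'privacy']  (one type per pass; at capacity 3, 'bias' gets in)
```

In the abstract's framing, the reports a policy leaves unreviewed are what matter over time: persistently skipped report types can discourage future reporting of that kind, producing the feedback loops and skewed risk perception the authors describe.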

Source: http://arxiv.org/abs/2501.06137v1
