A Working Group to develop a series of work products supporting the management of risks introduced by operationalizing artificial intelligence technologies. Our first white paper has been published and our second target area of study has just begun…
This working group serves as a platform to bring energy industry companies and our stakeholders from technology together to address the risks introduced by the use of artificial intelligence in our business. As a collaborative group of CCRO members, we seek to develop a comprehensive framework and to document leading practices for managing risks from use of Artificial Intelligence (“AI”) at commodity market participants. AI has proven to be a powerful tool for efficiency and insight, as well as a source of new risks. As AI technologies become increasingly embedded in decision-making, operations, and risk management, the complexity and opacity of these systems introduce new dimensions of risk that traditional frameworks may not fully address. The Committee of Chief Risk Officers (“CCRO”) believes that AI Risk is a critical, emerging area of concern, requiring tailored governance structures and industry-wide collaboration.
In this paper, we use AI to mean systems and tools that apply machine learning, natural language processing, and other algorithmic logic to large data sets to perform tasks that previously required human analysis, input, judgement, or intervention. These tools may be developed in-house, available via open-source arrangements, or delivered by vendors.
AI is being introduced across organizations, some deliberately, some through embedded vendor functionality, and some by users locally, with or without coordination with enterprise technology platforms or cyber risk governance. These tools can create risk if they generate decisions, classifications, or outputs without transparency, validation, or clear ownership. Users may also be unaware of issues like drift, hallucinations, or limitations in output reliability and may naively trust output implicitly.
AI usage can introduce a diverse set of risks that extend beyond traditional model risk. In the context of commodity trading, these risks can impact decision-making, market stability, regulatory compliance, and operational integrity. The CCRO recommends companies define AI Risk categories in keeping with their functional needs.
• Ethical Risks
AI Systems and Tools often lack embedded ethical reasoning, which can lead to unintended consequences. Ethical risks include bias in decision-making, privacy violations, and societal impacts stemming from opaque or unexplainable outputs. These risks are particularly concerning when AI is used in high-stakes environments without adequate human oversight.
• Input Data Risks
Poor data quality, unstructured sources, and poor data lineage can compromise the integrity of AI Systems and Tools. Input risks arise when models are trained on incomplete, outdated, or biased datasets, leading to unreliable or misleading outputs.
• Output Risks
Output risks include Hallucinations, lack of explainability, and over-reliance on AI-generated decisions. These risks are amplified when users treat AI outputs as definitive without understanding their underlying assumptions, limitations, or ramifications.
• Regulatory Risks
The evolving regulatory landscape introduces uncertainty for firms deploying AI Systems and Tools. Regulatory frameworks impose requirements around transparency, accountability, and data protection, and may change from region-to-region. Changes in the regulatory landscape and non-compliance can result in penalties, reputational damage, or disallowed cost recovery.
• Systemic Risks
Systemic risks include AI herding behavior, monopolistic access to data, and capital cost implications. These risks can destabilize markets if multiple firms rely on similar AI models or data sources, leading to correlated actions and reduced market diversity.
• Personnel Risks
Skill gaps, unclear accountability, and insufficient oversight contribute to AI Risk in the personnel space. These arise when AI Systems and Tools are used without proper training, governance, or understanding. Misclassification of AI Tools can also lead to risk underestimation and inadequate controls. Rogue development of AI Tools without centralized oversight can also create risks.
• Model Performance & Validation Risk
Beyond foundational risk categories, firms must address performance-specific challenges. Overfitting occurs when models perform well on training data but fail to generalize in new market conditions. Model drift (e.g., gradual performance degradation) and data drift (e.g., changing input patterns) require ongoing monitoring. Black box opacity in complex models creates validation challenges, especially for deep learning and ensemble methods. Continuous learning models pose unique validation risks as they evolve post-deployment.
Agentic AI Systems that orchestrate multiple tools pose unique validation challenges, particularly when outputs are sent externally without human review.
• Operational & Technology Risks
AI system failures can disrupt critical business operations. System downtime, latency issues, and performance degradation affect reliability. Integration failures with existing infrastructure creates operational risk. Dependency failures from third-party APIs, data feeds, or cloud services can cascade through AI systems. In addition, edge case failures can occur when AI systems encounter scenarios outside their training distribution.
• Vendor Risks
Many AI tools enter the firm through third-party software. These systems may perform key forecasting or reconciliation tasks but offer little visibility into model architecture or training data. Governance practices should include clear protocols for evaluating vendor tools that rely on AI, especially when they impact financial or regulatory outputs.
• Cybersecurity & Security Risks
AI systems face significant cybersecurity threats that can compromise model integrity, data confidentiality, and system availability. Adversarial attacks include data poisoning (corrupting training data), model evasion (fooling deployed models), and model extraction (stealing proprietary models). Prompt injection and jailbreaking pose risks for LLMs. API vulnerabilities, data breaches, and unauthorized access threaten system security. Firms should implement robust access controls, input validation, security monitoring, and incident response protocols specific to AI systems.
