In a real-life example, a Thai-based fintech company with an AI-powered digital lending app has successfully approved loans for over 30% of applicants previously rejected by banks due to a lack of formal income statements or credit history.
The AI model helped the company reach customers with greater precision and less bias, while also improving overall loan portfolio recovery.
This rapid growth highlights the need for consistent, reliable governance frameworks that empower organisations to confidently manage emerging AI risks. Unreliable AI outcomes can cause lawsuits, regulatory penalties, reputational damage, and shareholder value erosion.
Consequently, organisations face increasing pressure from management and governance bodies to ensure AI operates as intended, aligns with strategic goals, and is used responsibly. Addressing these challenges requires a proactive approach: aligning AI solutions with business objectives, reducing bias in data and outputs, and fostering transparency and explainability.
AI Risk Management Considerations
An AI governance framework is a structured system of policies, standards and processes designed to guide the entire AI lifecycle. It guides responsible AI development, with the primary goal of maximising benefits while mitigating significant risks such as bias and privacy violations, ensuring compliance with evolving regulations, fostering public trust and driving innovation.
Designing a responsible AI governance framework is complex. Successful AI risk management programs often share several key foundational principles. First, there must be a balance between innovation and risk management. AI governance should not be perceived as a barrier to innovation.
Instead, awareness campaigns can help stakeholders understand how risk management enhances trust and long-term value. Second, consistency with existing risk management practices, such as model risk management (MRM), can streamline implementation and improve efficiency.
Stakeholder alignment is another critical factor. Engaging cross-functional teams, including cybersecurity, IT, legal, and compliance, ensures governance frameworks are comprehensive and well-supported. Additionally, organisations should manage regulatory changes. As AI regulations evolve, strong change management practices are essential to maintain compliance and build a trustworthy AI environment.
Ultimately, AI risk management should be integrated into the broader enterprise risk framework. Leveraging existing structures while tailoring them to AI’s unique challenges can help organisations build resilient and adaptable governance systems.
How Organisations Get Started
Deloitte’s Trustworthy AI framework offers a structured approach to implementing ethical AI practices and mitigating risks throughout the AI lifecycle. Embarking on an AI governance journey requires a holistic view, and organisations can begin by focusing on three key areas: design, process, and training.
• Design involves conceptualising and documenting the intended use, objectives, and risks of AI systems. This includes consulting diverse stakeholders, gathering use-case scenarios, and identifying potential adverse outcomes. Understanding regulatory and legal requirements is also essential.
• Process refers to the development, implementation, validation, and ongoing monitoring of AI systems. A clear statement of purpose should guide model development, supported by objective model selection criteria and rigorous testing. Development tests should evaluate model assumptions, stability, bias, and behaviour across a range of input values (a brief illustrative sketch of such checks appears after this list).
• Training completes the approach. AI ethics training for developers and end users fosters awareness of potential harms and ethical considerations. Using reputable, well-documented data sources and representative datasets supports fairness. Data quality metrics should be identified and monitored, with bias minimised through feature selection and transformation assessments.
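To make the development tests above concrete, the following is a minimal, hypothetical sketch in Python of two such checks for an illustrative loan-approval model: a demographic-parity gap across applicant groups and a simple input-stability probe. The approve() stand-in, the 0.4 income-to-loan threshold, and the sample applicants are illustrative assumptions, not part of Deloitte's framework or any production system.

    import random

    def approve(applicant):
        # Hypothetical stand-in for a trained model: approve when the
        # income-to-loan ratio clears a fixed threshold (an assumption).
        return applicant["income"] / applicant["loan_amount"] >= 0.4

    def demographic_parity_gap(applicants):
        # Spread between the highest and lowest approval rates across groups;
        # a large gap flags potential bias for further investigation.
        by_group = {}
        for a in applicants:
            by_group.setdefault(a["group"], []).append(approve(a))
        rates = {g: sum(v) / len(v) for g, v in by_group.items()}
        return max(rates.values()) - min(rates.values()), rates

    def stability_rate(applicants, noise=0.02, trials=20, seed=0):
        # Share of applicants whose decision never changes when income is
        # perturbed by a small relative amount -- a crude robustness probe.
        rng = random.Random(seed)
        stable = 0
        for a in applicants:
            base = approve(a)
            stable += all(
                approve(dict(a, income=a["income"] * (1 + rng.uniform(-noise, noise)))) == base
                for _ in range(trials)
            )
        return stable / len(applicants)

    # Illustrative sample data only.
    sample = [
        {"group": "A", "income": 30_000, "loan_amount": 60_000},
        {"group": "A", "income": 45_000, "loan_amount": 90_000},
        {"group": "B", "income": 20_000, "loan_amount": 70_000},
        {"group": "B", "income": 52_000, "loan_amount": 80_000},
    ]
    gap, rates = demographic_parity_gap(sample)
    print("approval rate by group:", rates)
    print("demographic parity gap:", round(gap, 2))
    print("stability under ±2% income noise:", stability_rate(sample))

In practice, acceptable thresholds for such gaps and stability rates would be set during design and monitored throughout the AI lifecycle.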
Establishing Trustworthy AI Governance
At the heart of effective AI governance lies a commitment to trustworthiness. Deloitte’s Trustworthy AI framework outlines seven core principles forming the foundation of a sound AI risk management program:
• Fair and Impartial: Limiting bias in AI outputs is crucial for all models. Organisations should identify and address bias to prevent unfair outcomes. For example, a Gen-AI chatbot might perform well in one culture but poorly in another, potentially reducing user trust in the tool and the business.
• Transparent and Explainable: Stakeholders should understand how their data is used and how AI decisions are made. Algorithms must be open to inspection. For example, a Gen-AI medical recommendation may require a notation that it was machine-derived, along with accessible, clear explanations of that recommendation.
• Accountable: Clear policies must define who is responsible for the decisions an AI system makes. Whether the enterprise builds a model in-house or accesses one through a vendor, there must be a clear connection between the Gen-AI model and the deploying business.
• Robust and Reliable: AI systems should consistently produce reliable outputs and be capable of learning from both humans and other systems.
• Private: The data used to train and test AI models may contain sensitive information. Consumer privacy must be respected, with data usage limited to its intended purpose. Users should have control over their data-sharing preferences.
• Safe and Secure: Gen-AI content can be misused to create false information for stakeholders. To ensure safety and security, businesses must address cybersecurity risks and align AI outputs with both business goals and user interests.
• Responsible: AI should be developed and operated in a socially responsible manner, reflecting ethical values and societal norms. Enterprise leaders need to determine whether an AI use case is a responsible decision for the organisation.
By embedding these principles, organisations can foster trust among stakeholders, maintain compliance, and realise the full potential of AI in a responsible and sustainable way.
Looking Ahead to a Trustworthy Future
The journey starts now—by embracing responsible governance, organisations can lead with confidence, unlock transformative value, and shape a future where innovation and integrity go hand in hand.
Ing Houw Tan
Assurance Leader
Deloitte Thailand