In this new series, entitled “Antitrust-Adjacent,” AAT covers developments of interest adjacent to competition-law issues. Our first installment, on AI regulation, is co-authored by Primerio In-Country Partner (Kenya) Fidel Mwaki and Alfred Nyaga.
By Fidel Mwaki & Alfred Nyaga
Introduction

Artificial Intelligence (AI) is rapidly reshaping Kenya’s digital economy. From financial services and telecommunications to healthcare, logistics and digital platforms, AI systems increasingly underpin critical decision-making and service delivery across both the public and private sectors.
The introduction of Kenya’s Artificial Intelligence Bill, 2026 (the Bill) marks a significant milestone as the country seeks to articulate a comprehensive regulatory and governance framework for AI.
However, as Kenya moves toward formalising this framework, several foundational questions arise. Does the Bill effectively regulate AI systems? How should institutional oversight be structured? What would a practical regulatory model look like in the Kenyan context? And what governance architecture is required to ensure responsible, transparent and innovation-friendly deployment of AI?
Key Recommendations for Policymakers and Industry
To strengthen the effectiveness and practicality of Kenya’s Artificial Intelligence Bill, 2026, the following considerations may be useful:
- Establish clear and operational risk classification criteria
Define objective and measurable criteria for determining what constitutes a high-risk AI system. This should enable developers and deployers to assess compliance obligations at the design stage and reduce regulatory uncertainty.
- Introduce a framework for general-purpose AI (GPAI) models
Recognise the growing role of GPAI models as foundational infrastructure and consider tailored obligations around transparency, accountability and safe deployment.
- Adopt a sector-sensitive regulatory approach
Different sectors present different AI risks. The framework should enable coordination with sector regulators such as finance, telecommunications and healthcare to ensure context-specific oversight.
- Clarify liability and accountability across the AI lifecycle
Establish a clear allocation of responsibility between developers, deployers and users of AI systems, particularly where systems are used in decision-making with real-world consequences.
- Strengthen institutional coordination mechanisms
Provide clear guidance on how the proposed AI Commissioner will coordinate with existing regulators, including the Office of the Data Protection Commissioner (ODPC), the Communications Authority of Kenya (CA), the Central Bank of Kenya (CBK), the Competition Authority of Kenya (CAK) and other sector bodies, to avoid duplication and regulatory fragmentation.
- Provide for independent oversight and audit mechanisms
Introduce provisions for AI audits, documentation standards and oversight processes, particularly for high-risk systems and sensitive applications.
- Embed flexibility through phased and adaptive regulation
Allow for the framework to evolve through secondary regulations, guidelines and regulatory sandboxes, ensuring responsiveness to technological developments without creating uncertainty.
These measures would help ensure that Kenya’s AI regulatory framework is practical, coordinated and capable of supporting both innovation and responsible deployment.
Understanding the Regulatory Gaps: Does the Bill Effectively Regulate AI Systems?
The Bill places considerable emphasis on high-risk AI systems. While this approach mirrors developments in jurisdictions such as the European Union, it is not immediately clear how developers or deployers are expected to determine whether an AI system falls within that category.
In practice, this classification becomes one of the most important compliance questions for organisations building or deploying AI systems. Without clear and predictable criteria, organisations may struggle to assess their obligations, and the resulting uncertainty risks undermining both compliance and innovation.
Equally important is the treatment of AI systems that fall outside the high-risk category. AI technologies can generate meaningful societal and economic risks even where they are not formally classified as high risk. Issues such as misinformation, manipulation, algorithmic bias and systemic economic disruption may arise from such systems.
The Bill also appears to focus significantly on public sector deployment of AI systems, yet much of the development and deployment of AI in Kenya currently occurs in the private sector. Industries such as financial services, telecommunications, logistics and digital platforms already rely heavily on AI-driven systems. A balanced regulatory approach should therefore account for both domains.
A further gap is the absence of a clear framework addressing general-purpose AI (GPAI) models, including systems capable of generating text, code, images and other forms of content. These models increasingly serve as foundational infrastructure for a wide range of downstream applications and may require tailored regulatory treatment.
Globally, regulators are beginning to address these issues more directly. The EU AI Act introduces detailed risk classification frameworks and obligations for developers and deployers of high-risk systems, while also addressing GPAI models. China’s evolving regulatory framework similarly addresses algorithmic transparency, registration of AI deployers and security assessments prior to deployment.
The Governance Question: How Should AI Oversight Be Structured?
The Bill introduces an important institutional feature through the establishment of an AI Commissioner, tasked with overseeing AI development and deployment, monitoring compliance and issuing guidance on the responsible use of AI technologies in Kenya.
The creation of a dedicated oversight authority reflects an important recognition that AI presents regulatory challenges that extend beyond traditional legal frameworks.
However, the Bill raises a broader governance question: how should this oversight function interact with Kenya’s existing regulatory institutions?
Kenya already has several regulators whose mandates intersect with AI governance, including the Office of the Data Protection Commissioner, the Ministry of ICT and Digital Economy, the Communications Authority of Kenya, the Kenya Bureau of Standards (KEBS), the Competition Authority of Kenya, the Central Bank of Kenya, and various sector-specific regulators.
The Bill does not yet clearly define how the proposed AI Commissioner will coordinate with these institutions. In practice, effective governance of AI will require structured collaboration across multiple regulators rather than reliance on a single oversight authority.
There are also broader concerns around transparency and accountability across the AI lifecycle. AI systems typically involve multiple actors, including developers, deployers and organisations relying on system outputs. A coherent framework must therefore clearly allocate responsibility when AI systems produce harmful or unintended outcomes.
In addition, several aspects of operational governance are left to future regulations. These include independent AI audits, organisational governance frameworks, model transparency requirements and coordination mechanisms. While this approach provides flexibility, it also means that the governance architecture remains only partially defined at the legislative stage.
Designing a Practical Regulatory Framework for Kenya
These gaps point to a broader question: what would a practical and workable AI regulatory framework look like in the Kenyan context?
AI regulation globally is still evolving. Jurisdictions such as the European Union, China and Singapore are adopting different approaches, creating an opportunity for Kenya to design a framework that reflects both international best practice and local priorities.
A practical regulatory framework would benefit from several structural elements:
- A clear and predictable risk classification approach
Regulatory obligations should be tied to well-defined categories, enabling developers and deployers to assess their obligations at the design stage and reducing uncertainty.
- Recognition of general-purpose AI models
These systems increasingly function as foundational infrastructure and may require tailored transparency, accountability and safety obligations.
- Sector-sensitive regulation
Algorithmic risks vary significantly across industries, and systems deployed in sectors such as healthcare, financial services or critical infrastructure raise different regulatory concerns. Coordination with sector-specific regulators will therefore be necessary.
- Clear allocation of responsibility across the AI lifecycle
AI systems often involve multiple actors, and a coherent framework must assign responsibility across this lifecycle to prevent regulatory gaps.
Building an Effective Governance Architecture for AI Oversight
Beyond identifying the governance gaps in the Bill, a key question is how Kenya can structure an effective and coordinated AI governance architecture in practice.
The introduction of an AI Commissioner is an important step toward institutionalising AI governance. However, effective oversight will require more than the creation of a single regulatory office.
AI intersects with multiple regulatory domains, including data protection, financial regulation, competition policy, communications regulation and consumer protection. Governance of AI systems will therefore require coordination across multiple institutions.
Organisational transparency and accountability will also be critical. As organisations increasingly rely on AI in decision-making, internal governance structures such as AI risk frameworks, audit mechanisms and oversight committees may become necessary.
At the same time, governance frameworks must ensure that regulation does not unintentionally discourage innovation. Mechanisms such as regulatory sandboxes and collaborative oversight models may help strike a balance between risk management and technological development.
Conclusion
Kenya now stands at a defining moment in shaping its AI regulatory and governance framework. The Artificial Intelligence Bill, 2026 provides an important starting point. However, its effectiveness will ultimately depend on whether the regulatory and governance framework is sufficiently clear, coordinated and capable of evolving alongside technological development.
The framework will need to reflect Kenya’s economic context and development priorities, while drawing from comparative approaches where relevant.
As the Bill moves forward, the upcoming public participation process presents an important opportunity for policymakers, technologists, legal practitioners and industry stakeholders to engage constructively with these issues and help shape a framework that supports both innovation and the responsible deployment of AI in Kenya.
- Fidel Mwaki is the Managing Partner of FMC Advocates LLP (Kenya) and In-Country Partner (Kenya) at Primerio. He advises on corporate, regulatory and governance matters, with a focus on emerging issues in digital regulation and AI governance.
- Alfred Nyaga is a Director at Digital Ethics Hub, a platform focused on shaping policy and practice in digital rights and AI regulation.