Governments and regulatory bodies around the world are racing to establish AI governance frameworks and regulatory sandbox programs that allow companies to test artificial intelligence technologies in controlled environments — without the immediate threat of regulatory penalties. As artificial intelligence adoption accelerates across every major industry, policymakers from Washington to Brussels to Singapore are deploying new tools designed to balance the demands of innovation with the requirements of public safety and consumer protection. The result is a rapidly expanding ecosystem of AI regulatory sandbox programs that is reshaping how nations govern one of the most transformative technologies in history.
The concept of a regulatory sandbox — a structured environment where companies can test novel products under relaxed regulatory conditions while under close oversight — originated in the United Kingdom’s financial sector in 2015. The UK Financial Conduct Authority pioneered the model to support fintech development, and the results were striking. Companies that completed successful testing within the FCA sandbox received 6.6 times more investment than their peers, and the average time required for market authorization fell by 40 percent compared to the regulator’s standard approval process. That record of success has inspired policymakers worldwide to apply the same model to artificial intelligence.
The EU AI Act Mandates National Sandbox Programs Across Europe
The most sweeping institutional driver of AI regulatory sandbox adoption is the European Union’s AI Act, which entered into force in 2024 and is now being implemented across all 27 EU member states. Article 57 of the AI Act explicitly mandates that each member state establish at least one AI regulatory sandbox at the national level by August 2, 2026. The requirement is unprecedented in scope and has triggered a wave of sandbox design activity from France and Germany to Spain and beyond.
Spain has been among the most proactive EU nations, launching an AI sandbox aligned with its national digital strategy, España Digital 2025. The Spanish Secretariat of State for Digitalization and Artificial Intelligence oversees a standardized testing framework specifically designed to support innovation while ensuring compliance with the AI Act, with particular focus on high-risk AI systems. The Spanish initiative emphasizes practical learning so that national authorities can contribute meaningfully to the development of standards and guidance at both the national and EU levels.
Germany has taken a complementary approach, focusing on the development of experimentation clauses that facilitate learning through sandbox environments. The German government’s objective has been to balance experimental flexibility with compliance with existing legal frameworks — an approach that aims to make sandboxes both legally coherent and practically functional. The EU has also established four sector-specific Testing and Experimentation Facilities (TEFs) that will receive over €220 million in combined funding from the European Commission and member states over a five-year period. These facilities support supervised testing and experimentation in cooperation with national authorities.
Complementing the TEFs is the European Digital Innovation Hub (EDIH) network, which consists of more than 150 regional hubs operating across the EU. These one-stop-shop centers help companies and public sector organizations access technical expertise, testing infrastructure, and compliance guidance. The 2025 AI Continent Action Plan specifically highlighted the EDIH network as a key channel for facilitating companies’ access to AI regulatory sandboxes, particularly for small and medium-sized enterprises that may not have the resources to navigate national regulatory frameworks independently.
United States Federal and State-Level Sandbox Activity Surges in 2025
In the United States, the regulatory sandbox movement for AI gained powerful momentum throughout 2025. The Trump administration’s AI Action Plan, released on July 23, 2025, explicitly called for the creation of AI regulatory sandboxes and AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools. The administration’s support for sandboxes reflects its broader policy of adopting a light-touch approach to AI regulation, prioritizing innovation while building evidence-based frameworks for future governance.
On September 10, 2025, Senate Commerce Committee Chair Ted Cruz introduced the SANDBOX Act (S. 2750) — formally the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation Act — which would create a federal AI regulatory sandbox program. The legislation would direct the White House Office of Science and Technology Policy to establish and operate the program, allowing US companies and individuals to apply for waivers or modifications of federal agency regulations in order to test, experiment with, or temporarily provide AI products and services. The SANDBOX Act program would operate for 12 years unless renewed by Congress, and would coordinate with existing state sandbox programs in Utah and Texas, allowing for joint applications from participants who could benefit from both federal and state regulatory relief simultaneously.
At the state level, Utah became the first US state to operate an AI-specific regulatory sandbox when it enacted the Utah Artificial Intelligence Policy Act (UAIP) in 2024. The UAIP established the Office of Artificial Intelligence Policy to oversee the Utah AI Laboratory Program, known as the AI Lab. Utah’s office has broad authority to grant entities up to two years of “regulatory mitigation” — including exemptions from applicable state regulations, capped civil penalties, and cure periods — while they develop pilot AI programs and gather feedback from industry experts, academics, regulators, and community members. The AI Lab’s first six months of operation focused on mental health applications, producing legislation that directly regulates AI mental health chatbots under state law.
Texas Establishes a 36-Month AI Testing Program Under TRAIGA
Texas significantly expanded the US sandbox landscape when Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) on June 22, 2025, making Texas the fourth US state to enact comprehensive AI legislation, following Colorado, Utah, and California. One of TRAIGA’s most consequential provisions is the establishment of a regulatory sandbox program administered by the Texas Department of Information Resources in consultation with the newly created Texas Artificial Intelligence Council. The TRAIGA sandbox allows approved participants to develop and test AI systems in a controlled environment, temporarily exempt from certain state licensing and compliance requirements, for periods of up to 36 months. Participants are required to submit quarterly reports on system performance, risk mitigation outcomes, and stakeholder feedback — generating empirical data intended to inform future legislative and regulatory reforms.
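To illustrate the kind of structured record TRAIGA’s quarterly reporting implies, the required elements can be modeled as a simple data class. The schema below is a hypothetical sketch for illustration only, not the Texas Department of Information Resources’ actual reporting format; all field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class QuarterlySandboxReport:
    """Hypothetical sketch of a TRAIGA-style quarterly report record."""
    participant: str
    quarter_ending: date
    performance_summary: str                          # system performance during the quarter
    risk_mitigation_outcomes: list = field(default_factory=list)
    stakeholder_feedback: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A submittable report needs at least a performance summary and
        # one documented risk-mitigation outcome (illustrative rule).
        return bool(self.performance_summary) and bool(self.risk_mitigation_outcomes)

report = QuarterlySandboxReport(
    participant="Example AI Co.",
    quarter_ending=date(2026, 3, 31),
    performance_summary="Model accuracy stable; two incidents logged and resolved.",
    risk_mitigation_outcomes=["Added human review for low-confidence outputs"],
)
print(report.is_complete())  # True
```

A standardized record like this is what turns the sandbox’s reporting obligation into an evidence base regulators can actually aggregate across participants.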
Delaware joined this trend when Governor Matt Meyer signed a joint resolution establishing the Delaware AI Sandbox, a supervised environment for testing and deploying artificial intelligence technologies with a focus on high-impact sectors including corporate governance, biotechnology, healthcare, chemicals, and finance. Delaware’s move is particularly notable given its central role in US corporate law, as a large share of Fortune 500 companies are incorporated there. Regulatory observers have noted that Delaware’s sandbox could set important precedents for how AI systems interact with corporate governance obligations across the entire US business ecosystem.
Singapore Leads Asia-Pacific with Dual Sandbox Architecture
Singapore has emerged as one of the most advanced sandbox ecosystems in the world, operating multiple AI testing frameworks simultaneously. The country’s Generative AI Evaluation Sandbox is overseen by the Infocomm Media Development Authority (IMDA) and the AI Verify Foundation — a nonprofit subsidiary wholly owned by IMDA that drives Singapore’s AI governance testing efforts. The sandbox allows participants to collaboratively assess generative AI technologies through an “Evaluation Catalogue,” which compiles common technical testing tools and recommends a baseline set of evaluation tests for generative AI products. This infrastructure gives companies a structured methodology for demonstrating the safety and reliability of their AI systems before deploying them commercially.
In July 2025, Singapore announced a second and more ambitious initiative: the Global AI Assurance Sandbox. This new program specifically targets the rapidly growing field of agentic AI — systems capable of taking autonomous actions across complex environments — and addresses risks such as data leakage and vulnerability to prompt injection attacks. The Global AI Assurance Sandbox reflects Singapore’s recognition that the risks posed by agentic AI systems are qualitatively different from those associated with earlier generations of AI tools, and that governance frameworks must evolve accordingly. Singapore’s dual-sandbox architecture positions the country as a leading testing ground not only for domestic AI innovation but also for international companies seeking to validate AI products before launching them across Southeast Asia and broader Asian markets.
The UAE, Kenya, and Brazil Expand the Global Sandbox Map
The United Arab Emirates established one of the earliest AI-specific regulatory sandboxes in the world when it launched its Regulations Lab in January 2019, following authorization through federal legislation. The UAE’s sandbox allows companies to apply for temporary licenses to test and vet innovations involving artificial intelligence and other emerging technologies. The Regulations Lab also serves a forward-looking legislative function, using the outcomes of sandbox experiments to anticipate and develop future regulatory frameworks. The UAE has stated an ambition to become the world’s leading AI regulatory testing ground, and its early head start has given it a significant advantage in attracting international AI companies seeking a permissive but structured testing environment.
Kenya represents a compelling example of how developing nations are deploying sandbox frameworks to both harness AI’s potential and address localized challenges. The Kenya National Artificial Intelligence Strategy 2025–2030 sets out an ambitious plan to align domestic policy with broader digital trends across sub-Saharan Africa while remaining grounded in local data and market ecosystems. Kenya operates two separate AI sandboxes, reflecting both its domestic priority sectors and its ambition to participate in AI development at global scale. Brazil has similarly introduced its own AI-focused regulatory sandbox as the country moves to position itself as a regional AI governance leader in Latin America.
The OECD has played a central role in shaping the global expansion of AI sandbox governance. Its AI Principles recommend that governments use controlled experimentation — including regulatory sandboxes — to provide environments in which AI systems can be tested and scaled up as appropriate. The OECD has also published detailed analysis of the challenges inherent in AI sandbox programs — including the need for interdisciplinary cooperation, the development of AI expertise within regulatory agencies, regulatory interoperability across jurisdictions, and the management of potential impacts on innovation and competition.
Key Tools and Frameworks Supporting AI Governance Sandbox Programs
Effective AI governance sandbox programs are not simply administrative arrangements — they depend on sophisticated technical tools, evaluation frameworks, and governance architectures to function properly. The following are the most widely adopted tools and frameworks currently driving sandbox governance around the world.
- AI Verify Foundation’s Testing Toolkit (Singapore): Developed by the IMDA-affiliated AI Verify Foundation, this toolkit provides a comprehensive suite of technical tests for assessing the safety, reliability, and fairness of AI systems. The framework includes standardized evaluation protocols drawn from a collaboratively maintained Evaluation Catalogue, enabling consistent comparisons across different AI products and developers. It has become a reference standard for several regulatory authorities in the Asia-Pacific region seeking to implement structured AI assessment processes.
- EU AI Act Conformity Assessment Procedures: For AI systems classified as high-risk under the EU AI Act, conformity assessment procedures provide a structured pathway for demonstrating regulatory compliance. Participating in an approved national sandbox environment allows companies to use sandbox documentation as evidence of compliance with the Act, providing significant commercial value beyond the immediate testing period. The procedures are designed to integrate with existing CE marking processes where applicable.
- NIST AI Risk Management Framework (RMF): The US National Institute of Standards and Technology’s AI Risk Management Framework provides a voluntary governance structure that many US state sandbox programs reference or incorporate. The RMF’s four core functions — Govern, Map, Measure, and Manage — give organizations a systematic method for identifying, assessing, and responding to AI-related risks throughout the product lifecycle.
- MITRE Federal AI Sandbox: Announced in 2024, the MITRE Federal AI Sandbox is a partnership with NVIDIA designed to provide a secure environment for US federal agencies to experiment with and deploy advanced AI solutions. The program aims to accelerate AI research and development for government applications, including the training of large language models, enhancement of military command and control systems, infrastructure security, and fraud detection.
- Utah AI Lab Program Framework: Utah’s AI Lab program offers a replicable model for state-level AI governance sandbox implementation. Its “regulatory mitigation” mechanism — which includes statutory exemptions, capped civil penalties, and cure periods — provides a legal architecture that other US states are actively studying as they develop their own programs.
- TRAIGA Sandbox Quarterly Reporting System (Texas): The TRAIGA sandbox’s requirement for quarterly performance and risk mitigation reports creates a structured feedback loop between AI developers and regulators. This data-generation mechanism is designed not only to inform the sandbox’s own operations but to build an evidence base for future legislative action at the state and potentially federal level.
- EU Digital Innovation Hubs (EDIHs) Network: With over 150 hubs operating across EU member states, the EDIH network functions as a distributed support infrastructure for AI sandbox access. The hubs provide technical expertise, compliance guidance, and connections to testing facilities — lowering barriers to sandbox participation for smaller companies and organizations that might otherwise be excluded from regulatory engagement processes.
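The NIST RMF’s four functions lend themselves to being read as a repeating lifecycle rather than a one-off checklist. The following Python sketch is purely illustrative: the function names mirror the RMF’s core functions, but the risk register, severity scores, and threshold logic are assumptions for the sake of the example, not NIST tooling.

```python
# Illustrative sketch of one pass through the NIST AI RMF's four core
# functions (Govern, Map, Measure, Manage). The function names mirror the
# RMF; the risk register and scoring below are hypothetical.

def govern(register: dict) -> dict:
    # Govern: set organizational policy and risk tolerance up front.
    register["risk_tolerance"] = 3  # hypothetical severity threshold (1-5)
    return register

def map_risks(register: dict) -> dict:
    # Map: identify context-specific risks for the AI system.
    register["risks"] = ["bias in training data", "prompt injection"]
    return register

def measure(register: dict) -> dict:
    # Measure: assess each identified risk (toy severity scores, 1-5).
    register["scores"] = {"bias in training data": 4, "prompt injection": 3}
    return register

def manage(register: dict) -> dict:
    # Manage: flag risks at or above the governed tolerance for mitigation.
    threshold = register["risk_tolerance"]
    register["to_mitigate"] = [
        risk for risk, score in register["scores"].items() if score >= threshold
    ]
    return register

# One full cycle; in practice the RMF is iterated across the product lifecycle.
register = manage(measure(map_risks(govern({}))))
print(register["to_mitigate"])  # both risks meet the threshold here
```

The point of the sketch is the ordering: governance decisions (the tolerance) are made before risks are mapped and measured, so the Manage step responds to policy rather than improvising it.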
Challenges and Risks in Scaling AI Regulatory Sandbox Programs
Despite their growing adoption, AI regulatory sandboxes are not without significant challenges. One of the most widely cited concerns is the risk of creating a fragmented regulatory landscape — particularly in countries like the United States, where sandbox programs are proliferating at the state level without a coordinating federal framework. Representative Ted Lieu of California highlighted this tension during a congressional hearing on AI governance, noting that even two different state standards for AI training would create a compliance impossibility for frontier AI laboratories, given the enormous financial and computational resources required to train large AI models.
The SANDBOX Act introduced by Senator Cruz is partly designed to address this problem, with provisions for coordinating federal and state sandbox activities and creating mechanisms for joint applications. However, critics have noted that the 12-year sunset provision and the voluntary nature of the program may limit its effectiveness in creating the durable, nationally consistent framework that AI developers say they need to invest confidently at scale.
There are also important equity and access considerations. Sandbox programs that require significant legal, technical, and administrative resources to enter can inadvertently favor large companies over the startups and smaller enterprises they are ostensibly designed to help. The OECD has specifically flagged the importance of comprehensive eligibility criteria and assessment processes that ensure sandboxes remain genuinely accessible to a diverse range of participants. The EU’s EDIH network is one structural response to this concern, but observers note that the quality of EDIH support varies significantly across regions and member states.
Additionally, while regulatory sandboxes shield participating organizations from administrative fines during the testing period, they do not remove liability for damages caused to third parties. This distinction — between regulatory protection and civil liability — is crucial for companies assessing the risk profile of sandbox participation, particularly in sensitive sectors such as healthcare, financial services, and criminal justice.
Frequently Asked Questions
What is an AI regulatory sandbox?
An AI regulatory sandbox is a controlled environment established by a government or regulatory authority where companies can develop, test, and deploy AI products or services under relaxed regulatory conditions for a defined period, subject to ongoing oversight. The goal is to enable innovation while generating empirical data that can inform future regulation and policy.
Which countries currently have AI regulatory sandboxes?
As of early 2026, countries operating AI regulatory sandboxes include Singapore, the United Arab Emirates, Spain, Germany, the United Kingdom, Brazil, Kenya, and the United States at the state level in Utah, Texas, and Delaware. EU member states are required to establish sandboxes by August 2026 under the EU AI Act’s Article 57.
What does the EU AI Act require from member states regarding sandboxes?
Article 57 of the EU AI Act mandates that each EU member state establish at least one national AI regulatory sandbox by August 2, 2026. Participating companies can use documentation from sandbox testing as evidence of compliance with the AI Act, and they are protected from administrative fines for infringements during the testing period as long as they follow national authority guidance.
What is the US SANDBOX Act?
The SANDBOX Act, introduced by Senator Ted Cruz in September 2025, would establish a federal AI regulatory sandbox program administered by the White House Office of Science and Technology Policy. It would allow US companies to apply for waivers or modifications of federal regulations in order to test AI products and services, and would coordinate with existing state-level sandbox programs in Utah and Texas.
How long can companies participate in an AI sandbox?
The duration varies by jurisdiction. Utah’s AI Lab program grants up to two years of regulatory mitigation. Texas’s TRAIGA sandbox allows participation for up to 36 months. The UAE’s Regulations Lab issues temporary licenses of varying lengths. EU member state sandboxes are being designed according to national frameworks that comply with the AI Act’s requirements.
What risks remain even when participating in a regulatory sandbox?
While sandbox participation typically provides protection from administrative regulatory fines, companies remain fully liable for any damages caused to third parties as a result of their AI system’s operation during the testing period. Organizations must therefore continue to implement robust risk management, monitoring, and incident response practices even within a sandbox environment.
How are AI sandboxes different from general innovation sandboxes?
General innovation sandboxes — common in fintech regulation — focus primarily on testing compliance with existing financial or commercial regulations. AI-specific regulatory sandboxes are broader in scope, addressing risks unique to AI systems such as algorithmic bias, data privacy, transparency, safety, and robustness. They are also more likely to engage with fundamental rights considerations, particularly in high-risk application domains such as healthcare, law enforcement, and education.
Conclusion
The global expansion of AI governance and regulatory sandbox tools represents one of the most significant shifts in technology policy of the decade. From the EU’s landmark AI Act mandating national sandboxes across 27 member states, to the United States’ growing patchwork of state-level programs in Utah, Texas, and Delaware, to Singapore’s dual-framework approach addressing both generative and agentic AI, governments are investing heavily in controlled experimentation as the foundation for evidence-based AI regulation. The UAE’s early leadership, Kenya’s ambition, and Brazil’s regional strategy complete a picture of a world increasingly committed to testing before regulating — learning through structured trials rather than imposing top-down rules in advance of operational knowledge.
Critical technical tools, from Singapore’s AI Verify Toolkit and the EU’s conformity assessment procedures to the NIST AI Risk Management Framework and MITRE’s federal sandbox partnership with NVIDIA, are providing the evaluation infrastructure that makes meaningful sandbox governance possible. At the same time, challenges around fragmentation, accessibility, and the boundaries of regulatory versus civil liability protection must be addressed if sandbox programs are to deliver on their full potential. The coming years, shaped by the EU AI Act’s implementation deadlines, the progress of the US SANDBOX Act, and the operational results of state programs in Utah and Texas, will be decisive in determining whether AI regulatory sandboxes become a durable pillar of global AI governance or a transitional measure overtaken by more comprehensive legislative frameworks.