AI Law in Brazil and EU AI Act: What Changes for Those Using Artificial Intelligence in 2026
2026 is the year in which artificial intelligence regulation stopped being a theory and became an operational reality. Europe is implementing the EU AI Act with concrete deadlines. Brazil is moving forward with PL 2338/2023, which has already passed through the Senate and is now being processed in the Chamber. And the United States, historically resistant to federal regulation, sees Colorado taking the lead with a state law that comes into force in June.
For those who develop, sell or use AI systems -- from startups to large corporations -- understanding these regulations is not optional. It's a question of market survival. Companies that ignore compliance requirements will face fines, market restrictions and loss of customer trust.
This article details the three main regulations, compares their approaches, explains who is affected, and offers a practical preparation guide. If you work in technology, digital marketing or any area that touches artificial intelligence, this content is essential.
1. The global regulatory landscape in April 2026
The AI regulatory landscape in 2026 is fragmented. There is no unified global regulation -- each region or country is following its own path, with different philosophies and deadlines. This creates complexity for companies that operate in multiple markets, but it also creates opportunities for those who prepare before their competitors.
Europe led the movement with the EU AI Act, approved in March 2024 and with staggered implementation between 2025 and 2027. Brazil follows a similar path, with PL 2338 inspired by the European approach but adapted to the Brazilian reality. The USA, in turn, continues without federal legislation, leaving individual states to take the initiative.
China, Canada, Japan and South Korea also have regulations at different stages of implementation. But for professionals and companies operating in the Brazil-Europe-USA axis, the three regulations that we will analyze here are those that have the greatest impact on their daily lives.
The regulatory race is no accident
Governments are not regulating AI on a whim. Three factors converge in 2026 to make regulation inevitable:
- Real incidents: documented cases of algorithmic discrimination in hiring, credit and surveillance have increased public pressure for regulation
- Growing capabilities: models like GPT-5, Claude Opus 4.6 and Gemini 2.5 are capable of tasks that seemed like science fiction two years ago. With more power comes more regulatory responsibility
- Geopolitical competition: whoever defines the rules of AI defines global standards. Europe and China want to lead, the USA does not want to be left behind, and Brazil wants to have a voice
2. EU AI Act: what is already in force and what is coming until August
The EU AI Act (European Artificial Intelligence Regulation) was approved by the European Parliament in March 2024 and published in the EU Official Journal in July 2024. Since then, it has been implemented in phases:
Implementation timeline
- February 2025: the ban on unacceptable AI practices came into force -- social scoring systems, subliminal manipulation and exploitation of vulnerabilities are prohibited
- August 2025: obligations for general-purpose AI models (GPAI) came into effect. Providers like OpenAI, Google and Anthropic need to document training data, publish technical summaries and comply with copyright rules
- August 2026 (next): obligations for high-risk systems come into force. This is the most impactful phase -- it affects thousands of companies that use AI in areas such as HR, healthcare, education and finance
- August 2027: all obligations in force, including for products already on the market
Mandatory regulatory sandbox
One of the most innovative aspects of the EU AI Act is the requirement that each member country establish at least one AI regulatory sandbox by August 2026. Sandboxes are controlled environments where companies can test AI systems under regulatory supervision, without facing penalties if something goes wrong during the test.
The idea is to allow innovation without stifling it. Startups can develop high-risk systems within the sandbox, validate compliance and only then launch them on the market. Countries like Spain and the Netherlands already operated voluntary sandboxes before the EU AI Act, and now they serve as a model for others.
Fines and penalties
The EU AI Act has three levels of fines:
- Prohibited practices: up to 35 million euros or 7% of global revenue (whichever is greater)
- High-risk violations: up to 15 million euros or 3% of global revenue
- Incorrect information to regulators: up to 7.5 million euros or 1% of global revenue
For SMEs and startups, the values are proportionally adjusted. But the message is clear: Europe takes AI regulation as seriously as it took data protection with the GDPR.
Extraterritorial impact: like the GDPR, the EU AI Act applies to any company that provides AI systems to the European market, regardless of where it is based. A Brazilian startup that sells AI-enabled SaaS to customers in Germany needs to comply with the EU AI Act.
3. PL 2338/2023: Brazil’s AI Law
Bill 2338/2023, known as the Legal Framework for Artificial Intelligence, is the main Brazilian legislative proposal to regulate AI. The PL was approved by the Senate in December 2024 and is now being processed in the Chamber of Deputies. As we discussed in our analysis of AI and regulation in Brazil, the country is at a critical moment of definition.
Fundamental principles of PL 2338
The Brazilian project adopts a risk-based approach, similar to the EU AI Act, but with adaptations for the Brazilian context:
- Centrality of the human person: AI systems should serve people, not the other way around. Automated decisions that affect fundamental rights need human oversight
- Non-discrimination: AI systems cannot discriminate based on race, gender, sexual orientation, disability or social origin. Algorithmic bias audits are mandatory for high-risk systems
- Transparency: users must know when they are interacting with an AI system, and automated decisions must be explainable
- Security and privacy: compliance with the LGPD is a fundamental requirement
- Responsible innovation: regulatory sandboxes designed to foster innovation without excessive bureaucracy
Risk classification in Brazil
PL 2338 classifies AI systems into three categories:
- Unacceptable risk (prohibited): lethal autonomous weapons, social scoring by the government, subliminal manipulation that causes harm
- High risk: systems used in health, education, public safety, work, credit and justice. These require impact assessment, technical documentation and human supervision
- Other systems: minimum transparency obligations, no heavy regulation
Regulatory body
The PL provides for the creation of a national AI authority -- the National Artificial Intelligence Regulation and Governance System (SIA). This body would be responsible for monitoring, applying sanctions, issuing guidelines and coordinating regulatory sandboxes. There is debate about whether the SIA would be a new agency or whether its functions would be absorbed by existing bodies such as ANPD (National Data Protection Authority).
The budget issue is central. Creating a new regulatory agency is expensive, and Brazil already has a history of underfunded agencies. The alternative of expanding the ANPD's responsibilities is more pragmatic but could overload an agency that already faces challenges with the LGPD.
4. Colorado AI Act: the first American state to regulate AI
While the American Congress debates and fails to act, the state of Colorado has taken the lead. The Colorado AI Act (SB 24-205) was passed in May 2024 and goes into effect on February 1, 2026, with enforcement from June 2026.
Focus on consequential decisions
Unlike the EU AI Act, which regulates AI broadly, the Colorado AI Act has a specific focus: AI systems that make or influence consequential decisions about individuals. Consequential decisions include:
- Approval or denial of employment, promotion or dismissal
- Approval or denial of credit, loans or insurance
- Access to education or educational opportunities
- Access to healthcare services or medical coverage
- Access to housing
- Legal or government services
Obligations for deployers (those who use AI)
The Colorado AI Act creates obligations for both developers and deployers (companies that implement AI systems). For deployers:
- Risk management policy: document how the AI system is used, what risks it presents and how they are mitigated
- Impact assessment: before using AI for decision-making, carry out a documented assessment of the impact on affected individuals
- Consumer notification: inform consumers when a consequential decision was made or influenced by AI
- Right of appeal: offer a mechanism for the individual to challenge the decision and request human review
Why Colorado matters nationally
The Colorado AI Act is important not just because of what it regulates, but because of the precedent it creates. Just as California set privacy standards with the CCPA/CPRA before any federal law, Colorado can set the standards for AI regulation in the US. Other states -- California, New York, Illinois, Virginia -- already have similar proposals in the pipeline.
For companies operating in multiple American states, regulatory fragmentation is a logistical nightmare. Complying with 50 different state laws is exponentially more expensive than complying with one federal law. This could be the catalyst that finally forces Congress to act.
5. Comparison: Europe vs Brazil vs USA
To make it easier to understand the differences and similarities, see the side-by-side comparison:
| Aspect | EU AI Act | PL 2338 (Brazil) | Colorado AI Act |
|---|---|---|---|
| Status | In effect (phases) | Approved by the Senate, under analysis by the Chamber | Enforcement Jun/2026 |
| Approach | Risk-based (broad) | Risk-based (broad) | Consequential decisions (narrow focus) |
| Range | Extraterritorial | National + extraterritorial | State (Colorado) |
| Maximum fine | 35M EUR / 7% revenue | R$50M / 2% revenue | $20,000 per violation |
| Regulatory sandbox | Mandatory per member state | Provided for, not detailed | Not provided for |
| Regulatory body | Each country + EU Commission | SIA (to be created) | Colorado Attorney General |
| GPAI/Foundational models | Specific rules | Provided for | Not addressed |
| AI Transparency | Mandatory | Mandatory | Mandatory for consequential decisions |
The general pattern is clear: Europe has the most comprehensive and strictest regulation. Brazil follows the European model with adaptations. The US is fragmented, with state legislation filling the federal vacuum. Professionals who work in law and advocacy with AI are already specializing in these regulatory differences as a practice area.
6. High-risk systems: what they are and who is affected
The concept of "high risk" is central to all three regulations. But what exactly makes an AI system "high risk"?
Practical definition
An AI system is considered high risk when its decisions or recommendations could significantly affect a person's life, health, safety, fundamental rights or economic opportunities. In practice, this includes:
- Recruitment and HR: systems that filter resumes, rank candidates or recommend promotions and dismissals
- Credit and finance: credit scoring models, loan approval, fraud detection that affects the customer
- Health: AI-assisted diagnosis, patient triage, treatment recommendation
- Education: automated assessment systems, educational content recommendation, admissions
- Justice and security: predictive policing, recidivism risk assessment, biometric surveillance
- Critical infrastructure: AI systems in energy, transport, water and telecommunications
Obligations for high-risk systems
All three regulations require, for high-risk systems:
- Impact assessment before deployment
- Detailed technical documentation (training data, performance metrics, known limitations)
- Meaningful human oversight (not just “rubber stamping”)
- Continuous post-deployment monitoring
- Dispute mechanism for affected individuals
- Bias and discrimination audit
Who is affected in practice: if your company uses AI for any decision that directly impacts a person's life -- whether a hiring recommendation, credit approval or medical diagnosis -- you are probably in scope for at least one of these regulations.
7. Regulatory sandboxes: the testing ground for AI
The regulatory sandbox concept is one of the most interesting innovations in AI regulation. It works like this: a regulatory body creates a controlled environment where companies can test AI systems under supervision, without suffering penalties if the system fails during the test period.
The EU AI Act requires each EU member country to have at least one operational sandbox by August 2026. Spain has been operating one since 2022 and has served as a model for European regulation. The Spanish sandbox allowed startups like Ideatis and Sherpa.ai to test high-risk systems in healthcare and finance with direct guidance from the regulator.
Benefits for companies
- Risk reduction: validate compliance before commercial launch
- Access to regulatory guidance: direct feedback from the regulator on what needs to be adjusted
- Competitive advantage: companies that go through the sandbox gain a kind of "seal of approval" that builds trust
- Lower compliance cost: fixing problems in the sandbox is much cheaper than fixing them after a fine
PL 2338 provides for sandboxes in Brazil but without detail. The Colorado AI Act does not mention sandboxes. This difference reflects distinct philosophies: Europe believes in regulation that supports innovation; the US believes in minimal regulation with accountability after the fact.
8. How to prepare for AI compliance
Regardless of which regulation directly affects your company, the preparation actions are similar. Here is a practical roadmap:
Step 1: Map your AI systems
First of all, identify all the AI systems your company uses or develops. Many companies are surprised to discover that they use AI in more places than they imagined -- from spam filters and content recommendations to marketing automation and customer service chatbots.
Step 2: Classify the risk
For each system identified, classify the risk level according to the applicable regulations. Most digital marketing and commercial automation systems qualify as low or medium risk. But if you use AI for segmentation that affects access to services, for personalized pricing, or for decisions that impact individuals, the system may qualify as high risk.
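To triage at scale, it can help to encode the high-risk domains named by these regulations into a simple screening helper. The sketch below is a minimal illustration in Python, not legal advice: the domain and use-case labels are hypothetical, and the tiers follow the three-level classification described in section 3.

```python
# Illustrative risk triage based on the risk tiers described above.
# Domain/use-case labels are assumptions; real classification needs legal review.

HIGH_RISK_DOMAINS = {
    "recruitment", "credit", "insurance", "health", "education",
    "justice", "public_safety", "critical_infrastructure",
}

PROHIBITED_USES = {
    "social_scoring", "subliminal_manipulation", "lethal_autonomous_weapons",
}

def screen_risk(domain: str, use_case: str) -> str:
    """Return a provisional risk tier for internal triage purposes only."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"  # banned under the EU AI Act and PL 2338
    if domain in HIGH_RISK_DOMAINS:
        return "high"  # impact assessment, documentation, human oversight
    return "other"  # minimum transparency obligations

print(screen_risk("credit", "loan_approval"))  # -> high
```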
Step 3: Document everything
All regulations require documentation. Start documenting now (a structured record sketch follows this list):
- What data is used to train or feed each system
- What decisions the system makes or influences
- What performance and fairness metrics are monitored
- Who is responsible for overseeing each system
- What the appeal process is for affected individuals
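A low-friction way to keep this documentation audit-ready is to store one structured record per system instead of scattered documents. A minimal sketch, assuming a Python stack; every field name here is hypothetical, not prescribed by any of the regulations:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One record per AI system; field names are illustrative."""
    name: str                     # e.g. "resume-screening-model"
    data_sources: str             # data used to train or feed the system
    decisions_influenced: str     # decisions the system makes or influences
    monitored_metrics: list[str]  # performance and fairness metrics tracked
    owner: str                    # person responsible for oversight
    appeal_process: str           # how affected individuals can contest
    last_reviewed: date = field(default_factory=date.today)

record = AISystemRecord(
    name="resume-screening-model",
    data_sources="historical applications, 2019-2024",
    decisions_influenced="shortlisting candidates for interviews",
    monitored_metrics=["selection rate by gender", "false negative rate"],
    owner="people-analytics lead",
    appeal_process="candidate may request human re-review via HR portal",
)
```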
Step 4: Implement human supervision
No regulation accepts “AI on autopilot” for high-risk decisions. Implement human review processes with appropriate training. Reviewers need to understand what the AI is doing and have real authority to override its decisions.
Step 5: Continuously monitor and audit
Compliance is not a one-time event -- it is an ongoing process. Establish an audit cadence (quarterly for high risk, semi-annual for others) and monitor fairness and performance metrics in real time.
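To make "monitor fairness metrics" concrete, one common screening heuristic is to compare selection rates across groups and flag disparities below the four-fifths threshold used in US employment practice. The sketch below assumes that heuristic; none of the three regulations prescribes a specific statistical test, so treat this as one possible check, not a compliance guarantee.

```python
def flag_disparate_impact(outcomes: dict[str, tuple[int, int]],
                          threshold: float = 0.8) -> list[str]:
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the "four-fifths rule" heuristic).
    `outcomes` maps group -> (selected, total)."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Hypothetical quarterly hiring audit: (selected, total) per group.
audit = {"group_a": (120, 400), "group_b": (70, 350)}
print(flag_disparate_impact(audit))  # -> ['group_b'] (0.20 < 0.8 * 0.30)
```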
9. Practical impact for companies and professionals
For startups and entrepreneurs
Startups developing AI products need to build compliance in from the start -- what privacy by design was for the GDPR era is now "AI compliance by design". This is not just a cost -- it's a competitive advantage. Enterprise clients increasingly demand proof of compliance as a prerequisite for hiring AI suppliers.
The cost of compliance may seem prohibitive for small startups, but regulations provide for different treatment for SMEs. The EU AI Act has proportional fines and PL 2338 provides incentives for micro and small companies.
For digital marketing professionals
If you use AI for ad personalization, audience segmentation, dynamic pricing or automating decisions that affect the consumer, be careful. The line between “marketing automation” and “consequential decision-making on an individual” is becoming blurred.
In practice, most uses of AI in digital marketing (campaign optimization, content generation, aggregate data analysis) do not qualify as high risk. But uses such as credit scoring for financial offers, individual profile-based pricing or algorithm-based denial of service may fall within the scope.
For developers
Developers building AI systems need to incorporate the following (see the sketch after this list):
- Detailed logging: record model inputs, outputs and decisions for audit
- Explainability: be able to explain why the model made a certain decision
- Fairness tests: validate that the model does not systematically discriminate against protected groups
- Override mechanisms: allow humans to correct or override model decisions
- Technical documentation: maintain up-to-date documentation on data, training, limitations and performance
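A minimal sketch of the first and fourth items together: logging every model decision and recording any human override, so the audit trail shows who changed what. The structure and names are hypothetical, not tied to any specific framework or regulation:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-decisions")

def log_decision(system: str, inputs: dict, model_output: str,
                 overridden_by: str | None = None,
                 final_output: str | None = None) -> dict:
    """Append-only audit record: inputs, model output, and any human override."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,
        "model_output": model_output,
        "overridden_by": overridden_by,                # reviewer id, if overridden
        "final_output": final_output or model_output,  # what was actually applied
    }
    logger.info(json.dumps(entry))
    return entry

# The model recommends denial; a trained reviewer overrides the decision.
log_decision("credit-scoring-v2",
             inputs={"applicant_id": "a-123", "score": 0.48},
             model_output="deny",
             overridden_by="analyst-07",
             final_output="approve")
```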
The hidden opportunity: AI compliance is creating a new professional category. Just as the GDPR created demand for DPOs (Data Protection Officers), the EU AI Act and PL 2338 will create demand for professionals specializing in AI governance and compliance. Those who position themselves now will reap the rewards first.
10. Sources and references
- Where AI Regulation is Heading in 2026 -- OneTrust
- Brazil AI Act: Key Provisions and Implications -- White & Case
- AI Regulations Around the World 2026 -- GDPR Local
- EU AI Act -- Regulation (EU) 2024/1689 of the European Parliament
- PL 2338/2023 -- Federal Senate of Brazil
- Colorado SB 24-205 -- Colorado AI Act
FAQ
Is Brazil's AI law already in force?
No. PL 2338/2023 was approved by the Senate in December 2024, but is still under analysis in the Chamber of Deputies. The expectation is that it will be voted on in the second half of 2026, with possible effect from 2027. Meanwhile, Brazil has no specific legislation for artificial intelligence.
Does the EU AI Act apply to Brazilian companies?
Yes, if the Brazilian company offers products or services that use AI to citizens or companies in the European Union. The EU AI Act has extraterritorial scope, similar to the GDPR. This means that Brazilian startups selling SaaS with AI to European customers need to comply with European regulatory obligations.
Which AI systems are considered high risk?
Both the EU AI Act and Brazil's PL 2338 classify as high risk the AI systems used in recruitment and personnel selection, credit assessment, medical diagnosis, biometric surveillance, and judicial and educational systems. These systems require impact assessment, transparency, human oversight and detailed technical documentation.
Does the US have a federal AI law?
No. The Colorado AI Act is state legislation, not federal. As of April 2026, the US has no comprehensive federal law regulating AI. Colorado was the first state to pass a specific law, which comes into force in June 2026. Other states such as California and New York have proposals in progress, but none have been approved.