Europe's AI Act Draws Mixed Responses from Tech Firms

Table of Contents
1. AI Act Adopted by the EU
2. EU Tech Companies Oppose Stringent Rule for AI System Regulation
INTRODUCTION
The European Union’s landmark AI Act has received a mixed reaction from the region’s tech industry, with some welcoming the legislation’s attempt to regulate the development and use of artificial intelligence, while others have expressed concerns about its potential to stifle innovation.
- The EU recently reached a provisional agreement on rules to govern the rapidly expanding AI industry.
- The Act lays out a risk-based strategy for regulating the various AI systems in use in the region.
- Many worry that the Act's strict requirements could impede innovation in the region.
AI ACT ADOPTED BY THE EU
Approved on December 8, the Act establishes a risk-based framework for AI regulation, imposing the strictest rules on the systems deemed most dangerous.
The EU Commission states that systems with "minimal risk," such as spam filters or recommender systems, would not face regulatory scrutiny. Stringent regulations will apply to artificial intelligence systems labeled as "high-risk," and those categorized as "unacceptable risk" will face prohibition.
Because the complete text of the agreement remains undisclosed, it is not yet known which systems will be classified as high-risk. The tech industry has broadly approved of the risk-based approach but expressed concern that the requirements for high-risk systems will be burdensome.
High-risk systems must comply with requirements covering risk mitigation, data quality, activity logging, detailed documentation, clear user information, human oversight, and cybersecurity.
EU TECH COMPANIES OPPOSE STRINGENT RULE FOR AI SYSTEM REGULATION
There is widespread worry that these requirements could burden developers, driving skilled AI professionals out of the region and hindering EU initiatives.
Cecilia Bonefeld-Dahl, DigitalEurope's director general, warned that fulfilling the new requirements, alongside laws like the Data Act, will consume substantial corporate resources, which will go toward legal compliance rather than hiring AI engineers.
According to France Digitale, high-risk AI projects or systems must undergo a costly and time-intensive process to obtain a CE marking.
"Europe's current solution amounts to regulating mathematics, which is illogical," France Digitale said. Penalties for prohibited AI applications can reach €35 million or 7% of global annual turnover; other rule violations can incur fines of up to €15 million or 3%; and providing false information can result in fines of up to €7.5 million or 1.5%.
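The tiered penalty structure above can be illustrated with a short sketch. Note the assumptions: the function name and tier table are illustrative, and the sketch assumes the applicable cap is the higher of the fixed amount and the turnover percentage, as widely reported; the final legal text governs.

```python
# Illustrative sketch of the AI Act's tiered penalty caps.
# Assumption: the cap is the greater of the fixed amount and the
# percentage of global annual turnover (as widely reported).

PENALTY_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),         # banned applications
    "other_violations": (15_000_000, 0.03),      # other rule breaches
    "incorrect_information": (7_500_000, 0.015), # false/misleading info
}

def penalty_cap(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine in euros for a violation category."""
    fixed, pct = PENALTY_TIERS[violation]
    return max(fixed, pct * global_turnover_eur)

# Example: a firm with €1 billion in global annual turnover
print(penalty_cap("prohibited_ai", 1_000_000_000))  # 70000000.0
```

For large firms the percentage dominates; for smaller firms the fixed amount is the binding cap.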