Overview
The AI Act, the first regulation of its kind in Europe, entered into force in August 2024, and its obligations phase in from 2025. This groundbreaking regulation seeks to frame the development of AI while protecting citizens, but it is also sparking heated debate, particularly among technology giants.
What lies behind the call for a pause? Let's look at the likely impact on Swiss and European companies and explore how to turn this challenge into a real opportunity.
What Is the AI Act?
The AI Act is Europe's first comprehensive AI law. It entered into force in August 2024 and regulates the development, marketing, and use of AI systems on the European market.
Its goal is to ensure ethical, human‑centric AI while fostering innovation and supporting the proper functioning of the single market.
Under its risk-based framework, AI systems used for assisted surgery, automated résumé screening, or visa application assessment are classified as “high-risk.”
These technologies must comply with strict requirements around transparency, reliability, and human oversight.
Deepfakes must be clearly labeled as artificially generated. Systems deemed “unacceptable risk”—such as behavioral manipulation or mass biometric surveillance—are outright banned.
The regulation applies to all stakeholders—developers, providers, and professional AI users—including Swiss companies operating in the EU.
Why Are Tech Players Calling for a Pause?
Major tech firms such as Apple, Meta, and Google have asked to delay the implementation of the EU AI Act.
Their main concern is the complexity and rapid rollout of the AI law, especially since rules around general‑purpose AI models (like ChatGPT or Gemini) are still being finalized.
For instance, a company that develops generative AI must not only disclose that users are interacting with a machine, but also publish summaries of the copyright‑protected data used to train its model.
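As an illustration, here is a minimal sketch in Python of how a chatbot backend might attach the machine-interaction disclosure to every response. The names (`ChatReply`, `wrap_reply`) are hypothetical, and the training-data summary remains an editorial document rather than code.

```python
# Minimal sketch (hypothetical names): attaching the machine-interaction
# disclosure for generative-AI systems to every chatbot reply.

from dataclasses import dataclass

AI_DISCLOSURE = "You are interacting with an AI system, not a human agent."

@dataclass
class ChatReply:
    text: str            # the model's output
    ai_generated: bool   # machine-readable flag that travels with the content
    disclosure: str      # user-facing notice shown alongside the reply

def wrap_reply(model_output: str) -> ChatReply:
    """Wrap raw model output so the disclosure can never be omitted."""
    return ChatReply(text=model_output, ai_generated=True, disclosure=AI_DISCLOSURE)

reply = wrap_reply("Here is a summary of your contract.")
print(reply.disclosure)  # displayed in the UI next to the generated text
```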
Industry players worry that this rush could stifle AI innovation, drive up compliance costs, and leave Europe at a disadvantage compared to the U.S. and Asia.
What Will This Mean for Swiss and European Companies?
The AI Act will directly affect European and Swiss companies that market or use AI solutions in the EU.
For example, a Swiss bank using an algorithm for credit approval must demonstrate its AI is non‑discriminatory and explainable—requiring regular audits and customer appeal mechanisms.
In the automotive sector, a manufacturer must ensure autonomous vehicle safety through rigorous compliance testing before market release.
Startups and SMEs, often the drivers of AI innovation, will need to invest in compliance, which poses financial challenges but also offers a quality-and-trust differentiator.
Deadlines are staggered: from February 2025, unacceptable-risk systems are banned; from August 2025, obligations for general-purpose AI models apply; and from August 2026, transparency and risk-management obligations expand across many sectors.
Opportunities: An Ethical Framework That Reassures
The European AI law presents real opportunities for companies:
- Building trust: A company that ensures transparency and safety—e.g., explaining how a chatbot works or flagging deepfakes—reassures customers and partners.
- Competitive edge: Firms that anticipate AI Act requirements can position themselves as responsible leaders. For example, a Swiss fintech compliant with the AI Act may gain easier access to the European market and appeal to ethically minded clients.
- Ethical innovation: The framework encourages the development of more responsible AI, becoming a selling point in sectors like healthcare and finance, where trust is paramount.
Risks: Slowing Down Innovation
On the flip side, strict and rapid implementation of AI regulation in Europe carries risks:
- Slowdown of AI innovation: Startups may be hampered by administrative burdens and compliance costs. For instance, a young company creating recruitment AI must invest in bias management and transparency, potentially delaying market launch.
- Talent exodus: Researchers and developers might relocate to regions with less stringent AI laws.
- Legal uncertainty: Lack of clarity, especially around generative-AI obligations, could discourage investment and experimentation.
How Can Brands Prepare Now?
To get ahead of AI regulation, companies should adopt a structured approach:
- Map all AI systems in use or development, detailing their purpose and risk level (a simple inventory sketch follows this list).
- Form multidisciplinary teams (legal, technical, business) to integrate AI Act requirements and raise awareness of bias and transparency.
- Strengthen AI governance by appointing a compliance officer and establishing ongoing oversight processes.
- Update procurement policies and choose partners that comply with the AI law.
- Invest in compliance tools to automate documentation, audits, and risk management.
- Inform users clearly—e.g., disclose when a user interacts with a chatbot or when content is AI‑generated.
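To make the mapping step concrete, here is a minimal inventory sketch in Python. The system names, risk assignments, and the `RiskTier` enum are hypothetical placeholders; a real register would live in your governance tooling and be reviewed by legal counsel.

```python
# Minimal sketch (hypothetical names): an AI-system inventory keyed to
# the AI Act's risk tiers, used to decide where compliance work starts.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices (e.g. mass biometric surveillance)
    HIGH = "high"                  # strict duties: audits, documentation, human oversight
    LIMITED = "limited"            # transparency duties (e.g. chatbots, deepfake labels)
    MINIMAL = "minimal"            # no specific obligations

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier
    owner: str  # accountable team or compliance officer

inventory = [
    AISystem("cv-screener", "automated résumé screening", RiskTier.HIGH, "HR / Legal"),
    AISystem("support-bot", "customer service chatbot", RiskTier.LIMITED, "Customer care"),
    AISystem("spam-filter", "internal email filtering", RiskTier.MINIMAL, "IT"),
]

# High-risk systems are the ones that need audits and human oversight first.
for system in (s for s in inventory if s.tier is RiskTier.HIGH):
    print(f"Review first: {system.name} ({system.purpose}), owner: {system.owner}")
```

Keeping an owner next to each entry mirrors the governance step above: someone is accountable for every system from day one.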
Our Perspective at Eminence
At Eminence, we view the AI Act as a foundational step for AI and digital transformation in Europe.
We support our clients—SMEs, large companies, and startups—with AI system audits, team training, and tailored compliance processes.
We believe the AI law is an opportunity to build sustainable, ethical, and competitive AI. Anticipating regulation not only avoids penalties, but also builds customer trust and opens new markets.
Our conviction: AI and regulation in Europe are now inseparable—and those who start preparing today will be tomorrow’s leaders.
Conclusion
The AI Act marks a major turning point in Europe’s vision for AI’s future: one where innovation and responsibility go hand in hand.
Though the path to smooth adoption of the regulation may be challenging, especially for startups and SMEs, it also paves the way for renewed user trust and sustainable global competitiveness.
At Eminence, we believe the key lies in anticipating, adapting, and putting ethics at the heart of AI innovation.
By choosing this path today, companies don’t just comply—they build a solid foundation for long‑term success in a rapidly evolving digital world.