As the European Artificial Intelligence Act (AI Act) begins its phased rollout, organisations are preparing for a regulatory shift that will influence how they build and communicate with AI. For marketing communications teams, the Act introduces new expectations around transparency, governance, and AI literacy.
To understand what this means, Aspidistra spoke with Arttu Ahava, a lawyer with the intellectual property (IP) law firm Berggren. With over a decade of experience, Ahava emphasises that the AI Act can’t be viewed in isolation: it intersects with personal data, IP, and employee rights legislation. Drawing on three years of training organisations in AI governance, he offers a clear view of how companies should prepare.
The AI Act’s relevance for marketing teams
Where does the AI Act place its primary regulatory focus?
The AI Act places its regulatory focus on high‑risk and prohibited AI systems rather than on everyday tools. Its core obligations apply primarily to the tech companies that develop, modify, or deploy these types of AI in specialised solutions. This includes regulated, safety-critical products, such as medical devices, diagnostic tools, or safety and navigation systems in things like elevators, ships, or aircraft.
The high-risk category also covers government decision-making systems, HR and recruitment tools, insurance decision engines, and most biometric recognition technologies. These systems are considered high-risk because errors, bias, or opaque decision-making can have serious real-world consequences for individuals.
What does the Act mean for AI-generated or AI-assisted content?
Article 50’s transparency requirements create most of the impact. The rules require AI‑generated or AI‑enhanced content to be marked as such, but the focus is not on human‑readable labels. Instead, the Act places the responsibility on the tech companies, which must add machine‑readable tags or metadata identifying AI involvement. One could argue that the purpose is less about protecting citizens’ rights and more about preventing a “dead internet” scenario, where LLMs iteratively train themselves on synthetic data and become less effective.
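To make the idea of machine‑readable tagging concrete, here is a minimal sketch of one way provenance metadata can be written into an image file. It uses Pillow’s PNG text chunks and IPTC’s “digital source type” vocabulary purely as an illustration; production workflows are more likely to rely on emerging provenance standards such as C2PA Content Credentials, and the file names here are hypothetical.

```python
# Minimal sketch: embedding a machine-readable "AI-generated" marker in a PNG.
# Assumes Pillow is installed (pip install Pillow); file names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# IPTC's controlled-vocabulary term for fully AI-generated media.
IPTC_AI_SOURCE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Re-save an image with a text chunk flagging it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("Iptc4xmpExt:DigitalSourceType", IPTC_AI_SOURCE)
    img.save(dst_path, pnginfo=meta)

label_as_ai_generated("campaign_visual.png", "campaign_visual_labeled.png")
```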
In addition, the deepfake rules are designed to prevent people from being misled. If you create an image or video of a real person, even as a parody, you must make it clear that the real person did not say or do those things. While there is some allowance for artistic licence, you, as the creator, carry the responsibility for ensuring audiences understand the content is machine-generated or manipulated.
There is also a transparency requirement when you use AI to inform the public on matters of public interest, such as political advertising or news content. Again, you must tell people when you use AI. However, you don’t have to disclose it if a human oversees the content and takes editorial responsibility. This means that someone has reviewed, approved, and taken accountability for the final content, so the publisher, not the AI, stands behind it.
Clearing up misconceptions about the AI Act
For everyday marcomms use, are the most significant risks in the AI Act or elsewhere, like GDPR or IP law?
For everyday use of AI, the most significant compliance risks don’t come from the AI Act. As long as you’re not engaging in high‑risk or prohibited practices, you’re far more likely to run into issues under GDPR or IP law. The AI Act is still important, but for typical marcomms workflows, it’s a distant second.
The European Commission (EC) has also published recommendations and voluntary codes of practice, and these may end up being more relevant for marketing teams than the Act itself. These documents go much further in shaping responsible content practices, especially for AI‑assisted creation.
In contrast, the AI Act focuses on highly regulated sectors, so it has little direct reach into everyday, low‑risk communications work.
Which parts of the AI Act are being overstated or commonly misunderstood?
There’s far too much emphasis on AI literacy as if it were a major, enforceable requirement of the AI Act. The EC’s guidance on this point doesn’t read like something drafted as a legal obligation, and the accompanying FAQ reinforces that it’s more of an educational initiative than a binding rule. There’s also a voluntary pledge, but as a legal instrument, it has very little weight.
Crucially, there is no enforcement mechanism. The Finnish legislator has explicitly stated that there are no sanctions for failing to meet the literacy provision, and the Act itself does not establish any penalties. Despite this, it’s being discussed as one of the few obligations already “in force,” even though it arguably has little practical effect.
Because of this misunderstanding, I’m already seeing companies sell training sessions to “fulfil” the literacy requirement, but at this stage, that’s largely artificial.
Strengthening governance and compliance
How will the Act change the way legal, tech, and comms teams work together?
With regard to high-risk AI, it’s going to require a cross-team effort. You’re going to have people from legal, tech, IT, and the business line all working together to get products certified, for example, because high-risk systems will require CE markings and documented compliance processes. It will be similar to what we already see with GDPR or other marketing-related regulations.
What does good AI governance look like beyond the bare legal minimum?
Organisations should put more emphasis on internal AI governance. The AI Act sets only a baseline. Companies still need to consider consent and transparency when they use someone’s data or likeness. They also need to decide what their own ethical standards require when they publish AI‑assisted content.
This means asking: What is our organisation’s ethical position? How transparent do we want to be about how we create content, especially when combining human and AI contributions? These questions go beyond current and even future legal obligations, but they’re increasingly important.
Companies also need to watch what their peers are doing and how industry norms are evolving. Legal compliance is not the same as ethical behaviour, so organisations should aim for a higher standard than simply avoiding unlawful actions.
Preparing for what comes next
What should marcomms teams be doing now to prepare?
If you want to stay compliant and follow the EC’s vision of best practices, there are guidance documents worth reviewing. These include the AI Pact and materials on implementing AI literacy, as well as the draft Code of Practice for transparency.
These documents target AI providers, but the EC ultimately wants all AI‑generated content to carry machine‑readable labels. People could then use tools to check whether AI edited or generated an image, video, or other media, giving them a direct way to confirm how the content was created.
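As a counterpart to the labelling sketch above, here is an equally simplified illustration of the checking step: reading that hypothetical marker back out of a PNG. Real verification tools, such as C2PA validators, inspect cryptographically signed manifests rather than a plain text chunk.

```python
# Companion sketch: checking a PNG for the AI-provenance marker added earlier.
from PIL import Image

def is_labeled_ai_generated(path: str) -> bool:
    """Return True if the image carries the IPTC 'trainedAlgorithmicMedia' tag."""
    img = Image.open(path)
    # Pillow exposes PNG text chunks via the .text mapping on PNG images.
    source_type = getattr(img, "text", {}).get("Iptc4xmpExt:DigitalSourceType", "")
    return "trainedAlgorithmicMedia" in source_type

print(is_labeled_ai_generated("campaign_visual_labeled.png"))
```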
What message would you share with comms professionals feeling anxious about the Act?
Stay updated, but don’t expect the AI Act to give you detailed guidance on what is or isn’t appropriate. The more important work is building your own governance framework and deciding how your organisation will use AI: which tasks are suitable for automation, which must remain human-led because they are too sensitive or strategically important, and where you draw the line on control and oversight.
Overall, the AI Act will have only a limited direct impact on marketing communications. The most significant direct effect will come from the transparency provisions, and even those may not affect you much. The broader impact will be indirect: it will shape expectations, prompt organisations to think more carefully about their use of AI, and encourage partners and service providers to adopt more transparent and ethical practices.
Main image by and machines on Unsplash