As of August 2, 2025, the EU AI Act begins imposing fines and new obligations specifically for General Purpose AI (GPAI) models, marking a key compliance deadline for companies using or developing AI in Austria. In this interview with Alexandra Ciarnau, we address what these changes mean in practical terms for Austrian businesses navigating the evolving regulatory landscape.
As of 2 August 2025, AI Act fines may be imposed and the obligations for General Purpose AI ("GPAI") models apply. What does that mean for Austrian companies?
Alexandra Ciarnau: Well, Austria is still behind schedule with the implementation of the national accompanying law and has not yet designated the competent authorities. As long as these are missing, no penalties can be imposed in Austria for violations of the AI Act (up to EUR 35 million or 7% of global annual turnover). However, this is not a free pass for non-compliance! In the event of violations of applicable laws, regulations and directives, companies and their authorised representatives remain liable under general civil law and data protection law.
Companies should therefore evaluate the applicability of the GPAI obligations. These apply not only to providers of GPAI models, but also to operators who integrate these models into their own systems and modify them. In doing so, they can assume the role of "downstream providers". In that case, they must implement the requirements of Art. 51 et seq. AI Act.
How can companies identify GPAI models?
Alexandra Ciarnau: This is primarily a technical question. I nevertheless recommend drawing up an AI inventory. Not every model of an AI system is regulated. According to Article 3 No 63 of the AI Act, GPAI models have been trained with a large amount of data, display significant generality and are capable of competently performing a wide range of distinct tasks, regardless of how they are placed on the market. Another characteristic is that they can easily be integrated into a large number of downstream systems or applications. Models developed for specific purposes are not classified as GPAI and are not specifically regulated. In practice, this must be critically examined from a technical perspective.
The Commission has recently published GPAI Guidelines. Do they provide any additional support in distinguishing GPAI from foundation models?
Alexandra Ciarnau: Since GPAI status is presumed if the AI model has at least one billion parameters, the EU Commission developed additional indicative criteria based on training compute. Accordingly, a model is presumed to be GPAI if
a training compute of 10²³ FLOP is exceeded, and
language-based or text-to-image or text-to-video outputs are generated.
It is questionable whether, as the guidelines suggest, the output should be decisive for classification and whether models for chess and computer games or for weather forecasting, for example, should be excluded per se. After all, the legal definition does not refer to outputs. Regardless of this, however, the examples provide a good point of reference for identifying and classifying GPAI.
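To make the two indicative criteria above concrete, here is a purely illustrative sketch (not legal advice): the function name, the modality labels and the representation of the criteria are assumptions for illustration only.

```python
# Illustrative sketch of the Commission's indicative GPAI presumption
# as described above: training compute above 10^23 FLOP AND
# language-based, text-to-image or text-to-video outputs.
# Names and modality labels are hypothetical, not an official API.

GPAI_COMPUTE_THRESHOLD_FLOP = 1e23  # indicative training-compute threshold

GENERATIVE_MODALITIES = {"text", "text-to-image", "text-to-video"}

def is_presumed_gpai(training_flop: float, output_modalities: set[str]) -> bool:
    """Both indicative criteria must be met for the presumption to apply."""
    return (training_flop > GPAI_COMPUTE_THRESHOLD_FLOP
            and bool(output_modalities & GENERATIVE_MODALITIES))
```

Under this sketch, a text model trained with 5×10²³ FLOP would be presumed GPAI, while a weather-forecasting model with the same compute would not, which mirrors the open question raised above about output-based classification.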
How are GPAI models usually provided?
Alexandra Ciarnau: To facilitate the inventory, companies can take a closer look at the form of provision. GPAI models are made available, for example, via libraries (e.g. Hugging Face, GitHub), application programming interfaces (API), direct download or as physical copies.
Is every GPAI model treated similarly to GPT from OpenAI or other hyperscalers' models?
Alexandra Ciarnau: No. Once GPAI models have been identified, the question arises whether they pose systemic risks and whether extended provider obligations apply. Models are thus further distinguished into simple GPAI and GPAI with systemic risk.
Classification as GPAI with systemic risk is presumed in accordance with Article 51(2) of the AI Act if the cumulative amount of computation used for its training, measured in floating point operations, exceeds 10²⁵ FLOP. In addition, the Commission may also designate models as GPAI with systemic risk.
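The compute-based tiering just described can be sketched as a simple decision rule. This is an illustration only, not legal advice: the thresholds mirror the indicative values cited in the interview, the function name is hypothetical, and in reality compute alone only establishes a presumption (output modality matters, and the Commission may designate models directly).

```python
# Illustrative two-tier classification by cumulative training compute,
# per the indicative thresholds discussed above. Not an official API.

GPAI_THRESHOLD_FLOP = 1e23           # indicative GPAI presumption
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # systemic-risk presumption

def classify_model(training_flop: float) -> str:
    """Return a tier label based solely on cumulative training compute."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI with systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI"
    return "not presumed GPAI"
```

For example, a model trained with 10²⁶ FLOP would fall into the systemic-risk tier, one with 5×10²³ FLOP into the simple GPAI tier.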
What are the relevant obligations for GPAI model providers?
Alexandra Ciarnau: According to Article 53 et seq. AI Act, providers of GPAI models must meet the following requirements, among others, from August onwards:
Documentation: technical documentation for each individual model, including its training and testing procedures and the results of its evaluation during its life cycle.
Provision of information to providers of AI systems: facilitation of integration for providers of AI systems so that they can understand the capabilities and limitations of the AI model and fulfil their obligations.
Copyright compliance: implementation of a strategy to comply with EU copyright law.
Transparency: Publication of a summary of the training content used. A template is provided for this purpose.
Additional obligations apply to providers of GPAI models with systemic risks: risk analyses and model evaluations, reporting of serious incidents to the AI Office, ongoing cybersecurity measures.
However, exceptions to the scope and obligations for GPAI providers exist under certain conditions for open source software, provided that it does not pose systemic risks.
You mentioned that operators of AI systems might become providers of GPAI models. How can that be? Which scenarios might lead to this shift in roles?
Alexandra Ciarnau: If GPAI models are significantly modified by downstream actors who integrate the model into their own systems, those actors become downstream providers. However, in line with the Blue Guide, the guidelines clarify that not every change is relevant: it must be "significant". This can occur, for example, through fine-tuning of the model.
In addition, the new guidelines on GPAI support the distinction between insignificant and significant changes with indicative thresholds:
Modifications using more than one third of the training compute of the original model;
If the training compute of the original model is neither known nor reliably estimable, fallback indicators apply: for an original model that is a GPAI model with systemic risk, one third of the systemic-risk threshold is used, currently one third of 10²⁵ FLOP; for a GPAI model without systemic risk, one third of the general indicative threshold is used, currently one third of 10²³ FLOP.
Caution is therefore advised when making in-house adjustments and customisations. Far-reaching changes to licensed software – including open-source software, unless exempted – may also lead to a new role under the AI Act.
Thank you for your insights! To sum it up, what is your advice?
Alexandra Ciarnau: Thank you for having me! I am a huge fan of step plans and checklists. They help to organise and use resources efficiently:
☐ Identify GPAI models and roles
☐ Classify GPAI models
☐ Question significant modifications to GPAI models
☐ Implement obligations as a provider for the developed model or as a downstream provider for the modified part of the model
☐ Document the models, roles and obligations
☐ Re-evaluate obligations and update documentation on an ongoing basis throughout the entire life cycle
Our interview partner: Alexandra Ciarnau
Alexandra Ciarnau is Co-Head of DORDA's Digital Industries Group and specialises in IT, IP and data protection law. She advises national and international clients on new technologies, in particular artificial intelligence, blockchain and XR. Alexandra also manages the DORDA sphere in the metaverse, is President of Women in AI Austria and a regular lecturer at universities.
DORDA: Three generations of lawyers. Internationally renowned and assertive. With the aim of providing clarity in all areas of business law. DORDA pursues a holistic advisory approach and supports Austrian companies, international groups, and start-ups in developing their business and in challenging situations. Their award-winning experts provide support in all areas of business law. Strong companies have put their trust in DORDA for over 45 years.