Guide: How to write your own AI Policy [incl. Checklist + free Template]
Artificial intelligence has long been part of our daily work: it is built into office tools, customer service, marketing automation, and data analysis. That is precisely why it becomes dangerous when employees simply “go ahead” without clear guardrails. This practical guide for medium-sized and large enterprises shows, step by step, how to develop a robust, understandable, and legally compliant AI policy.

1. Why your company really needs an AI Policy
AI is already embedded in many standard tools and specialized applications. Without clear rules, however, companies risk:
- data leaks and the disclosure of trade secrets,
- the processing of personal and sensitive data without a legal basis under the GDPR,
- violations of copyright, trademark, and personality rights,
- ethical problems caused by discriminatory or biased AI outputs.
On top of this come potential reputational damage and significant liability risks if erroneous AI outputs are adopted without review.
An AI policy should:
- Create clarity (“Who may do what?”),
- Protect sensitive data and know-how,
- Consider legal requirements (e.g., GDPR, copyright law, the EU AI Act),
- Safely enable employees to use AI productively and without fear,
- Assign clear responsibilities for review, training, and updates,
- Generate trust among customers, partners, and employees.
What matters here is a lean, practical document, not a legal tome; otherwise employees will not read it in full or remember it.
2. Approach: How to start pragmatically
A pragmatic start can be organized in five steps:
Form a team
An interdisciplinary team ensures that technical, legal, and operational perspectives are incorporated into your AI policy from the outset. At a minimum, the following should be involved: IT, business unit(s), data protection, possibly legal/compliance, and HR. This team is often continued later as an AI committee or AI office.
Conduct an inventory (“AI Audit”)
A systematic inventory makes visible which AI tools are already in use – officially and unofficially – and where action is needed.
The question is: Which AI systems are already being used (officially and “in the shadows”)?
Examples of AI that are often already in early use include office AI functions, chatbots, translation tools, analytics tools, and image generators. You can use internal surveys or workshops to obtain a realistic picture.
Define the target vision
A clear target vision establishes what value AI should deliver in the company and which risks you want to consciously limit.
What do you want to achieve with AI?
Examples include efficiency, quality, and innovation. However, you may also simply need to respond to internal or external pressure or prevent shadow AI.
How risk-tolerant is your company?
- Some companies are more restrictive: use is generally prohibited, and only explicitly approved tools are allowed.
- Other companies are more open: use is generally permitted, but within clear boundaries and rules.
Formulate initial rules (Minimum Policy)
With a few well-understood basic rules, you quickly create guidance on what is permitted and what is not when using AI.
- Which tools are permitted, which are prohibited?
- May corporate emails be used for registrations?
- May confidential or personal data be entered? If yes: where, by whom, and under what conditions?
- May AI outputs be used externally without review? (Recommendation: no.)
Plan training & communication
Through targeted training and transparent communication, you ensure that the AI policy is understood and can actually be implemented in daily operations.
The policy alone is not sufficient. Training is needed on:
- How AI works,
- Risks (hallucination, bias),
- Data protection, copyright,
- Internal approval and reporting channels.
Training should be regularly updated and integrated into onboarding.
In particular, AI officers must possess sufficient technical and legal AI competence and keep it current through regular training. If internal expertise is lacking, external expertise should be specifically engaged.
3. Core components of an AI Policy – structural proposal
You can use the following components directly as a structure for your own AI policy.
3.1 Introduction & purpose
Define briefly and clearly:
- Why you are introducing the policy (e.g., secure, legally compliant, ethical use),
- What objectives you are pursuing (protection of individuals, data, and trade secrets, promotion of innovation),
- That AI should support, but not replace, human decisions.
State explicitly that humans remain responsible for critical decisions.
3.2 Scope
Specify:
- Which locations, entities, and business units the policy applies to,
- Which groups of people (employees, contractors, external service providers with system access),
- Whether there are country-specific annexes (e.g., due to differing legal frameworks).
3.3 Definitions
Ensure a common understanding of key terms, e.g.:
- AI system / AI application (possibly with reference to the definition in the EU AI Act),
- Personal data and special categories of personal data (Art. 9 GDPR),
- Confidential / highly confidential / secret information (aligned with your information classification),
- Prompt / prompting (input to an AI system to generate results).
Clear definitions facilitate training, implementation, and auditing.
In our AI Glossary we’ve collected the most important definitions in the field of artificial intelligence.
3.4 Responsibilities & AI governance
Without clear responsibilities, any policy is ineffective. Typical roles are:
Executive Management / Board
- Approves the policy,
- Bears overall responsibility for AI use and compliance.
Process Owners / Business Units
- Assess whether an AI system is operationally useful and necessary,
- Are responsible for operation within the respective process,
- Document compliance with internal and legal requirements throughout the entire lifecycle.
Legal / Compliance
- Reviews copyright, trademark, and personality rights aspects,
- Evaluates the terms of use and licensing conditions of AI providers,
- Supports on liability issues.
AI Office / AI Committee / AI Officer
- Maintains a register of all AI systems in the company,
- Classifies applications by risk level,
- Decides on approvals, changes, and decommissioning,
- Provides checklists and templates for reviews and approvals.
Data Protection Officer
- Is involved early whenever personal data may be affected,
- Reviews consents, legal bases, technical and organizational measures, and data protection impact assessments.
IT / Information Security
- Ensures secure technical integration and access controls,
- Evaluates hosting (on-premises, EU cloud, third country), logging, backup, and emergency concepts.
3.5 Scope of application & list of approved AI Tools
Your policy should contain a transparent list of approved AI tools, e.g., in an annex:
- Tool name,
- Provider/manufacturer,
- Risk category (no or low risk, high risk, prohibited),
- Permitted use cases (e.g., text drafts, translations, internal analyses),
- Prohibited use cases (e.g., personnel decisions, legal advice, medical diagnoses),
- Operating location (in-house, private cloud, external service provider).
Procurement and testing of AI tools overlap with “normal” software governance, but because of data and bias risks, AI requires additional reviews (risk classification, data and output rules).
Therefore, additionally define:
- Whether and when new tools may be tested,
- What an approval process for new AI applications looks like (application, review, decision, documentation).
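The annex with approved tools works best if it is also kept in a machine-readable form, so the AI office can maintain it as a register rather than a prose document. The following Python sketch is purely illustrative: the field names mirror the bullet points above, and the example values, tool names, and approval workflow states are assumptions, not recommendations.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    NO_OR_LOW_RISK = "no/low risk"
    HIGH_RISK = "high risk"
    PROHIBITED = "prohibited"

@dataclass
class AIToolRegisterEntry:
    tool_name: str
    provider: str
    risk_category: RiskCategory
    permitted_use_cases: list[str]
    prohibited_use_cases: list[str]
    operating_location: str                 # e.g. "in-house", "private cloud", "external provider"
    approval_status: str = "requested"      # assumed workflow: requested -> reviewed -> approved/rejected
    approved_by: str | None = None

# Hypothetical example entry for the annex of the policy
example_entry = AIToolRegisterEntry(
    tool_name="Example Translation Assistant",
    provider="Example Vendor GmbH",
    risk_category=RiskCategory.NO_OR_LOW_RISK,
    permitted_use_cases=["text drafts", "translations", "internal analyses"],
    prohibited_use_cases=["personnel decisions", "legal advice", "medical diagnoses"],
    operating_location="EU cloud",
    approval_status="approved",
    approved_by="AI Office",
)
```

Keeping the register structured like this makes it straightforward to generate the human-readable annex from it and to check tool usage programmatically later on.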
3.6 Fundamental principles of AI use
Formulate clear guardrails that apply to everyone:
- Responsible Use
  - AI is a tool for support, not a final authority.
  - Decisions with impacts on people are always made by humans.
- Only Approved Tools
  - For business purposes, only AI systems approved by IT/business units may be used.
  - Private or public tools may not be used for business purposes without approval.
- Separation of Private/Business Use
  - Business AI applications are not used privately.
  - Private AI accounts are not used for business tasks.
- Transparency & Labeling
  - AI-generated content must be identifiable as such.
  - Requirement for uniform labeling, e.g., “Created with the assistance of AI.”
- No Fully Automated Decisions with Legal Effect
  - Decisions with legal or similarly significant effect on natural persons may not be made solely automatically within the meaning of Art. 22 GDPR.
  - AI may provide recommendations; the decision rests with humans.
3.7 Handling data: input rules
Without clear data rules, an AI policy is practically worthless.
Distinguish in particular:
3.7.1 Confidential corporate data & trade secrets
Data classified as “confidential,” “highly confidential,” or “secret” may generally not be entered into external AI systems operated outside the company’s control.
This includes, among others:
- IT security information,
- Trade and business secrets,
- Unpublished financial data and strategic plans.
Exceptions are only permissible in justified individual cases with documented approval from executive management and only in sufficiently secured systems.
Can I even use AI when working with business-critical data?
Yes, AI can handle business-critical data, but not every tool is suitable for such use. The ONTEC AI Platform was developed for organizations with sensitive and complex data and meets the security requirements needed to embed it in your most important processes.
3.7.2 Personal data
Principle: The entry of personal data into AI systems is prohibited unless expressly defined otherwise.
If the processing of personal data by an AI system is exceptionally permissible and necessary:
- A clear legal basis is required (e.g., contract, legitimate interest, consent),
- The principle of data minimization applies (only what is truly necessary),
- Special categories of personal data (e.g., health data) require strict additional protective measures and generally explicit consent,
- The data protection officer must be involved.
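The input rules in 3.7.1 and 3.7.2 can also be supported technically, for example with a simple gate that checks a document's classification and personal-data flag before anything is sent to an external AI tool. The following Python sketch only illustrates the idea; the classification labels and metadata fields are assumptions and must be aligned with your own information classification.

```python
from dataclasses import dataclass

# Classification labels assumed to mirror the company's information classification
CONFIDENTIAL_LABELS = {"confidential", "highly confidential", "secret"}

@dataclass
class DocumentMetadata:
    classification: str               # e.g. "public", "internal", "confidential", ...
    contains_personal_data: bool
    approved_exception: bool = False  # documented approval by executive management (3.7.1)

def may_send_to_external_ai(doc: DocumentMetadata) -> bool:
    """Return True only if the policy's input rules allow sending this document
    to an AI system operated outside the company's control."""
    if doc.classification.lower() in CONFIDENTIAL_LABELS and not doc.approved_exception:
        return False  # 3.7.1: confidential data stays out of external systems
    if doc.contains_personal_data:
        return False  # 3.7.2: personal data is prohibited unless expressly defined otherwise
    return True

# Example: an internal, non-personal document passes; a confidential one does not
print(may_send_to_external_ai(DocumentMetadata("internal", contains_personal_data=False)))      # True
print(may_send_to_external_ai(DocumentMetadata("confidential", contains_personal_data=False)))  # False
```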
3.7.3 Intellectual property / copyright
Before entering texts, images, code, or other content into AI systems, it must be verified that the necessary usage rights exist.
No inputs may be used that are intended to deliberately replicate or copy copyrighted works.
In case of uncertainty, the legal/compliance department should be consulted.
3.8 Use of AI results: output rules
Define binding rules for handling AI outputs:
Plausibility and quality review
- All AI results must be reviewed for accuracy, completeness, and plausibility before they are used further or communicated externally.
- This includes, in particular, the verification of facts, data, source references, and calculation examples.
Review for hallucinations and bias
- AI systems can generate erroneous or fabricated information (“hallucinations”).
- Results must be reviewed for factual errors, logical inconsistencies, and discriminatory or stereotypical content.
- If systematic errors are identified, this must be reported internally (e.g., to supervisors or the AI office).
Labeling of AI-generated content
- Texts, images, videos, or other content substantially created by AI must be labeled accordingly.
- Where possible, technical labeling (metadata, watermarks) should also be used.
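What such technical labeling looks like depends on the file formats and tools in use. As a minimal, purely illustrative sketch (the label wording follows the example in 3.6, while the sidecar format and field names are assumptions, not a standard), AI-assisted text could carry a visible notice plus a small machine-readable sidecar file:

```python
import json
from datetime import date

AI_LABEL = "Created with the assistance of AI."  # uniform wording defined in the policy

def label_text(content: str) -> str:
    """Append the visible, human-readable AI label to a text document."""
    return f"{content}\n\n{AI_LABEL}"

def write_sidecar(path: str, tool_name: str, reviewed_by: str) -> None:
    """Write a machine-readable sidecar file next to the content file."""
    metadata = {
        "ai_assisted": True,
        "label": AI_LABEL,
        "tool": tool_name,           # should match an entry in the AI tool register
        "reviewed_by": reviewed_by,  # person who performed the plausibility review
        "date": date.today().isoformat(),
    }
    with open(path + ".ai-label.json", "w", encoding="utf-8") as f:
        json.dump(metadata, f, indent=2)

# Example usage with hypothetical names
draft = label_text("Quarterly newsletter draft ...")
write_sidecar("newsletter_q3.txt", tool_name="Example Translation Assistant", reviewed_by="J. Doe")
```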
Copyright status of AI outputs
- AI-generated content enjoys no or only limited copyright protection in many jurisdictions.
- This should be considered in your IP strategy and in agreements with customers and partners.
3.9 Training & awareness
Training is not a “nice-to-have,” but a prerequisite for secure AI use:
- Employees may only use approved AI applications after participating in AI training.
- Training should cover at least the following topics:
  - Fundamentals of AI, opportunities and risks,
  - Typical system malfunctions (hallucinations, bias),
  - Data protection and information security,
  - Handling protected data and trade secrets,
  - Internal processes (tool approval, incident reporting, documentation).
- Recurring refreshers (e.g., annually) and specific training for high-risk applications are advisable.
- Training should be firmly embedded in the onboarding of new employees.
3.10 Ethical guidelines
Give the policy a clear value foundation:
Fairness & non-discrimination
- AI systems should not disadvantage individuals or groups based on characteristics such as gender, origin, religion, age, etc.
- Employees are encouraged to critically review outputs and report discriminatory content.
Transparency & traceability
- Processes in which AI is used should be documented and comprehensible to those affected.
- It should be clear where and how AI is integrated into decision-making processes.
Human-centricity
- Humans remain responsible, especially for safety-relevant, legally binding, or ethically sensitive decisions.
- AI serves to support, not replace, human judgment.
3.11 Legal framework
List the most important legal foundations and translate them into understandable internal rules:
- European and national AI regulation (in particular the EU AI Act),
- Data protection law (especially GDPR and national data protection laws),
- Copyright, trademark, and personality rights law,
- Labor law requirements (e.g., co-determination by works council when deploying certain systems).
Important: The policy should be developed in close coordination with data protection and legal departments and regularly adapted to new requirements.
3.12 Monitoring, compliance & sanctions
Rules only work if their compliance is monitored:
- Define how compliance with the policy is monitored (e.g., through evaluation of log data, internal audits, spot checks).
- Involve data protection and, if applicable, employee representatives, especially when logs contain personal data.
- All controls must be documented; significant findings should be regularly reported to executive management.
- Violations of the policy may have consequences under employment law and, where applicable, under civil or criminal law; managers have a special role-model and supervisory function.
- Specify that the policy is reviewed at least annually and adapted as needed (e.g., for new tools, technologies, or laws).
Also define how to handle unavoidable deviations: In which cases are deviations tolerated? What is the corresponding process?
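How the “evaluation of log data” works in practice depends entirely on your infrastructure and must respect the data protection constraints mentioned above. As a purely illustrative sketch (the log format, column names, and tool names are assumptions), a periodic check could compare logged AI tool usage against the register of approved tools and report deviations to the AI office:

```python
import csv
from collections import Counter

# Assumed to be exported from the AI tool register (see section 3.5)
APPROVED_TOOLS = {"Example Translation Assistant", "Internal Analytics Copilot"}

def find_unapproved_usage(log_path: str) -> Counter:
    """Count log entries that reference AI tools not on the approved list.

    Expects a CSV with at least a 'tool' column (assumed format). Results should
    be handled with data protection in mind, e.g. aggregated per department
    rather than per person wherever possible.
    """
    violations = Counter()
    with open(log_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            tool = row.get("tool", "").strip()
            if tool and tool not in APPROVED_TOOLS:
                violations[tool] += 1
    return violations

# Example: print a short report for the AI office
if __name__ == "__main__":
    for tool, count in find_unapproved_usage("ai_usage_log.csv").most_common():
        print(f"{tool}: {count} uses outside the approved tool list")
```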
4. Concrete start: Checklist for your first AI Policy
Finally, a compact checklist with which you can verify whether the most important points are covered:
- Objectives & Scope Clarified?
  - What do we want to achieve with AI?
  - Which areas, tools, and persons does the policy apply to?
- Responsible Parties Named?
  - Is there an AI committee or a responsible person?
  - How are data protection, IT, legal/compliance, and HR involved?
- Tool List Created?
  - Which AI tools are explicitly permitted, which are prohibited?
  - Is there a register and a defined approval process?
- Data Rules Defined?
  - Which data may be entered into which systems?
  - How do we handle confidential and personal data?
  - What absolute prohibitions apply?
- Output Rules & Labeling Regulated?
  - Is there an obligation for quality and plausibility review?
  - How and where is AI content labeled?
  - How do we prevent fully automated decisions with legal effect?
- Training & Awareness Planned?
  - Are there mandatory training sessions and refreshers?
  - Is AI embedded in onboarding?
- Monitoring & Evaluation Established?
  - Who reviews compliance, at what intervals, and with what methods?
  - How are violations handled?
  - How are new legal requirements incorporated into the policy?
5. Free Template: Table of Contents for your AI Policy
It is not advisable to reuse another company's AI policy: every company has its own approach to AI, and a prefabricated text tempts you to take the topic less seriously than it deserves.
Nevertheless, some initial guidance is helpful. Our template for the table of contents of an AI policy serves exactly this purpose: orientation without handing you a finished text.
Summary
A good AI policy is:
- clear – employees know which AI tools they may use and how,
- protective – it prevents data leaks, legal violations, and misuse,
- binding – responsibilities, processes, and consequences are defined,
- living – it is trained, implemented, and regularly updated.
If you systematically apply these components to your company, you create the foundation for secure, productive, and trustworthy AI use – instead of leaving AI to chance.