Guide: How to write your own AI Policy [incl. Checklist + free Template]

Artificial intelligence has long since become part of our daily lives: in office tools, customer service, marketing automation, and data analysis. This is precisely why it becomes dangerous when employees simply “go ahead” without clear guardrails. This practical guide for medium-sized and large enterprises shows step by step how to develop a robust, understandable, and legally compliant AI policy.


1. Why your company really needs an AI Policy

AI is already embedded in many standard tools and specialized applications. Without clear rules, however, companies risk:

  • data leaks and the disclosure of trade secrets,
  • the processing of personal and sensitive data without a legal basis under the GDPR,
  • violations of copyright, trademark, and personal rights,
  • ethical issues arising from discriminatory or biased AI outputs.

In addition, there are potential reputational damages and significant liability risks if erroneous AI outputs are adopted without review.

An AI policy should define which tools may be used, how confidential and personal data are handled, who is responsible for approvals and oversight, and how AI outputs are reviewed and labeled.

What is important here is a lean, practical document – not a legal tome. Otherwise, employees may not read or remember it in full.

2. Approach: How to start pragmatically

A pragmatic start can be organized in five steps:

Form a team

An interdisciplinary team ensures that technical, legal, and operational perspectives are incorporated into your AI policy from the outset. At a minimum, the following should be involved: IT, business unit(s), data protection, possibly legal/compliance, and HR. This team is often continued later as an AI committee or AI office.

Conduct an inventory (“AI Audit”)

A systematic inventory makes visible which AI tools are already in use – officially and unofficially – and where action is needed.

The question is: Which AI systems are already being used (officially and “in the shadows”)?

Examples of AI that are often already in early use include office AI functions, chatbots, translation tools, analytics tools, and image generators. You can use internal surveys or workshops to obtain a realistic picture.

Define the target vision

A clear target vision establishes what value AI should deliver in the company and which risks you want to consciously limit.

What do you want to achieve with AI?
Examples include efficiency, quality, and innovation. However, you may also simply need to respond to internal or external pressure or prevent shadow AI.

How risk-tolerant is your company?
  • Some companies are more restrictive: use is generally prohibited, and only explicitly approved tools are allowed.
  • Other companies are more open: use is generally permitted, but with clear boundaries and rules.

Formulate initial rules (Minimum Policy)

With a few well-understood basic rules, you quickly create guidance on what is permitted and what is not when using AI.

  1. Which tools are permitted, which are prohibited?
  2. May corporate emails be used for registrations?
  3. May confidential or personal data be entered? If yes: where, by whom, and under what conditions?
  4. May AI outputs be used externally without review? (Recommendation: no.)
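To make such minimum rules unambiguous, they can even be written down in machine-readable form. The following sketch shows an allowlist approach: a tool is permitted only if it has been explicitly approved. All tool names are hypothetical placeholders.

```python
# Sketch of a machine-readable minimum policy. Tool names are hypothetical;
# replace them with your own inventory.
MINIMUM_POLICY = {
    "permitted_tools": {"ApprovedTranslator", "InternalChatbot"},
    "prohibited_tools": {"UnvettedPublicBot"},
    "corporate_email_signups_allowed": True,   # rule 2
    "confidential_input_allowed": False,       # rule 3: only in secured systems
    "external_use_requires_review": True,      # rule 4: no unreviewed external use
}

def is_tool_permitted(tool_name: str) -> bool:
    """Allowlist check: only explicitly approved tools may be used."""
    return tool_name in MINIMUM_POLICY["permitted_tools"]
```

An allowlist (rather than a blocklist) mirrors the restrictive stance described above: anything not explicitly approved is prohibited by default.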

Plan training & communication

Through targeted training and transparent communication, you ensure that the AI policy is understood and can actually be implemented in daily operations.

Policy alone is not sufficient. Training is also needed on the permitted tools, the applicable data and output rules, and typical risks such as hallucinations, bias, and data leaks.

Training should be regularly updated and integrated into onboarding.

In particular, AI officers must possess sufficient technical and legal AI competence and keep it current through regular training. If internal expertise is lacking, external expertise should be specifically engaged.

3. Core components of an AI Policy – structural proposal

You can use the following components directly as a structure for your own AI policy.

3.1 Introduction & purpose

Define briefly and clearly why the policy exists, what the company wants to achieve with AI, and which principles govern its use.

State explicitly that humans remain responsible for critical decisions.

3.2 Scope

Specify which areas, tools, and persons the policy applies to – including, where relevant, external contractors and service providers.

3.3 Definitions

Ensure a common understanding of key terms, e.g., AI system, generative AI, training data, prompt, output, and personal data.

Clear definitions facilitate training, implementation, and auditing.

In our AI Glossary we’ve collected the most important definitions in the field of artificial intelligence.

3.4 Responsibilities & AI governance

Without clear responsibilities, any policy is ineffective. Typical roles are:

Executive Management / Board
Approves the policy,
Bears overall responsibility for AI use and compliance.

Process Owner / Business Units
Assess whether an AI system is operationally useful and necessary,
Are responsible for operation within the respective process,
Document compliance with internal and legal requirements throughout the entire lifecycle.

Legal/Compliance
Reviews copyright, trademark, and personal rights aspects,
Evaluates terms and conditions and licensing terms of AI providers,
Supports on liability issues.

AI Office / AI Committee / AI Officer
Maintains a register of all AI systems in the company,
Classifies applications by risk level,
Decides on approvals, changes, and decommissioning,
Provides checklists and templates for reviews and approvals.
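As an illustration only – not part of the policy text itself – the register and risk classification maintained by the AI office could be modeled as follows. The risk levels and field names are assumptions for the sketch:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class RegisteredTool:
    name: str
    owner: str          # responsible process owner / business unit
    risk: RiskLevel
    approved: bool = False

class AIRegister:
    """Minimal sketch of the register the AI office maintains."""

    def __init__(self) -> None:
        self._tools: dict[str, RegisteredTool] = {}

    def register(self, tool: RegisteredTool) -> None:
        self._tools[tool.name] = tool

    def approve(self, name: str) -> None:
        tool = self._tools[name]
        if tool.risk is RiskLevel.PROHIBITED:
            raise ValueError(f"{name}: prohibited risk class cannot be approved")
        tool.approved = True

    def decommission(self, name: str) -> None:
        self._tools.pop(name, None)
```

Keeping approval, change, and decommissioning in one place makes the lifecycle auditable, which is exactly what the AI office is responsible for.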

Data Protection Officer
Is involved early whenever personal data may be affected,
Reviews consents, legal bases, technical and organizational measures, and data protection impact assessments.

IT / Information Security
Ensures secure technical integration and access controls,
Evaluates hosting (on-premises, EU cloud, third country), logging, backup, and emergency concepts.

3.5 Scope of application & list of approved AI Tools

Your policy should contain a transparent list of approved AI tools, e.g., in an annex listing the tool name, permitted use cases, allowed data categories, responsible owner, and approval status.

In the procurement and testing of AI tools, there are overlaps with “normal” software governance, but due to data and bias risks, AI requires additional reviews (risk classification, data and output rules).

Therefore, additionally define how new tools are requested, who classifies their risk level, who grants approval, and at what intervals existing approvals are reviewed.

3.6 Fundamental principles of AI use

Formulate clear guardrails that apply to everyone:

  1. Responsible Use
    • AI is a tool for support, not a final authority.
    • Decisions with impacts on people are always made by humans.
  2. Only Approved Tools
    • For business purposes, only AI systems approved by IT/business units may be used.
    • Private or public tools may not be used for business purposes without approval.
  3. Separation of Private/Business Use
    • Business AI applications are not used privately,
    • Private AI accounts are not used for business tasks.
  4. Transparency & Labeling
    • AI-generated content must be identifiable as such.
    • Requirement for uniform labeling, e.g., “Created with the assistance of AI.”
  5. No Fully Automated Decisions with Legal Effect
    • Decisions with legal or similarly significant effect on natural persons may not be made solely automatically within the meaning of Art. 22 GDPR.
    • AI may provide recommendations; the decision rests with humans.
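The labeling requirement in principle 4 can be supported by a small helper; the label text follows the example wording above, everything else in this sketch is an assumption:

```python
AI_LABEL = "Created with the assistance of AI."

def label_ai_content(text: str) -> str:
    """Append the uniform AI label exactly once (idempotent)."""
    if text.rstrip().endswith(AI_LABEL):
        return text
    return f"{text.rstrip()}\n\n{AI_LABEL}"
```

Making the helper idempotent avoids duplicate labels when content passes through several AI-assisted steps.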

3.7 Handling data: input rules

Without clear data rules, an AI policy is practically worthless.

Distinguish in particular between confidential corporate data and trade secrets, personal data, and third-party intellectual property – the following subsections address each in turn.

3.7.1 Confidential corporate data & trade secrets

Data classified as “confidential,” “highly confidential,” or “secret” may generally not be entered into external AI systems operated outside the company’s control.

This includes, among others, trade secrets, customer and supplier data, financial figures, source code, strategy papers, and unpublished contract documents.

Exceptions are only permissible in justified individual cases with documented approval from executive management and only in sufficiently secured systems.
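A simple pre-check along these lines could be enforced in an internal AI gateway. The sketch below uses the classification labels named above; the exception flag stands in for a documented approval by executive management:

```python
# Classification labels from the policy above; the exception flag stands in
# for a documented approval by executive management.
BLOCKED_CLASSIFICATIONS = {"confidential", "highly confidential", "secret"}

def may_enter_external_ai(classification: str,
                          documented_exception: bool = False) -> bool:
    """Block classified data from external AI systems unless a
    documented, executive-approved exception exists."""
    if classification.lower() not in BLOCKED_CLASSIFICATIONS:
        return True
    return documented_exception
```

Such a gate is only as good as the underlying data classification, which is why the policy should reference the company's existing classification scheme.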

Can I even use AI when working with business-critical data?

Yes, AI can handle business-critical data – but not every tool is suitable for such use. The ONTEC AI Platform has been developed for organizations with sensitive and complex data and meets the security requirements needed to embed it in business-critical processes.

3.7.2 Personal data

Principle: The entry of personal data into AI systems is prohibited unless expressly defined otherwise.

If the processing of personal data by an AI system is exceptionally permissible and necessary, a legal basis under the GDPR must be documented, the data must be limited to what is required, the data protection officer must be involved, and appropriate technical and organizational measures – including, where required, a data protection impact assessment – must be in place.

3.7.3 Intellectual property / copyright

Before entering texts, images, code, or other content into AI systems, it must be verified that the necessary usage rights exist.

No inputs may be used that are intended to deliberately replicate or copy copyrighted works.

In case of uncertainty, the legal/compliance department should be consulted.

3.8 Use of AI results: output rules

Define binding rules for handling AI outputs:

  • Plausibility and quality review: outputs are checked for factual accuracy and fitness for purpose before further use.
  • Review for hallucinations and bias: results are never adopted unseen, especially for external, legal, or sensitive content.
  • Labeling of AI-generated content: uniform labeling, e.g., “Created with the assistance of AI.”
  • Copyright status of AI outputs: clarify whether and how outputs may be used; in case of doubt, consult legal/compliance.
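Such output rules can be turned into a simple release gate in internal tooling. In this minimal sketch (field names are assumptions), an output may only be used externally once a human reviewer has signed off and the label is in place:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIOutput:
    text: str
    reviewed_by: Optional[str] = None  # name of the human reviewer, if any
    labeled: bool = False              # uniform AI label applied?

def ready_for_external_use(output: AIOutput) -> bool:
    """External use requires a human review sign-off and labeling."""
    return output.reviewed_by is not None and output.labeled
```

Recording the reviewer's name (rather than a bare flag) also satisfies the monitoring section: it documents who approved what.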

3.9 Training & awareness

Training is not a “nice-to-have,” but a prerequisite for secure AI use: plan mandatory basic training for all employees, regular refreshers, and role-specific sessions for those who work intensively with AI.

3.10 Ethical guidelines

Give the policy a clear value foundation:

  • Fairness & non-discrimination: AI must not produce or reinforce discriminatory outcomes.
  • Transparency & traceability: it must be clear where AI is used and how results come about.
  • Human-centricity: AI supports people; it does not replace human responsibility.

3.11 Legal framework

List the most important legal foundations and translate them into understandable internal rules – in particular the GDPR, the EU AI Act, copyright and trademark law, and the protection of trade secrets.

Important: The policy should be developed in close coordination with data protection and legal departments and regularly adapted to new requirements.

3.12 Monitoring, compliance & sanctions

Rules only work if their compliance is monitored: define who reviews adherence, at what intervals, and with what methods, and how violations are handled and sanctioned.

Also define how to handle unavoidable deviations: In which cases are deviations tolerated? What is the corresponding process?

4. Concrete start: Checklist for your first AI Policy

Finally, a compact checklist with which you can verify whether the most important points are covered:

  1. Objectives & Scope Clarified?
    • What do we want to achieve with AI?
    • Which areas, tools, and persons does the policy apply to?
  2. Responsible Parties Named?
    • Is there an AI committee or a responsible person?
    • How are data protection, IT, legal/compliance, and HR involved?
  3. Tool List Created?
    • Which AI tools are explicitly permitted, which are prohibited?
    • Is there a register and a defined approval process?
  4. Data Rules Defined?
    • Which data may be entered into which systems?
    • How do we handle confidential and personal data?
    • What absolute prohibitions apply?
  5. Output Rules & Labeling Regulated?
    • Is there an obligation for quality and plausibility review?
    • How and where is AI content labeled?
    • How do we prevent fully automated decisions with legal effect?
  6. Training & Awareness Planned?
    • Are there mandatory training sessions and refreshers?
    • Is AI embedded in onboarding?
  7. Monitoring & Evaluation Established?
    • Who reviews compliance, at what intervals, and with what methods?
    • How are violations handled?
    • How are new legal requirements incorporated into the policy?

5. Free Template: Table of Contents for your AI Policy

It is not advisable to reuse another company's AI policy: each company has its own approach to AI, and a prefabricated text invites you to take the topic less seriously than it deserves.
Nevertheless, some initial assistance is helpful. Our template for a table of contents of an AI policy serves this purpose: guidance without revealing too much.

Summary

A good AI policy is lean and understandable, tailored to your own tools and data, backed by clear responsibilities, and regularly reviewed and updated.

If you systematically apply these components to your company, you create the foundation for secure, productive, and trustworthy AI use – instead of leaving AI to chance.