AI Acceptable Use Policy Template: A Complete Guide for Your Organization

AI Security, ChatGPT Security, Compliance, Data Protection, Enterprise Security, Policy Templates

In the early days of ChatGPT, a Samsung engineer pasted proprietary source code into ChatGPT to help debug a problem. Within weeks, Samsung had banned all employee use of generative AI tools, but the damage was done. That code was now outside Samsung’s control, potentially part of OpenAI’s training data, and Samsung had no way to get it back.

The engineer wasn’t being careless. He was trying to work faster. And that’s the real problem: your employees are already using AI tools, often without realizing the risks. As employees use generative AI for work more and more, clear policies are no longer optional. An AI acceptable use policy template gives you a way to say “yes” to AI while drawing clear lines around what’s off-limits.

As part of our vCISO work at Adelia Risk, clients have been asking in nearly every meeting, “How can our team safely use AI?”  As AI tools become more common and more powerful, it’s so critical for heavily-regulated companies to get this right.

Who Needs an AI Acceptable Use Policy?

If you manage people who touch keyboards, you need an AI policy. This isn’t limited to tech companies. Law firms, financial advisors, healthcare providers, government contractors, HR teams, marketing departments, and any group handling sensitive information need clear rules for AI use. We’ve found that most companies have far more people using AI than they realize, and they don’t fully understand the generative AI security risks that come with unmanaged use. Having an AI policy for employees protects both your organization and your team members.

Specific roles that should care about this:

  • CISOs and IT directors responsible for data protection
  • Compliance officers at regulated firms (SEC, HIPAA, CMMC)
  • HR leaders drafting employee handbooks
  • Operations managers overseeing productivity tools
  • Business owners at companies with 20-500 employees

What usually triggers the need for a policy:

  • A client asks about your AI practices during due diligence
  • An auditor wants to see documented controls around AI
  • Someone on your team asks if they can use ChatGPT for client work
  • Your company adopted Microsoft 365 Copilot and nobody talked about the rules
  • You read about another company’s AI-related data leak

The good news is that you don’t have to write this from scratch. Below is a complete artificial intelligence policy template we put together for our clients, which you can adapt for your organization. We’ll walk through each section, explain why it matters, and show you how to customize it for your situation.


The Complete AI Acceptable Use Policy Template

Below, you’ll find our full policy template. Each section appears in a quoted block, followed by guidance on how to adapt it. Feel free to copy and paste, or enter your email to get access to a fully editable version.


Section 1: Introduction

POLICY TEMPLATE EXTRACT

Introduction

This policy establishes guidelines for safe and compliant use of AI technologies at (COMPANY NAME) to protect sensitive data, maintain regulatory compliance, and preserve competitive advantage while enabling productive AI adoption.

Why this section matters: The introduction sets the tone. Notice it doesn’t lead with fear or restrictions. Instead, it frames AI adoption as something the company supports, within guardrails.

How to customize it: Replace “(COMPANY NAME)” with your organization’s name. If your company has specific strategic priorities around AI (like “becoming an AI-first organization”), you can add a sentence acknowledging that goal while noting the need for safeguards.

Common mistake: Writing an introduction that sounds like you’re trying to stop AI use entirely. Employees will ignore policies that feel out of touch with how they actually work.


Section 2: Purpose and Scope

POLICY TEMPLATE EXTRACT

Purpose and Scope

2.1 Scope

Applies to all employees, contractors, and third parties using AI tools for business purposes with company data or on behalf of (COMPANY NAME).

2.2 Integration with Existing Policies

This policy works in conjunction with the company’s:

  • Information Security Policy
  • Data Classification and Handling Policy
  • Third-Party Risk Management Policy
  • Incident Response Plan
  • Code of Conduct
  • Intellectual Property Policy
  • Training and Awareness Program
  • Disciplinary Action Policy

Why this section matters: Scope defines who has to follow the rules. Without it, contractors and vendors might assume the policy doesn’t apply to them.

How to customize it:

* If you use staffing agencies or offshore teams, call them out specifically in the scope
* Review the “Integration” list—remove any policies your company doesn’t have yet
* Add policies that are specific to your industry (e.g., “HIPAA Privacy Policy” for healthcare, “Investment Advisory Procedures” for RIAs)

For smaller companies: If you don’t have all eight of these policies, that’s fine. List what you have. This template assumes a relatively mature compliance program. A 25-person company might only reference their employee handbook and data handling guidelines.


Section 3: Definitions

POLICY TEMPLATE EXTRACT

Definitions

Artificial Intelligence (AI), Large Language Models (LLMs), and Generative AI: These terms are used interchangeably in this policy to refer to computer systems that generate text, images, code, or other content based on user inputs (examples: ChatGPT, Claude, Gemini, Copilot).

Built-in AI Features: AI capabilities integrated into existing business tools (examples: Zoom’s meeting summaries, Microsoft 365 Copilot, Gmail’s Smart Compose).

Sensitive Data: Information classified as Confidential, Restricted, or Regulated under the company’s Data Classification Policy, including (edit based on your industry):

  • Personally Identifiable Information (PII)
  • Protected Health Information (PHI)
  • Financial account information
  • Proprietary business information
  • Source code and technical documentation
  • Customer data
  • Third-party confidential information

AI Hallucination: False or misleading information generated by AI and presented as fact.

Why this section matters: People often don’t realize that Zoom’s meeting summary feature, Gmail’s “Smart Compose,” and even Grammarly all count as AI. The definitions make it clear that built-in AI features are in scope—not just standalone chatbots.

How to customize it:

* Edit the Sensitive Data list to match your data classification policy and your industry (e.g., add PHI categories for healthcare, client account data for financial services)
* Name the built-in AI features your teams actually use (Zoom summaries, Gmail Smart Compose, Grammarly) as examples, so nobody can claim they didn’t realize those were in scope



Section 4: AI Acceptable Use Guidelines

POLICY TEMPLATE EXTRACT

Acceptable Use Guidelines

(Note: Customize these lists based on your organization’s specific needs and risk tolerance. These are only examples.)

Approved Use Cases

  • Drafting and editing non-sensitive internal documents
  • Summarizing publicly available information
  • Brainstorming and ideation with non-confidential topics
  • Writing and debugging non-proprietary code
  • Analyzing anonymized or synthetic data
  • Creating training materials with public information
  • Improving internal processes using data stripped of identifying info


Prohibited Uses

  • Processing sensitive data without documented exception
  • Allowing AI to make critical decisions without human review
  • Creating deceptive content or deepfakes
  • Bypassing security controls or company policies
  • Violating laws, regulations, or ethical standards
  • Processing regulated data in non-compliant tools
  • Uploading proprietary code, trade secrets, or competitive information


Conditional Use Cases (Requires Review and Approval)

  • Customer-facing content generation (requires human review and approval)
  • Analysis of de-identified customer data
  • Integration with internal systems or databases
  • Development of AI-powered features or products
  • Use of AI in regulated business processes
  • Built-in AI features in approved tools processing meeting recordings or emails
  • External communications using AI-generated content

Why this section matters: This is the heart of your policy. It tells employees what they can do without asking, what’s off-limits, and what needs approval. The three-tier structure (approved/prohibited/conditional) prevents the policy from being either too restrictive or too vague.

How to customize it:

For the Approved list:

* Be specific about what “non-sensitive” means by referencing your data classification policy
* Add use cases that match how your teams actually work (e.g., “Generating first drafts of marketing copy for internal review”)

For the Prohibited list:

* Call out specific data types that should never go into AI tools
* For regulated industries, reference the relevant rule (e.g., “Inputting data subject to SEC Rule 17a-4 retention requirements”)

For the Conditional list:

* Make the approval process clear (we’ll cover that in Section 6)
* Consider adding: “Using AI transcription during client meetings (requires advance notice to all participants)”



Section 5: Data Handling and Security

POLICY TEMPLATE EXTRACT

Data Handling and Security

Core Principle

The Public Test: Before inputting any information into an AI tool, ask: “Would I post this publicly on the internet?” If no, do not input it.

User Accountability

Critical: The person using AI is fully responsible for:

  • Verifying all outputs for accuracy
  • Catching and correcting any errors or hallucinations
  • Ensuring compliance with this policy
  • Any consequences of AI-generated content they approve or distribute

AI is a tool. Like any tool, the person using it bears responsibility for the results. AI errors become your errors if you fail to catch them.

Data Input Restrictions

Never Input:

  • Passwords, API keys, or credentials
  • Customer PII or payment information
  • Proprietary algorithms or source code
  • Confidential business strategies
  • Information subject to legal privilege
  • Any data you wouldn’t post publicly

May Input:

  • Public information
  • Properly anonymized data
  • General questions without sensitive context
  • Non-proprietary code examples

Why this section matters: This section addresses the question we hear constantly: “What happens to data I put into AI tools?” Here’s a general rule of thumb:

1. If you’re using a paid, well-known LLM tool (like ChatGPT, Claude, or Gemini) that is administered by your IT and security team, then it’s usually safe to use most kinds of data, within the limits of this policy.
2. If you’re using a free LLM tool, or one that isn’t administered by your IT and security team, then generative AI data security really comes down to one simple rule: assume that anything you enter into these systems could become public.

The “Public Test” gives employees a simple mental model. If they wouldn’t post it on LinkedIn, they shouldn’t paste it into an AI tool that isn’t provided by their company.

How to customize it:

Add industry-specific “Never Input” items:

  • Healthcare: “Patient names, dates of birth, or any PHI as defined by HIPAA”
  • Financial services: “Client account numbers, portfolio holdings, or trade recommendations”
  • Legal: “Client names, matter details, or privileged communications”

Please note that your list may look different. You may have enough safety measures in place that you’re comfortable using your AI tools to process proprietary source code or sensitive client data.

A note on enterprise vs. consumer tools: Many employees don’t know the difference. Consumer tools (free ChatGPT, personal Copilot accounts) often use your inputs for model training. Enterprise tools with proper agreements typically don’t, but you need to verify this with each vendor. We’ll cover approved tools in the next section.
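To make the “Never Input” list concrete, here’s a minimal sketch (in Python) of how a team might run a lightweight pre-submission check for a few obvious red flags before text goes into an AI tool. The pattern names and regexes are illustrative assumptions, not part of the template, and this kind of check supplements, rather than replaces, your DLP tooling and the Public Test itself.

```python
import re

# Illustrative patterns for a few obvious "Never Input" items.
# These labels and regexes are assumptions for this sketch; a real
# deployment would lean on your DLP tooling and data classification
# policy, not a short regex list.
NEVER_INPUT_PATTERNS = {
    "API key or token": re.compile(r"\b(?:sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16})\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "Password assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def public_test(text: str) -> list[str]:
    """Return the reasons a prompt fails the 'Public Test', if any."""
    return [label for label, pattern in NEVER_INPUT_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Config dump: password = hunter2, api key sk-abcdefghijklmnopqrstuvwx"
    findings = public_test(prompt)
    if findings:
        print("Do not paste this into an AI tool:", "; ".join(findings))
    else:
        print("No obvious red flags found. Still apply the Public Test yourself.")
```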



Section 6: Approved Tools and Access

POLICY TEMPLATE EXTRACT

Approved Tools and Access

Approved Enterprise AI Tools:
(List your approved enterprise licenses)

  • Example: Microsoft Copilot for Business – general document assistance
  • Example: GitHub Copilot – code development

Built-in AI Features:
(List approved features in existing tools)

  • Example: Zoom AI Companion – if recording notice is given
  • Example: Slack AI – for public channel summaries only

Restricted Tools:
(List tools requiring special authorization)

  • Example: Custom API integrations
  • Example: AI tools with database access
  • Example: AI services (like Google Gemini or AWS Bedrock) that can be accessed from self-hosted software applications or workflows

Prohibited Tools:

  • Free/consumer versions of AI services for business use
  • Personal accounts for any business-related AI usage
  • Unapproved browser extensions or plugins
  • Any tool not explicitly approved

New Tool Approval Process
(Describe your existing procurement/security review process here.)

Example: Submit requests for new AI tools through the standard IT procurement process. Your IT and Security team will review to assess data handling, compliance certifications, and vendor agreements before approval.

Account Management

  • Use only company-provided AI accounts
  • Enable multi-factor authentication (MFA) when available
  • Never share login credentials
  • Report unauthorized tool usage immediately

Why this section matters: This answers several questions at once: “Do I need separate policies for ChatGPT, Copilot, and Zoom summaries?” (No—this section covers all of them.) “Can employees use personal AI accounts for work tasks?” (No—see Prohibited Tools.) “How do I handle AI tools embedded in software we already use?” (List them under Built-in AI Features with any restrictions.)

How to customize it:

1. Audit what you already have. Before filling in this section, inventory the AI capabilities in your current software stack. Microsoft 365, Google Workspace, Slack, Zoom, Salesforce, HubSpot—most major platforms now include AI features. List them explicitly; a minimal inventory sketch follows this list.

2. Distinguish between enterprise and consumer versions. For each tool, note which version you’re approving. “ChatGPT Enterprise” and “ChatGPT Free” have very different data handling practices.

3. Handle browser extensions carefully. Tools like Grammarly, Jasper, and dozens of others install as browser extensions and can see everything in the browser window. Either approve them explicitly with documented data handling practices, or add them to your prohibited list.
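As a companion to step 1 above, here’s a minimal sketch of how that inventory could live in one place so the Approved, Built-in, Restricted, and Prohibited lists in your policy stay in sync. The tool entries, field names, and statuses are illustrative assumptions, not recommendations.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    vendor: str
    tier: str       # "enterprise" or "consumer"
    status: str     # "approved", "built-in", "restricted", or "prohibited"
    notes: str = ""

# Illustrative entries only; replace with the tools in your own stack.
INVENTORY = [
    AITool("Microsoft 365 Copilot", "Microsoft", "enterprise", "approved",
           "General document assistance"),
    AITool("Zoom AI Companion", "Zoom", "enterprise", "built-in",
           "Only when recording notice is given"),
    AITool("ChatGPT (free)", "OpenAI", "consumer", "prohibited",
           "Consumer version; may train on inputs"),
]

# Print the inventory grouped the same way the policy template groups tools.
for status in ("approved", "built-in", "restricted", "prohibited"):
    matching = [tool for tool in INVENTORY if tool.status == status]
    if matching:
        print(f"{status.title()} tools:")
        for tool in matching:
            print(f"  - {tool.name} ({tool.tier}): {tool.notes}")
```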



Section 7: Roles and Responsibilities

POLICY TEMPLATE EXTRACT

Roles & Responsibilities

All Users

  • Follow this policy completely
  • Verify all AI outputs before use
  • Report policy violations per existing procedures

Managers

  • Approve conditional use cases for their teams
  • Review AI-generated content as appropriate
  • Ensure team compliance

IT/Security Team

  • Maintain approved tools list
  • Configure privacy settings and access controls on all AI platforms
  • Monitor usage for security and compliance
  • Conduct vendor security assessments
  • Disable access upon employee termination

Legal/Compliance Team

  • Monitor regulatory requirements
  • Update policy for new regulations
  • Review AI vendor agreements

Why this section matters: Policies are just words on paper until they’re put into action.  You need to make it clear who approves requests for new AI tools, who monitors for violations, and who keeps the policy updated.

How to customize it:

* Match these roles to your actual org structure. A 30-person company might have the CEO handling “Manager” responsibilities and an outsourced IT provider handling the “IT/Security Team” tasks.
* Add a review cadence: “This policy will be reviewed quarterly by the IT/Security Team and updated as new tools or regulations emerge.”


Section 8: Compliance and Regulations

POLICY TEMPLATE EXTRACT

Compliance & Regulations

General Requirements

All AI use must comply with applicable laws and regulations. Users remain responsible for compliance regardless of AI assistance.

Industry-Specific Considerations

(Select and customize relevant sections:)

  • Healthcare (HIPAA): AI tools processing PHI require Business Associate Agreements.
  • Financial Services (SEC/FINRA): AI-generated client communications need appropriate disclosures and supervisory review.
  • Government Contractors (CMMC): CUI requires FedRAMP-authorized or on-premise AI solutions only.
  • Payment Card Industry (PCI-DSS): No cardholder data in any AI tool not certified for PCI compliance.
  • Legal Services: Jurisdiction-specific disclosure requirements for AI use in client work.

Documentation Requirements

  • Log AI use in regulated processes
  • Document AI involvement in client deliverables
  • Maintain records per existing retention policies

Why this section matters: Artificial intelligence regulatory compliance varies dramatically by industry. A wealth management firm needs SEC/FINRA language. A healthcare company needs HIPAA language. A government contractor needs CMMC/FedRAMP language. Most companies can delete the sections that don’t apply.

How to customize it:

* Keep only your industry’s section(s). A dental practice doesn’t need the government contractor language.
* Add specific disclosure language. For financial services, consider adding: “AI-generated content in client communications must be reviewed by a registered principal before distribution. Disclosures should note when AI tools assisted in content creation, per current FINRA guidance on supervision.”
* Reference your retention schedule. If you have a records retention policy, cite it specifically for AI-related documentation.
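For the “Log AI use in regulated processes” requirement, here’s a minimal sketch of what an append-only AI-use log could look like. The file location and fields are assumptions for illustration; in practice these records would live in your existing records or GRC system and follow your retention schedule.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative location; in practice, store these records in your
# existing records or GRC system and follow your retention schedule.
LOG_FILE = Path("ai_use_log.jsonl")

def log_ai_use(tool: str, purpose: str, user: str, reviewer: str, deliverable: str) -> None:
    """Append one AI-use record as a line of JSON (append-only log)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        "user": user,
        "reviewer": reviewer,
        "deliverable": deliverable,
    }
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example entry documenting AI involvement in a client deliverable.
log_ai_use(
    tool="Microsoft 365 Copilot",
    purpose="First draft of quarterly client letter",
    user="jdoe",
    reviewer="asmith",
    deliverable="Q3 client letter v1",
)
```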


Section 9: Built-In AI Features

POLICY TEMPLATE EXTRACT

Built-In AI Features

(Examples only, please customize)

Meeting and Communication Platforms
AI features in approved platforms (meeting transcription, email summaries, chat assistance) may be used when:

  • All participants are notified
  • No confidential information is discussed
  • Features are configured per IT security standards
  • Outputs are treated with same caution as direct AI tool usage

Productivity Software
AI features in Microsoft 365, Google Workspace, or similar platforms:

  • Must be centrally managed by IT
  • Should be disabled by default for high-risk departments
  • Require the same data handling precautions as standalone AI tools

Why this section matters: Companies often set different (and usually more lenient) rules for built-in AI tools like Microsoft Copilot, Google Gemini, and Zoom’s AI features. The rationale is that these tools sit “inside the walled garden” of platforms like Microsoft and Google and are covered by the same security and compliance controls as the company’s email and productivity tools.

This also addresses an important legal and compliance question: “Do I need consent before enabling AI meeting transcription?” Short answer: yes. The participant notification requirement protects you from recording people without their knowledge, which has legal implications in many states.

How to customize it:

* List the specific platforms you use and their AI features
* For meeting transcription, add: “Recording and AI transcription must be announced at the start of each meeting. Participants may request that recording be stopped for sensitive discussions.”
* Consider which departments should have AI features disabled by default (HR, Legal, and Executive teams often fall into this category due to the sensitive nature of their communications)



Frequently Asked Questions

What AI tools are my employees already using without my knowledge?

More than you think. A 2024 survey found that over half of employees using AI at work haven’t told their employers. This is exactly why an AI usage policy that addresses shadow IT is essential. The most common shadow uses of AI tools that we see with our clients include:

  • ChatGPT (free version) for drafting emails and documents
  • Grammarly for writing assistance
  • AI features built into browsers (Edge, Chrome)
  • Bing Chat / Copilot through Microsoft Edge
  • AI transcription apps for meeting notes

To find out what’s actually in use, consider running an anonymous survey before rolling out your policy. Better yet, ask your IT team to use their monitoring tools to see which AI tools people are using across the company.
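If your IT team can export web proxy or DNS logs to CSV, even a quick pass over that export can surface which AI tools are already in use. Here’s a minimal sketch assuming a CSV with a “domain” column; the domain list, column name, and file path are assumptions to adapt to your monitoring platform.

```python
import csv
from collections import Counter

# Illustrative domain list; your monitoring platform will have a far
# more complete catalog of AI services.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "copilot.microsoft.com": "Microsoft Copilot",
    "grammarly.com": "Grammarly",
}

def summarize_ai_traffic(log_path: str, domain_column: str = "domain") -> Counter:
    """Count requests to known AI tool domains in an exported proxy/DNS log (CSV)."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = (row.get(domain_column) or "").lower()
            for known, tool in AI_DOMAINS.items():
                if domain == known or domain.endswith("." + known):
                    hits[tool] += 1
    return hits

if __name__ == "__main__":
    # Assumes an export named proxy_log.csv with a "domain" column.
    for tool, count in summarize_ai_traffic("proxy_log.csv").most_common():
        print(f"{tool}: {count} requests")
```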

What’s the difference between free tools and paid/business tools for data privacy?

Free tools may train on your data; business tools typically don’t (but verify this in the settings and legal agreement).

For example, when you use ChatGPT Free, OpenAI’s terms allow them to use your conversations to improve their models, unless you opt out in settings, which most people don’t know about. ChatGPT Enterprise and Team plans include contractual commitments that your data won’t be used for training.

This same pattern applies to most AI tools. The free version is the product testing ground, and your data and inputs help make it better. The paid version treats your data as confidential. Always ask for a vendor’s data processing agreement (DPA) and confirm training exclusions in writing.

Who is liable when AI-generated content contains errors—the employee or the company?

Both, depending on the situation. Here’s how to think about it:

The company is liable for harms caused by AI-generated content distributed to clients or the public. If your marketing team publishes an AI-written blog post with false claims, the company owns that liability.

The employee may face disciplinary action if they violated policy. For example, by failing to review AI outputs before publishing, or by inputting prohibited data into an AI tool.

This is why the “User Accountability” section of the AI acceptable use policy template states that the person using AI is responsible for verifying outputs. When someone signs off on AI-generated content, they’re taking ownership of it.

How do I handle contractors and third parties using AI with our data?

Add AI requirements to your vendor contracts and contractor agreements. At minimum, include:

  1. A clause requiring the third party to follow your AI acceptable use policy (or provide their own equivalent policy for your review)
  2. Notification requirements if they intend to use AI tools with your data
  3. Restrictions on which AI tools they can use and for what purposes
  4. Data handling requirements that match or exceed your internal standards

For existing contracts, send a written notice clarifying your expectations. Some companies might find it appropriate to add an AI addendum to their master service agreements that all vendors must sign.

What training do employees need before using approved AI tools?

At minimum, cover these three areas:

  1. What data can and cannot be entered — Walk through specific examples: “A client’s name” isn’t allowed; “a generic question about retirement planning” is fine.
  2. How to verify AI outputs — AI makes confident mistakes. Train employees to fact-check statistics, verify that cited sources actually exist, and check calculations independently.
  3. How to request access to new tools — Make the approval process clear so employees don’t just sign up for consumer accounts on their own.

Many companies add a short AI awareness module to their annual security training. Others require employees to acknowledge the policy in writing before being granted access to AI tools.


Putting Your AI Acceptable Use Policy to Work

We hope this helps you get started. Most organizations can adapt this AI acceptable use policy template themselves; the key is tailoring it to your organization’s specific risk profile. Start by customizing the Sensitive Data definitions and Approved Tools lists for your situation, and get input from IT, Legal, and the department heads who understand how your teams actually work.

And if you get stuck on the tricky parts—like what counts as ‘de-identified’ data or how to handle that one department that’s already using five different AI tools—we’re happy to take a look.  Get in touch, and we’ll see how we can help.


Ready to Build Your Own AI and LLM Policy?

Enter your email to get an editable version you can start modifying right away.

 


Josh Ablett

Josh Ablett, CISSP, has been meeting regulations and stopping hackers for 20 years. He has rolled out cybersecurity programs that have successfully passed rigorous audits by the SEC, the FDIC, the OCC, HHS, and scores of customer auditors. He has also built programs that comply with a wide range of privacy and security regulations such as CMMC, HIPAA, GLBA, SEC/FINRA, and state privacy laws. He has worked with companies ranging from 5 people to 55,000 people.
