Do I need an AI policy before letting staff use ChatGPT, Copilot or Gemini at work?
8 May 2026
Yes. You should have an AI policy in place before staff make routine use of ChatGPT, Copilot or Gemini at work. For most UK businesses, the first version can be a practical 2 to 4 page policy covering approved tools, banned data, acceptable use cases, checking requirements, client confidentiality, UK GDPR responsibilities and escalation. Without one, staff will still use AI, but they will make up their own rules.
Why you need a policy before staff start using AI tools
The blunt answer is that staff are probably already using AI, with or without a policy. The risk is not that someone asks ChatGPT to rewrite a dull email. The risk is that someone pastes in a customer complaint, a spreadsheet of prospects, private HR notes, unreleased product plans or client legal correspondence because nobody has told them where the line is.
UK government research published by DSIT in 2026 found that around 1 in 6 UK businesses were already using at least one AI technology. Among businesses using AI, 85 percent used natural language processing and text generation, and on average around 30 percent of their staff were using AI. That matters because ChatGPT, Copilot and Gemini are no longer niche IT tools. They are everyday work tools.
The same DSIT research found that 84 percent of businesses using AI had at least some human input or checking over AI outputs, with 67 percent reporting significant input or checking. That is the right instinct. Your policy should turn it into a rule: AI can assist, but a named person remains accountable for the final work.
An AI policy is not mainly about stopping people. It is about making useful AI use safe enough to scale. If your only rule is "do not use AI", people will either ignore it or fall behind. If your rule is "use these approved tools, for these approved tasks, with these data limits", you get productivity without pretending the risks do not exist.
What should a simple workplace AI policy actually cover?
For most SMEs, the first version should be short enough that people read it. Two to four pages is usually better than a long document nobody opens. The policy should answer seven practical questions.
- Which tools are approved? Name them. For example: ChatGPT Business, Microsoft 365 Copilot Chat, Microsoft 365 Copilot, Gemini for Google Workspace or an approved internal tool. Do not leave staff guessing whether a free consumer account is acceptable.
- What information is banned? Ban customer personal data, employee records, passwords, access keys, unpublished financials, commercially sensitive contracts, client confidential information and regulated data unless the tool and use case have been approved.
- What work is allowed? Good first uses include rewriting public copy, creating first draft meeting agendas, summarising non-confidential notes, brainstorming marketing ideas, explaining public documentation and drafting internal templates.
- What work is not allowed? Ban unsupervised legal advice, HR decisions, medical advice, financial recommendations, automated customer decisions, final client deliverables without review and anything that makes a decision about a person.
- Who checks the output? Require human review for accuracy, tone, bias, confidentiality and suitability before anything leaves the business.
- How do staff disclose AI use? Decide when staff must tell a manager, client or colleague that AI helped produce the work.
- Who owns exceptions? Give one person or role the authority to approve new tools, sensitive use cases and experiments.
You should also connect the policy to your existing data protection, information security, acceptable use and disciplinary policies. AI should not sit in a separate fantasy world. If customer data cannot be emailed to a personal Gmail account, it should not be pasted into a personal AI account either.
What are the UK GDPR and security issues?
The UK GDPR point is simple: if personal data goes into an AI tool, your organisation is still responsible for how that data is processed. The AI vendor does not magically take the risk away. You need a lawful basis, a clear purpose, data minimisation, security, retention controls and a way to respect individual rights where those rights apply.
The ICO has made clear in its AI and data protection guidance that accountability, governance, transparency, lawfulness, fairness and accuracy all matter in AI systems. Its consultation work on generative AI has also highlighted questions around lawful basis, purpose limitation, accuracy and data subject rights. You do not need to be a lawyer to understand the practical implication: do not let staff put personal data into tools unless you have checked the tool, the purpose and the safeguards.
The NCSC gives a similar leadership message from a security angle. Its AI and cyber security guidance is aimed at managers, board members and senior executives with a non-technical background. It says leaders do not need to be technical experts, but they should know enough about AI risks to discuss them with key staff. That is exactly what a practical AI policy does.
There are four risks I would put at the top for a UK SME. First, confidential data leakage through prompts or uploads. Second, inaccurate outputs being treated as fact. Third, staff relying on free consumer tools when business-grade privacy settings are available. Fourth, unclear accountability when AI-generated work causes a client, customer or employee problem.
Are ChatGPT Business, Microsoft Copilot and Gemini safer than free consumer AI tools?
Usually, yes, if they are configured properly and used under business terms. But "safer" does not mean "risk free". You still need rules.
OpenAI says that by default it does not use data from ChatGPT Enterprise, ChatGPT Business, ChatGPT Edu, ChatGPT for Healthcare, ChatGPT for Teachers or its API platform, including inputs or outputs, to train or improve its models. Its business data privacy page also describes encryption in transit and at rest, and retention controls for qualifying organisations.
Microsoft says Microsoft 365 Copilot is compliant with existing Microsoft 365 commercial privacy, security and compliance commitments, including GDPR and the EU Data Boundary. Its documentation also says prompts, responses and data accessed through Microsoft Graph are not used to train foundation large language models used by Microsoft 365 Copilot.
Google says in its Workspace Gemini privacy material that for business, education and enterprise users, chats and uploaded files are not reviewed by human reviewers or used to train generative AI models outside the customer's domain without permission.
These business protections are useful. They are a strong reason to prefer approved business accounts over random free tools. But a vendor privacy promise does not tell staff what to upload, what not to upload, when to check facts, whether AI can write client advice or whether a customer should be told AI was involved. That is your policy's job.
| Tool route | Typical risk | Policy position |
|---|---|---|
| Free personal AI account | Weak organisational control, unclear business retention and staff use outside company accounts | Do not use for confidential, personal, client or commercially sensitive work |
| Business AI account | Better contractual controls, but still needs permissions, review and staff training | Approved for defined use cases after configuration check |
| AI built into Microsoft 365 or Google Workspace | Can access large amounts of internal content if permissions are messy | Approve only after permission review and clear staff guidance |
What should you do before day one of staff AI use?
Do not overcomplicate this. Before day one, do these five things.
- Choose the approved tools. Pick one or two. For example, Microsoft 365 Copilot Chat if you are a Microsoft business, Gemini for Google Workspace if you are a Google Workspace business, or ChatGPT Business for a cross-team AI assistant.
- Write a red, amber and green data list. Green means public or non-sensitive. Amber means internal but not personal or client confidential. Red means personal data, client confidential data, credentials, contracts, HR, finance, legal and regulated material.
- Publish five allowed use cases. Make it concrete. "Rewrite this public blog draft" is allowed. "Analyse this employee sickness record" is not.
- Train staff for one hour. Cover examples, data boundaries, hallucinations, bias, source checking and escalation.
- Create an approval route. If someone wants to use AI with sensitive data or automate a workflow, they ask first.
That first version is enough to stop most preventable mistakes. You can improve it later. The worst approach is waiting six months for a perfect policy while staff quietly build habits you will have to unwind.
When this does NOT apply
You do not need a full AI governance programme before someone uses AI to improve a public LinkedIn post, summarise a public government page or draft a generic agenda. If there is no personal data, no client confidential information, no regulated decision and no final output going out unchecked, a light rule is enough.
You also may not need a separate AI policy if you already have strong policies that explicitly cover generative AI. The word "explicitly" matters. A general IT acceptable use policy written in 2019 is not enough. It will not deal with uploaded documents, prompts, model training, hallucinated references, AI-generated code, copyright uncertainty or the temptation to automate judgement calls about people.
Finally, a policy is not a substitute for good tooling. If your Microsoft 365 permissions are a mess, Copilot may expose that mess faster. If staff have access to files they should not see, AI can make the problem more visible. Fix permissions and governance alongside the policy.
The practical answer for UK business leaders
If you have fewer than 50 staff, start with a simple policy this week. Do not wait for enterprise governance. Write the rules, choose the tools, train the team and revisit the policy in 90 days.
If you have 50 to 250 staff, add a small approval process, a DPIA trigger for personal data use cases, tool ownership by IT or operations and quarterly reporting to leadership.
If you handle regulated, sensitive or high-risk data, involve your data protection lead, legal adviser or external specialist before approving anything beyond low-risk use cases.
The honest answer is that an AI policy will not make AI risk disappear. It will stop the avoidable mistakes: personal data pasted into the wrong tool, unreviewed output sent to a client, private files exposed through bad permissions and staff assuming "the AI said so" is evidence. That is enough reason to write one before you scale usage.
If you want to explore whether your current AI use is safe enough, book a short call with Precise Impact AI. No pitch, no pressure. We will tell you where the real risks are and whether you need a policy, a tool review, staff training or nothing more than a clear one-page rule sheet.
Is This Right For You?
This applies if your staff handle customer information, employee records, contracts, financial data, code, proposals, board papers, client work, sales pipelines or anything commercially sensitive. In other words, it applies to most real businesses.
It does not mean you need a huge governance programme before anyone can ask AI to summarise a public article. Start small. Write a clear policy, approve a short tool list, train staff for one hour and review it every quarter. That is enough for many SMEs.
This may not apply if you are a sole trader using only your own non-sensitive information, or if your organisation already has mature data protection, information security and acceptable use policies that explicitly cover generative AI. Even then, check the wording. Most older policies do not cover prompts, uploaded files, generated outputs, hallucinations or AI-generated client advice.
Frequently Asked Questions
Can staff use free ChatGPT at work?
For public, non-sensitive tasks, sometimes. For customer data, employee data, client confidential work, financial information, contracts, passwords, code secrets or regulated work, no. Use an approved business tool with company controls instead.
Do I need a lawyer to write an AI policy?
Not for a first version in a normal SME. You need clear rules on approved tools, banned data, review, accountability and escalation. Get legal or data protection advice if staff will process personal data, automate decisions about people or use AI in regulated work.
Does Microsoft Copilot remove the need for an AI policy?
No. Microsoft 365 Copilot gives stronger enterprise controls than random consumer tools, but it does not decide your internal rules. You still need to define permitted use cases, data limits, permissions, human review and accountability.
Is an AI policy enough for UK GDPR compliance?
No. A policy helps, but UK GDPR compliance also depends on lawful basis, transparency, minimisation, security, retention, processor terms, risk assessment and data subject rights. For high-risk uses, consider a DPIA before launch.
How long should a workplace AI policy be?
For most small businesses, 2 to 4 pages is enough. If it is longer than that, staff may not read it. Put detailed legal or technical material in appendices, not in the practical staff rules.
Should we ban AI until the policy is ready?
If sensitive data is involved, pause that use until rules are in place. For low-risk public content tasks, you can allow limited use under temporary rules while you finish the policy. A total ban is rarely realistic.
How often should we review the AI policy?
Review it after the first 90 days, then at least every six months. Review sooner if you adopt a new AI tool, connect AI to internal systems, process personal data or have an incident.