As AI continues to dominate conversation and make its way into your workplace, you need to make sure you have an AI policy in place.
Why?
Having a clear policy that outlines how to use AI safely not only gives your team clear guidance, but also helps with compliance.
AI is a powerful tool, but it comes with significant risks, including:
- Private data being uploaded into public tools
- IP and copyright issues around the content it produces
- Compliance with upcoming and existing regulation
- Ethical risks surrounding bias or false information
- Reduction in quality or accuracy of work
By creating an AI policy, you actively support your employees in adopting the technology safely. Without one, you risk two extremes: employees overusing AI without safeguards, or avoiding it entirely and falling behind.
Microsoft research shows that 75% of employees are already using AI. If you don’t manage it now, it will only bring more risk.
What should I include in a workplace AI policy?
Here are our recommendations for what to include in your AI policy:
Clear Definition
Define what you mean by AI within your policy. For example: generative AI, chatbots, machine learning. This removes any confusion among employees about which types of AI can and can’t be used.
Data Handling
Set clear restrictions on what data can and cannot be entered into AI tools, and how those tools can be used.
For example: are you happy for employees to use them to draft emails? Do you want to restrict their use for data analysis?
AI Tools and Uses
State an approved list of AI tools that can be used and how they can be used.
Check your approved tools to see if there is an option to stop the data entered from being used to train other AI tools.
Our recommendation would be to allow Microsoft Copilot and block access to all other AI tools. Unlike some AI engines, Copilot does not use the files you upload to train its generative models.
It’s important to highlight ethics around using the tools too. Ensure staff are aware of potentially biased outputs, and that they must not use AI to create deepfakes, impersonations, or anything misleading.
Training and Awareness
Train your staff regularly on data security and AI to help them feel confident in using it safely.
AI will continue to develop, and so will the regulations surrounding it. It’s important to keep your employees up to date to minimise the risks of its use.
Monitoring and Compliance
Put in place monitoring for AI tool usage and block access to any unapproved AI tools on your network.
Write AI into your existing policies, reinforce your data protection and information security policies, and carry out regular compliance checks.
Clearly outline how suspected misuse of AI in the workplace can be reported.
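As one illustration of how monitoring might work in practice, the sketch below checks outbound hostnames against an approved-tools allowlist. The domain names and the `is_approved` helper are hypothetical examples for this article, not a recommended blocklist; a real deployment would use your firewall or DNS filtering product instead.

```python
# Minimal sketch: flag requests to AI tools that are not on the approved list.
# The allowlist entries here are illustrative assumptions, not an endorsement.

APPROVED_AI_DOMAINS = {
    "copilot.microsoft.com",  # example of an approved tool
}

def is_approved(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is on the allowlist."""
    parts = hostname.lower().split(".")
    # Check the full hostname, then each successively shorter parent domain.
    for i in range(len(parts)):
        if ".".join(parts[i:]) in APPROVED_AI_DOMAINS:
            return True
    return False

# Decide whether to allow each request, or block it and log it for review.
for host in ["copilot.microsoft.com", "chat.example-ai.com"]:
    action = "allow" if is_approved(host) else "block and log"
    print(f"{host}: {action}")
```

In practice the "block and log" branch is where a report would be raised, tying the technical control back to the reporting route your policy describes.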
Anything Business Specific
Your organisation may have specific processes that need their own rules.
Allowances may even differ by department. Consider how the work you do and the tools you use for your business could be impacted.
AI doesn’t need to be overwhelming. With clear guidance in your policy, you support your employees in understanding how to use these tools responsibly and effectively.