On Thursday 19 June, we welcomed clients and local businesses to Norwich Theatre for a morning of knowledge sharing on Copilot, AI and cyber security.
After a light breakfast and time to network, Breakwater Co-Owner John Gostling took the floor to talk all things AI, covering ethics, examples of AI used badly and used well, live demos of Copilot, and AI policies. Here are the discussion notes from the morning:
The Good, The Bad, and The Ugly of AI
The Ugly:
When it comes to AI, there are ethical concerns we should all consider:
- Bias: There is a risk of bias in the responses AI gives you, based on the data that has been used to train it.
- Data privacy: Do you know what data has been used to train it? And are you putting sensitive data into it?
- Accountability: If AI gives you or an employee an incorrect response or data, who is held accountable?
- Environmental: Materials used, electronic waste, and high water and electricity usage.
“Globally, AI-related infrastructure may soon consume six times more water than Denmark” Reference Article
“Training GPT-4, the latest iteration of ChatGPT, used twice as much electricity as Norwich would use in a year.” Reference Video
Key point: This isn’t to say you shouldn’t use AI; a Google search also uses electricity, water and so on. However, AI’s usage of these resources is already said to be on a far higher scale. We should, therefore, be considerate about how we use it and what for.
The Bad:
John gave a few examples of how AI has been misused, including stories of companies pretending to use AI and of AI giving false information. You can read the stories through the links below, or download and view our presentation further down.
An ‘AI’ fast food drive-thru is mostly just human workers in the Philippines
HP Support Number:
An HP user asked ChatGPT for a support number. The number it gave turned out to be a scam, which the user realised when the ‘support agent’ pressured them to give remote access to their device. Ironically, they then asked ChatGPT if the telephone number was a scam, and it replied “yes”.
The lesson here: be careful about trusting all the information AI gives you.
The Good:
How to write good prompts for AI:
1. Tell your AI who it needs to be
Most people just type WHAT they want, without thinking about WHO they want to answer it: “ACT AS A… [IT ENGINEER]”
2. Give your AI context
Explain your situation and let it understand why: “I’M TRYING TO… [UNDERSTAND THIS PROBLEM]”
3. Be specific
Don’t say: “I need a script”. Say: “CREATE… [a PowerShell script that will export the top 10 largest mailboxes in terms of items]”. A sketch of the kind of script this prompt should produce appears after this list.
4. Format for the function
Tell the AI how it should format its response so it works for you: “FORMAT THE OUTPUT AS… [A table with these headings…]”
Key point: Remember that even when you use the exact same prompt, AI may not always give you the same response.
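Putting the four elements together, a prompt such as “Act as an IT engineer. I’m trying to understand which of our mailboxes hold the most items. Create a PowerShell script that exports the top 10 largest mailboxes in terms of items. Format the output as a CSV file.” should produce something along the lines of the sketch below. This is a minimal, hypothetical example assuming Exchange Online, the ExchangeOnlineManagement module, and an account permitted to read mailbox statistics; the output file name is illustrative.

```powershell
# Minimal sketch of the script the example prompt asks for. Assumes the
# ExchangeOnlineManagement module is installed (Install-Module ExchangeOnlineManagement).
Connect-ExchangeOnline

Get-Mailbox -ResultSize Unlimited |
    Get-MailboxStatistics |
    Sort-Object -Property ItemCount -Descending |
    Select-Object -First 10 -Property DisplayName, ItemCount, TotalItemSize |
    Export-Csv -Path .\Top10MailboxesByItems.csv -NoTypeInformation
```

As the key point above notes, the same prompt may return a differently structured script each time, so always review AI-generated scripts before running them.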
Microsoft Copilot:
At this point in the presentation, we focused on Microsoft Copilot. John discussed the differences between Copilot in Windows (free) and Copilot for Microsoft 365 (paid licence). You can recap this here.
Key point: Microsoft Copilot protects your data within your tenant. It will not send your data beyond your tenant or use it to train other AI models, such as ChatGPT.
John then went through some live use cases for Copilot, covering both the free and paid versions, as well as some recently released features.
AI Policy:
Microsoft research shows that 75% of employees are already using AI. But do you have a policy in place for how to use it safely in your organisation?
Here are some of the key things you should include in your AI policy:
Data Handling
Restrict certain data from being entered into AI engines. This may include not putting client data into them.
AI Tools
State an approved list of AI tools that can be used. Our recommendation would be to allow Copilot and block access to all other AI tools; one simple endpoint-level way to do this is sketched after this list.
Training and Awareness
Train your staff regularly on data security and AI.
Monitoring and Compliance
Put in place monitoring for AI tool usage, and regular compliance checks.
Anything Business Specific
There may be more specific processes within your organisation that you must outline. For example, our policy outlines the use of PowerShell scripts and states that AI must not be used for this.
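To make the AI Tools point concrete, here is a hypothetical sketch of one blunt way to block non-approved AI tools on an individual Windows endpoint: null-routing their domains via the hosts file. The domain list is illustrative rather than exhaustive, and most organisations would enforce this centrally (DNS filtering or a web proxy) rather than per machine.

```powershell
# Hypothetical sketch: block non-approved AI tools on a Windows endpoint by
# null-routing their domains in the hosts file. Run from an elevated
# (administrator) PowerShell session. The domain list is illustrative only.
$blockedDomains = @('chat.openai.com', 'gemini.google.com', 'claude.ai')
$hostsPath = "$env:SystemRoot\System32\drivers\etc\hosts"

foreach ($domain in $blockedDomains) {
    # Append an entry only if the domain is not already listed
    if (-not (Select-String -Path $hostsPath -Pattern $domain -SimpleMatch -Quiet)) {
        Add-Content -Path $hostsPath -Value "0.0.0.0 $domain"
    }
}
```

A hosts-file entry only deters casual use on a single machine; pairing it with the monitoring and compliance checks above is what makes the policy enforceable.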
Following John’s talk, we welcomed Sasha Roshan, Sales Engineer at Huntress, to discuss how AI has impacted cyber security, and how Huntress can protect your business from these threats.
Sasha highlighted how AI crime is rising rapidly, covering:
AI-Driven Phishing:
- Deepfake video and voice phishing being used for impersonation
- AI chatbots being used to harvest credentials
- Automatically personalised phishing emails
- Faster malware distribution
Malware Enhancements:
- Faster, automated ways to create new malware
- Self-mutating malware
Risks of AI Use:
- Giving private company data to AI engines
- Overreliance on AI, causing increasing errors
- Rapid changes in legislation that your organisation needs to keep up to date with
The key tips Sasha left our attendees with were:
- Reassess and simplify security policies for today’s hybrid work reality
- Implement 24/7 Threat Detection + Response
- Conduct regular security audits and vulnerability assessments
- Update + practice a living Incident Response Plan
- Shift from ‘Zero Trust’ theory to practical identity and access hardening
- Conduct continuous, adaptive Security Awareness Training
- Perform regular, verified backups + test recovery readiness
- Centralise and standardise documentation in a shareable platform
- Embrace automation + AI to reduce manual tasks and boost productivity
- Benchmark your stack against frameworks like Cyber Essentials, NIST or CIS