AI is no longer a future discussion for health and safety teams. In many organisations it is already being used informally, whether to summarise incident notes, draft communications, or make sense of complex guidance.
That shift creates a real opportunity. AI can save time, reduce admin, and help safety professionals focus more of their energy on leadership action. But it also raises an important question – when is a general-purpose AI assistant enough, and when do you need something built specifically for health and safety?
This was the focus of a recent presentation from Andy Dumbell, Co-Founder and CTO at Notify, delivered for IOSH Magazine Live. His argument was not that tools like ChatGPT have no place in safety. Quite the opposite – they can be very helpful. The issue is that health and safety work often carries higher stakes, requires more context, and brings tighter legal responsibilities than a general AI tool is designed to handle. You can catch the full recording here.
How is AI already being used in health and safety?
AI adoption is moving quickly. In practice, many safety teams are already experimenting with it long before a formal AI strategy is in place.
At the same time, regulators are making clear that AI does not sit outside existing obligations. The HSE has said that UK health and safety law already applies to AI in the workplace: the Health and Safety at Work etc. Act 1974 covers AI just as it covers any other workplace technology. Employers still need to assess risk and put suitable controls in place.
So, the real challenge for safety leaders is not whether to engage with AI, but how to do it in a way that is useful, proportionate, and defensible.
Where can general AI help in health and safety?
General AI tools can be genuinely useful for health and safety teams.
They are particularly strong at content-heavy tasks. For example, they can help draft toolbox talks, summarise long guidance documents, structure training outlines, translate safety materials, or turn rough investigation notes into a more readable first draft. They can also help safety leaders communicate more clearly with different audiences, from frontline teams to senior leadership.
When used this way, AI can be a great productivity tool. It helps you get from a blank page to a workable draft much faster. That can be valuable in busy environments where safety teams are balancing operational pressure with reporting and compliance demands.
The key, though, is to treat the output as a starting point. A competent human still needs to review it, sense-check it, and take ownership of it. In other words, AI can support the first pass, but it should not replace professional judgement.

Where are the gaps when using general AI in health and safety?
General-purpose AI starts to struggle in the areas that matter most to safety professionals.
The first gap is context. A tool like ChatGPT can work with the information you give it in a session, but it does not automatically build a reliable, lasting understanding of your organisation, your site risks, your incident history, or your working practices over time. That means users often have to re-explain the same context again and again, and outputs can vary depending on what was included in the prompt.
The second gap is regulatory depth. General AI can help summarise regulations and guidance, but it does not take responsibility for knowing which rules apply to your situation, whether the source is current, or whether the interpretation is appropriate in context. That still sits with the user.
The third gap is accuracy. AI models can hallucinate. They can generate information that sounds plausible and confident, but is simply wrong. In a safety context, that matters. Whether you are reviewing an incident, creating a briefing, or interpreting legal duties, a polished answer is not the same as a correct one.
This is not a theoretical concern. UK courts have already had to deal with false citations and fabricated authorities in cases involving AI-generated material, reinforcing a simple principle that applies just as much in safety as it does in law – the human user remains responsible for the output.
What are the legal and GDPR risks of using general AI in health and safety?
For many safety leaders, the biggest concern with using general AI tools is data security.
Incident records often include personal data and, in some cases, health data. Under UK GDPR, data concerning health is special category data, which carries extra protection and requires a lawful basis under Article 6 as well as a separate condition for processing under Article 9.
That means teams need to think carefully before pasting real incident details into any external AI tool, especially if the organisation has not agreed how those tools should be used, what data can be shared, or where that data is processed.
The ICO is also clear that data protection obligations do not disappear just because AI is involved. If you are using AI with sensitive workplace data, governance matters. The tool, the configuration, the supplier arrangements, and the internal policy all matter.
For safety leaders, that should prompt a broader question – even if a general tool can do the task, is it the right environment for the data involved?

What are the alternatives to general AI in safety?
This is where purpose-built, safety-specific AI starts to earn its place.
A specialist tool can be grounded in your existing safety data and workflows rather than relying on copy and paste. It can be designed to reflect your policies, procedures, and operational context. It can be updated against relevant safety and regulatory requirements. It can also provide better controls around auditability, consistency, permissions, and data handling.
In practice, that means less friction for users and more confidence for the organisation. Instead of asking individuals to prompt well, upload the right documents, remember the right caveats, and manually verify everything each time, the system does more of that work by design.
That does not mean specialist tools are a magic fix. They still need thoughtful implementation, clear ownership, and strong data. But when the stakes are higher, they offer a much stronger fit.
How can Notify help?

At Notify, this is exactly the problem we are working to solve.
Notify is a cloud-based health and safety platform that helps organisations manage incidents, audits, inspections, risk assessments, actions, and documentation in one place. Building on that foundation, Notify Spark brings AI into the workflows safety teams already use.
Rather than sitting outside the platform, Spark is designed to work from real incident and investigation data already captured in the system. That allows it to generate post-incident summaries, executive briefings, safety briefings, and toolbox talks, and to support root cause analysis, all more efficiently and with stronger context.
The result is AI that is embedded in safety workflows, grounded in your organisation’s data, and shaped around how your teams actually work. For safety leaders, that means less time spent on manual write-ups and more time focused on learning, communication, and prevention.
Want to see it in action and explore how Notify Spark could help you save time and ensure consistency by generating high-quality post-incident documentation in seconds? Book a demo today.
If you are still exploring the wider AI landscape in health and safety, you may also find our guide to the best AI tools in health and safety useful. And if your focus is on investigations and reporting, our article on how AI is transforming safety incident analysis looks at one of the most practical use cases in more detail.