
ChatGPT vs safety-specific AI: what safety leaders need to know


In short, ChatGPT can effectively support safety teams. However, in high-stakes environments, specialist AI can provide stronger context, tighter controls, and more dependable outputs:
  1. General AI tools like ChatGPT can save health and safety teams time by helping with drafting, summarising, translating, and structuring content – but they should be treated as a starting point, not a final answer.
  2. The biggest risks appear when tasks become more safety-critical. General AI may lack organisational context, rely on outdated information, or produce confident-sounding inaccuracies that create compliance and decision-making risks.
  3. Purpose-built, safety-specific AI can offer a safer and more practical alternative by working inside existing workflows, using real safety data, and supporting stronger governance, accuracy, and consistency.


AI is no longer a future discussion for health and safety teams. It is already here, and in many organisations, it is already being used informally, whether that is to summarise incident notes, draft communications, or help make sense of complex guidance.

That shift creates a real opportunity. AI can save time, reduce admin, and help safety professionals focus more of their energy on leadership action. But it also raises an important question – when is a general-purpose AI assistant enough, and when do you need something built specifically for health and safety?

This was the focus of a recent presentation from Andy Dumbell, Co-Founder and CTO at Notify, delivered for IOSH Magazine Live. His argument was not that tools like ChatGPT have no place in safety. Quite the opposite – they can be very helpful. The issue is that health and safety work often carries higher stakes, requires more context, and brings tighter legal responsibilities than a general AI tool is designed to handle. You can catch the full recording here.

How is AI already being used in health and safety?

AI adoption is moving quickly. In practice, many safety teams are already experimenting with it long before a formal AI strategy is in place.

At the same time, regulators are making clear that AI does not sit outside existing obligations. The HSE has said that UK health and safety law already applies to AI used in the workplace, because the Health and Safety at Work etc. Act 1974 covers AI just as it covers other workplace technologies. Employers still need to assess risk and put suitable controls in place.

So, the real challenge for safety leaders is not whether to engage with AI, but how to do it in a way that is useful, proportionate, and defensible.

Where can general AI help in health and safety?

General AI tools can be genuinely useful for health and safety teams.

They are particularly strong at content-heavy tasks. For example, they can help draft toolbox talks, summarise long guidance documents, structure training outlines, translate safety materials, or turn rough investigation notes into a more readable first draft. They can also help safety leaders communicate more clearly with different audiences, from frontline teams to senior leadership.

When used this way, AI can be a great productivity tool. It helps you get from a blank page to a workable draft much faster. That can be valuable in busy environments where safety teams are balancing operational pressure with things like reporting and compliance activity.

The key, though, is to treat the output as a starting point. A competent human still needs to review it, sense-check it, and take ownership of it. In other words, AI can support the first pass, but it should not replace professional judgement.


Where are the gaps when using general AI in health and safety?

General-purpose AI starts to struggle in the areas that matter most to safety professionals.

The first gap is context. A tool like ChatGPT can work with the information you give it in a session, but it does not automatically build a reliable, lasting understanding of your organisation, your site risks, your incident history, or your working practices over time. That means users often have to re-explain the same context again and again, and outputs can vary depending on what was included in the prompt.

The second gap is regulatory depth. General AI can help summarise regulations and guidance, but it does not take responsibility for knowing which rules apply to your situation, whether the source is current, or whether the interpretation is appropriate in context. That still sits with the user.

The third gap is accuracy. AI models can hallucinate. They can generate information that sounds plausible and confident, but is simply wrong. In a safety context, that matters. Whether you are reviewing an incident, creating a briefing, or interpreting legal duties, a polished answer is not the same as a correct one.

This is not a theoretical concern. UK courts have already had to deal with false citations and fabricated authorities in cases involving AI-generated material, reinforcing a simple principle that applies just as much in safety as it does in law – the human user remains responsible for the output.

What are the legal and GDPR risks of using general AI in health and safety?

For many safety leaders, the biggest concern with using general AI tools is data security.

Incident records often include personal data and, in some cases, health data. Under UK GDPR, data concerning health is special category data, which carries extra protection and requires a lawful basis under Article 6 as well as a separate condition for processing under Article 9.

That means teams need to think carefully before pasting real incident details into any external AI tool, especially if the organisation has not agreed how those tools should be used, what data can be shared, or where that data is processed.

The ICO is also clear that data protection obligations do not disappear just because AI is involved. If you are using AI with sensitive workplace data, governance matters. The tool, the configuration, the supplier arrangements, and the internal policy all matter.

For safety leaders, that should prompt a broader question – even if a general tool can do the task, is it the right environment for the data involved?


What are the alternatives to general AI in safety?

This is where purpose-built, safety-specific AI starts to earn its place.

A specialist tool can be grounded in your existing safety data and workflows rather than relying on copy and paste. It can be designed to reflect your policies, procedures, and operational context. It can be updated against relevant safety and regulatory requirements. It can also provide better controls around auditability, consistency, permissions, and data handling.

In practice, that means less friction for users and more confidence for the organisation. Instead of asking individuals to prompt well, upload the right documents, remember the right caveats, and manually verify everything each time, the system does more of that work by design.

That does not mean specialist tools are a magic fix. They still need thoughtful implementation, clear ownership, and strong data. But when the stakes are higher, they offer a much stronger fit.

How can Notify help?


At Notify, this is exactly the problem we are working to solve.

Notify is a cloud-based health and safety platform that helps organisations manage incidents, audits, inspections, risk assessments, actions, and documentation in one place. Building on that foundation, Notify Spark brings AI into the workflows safety teams already use.

Rather than sitting outside the platform, Spark is designed to work from real incident and investigation data already captured in the system. That allows it to generate post-incident summaries, executive briefings, safety briefings, toolbox talks, and root cause analysis support much more efficiently and with stronger context.

The result is AI that is embedded in safety workflows, grounded in your organisation’s data, and shaped around how your teams actually work. For safety leaders, that means less time spent on manual write-up and more time focused on learning, communication, and prevention.

Want to see it in action and explore how Notify Spark could help you save time and ensure consistency by generating high-quality post-incident documentation in seconds? Book a demo today.

If you are still exploring the wider AI landscape in health and safety, you may also find our guide to the best AI tools in health and safety useful. And if your focus is on investigations and reporting, our article on how AI is transforming safety incident analysis looks at one of the most practical use cases in more detail.

FAQs

How should safety leaders carry out due diligence on AI tools?

Good due diligence starts with more than supplier claims. A vendor should be able to explain how they measure performance, quality, and reliability, but the real test is how the system performs in your environment.

That means trialling it with your own data, your own workflows, and your own operational context. What works well for one organisation may not work in the same way for another. A sensible approach is to begin with a controlled testing phase, measure the quality of the outputs, and agree a baseline standard before rolling anything out more widely.

In short, good due diligence means validating the tool in practice, not just taking the supplier’s word for it.

Is AI worth adopting for smaller businesses?

Yes – provided it is solving a real problem.

AI can be useful for organisations of all sizes, especially where teams are being asked to do more with limited time and budget. For smaller businesses, the value may not come from advanced trend analysis or large-scale data modelling. It may come from reducing manual admin, speeding up routine tasks, and helping teams produce better-quality outputs more efficiently.

The key is to start with the pain points. Look at the tasks that repeatedly drain time or create bottlenecks, then assess whether AI or another form of technology could help streamline them without lowering quality.

Which AI models do safety software providers use, and where is data processed?

It varies.

Some providers use model APIs directly, while others use hosted environments such as Microsoft Azure to give them greater control over data handling, security, and regional processing. In more highly regulated environments, some organisations may also choose to self-host models for even tighter control.

What matters most is not just which model is being used, but how it is being deployed. Safety leaders should ask where data is processed, what controls are in place, and whether the setup supports their compliance and governance requirements.

Can AI carry out specialist or highly technical assessments?

AI can support these processes, but it should not replace specialist expertise.

With the right training data and configuration, AI can help structure information, generate drafts, or support the early stages of specialist assessments. However, highly technical or niche assessments still need human oversight from people with the relevant subject matter expertise.

The most effective approach is usually collaborative. AI helps with speed and structure, while a qualified specialist reviews, validates, and takes responsibility for the final assessment.

What is agentic AI, and how is it relevant to health and safety?

Agentic AI refers to AI that does more than respond to prompts. It is built into workflows, systems, or automations so it can help analyse information, reason through tasks, and support decision-making as part of a wider process.

In a health and safety context, that could mean combining operational data, workflow rules, and AI reasoning to generate recommendations, highlight trends, or support next actions. The real opportunity is not using AI for its own sake, but embedding it where it creates practical value inside day-to-day work.

As with any AI use case, the priority should be identifying where it can make a meaningful difference rather than simply adding technology for the sake of it.

Can AI tools access subscription-only or paywalled research?

Sometimes, but not always.

Access depends on the tool being used, the provider behind it, and whether commercial agreements are in place with publishers or data sources. Some AI platforms may be able to reference or surface information from licensed sources, while others may not have access at all.

For users, the important point is not to assume an AI tool has access to every credible source. If a piece of subscription-only research is important to your decision-making, it is still worth checking the original source directly and verifying exactly what the AI has used.