Identifying Use Cases for Using AI in Safety Processes

How can Safety Leaders begin to identify areas in their current processes where AI can offer the greatest value?
To identify where AI can offer the greatest value, start by mapping out your current safety processes – from incident reporting to audits, risk assessments, and communication. For each step, ask: is this task repetitive? Does it involve large amounts of data? Is it a bottleneck or commonly delayed? These questions will help you pinpoint areas ripe for automation or AI support.
Next, talk to your team. Ask what slows them down or feels like “admin for admin’s sake.” Common candidates include writing up reports, summarising findings, reviewing repetitive incidents, or extracting insights from large datasets. If they’re spending hours copying info between systems, chasing overdue actions, or formatting documents – AI can likely help.
Finally, test the waters with practical pilots. Start small – use AI to draft toolbox talks, summarise a risk assessment, or suggest actions based on previous reports. Measure the time saved or quality improved, and build from there. The goal isn’t to replace human judgement but to remove the grind and make space for smarter, more proactive safety work.

In what practical ways can AI assist me in my daily tasks?
AI can save you time on everyday tasks by acting like a smart assistant. It can summarise incident reports, draft emails, write safety briefings, or turn messy notes into polished documents. Instead of starting from scratch, you can prompt an AI tool to give you a solid first draft. You review, tweak, and move on faster. That alone can save hours over a week.
It can also help you spot trends in your safety data. Say you’ve got hundreds of near-miss reports – AI can sift through them, highlight recurring issues, and suggest where you might focus your next audit or toolbox talk. It’s like having an extra pair of eyes that never gets tired, helping you move from reactive to proactive safety management.
Finally, AI can engage and assist frontline teams too. Think voice-to-text reporting, automatic hazard suggestions, or translating reports into different languages. These kinds of tools lower the barrier to reporting and improve the flow of information across the business.
It’s about making safety easier, not more complex, for you and your team.
Is Notify able to suggest mitigations to lower the risk of the incident happening again? Or would you need to put the risk assessment into the system as well?
Yes, it can already suggest mitigations to help reduce the risk of similar incidents happening again. And we’re planning to link risk assessments into the system in future, so the guidance can become even more contextual, pulling in relevant controls, hazards, and existing measures to strengthen the recommendations.
Does the Notify software depend on the quality and detail in the incident report? We often have issues with people not providing much detail or context in incident reports.
Yes. The quality of the analysis absolutely depends on the quality of the incident report. If the report is vague, missing context, or poorly written, then the AI can only do so much – it’s working with limited input, so the output will be limited too. Think of it like asking someone to investigate a blurry photo. They can guess, but they’re not seeing the full picture.
But remember, you can use these tools as the incident goes through its life cycle, for example during the investigation.
That said, we also plan to use AI to flag when reports are too light, suggest what’s missing, or even prompt users to add more detail in future. Over time, this can actually improve reporting quality by giving real-time nudges and feedback. So while it can’t fix bad data on its own, it can play a role in encouraging better habits.
We have also created an assessment tool that allows us to sample the quality of our solution for our customers and helps ensure we have the right quality of data.
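To illustrate the kind of real-time nudge described above, here’s a rough, generic sketch of how an LLM could be prompted to flag missing detail in an incident report. This is not Notify’s implementation – the list of required details and the wording are assumptions you’d tune to your own reporting form.

```python
# Hypothetical sketch: build a completeness-check prompt for an incident report.
# Generic illustration only; adjust REQUIRED_DETAILS to match your own form.

REQUIRED_DETAILS = [
    "what happened",
    "when and where it happened",
    "who was involved or affected",
    "immediate actions taken",
]

def completeness_prompt(report_text: str) -> str:
    """Build a prompt asking an LLM to flag missing or vague details."""
    return (
        "You are reviewing a workplace incident report for completeness.\n"
        f"Report:\n{report_text}\n\n"
        "List any of the following that are missing or too vague: "
        + ", ".join(REQUIRED_DETAILS)
        + ". Reply with a short bullet list, or 'Nothing missing'."
    )

# Example: a vague report like this should prompt the reporter for more detail.
print(completeness_prompt("Someone slipped near the loading bay. No injuries I think."))
```

The resulting prompt can be sent to whichever LLM service you use – one way of doing that is sketched later in this article.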

How would Notify’s AI assist in accident and incident findings – underlying and root causes, etc.? These findings are exhaustive and thorough, and they require human resource. I don’t understand how Notify’s AI can fulfil that element of reporting. AI can suggest scenarios for a lunch and learn, but incident reporting requires knowing exactly what happened, down to the bone – so how would Notify’s AI handle this critical aspect?
You’re absolutely right, identifying root causes is a critical, human-led process, and AI can’t replace that level of judgement or on-the-ground insight. What our software does is support that process, not take it over.
It helps by analysing the incident report and any linked data to suggest likely contributing factors and root causes. Think of it as a digital assistant – not someone doing the investigation for you, but someone who’s read every similar case and can suggest valid insights, or even nudge you to look deeper or from another angle.
Implementation & Using AI in Safety – Tools and Integration

Do you feel it’s best to invest in self-hosting or go with an “off-the-shelf online” option for AI when using it for Safety?
It depends on your risk profile and how sensitive your safety data is. If you’re dealing with highly confidential or regulated information, like personal injury reports, health records, or legally sensitive investigations, self-hosting or using a private cloud option gives you more control over data residency, access, and security. It’s a bigger upfront investment, but you’re not handing critical data to a third party.
That said, for many organisations, a well-vetted off-the-shelf AI tool can still be a smart and safe choice, as long as it’s transparent about how your data is handled. Look for tools that offer enterprise-grade privacy settings, let you opt out of data being used for model training, and align with regulations like GDPR. These options are often easier to deploy, quicker to use, and regularly updated with new features.
In short: if your AI use cases are high-risk or you need to meet strict compliance standards, self-hosting may be worth it. But if you’re starting small and want to prove value first, a trusted online AI service with strong controls can strike the right balance between agility and security.
We operate a project management platform across 15-20 temporary sites. Most data is recorded via the platform – how would an LLM or bespoke AI model typically access the data, which requires a personal login?
To access data from a project management platform that requires personal login, a language model or bespoke AI typically wouldn’t connect directly as a user – that’s not secure or scalable. Instead, you’d usually take one of two approaches:
1. API Integration – If your platform offers an API (most do), you’d create a secure integration where the AI system connects via an authorised service account or token. This allows controlled, auditable access to the data your team permits, like incidents, reports, or project updates – without sharing personal logins. You can also apply role-based access to limit what the AI sees (a rough sketch of this approach follows below).
2. Data Sync / Export Layer – If direct integration isn’t possible, you might sync selected platform data to a secure data lake or database (e.g. nightly exports of incident logs or risk assessments). The AI model can then analyse that data safely, without ever touching the live system.
In both cases, you’ll want strong data governance, making sure the AI only accesses what it needs, keeps logs of what it touches, and never stores sensitive information unless encrypted and authorised. The goal is to let AI assist with insights, summaries, or decision support, while your platform remains the single source of truth.
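To make the first approach more concrete, here’s a minimal sketch of pulling incident records through a platform API using a service-account token rather than a personal login. Everything in it is a placeholder – the base URL, endpoint, token variable, and field names would need to match your platform’s actual API.

```python
# Hypothetical sketch of approach 1: pulling incident data via a service-account token.
# The base URL, endpoint path, and parameters are placeholders, not a real platform API.
import os
import requests

BASE_URL = "https://your-platform.example.com/api/v1"   # placeholder
TOKEN = os.environ["PLATFORM_SERVICE_TOKEN"]            # service-account token, not a personal login

def fetch_recent_incidents(site_id: str, limit: int = 50) -> list[dict]:
    """Fetch recent incident records the service account is permitted to see."""
    response = requests.get(
        f"{BASE_URL}/incidents",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"site": site_id, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# The returned records can then be passed to an LLM for summarising or trend analysis,
# without anyone sharing a personal login.
incidents = fetch_recent_incidents(site_id="site-07")
print(f"Retrieved {len(incidents)} incident records")
```

The same pattern applies to the export-layer option – you’d simply read from the synced database or data lake instead of the live API.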
How would you export the info out of ChatGPT / LLMs and build this into your own existing systems? Would you need an expert company to make that happen – unsure of the mechanism for putting this to work?
To get information out of ChatGPT or any LLM and into your existing systems, you typically connect via the OpenAI API (or whichever model provider you’re using). That lets you send data into the LLM (e.g. an incident report), get a response back (e.g. a summary, insight, or action suggestion), and then pipe that response into your systems – like your project platform, a SharePoint doc, or an internal dashboard.
Here’s a simplified view of how it works (a minimal code sketch follows the list):
- Your system (or middleware) sends a request to the LLM API with a prompt and your data.
- The model returns a result – e.g. a summary, root cause analysis, action suggestions.
- That result is either displayed to a user, saved, or used to trigger another workflow.
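As a rough illustration (not the only way to do it), here’s what that round trip might look like in Python, assuming the openai package (v1.x) and an API key set in the environment. The model name, the example report, and the final “save it somewhere” step are placeholders you’d swap for your own setup.

```python
# Minimal sketch of the request/response round trip described above.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

incident_report = "Forklift reversed into racking in warehouse B. No injuries. Racking damaged."

# 1. Your system sends a request to the LLM API with a prompt and your data.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a health and safety assistant."},
        {"role": "user", "content": f"Summarise this incident and suggest two actions:\n{incident_report}"},
    ],
)

# 2. The model returns a result - a summary and suggested actions.
summary = response.choices[0].message.content

# 3. Pipe the result into your own system (placeholder: here we just print it;
#    in practice this might be a SharePoint upload, a dashboard, or a workflow trigger).
print(summary)
```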
You don’t necessarily need an expert company, but if you don’t have in-house dev capability, partnering with someone who understands APIs, secure data handling, and prompt engineering will speed things up.
You can also start with no-code tools (like Power Automate, n8n, or Zapier) that let you connect AI services to your existing systems without writing code. These platforms offer pre-built connectors to both AI services and business systems, making it possible to create workflows like “When a new incident is logged, send it to GPT for analysis, then add the insights to a SharePoint document and notify the safety team.”

I’m a big ChatGPT user for efficiency, fact checking, and reviewing – a second look for gaps in lengthy RAMS and risk assessments. I was recently at the annual customer forum for Evotix. Two questions: how would your system, if we couldn’t stretch to the AI bolt-on, be better than Evotix? And would the AI be skewed if, for example, reports or documents haven’t been reviewed by the H&S team, or users have entered data incorrectly into accident reports?
Even without AI, we’ve designed Notify to prioritise usability, clarity, and practical workflows that align with how safety teams actually operate. That means faster access to critical info, intuitive reporting, and less “click-heavy” admin. We also focus on flexibility – whether it’s tailoring forms, setting up smart actions, or managing investigations – without needing constant vendor intervention. So even before AI enters the mix, our baseline is a system that feels lighter, easier, and faster to work with.
Yes, AI can only work with what it’s given. If the input data is poor (e.g. vague incident reports, incorrect classifications, or unchecked entries), then the insights it provides could be off the mark. That’s why we always position AI as an assistive tool, not a decision-maker.
Accuracy, Trust and Reliability of AI and Safety Tools
What if I try AI and it gives me the wrong answer? Can I trust it?
AI isn’t always right. It can make mistakes, especially without clear context or good data. Think of it like a junior colleague: useful, fast, and often helpful, but still needs checking. You can trust AI to assist, but not to make final decisions without human oversight. Spend time evaluating the tools and you’ll get a handle on their quality, where they shine, and where they don’t.

AI (Copilot in my case) is not infallible. It can get things wrong, like referencing incorrect or out of date legislation. How do you keep aware and fact check the AI?
You’re absolutely right. Tools like Copilot can be incredibly helpful, but they’re not always accurate, especially when it comes to referencing legislation or technical standards. I treat AI like a fast-thinking assistant, not a legal expert. It’s great at speeding up research and drafting, but I always verify anything important using trusted, up-to-date sources – particularly when working in regulated environments.
For fact-checking and more up-to-date deep research, I’ll often use a second AI tool like Perplexity, which cites its sources and makes it easier to double-check claims. If something feels off, I follow my gut and dig deeper, whether that’s checking with a subject matter expert, going back to the legislation directly, or asking the AI a follow-up question to test its reasoning. The goal isn’t perfection, it’s to build a process where mistakes get caught before they matter.

AI is a fantastic tool for speed and for creating documents and reports, but doesn’t this make people lazy and lose the importance of involving the human brain in creating such important information and in analysing it? How can we implicitly trust what AI generates? If we can’t be 100% trusting of its content, we are exposed to risk – so what’s the point in using it?
You’re absolutely right to raise this – it’s a tension at the heart of using AI in any critical field.
AI is brilliant for speed, structure, and getting you 80% of the way there, but it doesn’t understand the content the way a human does. If we start skipping the thinking bit because “the AI already wrote it,” we risk losing the deep judgement, insight, and accountability that actually make safety work meaningful and reliable. That’s when AI stops being a tool and starts becoming a risk.
The value of AI isn’t in replacing people, it’s in removing the grunt work so professionals can focus on what really matters. We should never trust AI implicitly. We trust our process: generate with AI, then verify, challenge, and adapt using human expertise.
That’s the sweet spot: faster output without compromising quality or critical thinking.
My feeling is that ChatGPT/AI will only encourage formulaic risk assessments rather than ensuring accurate risk assessments are completed. What are your thoughts?
That’s a valid concern. AI can definitely churn out generic or formulaic content if it’s used blindly. But when used well, it can actually enhance accuracy by prompting users to think critically, highlight overlooked hazards, or compare against best practices. The key is using AI as a thinking partner, not a copy-paste machine – it should support professional judgement, not replace it.
I’d like to know Andy’s thoughts on the recent paper regarding LRMs, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity”? The limitations of LLMs are already largely known, in that they will produce so-called “hallucinations” that look right, read right, but don’t contain accurate information.
I’ve not read the paper but understand the question.
The idea that LRMs can simulate intelligent thinking but often fail when tasks become truly complex resonates with what I see in practice. It’s easy to get lulled into a false sense of confidence when the output sounds right.
Another related issue I see is that sometimes the research is stale and gives outdated facts. There are approaches to limit this risk, but it’s still a thing.
My own approach is to treat these tools like fast collaborators, not truth machines. I often use more than one for deep research and then use Perplexity to help me fact-check things. It’s a great tool, and it includes citations and reference links to check.
When something matters (especially in regulated or safety-critical contexts), I always cross-reference across tools and sources. And if I’m not sure about a result, I listen to that little voice that says “something feels off” – gut instinct still matters. I’ll often follow up with more questions to probe the logic or dig deeper into the why.

Does the usage of AI help to control the data? Specifically personal data?
AI doesn’t automatically help you control personal data. In fact, it can introduce more risk if not handled properly. Some AI systems process large volumes of sensitive information, and if that data isn’t well-governed (e.g. stored securely, anonymised where possible, and access-controlled), it can lead to privacy breaches. To use AI responsibly, you need clear data policies, ensure compliance with laws like GDPR, and choose tools that prioritise data protection by design.
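As one small, concrete example of data protection by design, here’s a rough sketch of stripping obvious personal identifiers from free text before it goes anywhere near an external AI service. The patterns are deliberately simple illustrations; proper anonymisation needs a careful review of what counts as personal data in your reports.

```python
# Rough sketch: redact obvious personal identifiers before sending text to an AI service.
# The regex patterns are simple illustrations and will not catch everything
# (names, for example, are untouched); treat this as a starting point, not a complete solution.
import re

PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b\d[\d\s-]{8,}\d\b"),
    "[NI_NUMBER]": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-Z]\b"),  # UK National Insurance format
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

report = "Contact Jane on jane.smith@example.com or 07700 900123 about the ladder incident."
print(redact(report))
# -> "Contact Jane on [EMAIL] or [PHONE] about the ladder incident."
```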
Legal, Compliance and Regulatory Implications of Using AI in Safety

What are the risks of using AI in safety? Should we be worried?
Using AI in safety comes with risks, but most are rooted in human factors, like biased data, over-reliance, or lack of transparency. If you blindly trust AI outputs without review, you could miss critical context or make poor decisions based on flawed assumptions. The key is using AI as a co-pilot, not a replacement – helpful, but always under human oversight.
In my area, healthcare, most prosecutions are a result of not doing accurate and specific risk assessments. How will AI reduce the risk of prosecutions?
AI can help reduce the risk of prosecutions by improving the quality, consistency, and timeliness of risk assessments but only when used as a support tool, not a shortcut. It can prompt assessors with relevant hazards, suggest control measures, and surface insights from past incidents, helping teams cover more ground. Crucially, it still relies on human expertise to apply context and ensure assessments are specific, thorough, and legally defensible.
Has anyone had experience of how the HSE currently view using AI in safety management decision making?
In May 2025, the Health and Safety Executive (HSE) published a report titled “Understanding How AI is Used in HSE-Regulated Sectors”.
AI has the potential to revolutionise submissions of Safety Cases and similar documents to regulators. I believe it is extremely important that the regulatory authorities know whether the document has been written by a human being, by an AI, or by a combination of both. There is a high likelihood of AI saying exactly the right things in exactly the right ways BUT not reflecting the actual business reality. I would be interested in the speaker’s views on this and how regulators might address and deal with such usage.
I agree that transparency around how a document was created, especially in safety-critical contexts like Safety Cases, is important, but I’m more interested in how it’s been verified. AI can generate content that sounds authoritative and ticks all the right regulatory boxes, but that doesn’t guarantee it reflects how things actually work on the ground. If a Safety Case is written by AI with limited oversight, there’s a real risk it becomes more about presentation than substance, which is dangerous when lives and compliance are on the line.
This is fundamentally a human problem, not an AI problem. It’s less about whether AI contributed to the document – which will become standard practice for most content in the future – and more about what governance has been applied, how the content has been verified, and ensuring this verification is demonstrable.
When something goes wrong, “AI didn’t get it right” might be cited as part of the root cause, but the true cause will almost always be lack of proper oversight. Regulators need to ask whether the content was checked and verified by someone competent. There should be a clear record of human review and ownership, particularly for sections where AI tools have been involved.
A good analogy is financial auditing. It’s fine to use automation, but ultimately, a responsible person has to sign off and be accountable.
I’d like to see regulators introduce guidance around AI-assisted submissions. That might include a declaration of AI usage, evidence of human validation, and perhaps even updated standards on documentation traceability.
Used well, AI can improve consistency, reduce duplication, and save time, but it must never replace the deep domain understanding and accountability that sits at the heart of safe operations.

How does Andy think, assuming AI is used more and more, that OSH responsibility will be handled from a legal standpoint in the future? For example, how far away are we from someone standing up in court and claiming “but ChatGPT said it was reasonably practicable / compliant to only do X”?
I don’t think we’re far off from someone trying that defence – but I personally don’t think it’ll stick.
Responsibility in OSH will remain firmly with the duty holder. AI might inform decisions, but it doesn’t remove accountability. If someone stands up in court and says, “ChatGPT told me it was OK,” they’ll still be asked: Did you verify it? Did you apply professional judgement? Did you meet your legal obligations under the regulations? The courts won’t accept AI as a get-out-of-jail-free card.
The legal expectation will be that AI is a tool you can use to help manage your safety processes, not a shield. So, if you use it, document it, review it, and be able to show that human expertise was applied before acting on it. Delegating your safety brain to a chatbot won’t cut it.
What are the risks around AI/cyber security in relation to information uploaded to AI?
One of the main risks when using AI tools is inadvertently uploading sensitive or confidential information into systems you don’t fully control. If you’re using a public AI service like ChatGPT without proper safeguards, that data could be stored, analysed, or used to further train the model, depending on the provider’s terms. In regulated environments like health and safety, that could mean breaching GDPR or exposing sensitive incident or employee data.
There’s also a broader cyber security risk: AI systems can be targeted just like any other digital platform. Poorly secured third-party AI tools may become entry points for attackers, especially if they’re integrated into internal workflows or granted access to other systems (like email or file storage). If an AI tool gets compromised, anything you’ve fed into it, or anything it can access, could be exposed.
The best defence is a clear AI usage policy. Only use trusted tools that let you opt out of data sharing, avoid uploading identifiable or sensitive information into public models, and make sure IT and security teams are involved when embedding AI into business systems. AI can make you faster, but without proper data governance, it can also make you vulnerable.
What about moral responsibility for AI action?
Moral responsibility for AI actions ultimately rests with the humans who design, deploy, and oversee the system. AI doesn’t have intent or values; it simply follows patterns based on the data and instructions it’s given. That means organisations and individuals using AI are responsible for its outcomes, especially in safety-critical environments where poor decisions can have real-world consequences.
It’s tempting to blame “the system” when something goes wrong, but AI is a tool, not a scapegoat. Just like with any piece of machinery or software, we need clear accountability, governance, and human judgement in the loop to make sure AI is used ethically and safely.
Adoption, Culture and Change Management when Using AI in Safety
How would you recommend H&S professionals upskill in the AI space to stay ahead of the curve?
Start by getting hands-on with AI tools – not to become a tech expert, but to understand what they can (and can’t) do. Tools like ChatGPT, Perplexity, and Copilot are great for exploring how AI can support writing, analysis, and decision-making in day-to-day safety work.
Next, focus on digital literacy: learn how data flows in your organisation, how decisions are made using that data, and where AI could fit. You don’t need to code, but understanding the basics of data quality, prompt writing, and ethical use will go a long way.
And finally, stay curious. Follow thought leaders, attend webinars, listen to podcasts, and join cross-functional conversations. The goal isn’t to become an AI expert, it’s to become the safety leader who knows how to apply it wisely, and find the right tools.
How do I convince my team to start using AI when they are sceptical or overwhelmed?
Start small by showing how AI can take away boring or repetitive tasks they already dislike, like drafting reports or summarising data. Give them low-risk, practical examples and let them experiment in a safe space without pressure. Once they see real benefits, the fear tends to fade, and curiosity takes over.
As AI evolves rapidly, how can Safety Teams stay agile and continually adapt their practices to stay ahead of risk?
Start by creating a culture of experimentation – encourage your team to test AI tools in small, low-risk areas and share what works. Stay connected to industry trends, attend webinars, and build relationships with tech-savvy peers who can help filter the noise.
Most importantly, treat AI like a moving target: review your tools and practices regularly, adapt based on what’s changing, and keep humans in the loop to guide decisions with context and judgement.
Is it not going to lose the human element in accident management and controls? When carrying out an investigation, a lot of what people are saying is based on their body language and their tone – how will AI capture that?
That’s a really important point and you’re right to be cautious.
AI can support investigations by helping organise evidence, highlight patterns, or even draft initial summaries, but it can’t replace the human element. It doesn’t pick up on hesitation, discomfort, or shifts in tone — the things you notice in a face-to-face conversation that tell you there’s more beneath the surface. Those soft signals are often where the real insight lies.
So while AI might streamline parts of the process, the investigative skill, empathy, and professional judgement of a trained person remain absolutely critical. In my view, AI should be used to support better conversations, not replace them.
What is your experience of the potential cultural impact on a workforce following the introduction of an AI-based safety system?
Introducing an AI-based safety system, like anything new, can have a real cultural impact, both positive and challenging. It’s about change management for me.
On the positive side, it can boost engagement by making safety processes feel more modern, efficient, and responsive. For example, AI that helps write reports or surface insights can reduce admin load and encourage more frequent reporting, especially from frontline teams who might otherwise feel it’s a chore.
But there can also be resistance – some people worry AI is watching them, replacing their judgement, or making safety too impersonal. The key is transparency: explain what the AI does (and doesn’t do), emphasise that it’s there to support people, not replace them, and involve workers in shaping how it’s used. Done well, AI can reinforce a proactive, learning-focused culture – but only if it’s introduced with care and trust.
Final thoughts on Using AI in Safety
As AI evolves, its impact on safety management will only grow. But successful adoption isn’t about replacing people, it’s about removing the manual grind so safety leaders like you can focus on high-value work like risk strategy, compliance, and cultural change.
Using AI in safety offers huge potential, if it’s applied with clarity, caution, and collaboration.
Book a demo to see how Notify’s safety software and AI-ready safety tools are helping Safety Leaders like you save time and proactively reduce risk – all while keeping humans at the heart of decision-making.