A solid ISO 12100 risk assessment example does not start with a spreadsheet. It starts with machine limits, real tasks, real access, real energy sources, and one uncomfortable question: what actually happens when an operator, technician, or electrician gets involved? The worst kind of risk assessment is not the chaotic one. The worst kind looks professional. It has scores, standard references, comments, and a polished final PDF that makes everyone feel the issue is closed. But ISO 12100 does not grade document aesthetics. It cares whether the safety process was actually carried out and documented.
That is where the trouble starts. A spreadsheet does not protect logic. It does not force the team to move from machine limits to real task scenarios. It does not remind anyone that a protective measure can create secondary risk. It does not preserve a strong audit trail. And when risk reduction begins, it often lets teams do what industry loves most: jump straight from a hazard to a guard, a procedure, or a warning as if the ISO 12100 sequence were optional decoration. It is not. It is the backbone.
ISO 12100 risk assessment example: why a polished spreadsheet can still fail
Let us get one myth out of the way. Can you do a risk assessment in Excel? Yes. Of course you can. That is not the real question.
The real question is whether the spreadsheet can support a process you can later reconstruct, verify, and defend technically, logically, and during an audit. That is the line between a table and engineering.
Excel is not risk assessment documentation
ISO 12100 does not ask for Excel. It asks for risk assessment documentation. Those are not the same thing.
A spreadsheet can be useful. It can collect hazards, scores, standard references, and comments. It can be a team notebook, a draft, or working data. It may even look tidy. Still not enough. If it cannot turn a chain of decisions into defensible risk assessment documentation, it remains a spreadsheet. Useful working data, maybe. Documentation, not yet.
Good risk assessment documentation has to show more than what was entered into cells. It has to show:
- which assumptions the team worked from,
- what the machine limits were,
- which users and tasks were considered,
- which hazardous situations and hazardous events were analyzed,
- how risk was estimated,
- which protective measure was selected,
- in what order risk reduction was applied,
- whether secondary risk appeared,
- how residual risk was evaluated,
- and who made the key decisions.
If the file cannot show that chain, then the file may look professional, but it is still only working data. ISO 12100 does not ask whether the file looks complete. It asks whether the risk assessment process was actually performed and documented.
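To make that chain concrete, here is a minimal sketch in Python of what one documented case could carry. The field names are illustrative assumptions, not a schema prescribed by ISO 12100. The point is simpler: every link in the chain becomes an explicit, reviewable field instead of an implicit cell.

```python
from dataclasses import dataclass, field

@dataclass
class RiskAssessmentCase:
    """One hazardous situation or hazardous event, carrying its full decision chain."""
    case_id: str                # e.g. "HS-07" (illustrative numbering)
    machine_limits: str         # limits of use, space, and time the team assumed
    users_and_tasks: str        # who is involved and what task they perform
    hazard: str                 # e.g. "crushing by horizontal carriage motion"
    hazardous_situation: str    # the person exposed to the hazard, in context
    initial_risk: str           # how risk was estimated, not just a score
    protective_measures: list[str] = field(default_factory=list)  # in the order applied
    secondary_risks: list[str] = field(default_factory=list)      # what each measure changed
    residual_risk: str = ""     # evaluation after re-assessment
    decided_by: str = ""        # who accepted or rejected the residual risk, and why
```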
The order of risk reduction is not optional
This is where many assessments quietly fall apart. The team identifies a hazard and jumps straight to a guard, an interlock, a procedure, or an instruction. Fast. Familiar. Wrong.
ISO 12100 requires a sequence. First come inherently safe design measures. Then safeguarding and complementary protective measures. Only then comes information for use. Reverse that order and you do not have an efficient shortcut. You have broken the logic of risk reduction.
The tool does not save you from that mistake. Not Excel. Not a generic spreadsheet. Not even dedicated safety software if it only digitizes cells and does not protect the engineering reasoning behind them.
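A tool will not stop that jump. But it can at least make the jump visible. Here is a minimal sketch; the tier encoding and the deliberately crude rule are my assumptions, not anything ISO 12100 specifies.

```python
from enum import IntEnum

class MeasureTier(IntEnum):
    # The ISO 12100 three-step method, in its mandatory order.
    INHERENTLY_SAFE_DESIGN = 1
    SAFEGUARDING = 2          # guards, interlocks, complementary protective measures
    INFORMATION_FOR_USE = 3   # warnings, instructions, training

def order_findings(applied: list[MeasureTier]) -> list[str]:
    """Flag cases that jump to a later tier with no record of the earlier
    tiers. Deliberately crude: a real review asks *why* a tier was ruled
    out, which no tool can answer for the team."""
    findings = []
    for tier in (MeasureTier.INHERENTLY_SAFE_DESIGN, MeasureTier.SAFEGUARDING):
        if applied and min(applied) > tier:
            findings.append(f"Nothing at tier {tier.value} ({tier.name}): justify why it was excluded.")
    return findings

# The classic shortcut: hazard -> warning sign, nothing else. Two findings.
print(order_findings([MeasureTier.INFORMATION_FOR_USE]))
```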
ISO 12100 risk assessment example: where secondary risk shows up
Secondary risk is the moment when weak assessments start pretending the problem is gone. A protective measure does not end the process. A protective measure changes the risk picture.
Sometimes the change is clearly positive. Sometimes it solves one problem and creates another. If you do not check that second effect, your documentation describes a world that looks safer than the real one.
Mechanical example: an interlocked movable guard can shift the problem
Take a packaging machine where operators clear jams near dangerous motion. The team adds an interlocked movable guard. On the surface, that is the right move. Access to the hazard zone during automatic operation is restricted. Good.
But then reality kicks in. During cleaning and jam clearing, the operator now has to reach deeper into the machine through a narrower opening, twist the wrist into an awkward position, and work with worse visibility. The primary mechanical risk dropped, but a new question appears: did the protective measure create a new problem with access, ergonomics, or trapping points during intervention?
There is an even uglier version of the same mistake. A large hinged guard solves a minor rubbing hazard but introduces a pinch or shearing hazard when the guard drops or swings uncontrollably. Now the protective measure may create consequences more severe than the original problem. That is secondary risk, and it is not a side note.
Automation example: a correct interlock can still provoke bypass
Now look at control logic. A team adds an interlock, manual reset, and restart prevention after guard opening. On paper, the safety function is correct. So far so good.
The problem starts when the new sequence makes every intervention slower, more repetitive, and more annoying. If the logic is implemented without checking how people actually work, you can create a behavioral secondary risk: operators resetting by reflex, poor visual check of the zone before restart, attempts to bridge protective devices, or pressure to speed the process up outside the intended safety concept.
In other words, the safety function itself may be technically correct, yet the implementation still pushes the human side of the system toward unsafe behavior. That is not a detail. That is the kind of thing that turns a good-looking design into a bypassed design.
Electrical example: good intentions inside the control cabinet can backfire
Electrical cases are no different. Suppose there is shock risk during diagnostics in the control cabinet. The team tightens things up: better segregation, isolation procedure, verification of absence of voltage, access restricted to authorized people. Sounds solid. Often it is solid.
But what if the diagnostic process becomes longer, more dependent on the exact order of steps, and more vulnerable to organizational shortcuts? Then you need another question: did we reduce electrical risk while increasing the chance of procedural error, skipped steps, or unclear responsibility?
There is also a classic trap with equipotential bonding. Someone notices an issue on one machine part and decides to improve safety by adding equipotential bonding only to that one section. The intention is good. The execution can be bad. If the measure is applied selectively, one part of the machine may end up with a different reference potential than the rest. Under fault or transient conditions, that can increase the risk of potential difference when a person touches two parts at once. A measure introduced as an electrical improvement can make the overall situation worse if it is fragmentary and not assessed as part of the whole system.
That is what ties these examples together. In every case, someone did something that looked reasonable at first glance: added a guard, improved the control logic, added equipotential bonding. The problem was not that the protective measure existed. The problem was that nobody checked what it did to the rest of the human-machine-energy-process system.
Audit trail is not admin. It is proof that someone actually thought
Many teams treat the audit trail as optional system decoration. Big mistake.
In practice, the problem is rarely that the document has no result. The result is usually there. A table exists. A score exists. A protective measure exists. The trouble starts later, when somebody asks simpler and far more dangerous questions:
- Who changed this evaluation?
- When was it changed?
- Why was the risk considered acceptable?
- Why was one case handled with information for use while another needed a safety function?
- Why was PL d selected for this function?
The final PDF shows the destination. The audit trail shows the road. In risk assessment, the road matters.
It shows whether decisions were consistent, whether the team revisited assumptions, whether re-evaluation happened after a protective measure was added, whether secondary risk was noticed, and whether somebody accepted something too quickly just to close the document.
Here is a common failure mode. A hazardous situation is changed from medium to low risk. The spreadsheet shows only the latest state. Fine. But why did it change? Was a protective measure added? Were the task parameters corrected? Was the scenario redefined after a better analysis? Or did someone simply want the document to stop causing trouble? Without an audit trail, those very different situations look identical.
Another classic case: a team writes low risk and acceptable, but the file gives no reason. Was exposure rare? Was the likely injury minor? Were only trained people involved in controlled conditions? Or did the number just look low enough that nobody wanted to revisit it? Again, without an audit trail, there is no difference between justified engineering and cosmetic cleanup.
And then there is the most dangerous version: PL d appears in the document, everything looks serious and professional, but nobody can explain what exact safety function required it, what it monitors, what it stops, or whether the decision came from the actual risk assessment or from habit. If the answer is habit, that is not engineering. That is copy-paste confidence.
An audit trail is not there mainly for the auditor. It is the engineering memory of the project. Without it, you are not defending the risk assessment. You are defending only the last saved version.
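One way to keep that engineering memory is an append-only change log per case: nothing is edited in place, and every re-evaluation is a new entry with who, when, what, and why. A minimal in-memory sketch; the case IDs and names are illustrative, and a real system would persist and protect these entries.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    case_id: str
    timestamp: datetime
    author: str
    change: str      # what changed, e.g. "risk: medium -> low"
    rationale: str   # the part the final PDF never shows

class AuditTrail:
    """Append-only by construction: entries are added, never edited or removed."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, case_id: str, author: str, change: str, rationale: str) -> None:
        self._entries.append(AuditEntry(
            case_id, datetime.now(timezone.utc), author, change, rationale))

    def history(self, case_id: str) -> list[AuditEntry]:
        return [e for e in self._entries if e.case_id == case_id]

# The medium-to-low change from above, this time with its reason preserved:
trail = AuditTrail()
trail.record("HS-07", "j.novak", "risk: medium -> low",
             "interlocked movable guard added; exposure during jam clearing re-estimated")
```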
ISO 12100 risk assessment example: one machine, sixteen hazardous situations
Now let us get concrete. Take an automated packing station. Not a vague machine on paper. A real system with dangerous motion, operator intervention, jam clearing, maintenance activity, electrical and pneumatic energy, and safety functions dependent on control logic.
In this ISO 12100 risk assessment example, the analysis did not stop at two or three generic entries. It produced sixteen distinct cases: thirteen task-based hazardous situations and three hazardous events in line with ISO 12100 logic.
The scenarios covered things such as minor intervention during operation, jam clearing, adjustment of protective devices, functional testing, energy isolation, replacement of wear parts, fault diagnostics, unintended start-up, object ejection, and pneumatic energy release.
That level of separation is not bureaucracy. It is the only way to see the real risk picture. A well-run assessment does not evaluate the machine in general. It evaluates the relationship between person and machine during specific tasks, in specific zones, under specific conditions of access.
And that is exactly what weak assessments miss. They talk about dangerous motion in general, mechanical risk in general, electrical risk in general. Real risk does not happen in general. It happens during a task.
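In data terms, the difference is the key. A sketch, with zones and wording assumed from the packing station scenarios above: hazardous situations indexed per task, zone, and role, never per machine.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskContext:
    task: str   # e.g. "jam clearing", "functional testing"
    zone: str   # e.g. "infeed guard opening", "control cabinet"
    role: str   # e.g. "operator", "maintenance electrician"

# A hazardous situation is keyed by its task context, never by
# "the machine in general". Two of the packing station scenarios:
situations: dict[TaskContext, str] = {
    TaskContext("jam clearing", "infeed guard opening", "operator"):
        "access to dangerous motion during recovery",
    TaskContext("fault diagnostics", "control cabinet", "electrician"):
        "contact with live parts during fault finding",
}
```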
What this example revealed in practice
First, the main problem was not the mere existence of dangerous motion. The main problem was access to it during intervention, jam clearing, and fault diagnostics.
Second, not every scenario deserved the same response. Some cases could legitimately be accepted as low risk, but only because the documentation clearly limited frequency, execution conditions, and user role. That kind of acceptance is defensible only when the logic is visible.
Third, several medium cases were impossible to defend with a lazy note such as guard plus instruction. Those cases required real risk reduction, clear description of the safety function, restart prevention, manual reset conditions, access rules, and a residual risk evaluation that actually meant something.
Fourth, the example exposed why the decision chain matters. Initial risk estimation alone tells you very little. A real assessment has to show the sequence: initial evaluation, decision on acceptability, selection of protective measure, check for secondary risk, re-evaluation, and then residual risk. Simplified documents usually show only the last line and hide the entire engineering process that led there.
Take jam clearing inside a guarded zone. Initial risk may be unacceptable because a person can access dangerous motion during recovery. The solution might include an interlocked movable guard, interlock monitoring, restart prevention, and manual reset outside the hazard zone. But that is still not the end. The team then has to re-evaluate: does the new arrangement reduce visibility, worsen reach, or encourage shortcut behavior? If that second pass does not happen, the residual risk decision is weak from the start.
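That second pass can be forced structurally. Here is a sketch of the loop, reusing the illustrative RiskAssessmentCase from earlier. The callables stand in for engineering judgment; code cannot supply that, it can only refuse to skip a step.

```python
def reduce_risk(case, estimate, acceptable, select_measure, secondary_check,
                max_rounds=5):
    """Estimate -> decide -> measure -> secondary-risk check -> re-estimate.
    Adding a measure always changes the risk picture, so re-evaluation is
    part of the loop, not an optional extra."""
    risk = estimate(case)
    for _ in range(max_rounds):
        if acceptable(risk):
            break
        measure = select_measure(case, risk)        # respecting the three-step order
        case.protective_measures.append(measure)
        case.secondary_risks.extend(secondary_check(case, measure))  # never skipped
        risk = estimate(case)                       # the arrangement changed; re-estimate
    case.residual_risk = risk
    return risk
```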
Or take access to live parts during fault finding in the control cabinet. You cannot wave that away with trained personnel only. The documentation has to show the conditions for isolation, verification of de-energized state, restrictions on access, and whether the revised diagnostic procedure introduced a new path for human error.
What a defensible ISO 12100 risk assessment example must prove
If you want a document that holds up, it has to do more than display rows of hazards and scores. It has to prove that the team understood the machine, the tasks, the users, and the consequences of the chosen protective measures.
At minimum, a defensible example should make the following visible:
- intended use and reasonably foreseeable misuse,
- machine limits,
- who uses the machine and who intervenes inside it,
- the real tasks carried out during operation, cleaning, set-up, diagnostics, maintenance, and recovery,
- the relevant hazard, hazardous situation, and hazardous event for each case,
- initial risk estimation,
- the selected protective measure and the order of risk reduction,
- the check for secondary risk,
- re-evaluation after the change,
- residual risk and the basis for acceptance or rejection,
- and a clear audit trail showing who decided what, when, and why.
That is the difference between looks professional and is defensible. It matters during CE work under the Machinery Directive 2006/42/EC (or the new EU Machinery Regulation (EU) 2023/1230), and it matters even more after a customer challenge, an audit, or an incident.
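Even the presence of that evidence can be checked mechanically, at the crude level of whether a field was filled in at all. A sketch over the illustrative RiskAssessmentCase fields from earlier; presence is all a tool can verify, and the engineering quality behind each field still needs a human.

```python
REQUIRED_TEXT_FIELDS = [
    "machine_limits", "users_and_tasks", "hazard", "hazardous_situation",
    "initial_risk", "residual_risk", "decided_by",
]

def defensibility_gaps(case) -> list[str]:
    """Return the fields left empty. List fields (measures, secondary risks)
    are deliberately excluded: an empty list can be a legitimate finding,
    and only review can judge that."""
    return [name for name in REQUIRED_TEXT_FIELDS if not getattr(case, name, None)]
```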
So yes, you can build a file in Excel. You can also build one in a dedicated system. The tool is not the real argument. The real argument is simpler and harder: can your documentation prove the logic of the process? If not, you do not yet have risk assessment documentation. You have a table. Sometimes a tidy one. Sometimes an impressive one. Still just a table.