Why Road Geometry Drives Single‑Vehicle Crashes: Insights from Haworth (2015)

MotoScience | Research‑Backed Riding Insight
Study referenced:

Characteristics of road factors in multi and single vehicle motorcycle crashes in Queensland (Narelle Haworth, 2015, Centre for Accident Research & Road Safety – Qld (CARRS-Q), Faculty of Health; Institute of Health and Biomedical Innovation)

Purpose of the Study

Motorcyclists accounted for 6.4% of all police‑reported crashes and 12.5% of fatal crashes in Queensland between 2004 and 2011. Within this, 43% were single‑vehicle (SV) crashes and 57% were multi‑vehicle (MV) crashes.

Although overall motorcycle crashes declined during the study period, this masked a crucial divergence: SV crashes increased while MV crashes decreased.

Haworth’s study set out to understand:

    • how SV and MV crashes differ,
    • which road‑environment factors predict each type, and
    • why the two crash types are following opposite long‑term trends.

The analysis used descriptive comparisons and regression modelling to examine the influence of road geometry (horizontal and vertical alignment) and surface condition (sealed/unsealed, wet/dry) on crash occurrence.
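The study's actual regression models aren't reproduced here, but the kind of association being tested can be sketched with a toy odds-ratio calculation. The counts below are invented purely for illustration; they are not Haworth's Queensland data:

```python
# Illustrative only: a toy 2x2 table showing how crash counts can be
# compared across a road feature. All counts are INVENTED for demonstration.
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
              feature present   feature absent
       SV           a                 b
       MV           c                 d
    """
    return (a / b) / (c / d)

# Hypothetical counts: SV and MV crashes on curves vs straights.
sv_curve, sv_straight = 120, 180   # invented
mv_curve, mv_straight = 60, 340    # invented

or_curves = odds_ratio(sv_curve, sv_straight, mv_curve, mv_straight)
print(f"Odds ratio (curves, SV relative to MV): {or_curves:.2f}")
```

An odds ratio well above 1 (here about 3.8) would indicate that curves are much more strongly associated with single-vehicle crashes than with multi-vehicle ones, which is the shape of the pattern the study reports.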

Key Findings

1. Single‑vehicle and multi‑vehicle crashes follow different trends

Across the 2004–2011 period:

    • Single‑vehicle crashes increased, despite overall crash reductions
    • Multi‑vehicle crashes decreased

This indicates that the two crash types are driven by different mechanisms and should not be treated as a single category.

2. Road geometry is a major predictor of single‑vehicle crashes

The regression models showed that SV crashes were strongly associated with:

    • Tight or complex horizontal curves
    • Significant vertical alignment changes (crests, dips)
    • Combinations of both (crest‑into‑bend, dip‑into‑bend)

These features increase the perceptual and control demands placed on riders, particularly in terms of:

    • preview
    • speed planning
    • lean‑angle judgement
    • grip management

These geometric factors had much weaker effects on MV crashes.

3. Surface condition matters more for single‑vehicle crashes

SV crashes were more likely on:

    • wet surfaces
    • unsealed surfaces
    • surface transitions

This reinforces that SV crashes are sensitive to traction‑related errors and rider‑road interaction.

MV crashes showed little sensitivity to these factors.

4. Multi‑vehicle crashes are dominated by traffic interactions

MV crashes were more strongly associated with:

    • intersections
    • turning movements
    • right‑of‑way conflicts
    • visibility and expectation failures by other drivers

Road geometry and surface condition played a comparatively minor role.

Implication for Motorcyclists: Single-Vehicle and Multi-Vehicle Crashes Happen in Different Ways

The study reinforces a critical distinction:

1. Single‑vehicle crashes are “road‑demand failures”

They occur where the road environment exceeds the rider’s available capacity at that moment. Riders are most vulnerable when:

    • preview is restricted
    • geometry changes rapidly
    • vertical alignment hides what’s coming
    • surface grip is reduced or unpredictable

These are perceptual‑cognitive challenges, not simply “going too fast”.

2. Multi‑vehicle crashes are “traffic‑interaction failures”

They arise from:

    • being unseen
    • being unexpected
    • being misjudged by other drivers

This is where defensive positioning, conspicuity, and anticipation of right‑of‑way violations matter most.


Why this matters for riders

Riders need different strategies for SV vs MV risk. Road design plays a larger role in SV crashes than commonly acknowledged. Training should emphasise perceptual and predictive skills in high‑demand geometry. Practical takeaways for riders include:

    • Improve preview and speed planning on curves.
    • Adjust early for surface transitions.
    • Use defensive positioning to mitigate MV risk.

Conclusion

Motorcycle crashes are often reported as a single category, but Haworth’s analysis of Queensland crash data shows something far more important: single‑vehicle (SV) and multi‑vehicle (MV) crashes not only behave differently and respond to different risk factors, but in this location and in this time period, they are also following different long‑term trends. When two crash types move in opposite directions over the same period, it’s a signal that the underlying mechanisms are being pushed by different forces.

For MotoScience, this distinction is crucial. It helps us separate rider‑road interaction failures from traffic‑interaction failures, and it gives us a clearer view of how road design shapes rider error.

Why Familiar Roads Create Slow Reactions: Insights from TRL’s PPR313

MotoScience | Research‑Backed Riding Insight
Study referenced:
Driver Reaction Times to Familiar but Unexpected Events (Coley, Wesley, Reed & Parry, 2010 — TRL PPR313)

Purpose of the Study

The report investigates how quickly drivers respond to unexpected events that occur in otherwise familiar driving environments. The central question:

“Does familiarity with the environment speed up or slow down reaction times when something unexpected happens?”

This is directly relevant to crash causation analysis, because many real‑world crashes occur on roads the rider/driver knows well — where expectation, complacency, and attentional narrowing all interact.

Key Findings

1. Expectation is the dominant factor in reaction time

The study reinforces a well‑established human‑factors truth: Reaction times double when an event is unexpected compared to when it is expected. This aligns with broader literature on perception–response time (PRT).

In familiar environments, drivers often predict what will happen next — which is efficient most of the time, but catastrophic when the prediction is wrong.

2. Familiarity can increase vulnerability

Counterintuitively, the report suggests that familiarity does not necessarily improve reaction times. Instead:

      • Drivers in familiar environments may allocate less attention to monitoring for hazards.
      • They rely more heavily on expectation and schema-driven perception.
      • When an unexpected event occurs, the “expectation violation” adds cognitive delay.

This is consistent with the broader cognitive psychology principle that schema conflict slows detection.

3. Reaction times vary by event type

The study distinguishes between:

      • Common but unexpected events (e.g., brake lights ahead) → Reaction times around 1.25 seconds in the literature.
      • Rare surprise events (e.g., an object suddenly entering the path) → Reaction times around 1.5 seconds or more.

PPR313’s own experimental data aligns with these ranges, reinforcing that surprise is the key driver of delay.

4. Reaction time is not a single number

The report emphasises that PRT is a distribution, not a constant. Influencing factors include:

      • Expectation
      • Cognitive load
      • Familiarity
      • Visibility
      • Event type
      • Driver age and experience
      • Environmental complexity

Using a single “standard” reaction time in crash analysis is misleading.
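The point that PRT is a distribution rather than a constant can be sketched numerically. The lognormal shape and the parameters below are my own illustrative assumptions (chosen so the median lands near 1.25 s); they are not values fitted to PPR313's data:

```python
import random
import statistics

random.seed(42)

# Illustrative assumption: perception-response times are right-skewed,
# often modelled as lognormal. Parameters give a median of roughly 1.25 s.
samples = [random.lognormvariate(0.22, 0.3) for _ in range(10_000)]

median = statistics.median(samples)
p85 = sorted(samples)[int(0.85 * len(samples))]  # 85th percentile

print(f"median PRT ~ {median:.2f} s, 85th percentile ~ {p85:.2f} s")
```

A single "standard" figure hides exactly the slow tail of that distribution, and it is the slow tail that matters in crashes.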

5. Implications for road design and safety

The authors highlight that:

      • Designers should not assume drivers will detect hazards instantly, even in familiar locations.
      • Familiarity may reduce vigilance.
      • Safety interventions should consider expectation management (e.g., consistent signage, predictable layouts).

Implication for Motorcyclists: The Paradox of Familiarity

Most of us would probably assume we react faster on familiar roads. After all, we know the bends, the junctions, the usual traffic patterns, even where the ‘unexpected threats’ are likely to appear.

But the research says otherwise.

TRL’s PPR313 study shows that familiarity can actually slow our reaction to unexpected hazards — sometimes dramatically.

1. Expectation Shapes What You See — and What You Miss

PPR313 reinforces a core truth: our brain doesn’t process the world neutrally. It predicts what should happen next.

On a familiar road, those predictions become stronger and more automatic. That’s efficient — until something violates the script. When an unexpected event occurs (a car pulling out, a pedestrian stepping off the kerb, a vehicle stopping abruptly), the brain must:

      • Detect the mismatch
      • Update the mental model
      • Select a response
      • Initiate action

That extra cognitive step — the “expectation violation” — adds measurable delay.

2. Reaction Time Isn’t a Number — It’s a Distribution

The study highlights that reaction time varies widely depending on:

      • Expectation
      • Familiarity
      • Event type
      • Cognitive load
      • Visibility
      • Driver experience

This aligns with the broader human‑factors literature: reaction time is not a fixed value. Yet many crash reconstructions still assume a single “standard” figure.

PPR313’s data shows:

      • Expected events: ~1.0–1.25 seconds
      • Unexpected events: ~1.5 seconds or more

That difference is the difference between stopping in time… or not.
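The extra distance travelled during the reaction phase follows directly from speed multiplied by time. A minimal check, using the ~1.0 s expected and ~1.5 s surprised figures above:

```python
MPH_TO_MS = 0.44704  # metres per second per mph

def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance covered before braking even begins."""
    return speed_mph * MPH_TO_MS * reaction_s

for mph in (30, 50, 70):
    expected = reaction_distance_m(mph, 1.0)
    surprised = reaction_distance_m(mph, 1.5)
    print(f"{mph} mph: {expected:.0f} m expected vs {surprised:.0f} m surprised "
          f"(+{surprised - expected:.0f} m)")
```

At 70 mph the half-second of surprise alone adds roughly 15 metres before the brakes are even touched.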

3. Familiarity Can Reduce Vigilance

One of the most important findings is that drivers in familiar environments often pay less attention to hazard detection because the brain automates what it thinks it already knows.

4. Surprise Is the Real Killer

PPR313 confirms surprise adds delay. Given any particular following distance, delay means less distance for braking.

5. Stopping Distances, the Highway Code and the Two‑Second Rule

The Highway Code’s stopping‑distance table is built on a 0.67–0.70 second reaction time — a figure derived from controlled, expected braking tasks. It assumes the driver is already primed to respond.

PPR313 shows that this assumption collapses the moment surprise enters the picture. When an event is unexpected, reaction time stretches toward 1.5 seconds or more — more than double the HC assumption.

That has two major consequences for the Highway Code and the Two-Second Rule.

5.1. Highway Code stopping distances are optimistic

They only hold when:

      • the hazard is expected
      • the driver is alert
      • the environment is predictable

Add surprise, and the real stopping distance increases dramatically. In other words, the Highway Code’s calculations only work when we’re expecting the hazard. When we’re not, we need more space than the textbook suggests.

5.2. The Two‑Second Rule isn’t a universal safety margin

Alongside the Highway Code’s speed-based stopping distances, drivers and riders are taught a time-based following distance: leave a gap of at least two seconds behind the vehicle in front. Because the Two‑Second Rule is based on time rather than distance, its adequacy changes with speed:

      • Urban speeds (20–30 mph): Two seconds allows a reasonable buffer for unexpected events.
      • Rural speeds (50 mph): Two seconds is marginal. A 1.5‑second surprise reaction consumes most of that gap before braking even starts.
      • Motorway speeds (70 mph+): Two seconds is totally inadequate. It takes roughly 5.3 seconds of braking to come to a stop from 70 mph if we brake at 0.6 g — a figure typical of ‘hard braking’ by most riders.
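
The 5.3-second figure checks out with basic kinematics (time = v / a). A quick verification, which also shows why two seconds leaves so little margin at motorway speed:

```python
G = 9.81
MPH_TO_MS = 0.44704

v = 70 * MPH_TO_MS                 # ~31.3 m/s
decel = 0.6 * G                    # 'hard braking' for most riders
braking_time = v / decel           # ~5.3 s
braking_dist = v**2 / (2 * decel)  # ~83 m

gap_2s = v * 2          # distance a 2-second gap represents at 70 mph
reaction_dist = v * 1.5  # distance eaten by a 1.5 s surprise reaction
print(f"braking time {braking_time:.1f} s, braking distance {braking_dist:.0f} m")
print(f"2 s gap = {gap_2s:.0f} m; surprise reaction uses {reaction_dist:.0f} m of it")
```

A 1.5-second surprise reaction consumes about 47 of the 63 metres that a two-second gap represents at 70 mph, leaving nowhere near the ~83 metres the braking itself requires.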

6. Why This Matters for Riders

Familiarity doesn’t protect us. It blinds us. Practical takeaways include:

      • Treat familiar roads as if they were unfamiliar — reset attention deliberately. Scan actively, not lazily.
      • Expect the unexpected — not as a slogan but as a cognitive strategy. Surprise is our enemy.
      • Build time into riding — reaction time is not guaranteed. A wider safety margin buys back the reaction time you lose to surprise.
      • Recognise when you’re on autopilot — fatigue, routine, and comfort all reduce vigilance.
      • Understand that other drivers are even more vulnerable to expectation failure — especially at junctions, roundabouts, and driveways.

7. How This Connects to Science of Being Seen

For practical applications in the context of the ‘Sorry Mate, I Didn’t See You’ collision, visit the Science Of Being Seen website. PPR313 provides the empirical backbone for the perceptual mechanisms explained in SOBS.

Conclusion

TRL’s PPR313 study is a powerful reminder that our brains are prediction engines and while familiarity makes those predictions stronger, it also makes violations slower to detect. Understanding this isn’t just academic. It’s survival.

“I speak, therefore I’m right” — Part 3: the difference between Data and Interpretation

In Part 1 of these introductory posts, I talked about the trap of relying on intuition and ‘common sense’ instead of critical thinking, and in the second I mentioned how Sabine Hossenfelder — a YouTuber and theoretical physicist with a sharp eye for nonsense — had been discussing misinformation and warned that people who “want misinformation — consciously or subconsciously — to justify conclusions they hold dear” are a real problem. We explored why we often reject critical thinking altogether — thanks to Groupthink. 

In Part 3, we need to look at something even more fundamental — the difference between data and interpretation — and how influencers can turn that gap into a cycle of misinformation.

What data actually is

Data is simply information about the world:

    • a measurement
    • a count
    • a recorded observation
    • a speed or distance
    • a crash statistic
    • a percentage
    • a timestamp

Data is neutral. Data doesn’t care what we think. Data doesn’t have an agenda.

But the moment a human looks at data, something happens. We interpret it. And that’s where things go wrong.

Interpretation is where bias creeps in

Interpretation is the story we tell ourselves about the data. Two people can look at the same dataset and come to completely different conclusions because:

    • they start with different assumptions
    • they have different beliefs
    • they want different outcomes
    • they’re influenced by different groups
    • they’re motivated to defend their worldview

This is why Hancock’s “I’ll take another decision if I think the science won’t work” is so revealing. He wasn’t rejecting the data — he was rejecting the interpretation he didn’t like. And riders do this too.

Motorcycling is full of examples where data and interpretation diverge

The example of the SMIDSY crash

The data from decades of research shows:

    • drivers almost always look
    • drivers often fail to see
    • drivers often misjudge speed and distance
    • because motorcycles fall below the threshold of visual salience

But the popular interpretation?

    • “They didn’t look.”
    • “They were distracted.”
    • “They need their eyes tested.”
    • “They are poorly trained.”

The data says one thing. And data isn’t wrong, it just is. But the narrative says something far more emotionally satisfying.

“Standing on the pegs lowers the centre of gravity”

Here’s another widely disseminated statement. Asked in a reader’s letter “how the centre of mass of the motorcycle moves when a rider stands on the pegs”, the entire editorial staff of a US rider magazine some years ago claimed to have carefully considered the question, then stated that the answer was obvious: “the centre of mass moves down because the rider’s weight is taken on the pegs”.

In fact, the data (which we can derive from some pretty straightforward school-level physics) is clear. The combined centre of mass of the bike + rider system moves up, not down. Unfortunately, the popular interpretation that “everyone knows it lowers the CoG” is repeated time and again. The data doesn’t change, but how riders interpret it does.
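The physics is just a weighted average: the combined CoM height is (m_bike × h_bike + m_rider × h_rider) / (m_bike + m_rider). The masses and heights below are my own illustrative assumptions, not measurements of any particular bike:

```python
def combined_com_height(m_bike, h_bike, m_rider, h_rider):
    """Height of the combined centre of mass (mass-weighted average)."""
    return (m_bike * h_bike + m_rider * h_rider) / (m_bike + m_rider)

# Illustrative figures (assumptions, not measurements):
m_bike, h_bike = 200.0, 0.50   # kg, metres above the ground
m_rider = 80.0

seated = combined_com_height(m_bike, h_bike, m_rider, h_rider=1.00)
standing = combined_com_height(m_bike, h_bike, m_rider, h_rider=1.20)

print(f"seated:   {seated:.3f} m")
print(f"standing: {standing:.3f} m")  # higher, not lower
```

Standing raises the rider’s own centre of mass, so the weighted average can only go up; no choice of plausible numbers makes it fall.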

Where influencers come in: the misinformation cycle

In Part 2, Sabine Hossenfelder made an important point: people often want misinformation because it confirms what they already believe. But it’s not a straightforward one‑way street. It’s really a cycle.

Step 1 — People want a simple explanation

Riding is complex. Human perception is complex. Crash causation is complex. Simple stories feel better. Simple stories spread online:

      • cherry‑picking a single study
      • quoting selectively
      • bending statistics
      • ignoring contradictory evidence
      • oversimplifying complex issues
      • presenting opinion as fact

It’s fast, confident, and emotionally satisfying — which is exactly why it spreads so easily.

Step 2 — Influencers supply the simple story

Not necessarily maliciously — but because confident statements sound authoritative and nuance doesn’t trend. 

Step 3 — The simple story becomes a belief

Once repeated enough, it becomes “common knowledge”, “what everyone knows” and “how we’ve always done it”. 

Step 4 — People seek confirmation

And then Confirmation Bias kicks in: we actively look for content that supports what we believe (that the “centre of gravity moves down when we stand on the pegs”), we avoid content that challenges us (my letter pointing out the basic misunderstanding of the physics was never printed by that US magazine), and we share content we trust with our peers.

Step 5 — Influencers see demand and produce more

Algorithms reward engagement, simple stories get clicks, and when channels are monetised… 

Step 6 — Popular interpretation

And the cycle continues. This is how riding myths persist for decades. Word-of-mouth, bike magazines, motorcycle forums, influencers. They are all subject to the same biases.

How do we break the cycle?

Not by banning influencers. Not by shouting at people. Not by insisting that “science says so”, tempting though that may be.

To start the process, we need to recognise that there are two very different ways to interpret data. We need to recognise when we are looking at:

      • popular interpretation — “what everyone knows”
      • rigorous interpretation — “what the data tells us”

A rigorous scientific interpretation is an approach that:

      • looks at all the available studies, not just one
      • checks whether findings fit what we know about the science (physics, psychology, human factors and many more)
      • tests alternative explanations
      • acknowledges uncertainty
      • updates conclusions when new evidence appears

It’s slow, careful, and sometimes uncomfortable — but it’s the closest we get to understanding reality.


How do we apply a scientific interpretation ourselves?

One of the big and most persistent myths about science is that it’s “difficult” — something reserved for experts in white coats (someone made that exact ‘white-coat’ statement on my Facebook page this very morning), who are working in labs, surrounded by equipment most of us can’t name.

But that’s not what science is. Science isn’t a subject. It’s not a qualification. It’s not a club you need permission to join. Science is simply a structured way of making sense of the world. At its heart, science is nothing more than:

      • noticing a puzzle
      • asking a question
      • gathering information
      • testing an idea
      • seeing what the evidence supports

We do this far more often than we realise, every single day of our lives. Need a new pair of boots? Which are you going to buy? Which pair will suit best? What size do you need? Are you going to try them on?

Unless you simply buy the first pair that grabs your attention and force your feet into them, you’re gathering data and updating your hypothesis about which pair is best. When you make your decision and choose one piece of kit over another, you’ve compared evidence and drawn a conclusion.

None of that is “difficult”. So why does science feel difficult?

The brain likes mental shortcuts

The human brain evolved to save energy. Thinking deeply is energy-expensive. Evaluating evidence is slow. Challenging assumptions is uncomfortable. It’s tiring.

So instead of analysing information carefully, the brain relies on mental shortcuts — quick, effortless rules of thumb that usually work well enough to get us through the day. Psychologists call these shortcuts heuristics. They’re brilliant for survival, but terrible for understanding complex problems and they can lead us into traps:

      • we see what we expect to see
      • we trust our intuition even when it’s wrong
      • we prefer simple explanations over accurate ones
      • we avoid information that contradicts our beliefs
      • we mistake confidence for competence

This is why science feels hard. Not because the method is complicated, but because it asks us to slow down, question ourselves, and override our instincts. And that’s exactly the moment influencers can slip in.

How do we break the cycle?

The answer is to break the cycle by cultivating intellectual curiosity. Whenever someone presents a claim, ask some simple questions:

      • What does the data actually say?
      • Is this interpretation supported by evidence?
      • Is the conclusion the only possible explanation?
      • Does this feel true because it’s true, or because it fits what I already believe?
      • Is someone simplifying a complex issue to sound authoritative?

And most importantly:

Does the interpretation match the mechanism? If it contradicts physics, psychology, or human factors, it’s probably wrong.

Why this matters for riders

Motorcycling is unforgiving. If we misinterpret any information that bears on our decision‑making — whether that’s the risk of a particular manoeuvre, how surfaces produce grip mid‑corner, or what type of clothing is appropriate for the kind of riding we’re doing — then it’s a classic case of garbage in, garbage out. The quality of our decisions depends entirely on the quality of the information we use to make them.

And this is where the brain’s love of mental shortcuts becomes a liability. Instead of analysing the situation carefully, we reach for the quickest, easiest explanation. We rely on intuition, habit, or whatever “everyone knows”. We trust confident voices over accurate ones. We prefer simple stories to complex mechanisms. It feels efficient — but it’s often wrong.

Breaking that cycle doesn’t require a degree in physics or psychology. It simply requires a willingness to pause and ask: “does this explanation actually fit what we know about how riding works?” Does it align with physics, human perception, and real‑world evidence? Or is it just a neat story that feels right because it saves us the effort of thinking? Influencers thrive in that space. They offer certainty, clarity, and confidence at exactly the moment our brains are looking for the path of least resistance.

Science isn’t difficult. Understanding the difference between “what the data says” and “what we think it means” is one of the most powerful safety tools we have. What’s difficult is resisting the shortcuts our brains prefer.

MotoScience exists to help riders make that shift — from comfortable stories to accurate understanding, from intuition to evidence, from “what everyone says” to what the data actually supports.


“I speak, therefore I’m right” — Part 2: the allure of ‘groupthink’

Last time out in Part 1, I talked about the trap of relying on intuition and “common sense” instead of critical thinking. By coincidence, the very next day I watched a video by Sabine Hossenfelder — a theoretical physicist with a sharp eye for nonsense — discussing misinformation on YouTube.

She made a point that surprised me. She wasn’t just criticising creators who peddle misleading content. She was more concerned about the people who want misinformation. As she put it:

“The problem isn’t the few people who produce this content, it’s the many who watch it… They want misinformation — consciously or subconsciously — to justify conclusions they hold dear.”

And she’s right. We click on content we agree with because it’s mentally easier. It feels good. It fits our worldview. And it saves us the effort of thinking critically.

Understanding Groupthink

This is where Groupthink creeps in — the tendency to adopt the beliefs of the group around us, even when those beliefs are wrong.

Groupthink occurs when individuals conform to the views of their peers.

Groupthink can come about because individually we’re lazy – it’s easier to listen to someone else telling us what’s right, rather than critically assessing the topic, because that means we have to seek out the information needed to formulate our own, better-informed point-of-view.

Some years ago I came across an article in a US riding magazine which had asked all its editors, riders with years of experience, whether “standing up on the pegs lowers the centre of gravity of the motorcycle”. They ALL claimed to have carefully considered the problem, and they ALL agreed that it does. They were ALL wrong, as any physics teacher will tell you. I even wrote a comment explaining why, using a simple diagram, and it was never published. ‘Stand up to lower the bike’s CoG’ is STILL a common Groupthink myth.

What’s worse is that we may indulge in Groupthink even when we hold dissenting opinions in order to be a better fit in a social group which objects to having its Groupthink challenged. We can see Groupthink operating when a group prioritises conformity over critical evaluation of ideas; “we’ve always done it this way” is an expression of Groupthink suppressing an objection. Group members who continually express dissenting opinions may find their voices suppressed. Either they leave the group so their voice no longer troubles the group, or they avoid speaking out and offering their own differing perspectives in order to maintain group cohesion.

We can see Groupthink operating every time a SMIDSY crash video is put up. Having done my own research, I’ve documented the visual perception issues behind the ‘Looked But Failed To See’ crash in the ‘Science Of Being Seen’ (SOBS) project, and shown that incidents where a driver genuinely ‘did not look’ at a junction and caused a collision are rare. Yet, as soon as the video appears, there will be a long sequence of responses claiming “the driver didn’t look”. Or must have been “on the phone”. Or “was distracted”.

Without a crash study, those are speculative statements, powered by Groupthink. It’s our peers telling us “what must have happened”. The facts, as gleaned from scientific investigations into how road users behave at junctions, show that it’s far more likely the driver:

    • ‘looked but COULD NOT see’ thanks to Vision Blockers.
    • ‘looked but FAILED to see’ thanks to visual perception issues.
    • ‘looked, saw but MISJUDGED speed and distance’ thanks to the cognitive difficulties of determining the ‘time to arrival’ of small objects like motorcycles.

As I’ve shown in SOBS, “didn’t look” and “distracted” are rare events, as are causative factors such as “medical emergencies” which we’d probably never think of. Together they account for just one in ten collisions at junctions.

Even spending a few moments using some critical thinking would show us that if drivers really ‘didn’t look’ they’d rarely get past the first intersection where they encounter other vehicles! There’s no property that makes other vehicles somehow magically visible to drivers who aren’t looking.

But every time a newspaper report says a crash was caused by a “driver not looking properly”, or the police claim their statistics show most collisions result from “poor observation”, it reinforces our built-in tendency to look for information that aligns with our existing beliefs.

So what can we do?

The short answer is “ask questions”.

1. Cultivate intellectual curiosity.

2. Pause before accepting a claim.

Ask yourself:

    • Is this a proven fact?
    • Is it an opinion backed by reasoning?
    • Or is it just a hunch dressed up as expertise?

The moment someone says “we all know…”, treat it as a warning sign. And be especially wary of influencers — including those in motorcycling — who treat facts, data, and science as optional extras rather than foundations.

This doesn’t mean you should stop exploring new ideas. It means you should evaluate them, especially when you find yourself nodding along.

And yes — that includes this article.

Next time: we’ll look at how influencers misuse data, why misleading statistics are so persuasive, and how to spot when you’re being led astray.

“I speak, therefore I’m right” — Part 1: Why Riders Need Critical Thinking

Back in 2024, Matt Hancock appeared before the COVID inquiry to explain why certain decisions were made — and others weren’t. At one point, the barrister Hugo Keith asked him:

“Weren’t you meant to be following the science?”

Hancock replied:

“No, I was meant to be guided by the science. But if I thought it wouldn’t work, I’d take another decision.”

Commenting on BlueSky, Professor Alice Roberts posed the obvious question:

“Based on what evidence? Some kind of personal hunch?”

Her point was simple: science is the best tool we have for understanding the world. It’s better than crystal balls, animal entrails, and gut feelings — and it’s certainly better than hunch‑based decision‑making.

So what is science? And why does it matter to you and me when we’re riding a motorcycle?

Science = Critical Thinking

Science is built on critical thinking — a systematic, objective way of understanding the world. The scientific method:

    • identifies a puzzle
    • forms hypotheses
    • gathers information
    • tests ideas
    • analyses results
    • draws conclusions

It’s not mystical. It’s not abstract. And it’s not reserved for laboratories.

In fact, we apply the same process every single day on a motorcycle.

Everyday Riding Is Full of Mini‑Experiments

Sometimes the puzzles are trivial — though the consequences of getting them wrong may not be.

Choosing a helmet? You compare features, test the fit, check the finish, weigh up the pros and cons, and make an informed decision.

Approaching a traffic light? You estimate distance, speed, and timing. You predict whether the light will change. You decide whether to brake or continue.

These are small experiments. You gather data, evaluate it, and act.

And sometimes the puzzles are far more complex — like trying to understand why the “Sorry Mate I Didn’t See You” (SMIDSY) crash keeps happening to riders. That requires deeper thinking, better evidence, and a willingness to challenge assumptions.

Why Intuition Isn’t Enough

This is where Hancock’s “I’ll decide based on what I think” approach falls apart. Relying on intuition alone leads to:

    • inaccurate assumptions
    • faulty decisions
    • poor outcomes

In riding, that can mean misjudging a corner, overestimating grip, underestimating risk, or failing to see a hazard developing.

“Common sense” is not a reliable guide. Intuition is not a safety system. Critical thinking is.

The Power of Critical Thinking for Riders

Critical thinking gives us something intuition never can: a structured way to understand what’s really happening.

It helps us:

    • question our assumptions
    • recognise our cognitive limits
    • understand why errors occur
    • make better decisions under pressure
    • adapt when new information appears

And crucially, it gives us conclusions we can be reasonably confident in…

…at least until new evidence makes us rethink the puzzle.

That’s the scientific mindset — and it’s the foundation of MotoScience.

In Part 2, we’ll look at why riders — and humans in general — often reject critical thinking altogether, and how Groupthink shapes our beliefs.