Critical thinking is an essential skill to develop in a digital risk career. Without it, you will be prone to poor decisions that expose your organisation to unnecessary risk.
It is common to avoid critical thinking in everyday life - it would be exhausting to think critically 24/7. Instead we use shortcuts such as heuristics, and we develop trust, to save our brain's bandwidth and processing capacity. But in important situations that require well-thought-through decisions, heuristics, and their more dangerous cousin the cognitive bias, can lead to poor reasoning and weak analytical rigour.
What is critical thinking? Critical thinking can be thought of as taking in all available information, processing it correctly, and producing a sound conclusion and an informed choice. All available information means the relevant facts, backed by evidence - both what you observe yourself and what comes from other sources. Evidence and analysis may also come from others. The final output is the ability to assemble all of this into something coherent and robust.
So what are some red flags to look out for? Red flags are themselves heuristics — shortcuts that quickly signal potential problems. Risk management professionals rely on red flags to highlight areas that require deeper analysis. However, just like any heuristic, they are not an end in themselves, and recognising a red flag should not lead to an automatic conclusion. Instead it should trigger further inquiry, critical evaluation, and the question of what further evidence needs to be gathered to confirm assumptions.
Here are ten red flags that indicate you might not be applying critical thinking — along with mitigating strategies to combat them.
1. Accepting Information at Face Value
Example: A vendor claims their product is “100% secure” with no vulnerabilities, and you believe it without verification.
Mitigant: Verify these types of claims through independent security testing or third-party assessments. Ask for proof, such as security testing results.
2. Relying on Emotional Reasoning
Example: A board member dismisses cyber threats because “we’ve never been hacked before,” despite industry-wide breaches.
Mitigant: Base decisions on data, threat intelligence, and risk assessments rather than past experience, gut feelings, or the opinions of senior leaders. Develop a risk culture that prioritises evidence over assumptions.
3. Dismissing Opposing Views Without Consideration
Example: A security team ignores feedback from business stakeholders who say a new security measure will disrupt operations or create a poor customer experience.
Mitigant: Encourage diverse viewpoints, use risk-based prioritisation, and involve stakeholders in decision-making to balance security and business needs.
4. Using Black-and-White Thinking
Example: Assuming that if a system has a particular security characteristic, such as encryption or multi-factor authentication, it will be secure.
Mitigant: Recognise that security is layered; no single control is a magic bullet. Use systems thinking to assess the whole risk picture, and continuously monitor for and think through emerging threats.
5. Appealing to Authority Without Questioning
Example: Trusting a vendor’s SOC2 report without reviewing what was tested, the scope, exclusions, or the results.
Mitigant: Ask for a copy of the report, scrutinise it, ask critical questions, and ensure its scope aligns with your intended use and risk assessment. Where the SOC2 report doesn't cover what you need it to cover, ask for more information.
6. Struggling to Explain Your Beliefs
Example: Arguing that a particular feature is either a major risk or a silver bullet to solve it all - but being unable to convince others why.
Mitigant: Ensure your arguments are backed by clear reasoning, real-world examples, and data. If you can’t explain it and convince others, you may need to admit to yourself that you do not fully understand it.
7. Only Seeking Information That Confirms Your Views
Example: A security leader is convinced that their team’s practices and results are best in class, and ignores external viewpoints that highlight issues and weaknesses.
Mitigant: Commission third-party reviews, and bring in viewpoints from the second and third lines of defence. Listen to the recommendations, and agree actions. Welcome red team exercises as a way to challenge assumptions. Foster a culture of continuous improvement.
8. Making Hasty Generalisations
Example: After a phishing incident, assuming all employees need annual training, when a targeted awareness campaign might be more effective.
Mitigant: Use incident analysis and behavioural data to tailor risk responses instead of one-size-fits-all solutions. Focus on root causes, and targeted training.
9. Confusing Correlation With Causation
Example: Believing that an uptick in blocked attacks means the security system is improving — when it could just mean attackers have noticed you or are increasing their efforts.
Mitigant: Look at broader trends, analyse root causes, and avoid jumping to conclusions based on surface-level data or a single metric.
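To make the point concrete, the figures below (invented purely for illustration) show how a rising count of blocked attacks can coincide with a falling block rate — the raw count alone tells you nothing about whether your defences actually improved:

```python
# Hypothetical monthly figures: blocked attacks rise, but so do total attempts.
months = ["Jan", "Feb", "Mar", "Apr"]
blocked = [800, 1200, 1800, 2600]
attempts = [1000, 1600, 2500, 4000]

for month, b, a in zip(months, blocked, attempts):
    # The block rate, not the block count, reflects defensive coverage.
    print(f"{month}: blocked={b:5d}  attempts={a:5d}  block rate={b / a:.0%}")

# The blocked count climbs every month, yet the block rate falls
# (80% -> 75% -> 72% -> 65%): rising blocks correlate with *worse* coverage.
```

A rising count is correlated with attacker effort as much as with defensive skill; normalising against total attempts is one simple way to avoid the causal leap.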
10. Relying on Anecdotes Over Data
Example: A senior executive claims, “I’ve never seen a ransomware attack in our industry, so we don’t need to invest in prevention.”
Mitigant: Use industry data, threat intelligence, and peer benchmarking to highlight real risks rather than relying on personal experience alone. Ask your external advisors what they are seeing across the industry.
How to Build Critical Thinking in Risk and Cybersecurity
There are steps you and your team can take to build critical thinking into everyday work.
- Encourage a Culture of Questioning: Promote an environment where your team will challenge assumptions and ask "why?" without fear. You can use techniques such as the Five Whys to uncover root causes, and encourage discussions within teams where dissenting opinions are explored rather than ignored or dismissed.
- Use Structured Decision-Making: Use frameworks that walk step by step through a common set of objectives or risks. For example, FAIR (Factor Analysis of Information Risk) or STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege) will help to remove bias.
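As an illustration of why a quantitative framework like FAIR reduces bias, the sketch below forces every input — event frequency and loss magnitude — to be stated explicitly, where it can be challenged. The per-month probability and the loss distribution parameters here are invented assumptions, not real data or the FAIR standard's own method:

```python
import random

def simulate_annual_loss(n_trials=100_000, seed=42):
    """FAIR-style sketch: annual loss = loss event frequency x loss magnitude.
    All parameters are illustrative assumptions that should be debated openly."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        # Loss event frequency: assume ~30% chance of a loss event each month.
        events = sum(rng.random() < 0.3 for _ in range(12))
        # Loss magnitude: right-skewed (lognormal) spread per event.
        total = sum(rng.lognormvariate(10.8, 0.8) for _ in range(events))
        losses.append(total)
    losses.sort()
    return {
        "mean": sum(losses) / n_trials,          # expected annual loss
        "p90": losses[int(0.9 * n_trials)],      # 90th-percentile annual loss
    }

result = simulate_annual_loss()
print(f"Expected annual loss: £{result['mean']:,.0f}")
print(f"90th percentile:      £{result['p90']:,.0f}")
```

The value is less in the numbers themselves than in the structure: each assumption is visible, so a colleague can disagree with "30% per month" on evidence rather than with a vague "this feels risky".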
- Diversify Inputs: Seek external perspectives, attend industry discussions, and engage with different disciplines.
- Apply Scenario Analysis: Run tabletop exercises, simulations, and what-if scenarios to test assumptions and identify blind spots in your thinking.
- Audit Your Own Thinking: Regularly reflect on decisions, question your biases, and seek out constructive criticism.
Conclusion
In digital risk, poor critical thinking can lead to weak controls, missed threats, and dangerous blind spots. By recognising these red flags and actively challenging assumptions, professionals can make better decisions, strengthen security, and effectively manage risk in an evolving landscape.