
“Do I understand the risk of using CGM readings for insulin dosing — and is that risk the same for all CGMs?”
TL;DR
Continuous glucose monitoring (CGM) is the second most important diabetes innovation after insulin itself. It improves glucose management, accelerates learning by showing glucose dynamics in real time, enhances quality of life, and makes automated insulin delivery (AID) possible.
But it is not risk-free – especially when sensor readings and their rates of change are used directly for insulin dosing decisions (by people and especially by algorithms).
Now place side by side:
- the human desire for simplicity and certainty (“just trust the sensor”),
- regulatory pathways that can allow market access with limited publicly visible data,
- non-adjunctive indications (dose insulin without routine finger-pricks) without robust, publicly available performance evidence,
- increasing automation of insulin delivery via AID systems, and
- regulatory pathways that differ in their robustness and demands for hard safety stops – transparent performance data, mandatory very-low and very-high alarms, and clear, enforceable post-market reporting.
Together, these create an asymmetric tail risk: rare but severe events whose consequences sit squarely with the person wearing the device.
Is it time to redress the balance with:
- minimum safety expectations for non-adjunctive CGM, especially when used in AID systems,
- mandatory transparency of performance data, with standardised testing and minimum performance requirements,
- non-silenceable very-low and very-high glucose alarms for those using insulin, and
- making it crystal clear that finger-prick blood glucose meters with ≥95% of values within ISO 15/15 accuracy limits remain the gold standard for accuracy. If in doubt, get the ISO-standard finger-prick blood glucose meter out, and resist acting on CGM alone in high-risk situations where its accuracy is questionable.
The proviso is that people with diabetes must be taught why this safeguard is necessary and how to respond to it effectively.
That means teaching good technique – what appropriate action looks like, how to confirm and interpret a low, and how to correct it properly – bringing back the lost art of effective finger-prick glucose testing and how to act on the result in a CGM context.
Simple: What is the “CGM Black Swan” problem?
A Black Swan, in Taleb’s language, is a rare, high-impact event that looks “obvious in hindsight” but is very hard to predict. We tend to underestimate these events, especially when systems run smoothly most of the time.
CGM has transformed diabetes care. Most of the time it works well; most of the time it helps. That is precisely why it is easy to slide into a mindset of:
- “My sensor is accurate.”
- “The algorithm is safe.”
- “If it’s approved for insulin dosing from age 2, it must be fine.”
But insulin dosing is not a neutral application. Too little insulin, for too long, can lead to DKA. Too much insulin, too quickly, can cause severe hypoglycaemia.
The risk is asymmetric: there are many days of small benefit if everything works, and a very small number of days where, if it does not, the consequences can be catastrophic.
“As CGM and AID for insulin dosing become standard of care, do we have enough transparency of data and safety inbuilt – or are we quietly sweeping the tail risk under the carpet?”
Medium: Simplicity, regulation, and missing data
The seduction of simplicity
Humans like simple stories. Educators want clear frameworks. Manufacturers want to satisfy prospective customers’ needs. Many users quite reasonably want guidance rather than uncertainty:
“tell me what to do” rather than “talk me through the probabilistic landscape”.
In that environment, the nuances of comparator choice, traceability, calibration behaviour, mandatory alarms and dynamic accuracy can easily get flattened into a single reassuring idea: “this sensor is accurate enough”.
The most accurate CGMs read reliably around 99% of the time, but we must remember that the remaining 1% can carry severe consequences.
Non-adjunctive labelling and the missing data problem
A key paper, CGM accuracy: Contrasting CE marking with the governmental controls of the USA (FDA) and Australia (TGA), highlights how CGM systems can obtain CE marking and be used widely in Europe, the UK and internationally with far less publicly visible clinical evidence than is typically available for US FDA-regulated devices – particularly regarding detailed accuracy and study design, audit, and post-market surveillance.
The concern is not that systems with less publicly accessible data are inherently unsafe. The concern is that the risk cannot be fully understood, because external clinicians, researchers, and users lack access to the evidence base required for independent appraisal – even when they explicitly ask for it.
That lack of transparency is itself a potential Black Swan amplifier. It does not mean disaster is inevitable. It means that uncertainty cannot be interrogated in the way a high-stakes, insulin-dosing system arguably deserves.
Deep: Taleb, 2008, and asymmetric risk in CGM and AID
Skin in the game and asymmetric tails
Taleb’s work – especially Fooled by Randomness, The Black Swan, and Skin in the Game – revolves around a core question:
“Are the people making the decisions and regulating carrying the downside of those decisions?”
If the answer is “no”, he argues, then the precautionary principle must take priority whenever the risk is asymmetric.
With insulin dosing, the risk is inherently asymmetric: almost all of the time, CGM helps; occasionally, if things align in the wrong way, the consequences can be very serious.
To make this more concrete, imagine being offered a game of Russian roulette with a revolver that has one bullet in a thousand chambers, and a prize of £1,000,000 if you survive. You might be tempted to play once. But would you play every day? Would you keep playing for the rest of your life?
That is the nature of asymmetric tail risk: small, repeated gains on one side, and a single irreversible loss on the other.
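The compounding effect of that repeated exposure is easy to make concrete. A minimal sketch, using the thought experiment's illustrative 1-in-1,000 odds (the play counts are arbitrary examples, not a model of any real device's failure rate):

```python
# Survival probability when a rare, irreversible tail event is faced repeatedly.
# Each round is independent, so P(never losing over n rounds) = (1 - p) ** n.

def survival_probability(p_loss: float, n_rounds: int) -> float:
    """Probability of never hitting the tail event across n independent rounds."""
    return (1.0 - p_loss) ** n_rounds

p = 1 / 1000  # one bullet in a thousand chambers

for days in (1, 365, 3650, 36500):  # one play, a year, a decade, ~a lifetime of daily plays
    print(f"{days:>6} plays: {survival_probability(p, days):.2%} chance of never losing")
```

A single play is almost certainly fine; played daily over a decade the chance of never losing drops below 3%, which is why a risk that looks negligible per exposure can dominate over a lifetime of exposures.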
A familiar pattern from 2008
The pattern is not unique to diabetes technology. Before the 2008 financial crisis, institutions created increasingly complex mortgage products and described the risks as manageable. Many regulators, operating within the limits of their frameworks and knowledge, did not intervene decisively. When the system eventually collapsed, it was often framed as an “unforeseeable Black Swan”.
The people most affected were not those who designed the products. They were borrowers who trusted the system, families whose financial foundations collapsed, and ultimately, the public who underwrote the bailout. The structural dynamic was:
- a small upstream group set the parameters,
- the downstream population absorbed the tail risk, and
- regulatory tools and data were not always sufficient to fully anticipate or constrain the downside.
The analogy is intentionally uncomfortable – not because anyone in diabetes care is acting maliciously, but because the underlying structure is recognisable:
- a small upstream group defines how devices and algorithms behave,
- people living with diabetes and their families carry the physiological tail risk, and
- some regulatory pathways, particularly outside the US, provide less public visibility of detailed performance data.
This is not about blame. It is about recognising how asymmetric risk emerges whenever complexity outpaces transparency.
When the canaries start singing
Some issues with large, established CGM manufacturers have been widely publicised and will continue to be examined through formal channels. These are important, and future investigations will clarify the details.
At the same time, clinical groups have started to express concern about systems where evidence is less visible. Two examples:
- The Diabetes Technology Network (DTN) UK statement on an AID system available in the NHS, which highlights limited published evidence for safety and efficacy and advises caution.
- The joint ACDC/BSPED position statement, which raises similar concerns about the evidence base in paediatric use.
These documents do not say “this AID system is bad”. They say, in careful professional language, that non-adjunctive insulin dosing and AID use without transparent performance data do not constitute a sustainable safety model.
The deeper point is not about a single device or brand. It is about the structural problem of hidden asymmetry of risk when data are opaque – a problem that can, in principle, apply anywhere in the CGM and AID ecosystem.
Asymmetric risk: when CGM rare errors meet fat tails
Even high-performing iCGM systems are not perfect. All have at least 0.2% of readings outside the 40/40 error limits, with this percentage increasing substantially when the glucose rate of change is fast. That is not a criticism; it is a reflection of engineering limits and human physiology.
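To illustrate what “outside the 40/40 limits” means in practice, the sketch below flags paired CGM/reference readings whose error exceeds ±40 mg/dL at low reference values or ±40% at higher ones. The 70 mg/dL cut-point follows the common form of the iCGM error bands, but this is a simplified illustration, not the full special-controls definition, and the example pairs are hypothetical:

```python
# Flag paired CGM/reference readings outside a simplified "40/40" error band:
# absolute error > 40 mg/dL when the reference is below 70 mg/dL,
# relative error > 40% when the reference is 70 mg/dL or above.

def outside_40_40(cgm_mgdl: float, ref_mgdl: float) -> bool:
    error = abs(cgm_mgdl - ref_mgdl)
    if ref_mgdl < 70:
        return error > 40
    return error / ref_mgdl > 0.40

# Hypothetical paired readings (cgm, reference), for illustration only.
pairs = [(55, 60), (160, 100), (250, 180), (30, 75)]
outlier_rate = sum(outside_40_40(c, r) for c, r in pairs) / len(pairs)
print(f"{outlier_rate:.0%} of these pairs fall outside the 40/40 limits")
```

Note how large an error has to be before it counts as a 40/40 outlier: a sensor can read 250 mg/dL against a true 180 mg/dL and still be inside the band, which is exactly why even a tiny outlier percentage deserves attention when insulin is dosed from the number.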
For most days, most people, those occasional outliers are just background noise. But insulin is not a neutral intervention:
- too little, for too long, can lead towards DKA,
- too much, too fast, can cause severe hypoglycaemia.
When you combine:
- a system that can occasionally under-read or over-read,
- no mandatory very-low or very-high alarms,
- AID algorithms with varying guard rails, and
- users trusting the CGM implicitly,
the result is exposure to fat-tailed outcomes – rare but potentially catastrophic events that are hard to predict and impossible to reverse once they occur.
Hypothetically – and this is deliberately blunt – if we could ask someone who had suffered a worst-case tail event:
“Would you have preferred fewer CGM choices, non-silenceable very low and very high alarms, a forced finger-prick at times of doubt, or the inconvenience of an early sensor change or calibration – if it had reduced the chance of this outcome?”
We can probably guess the answer.
That is the precautionary principle applied to CGM and AID. Short-term quality of life – fewer alarms, less friction – matters deeply. But if we remove every layer of safety and transparency to achieve that, we risk trading those gains against an unpriced, asymmetric, and occasionally severe downside.
Practical: What should we actually do?
1. Keep BGMs as the explicit backstop
This is not a nostalgic defence of “old tech”. It’s tail risk mitigation and we need to bring back the lost art of effective finger-prick glucose testing and how to act on the result in CGM context.
Modern blood glucose meters (BGMs) approved for self-monitoring must meet ISO criteria – typically ≥95% of values within ±15 mg/dL (±0.8 mmol/L) when glucose is below 100 mg/dL, and within ±15% at or above it (often summarised as “15/15 with ≥95% of readings”).
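That acceptance criterion is simple enough to state in code. A minimal sketch, assuming the common ISO 15197:2013 form of the rule with a 100 mg/dL cut-point (a meter's labelling states the exact criterion it was assessed against; the example pairs are hypothetical):

```python
# ISO "15/15" check for paired meter/laboratory-reference readings:
# a reading passes if it is within +/-15 mg/dL of the reference below
# 100 mg/dL, or within +/-15% at 100 mg/dL and above; the meter passes
# overall if at least 95% of readings meet their limit.

def within_15_15(meter_mgdl: float, ref_mgdl: float) -> bool:
    if ref_mgdl < 100:
        return abs(meter_mgdl - ref_mgdl) <= 15
    return abs(meter_mgdl - ref_mgdl) / ref_mgdl <= 0.15

def meets_iso_15_15(pairs) -> bool:
    passing = sum(within_15_15(m, r) for m, r in pairs)
    return passing / len(pairs) >= 0.95

# Hypothetical paired readings (meter, reference), for illustration only.
pairs = [(92, 100), (70, 80), (180, 170), (250, 240), (60, 90)]
print(meets_iso_15_15(pairs))
```

The point of the sketch is that the meter's pass/fail rule is fully defined and independently checkable – which is precisely the property that is harder to verify for CGM systems whose performance data are not public.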
When:
- a CGM reading feels “off”,
- there is no glucose reading or trend arrow,
- ketones or illness are in play, or
- an AID system behaves unexpectedly,
the capillary blood glucose meter remains the most accurate device a user has. Not because CGM is useless – far from it – but because:
- the meter’s accuracy standard is clearly defined,
- its reference traceability is regulated, and
- it does not depend on calibration algorithms that convert interstitial glucose into blood-glucose estimates – algorithms that must compensate for lag, can drift over time, and can be modified through firmware updates.
CGM is the everyday workhorse; finger-prick BG is the safety backstop.
2. Treat some alarms as non-optional
Alarm fatigue is real. Night-time false lows are miserable. People, families, clinicians, and companies all feel the pressure to reduce alarm burden.
But this collides directly with asymmetric risk if very low and/or very high alarms are optional to disable.
If these hard-stop alarms can be disabled, true severe lows – and the false lows that signal sensor failure – can go unnoticed until it is too late. The same logic applies to persistent very high readings with possible ketones and DKA.
A pragmatic, safety-first position:
- retain user control and personalisation for many alerts,
- but maintain non-silenceable, hard-floor alarms for critical low thresholds and for prolonged very high glucose readings,
- pair this with clear education: if an alarm will not go away, you must act – check a finger-prick, check ketones, and take corrective action.
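The intent of a “hard floor” can be sketched in a few lines. The thresholds and snooze behaviour below are illustrative assumptions, not any real device's logic:

```python
# Illustrative "hard floor" alarm policy: ordinary alerts respect user
# muting, but a critical low below a fixed floor, or a prolonged very-high
# reading, always alarms. All thresholds here are hypothetical.

CRITICAL_LOW_MGDL = 55        # non-silenceable floor
VERY_HIGH_MGDL = 300          # non-silenceable if sustained
VERY_HIGH_SUSTAIN_MIN = 60    # minutes above the very-high threshold
LOW_ALERT_MGDL = 80           # ordinary, user-adjustable low alert

def must_alarm(glucose_mgdl: float, minutes_very_high: float, user_muted: bool) -> bool:
    if glucose_mgdl < CRITICAL_LOW_MGDL:
        return True   # hard floor: user muting is ignored
    if glucose_mgdl >= VERY_HIGH_MGDL and minutes_very_high >= VERY_HIGH_SUSTAIN_MIN:
        return True   # prolonged very-high: also non-silenceable
    if glucose_mgdl < LOW_ALERT_MGDL:
        return not user_muted  # ordinary low alert respects user settings
    return False
```

The design choice is the asymmetry itself: personalisation applies everywhere except the two regions where the tail risk lives.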
The aim is not to punish people with diabetes. It is to protect them from the combination of human fatigue and device failure in a system where the tail risk is not symmetric.
3. Demand transparency for non-adjunctive and AID use
AID systems are, in many ways, the pinnacle of what CGM can do. They reduce cognitive load, flatten variability, and improve quality of life.
But when an AID system:
- is driven by a CGM with no publicly available paired accuracy data, and
- is indicated from early childhood,
we are effectively asking families and clinicians to trust a black box with insulin dosing without being able to see the supporting evidence.
As CGM and AID markets expand, we are heading towards a landscape where:
- many non-adjunctive CGMs are available,
- several drive AID systems,
- public performance data are variable or absent, and
- no global minimum standard exists for CGM testing and mandatory alarms that cannot be silenced.
This is prime Black Swan territory – not because catastrophe is guaranteed, but because complexity is growing faster than transparency and alignment.
4. Where next? Open conversation, not defensiveness
The question now is how we move forward in a way that:
- acknowledges the asymmetric risk of erroneous CGM readings, especially when coupled with AID and non-adjunctive labelling,
- recognises that CGM is extraordinary but not risk-free,
- keeps BGMs with ISO 15/15 ≥95% accuracy as the explicit gold-standard backstop when doubt arises, and
- sets minimum global expectations for transparency, comparator alignment, and safety features such as mandatory alarms.
There is no perfect, final answer. But the conversation itself is non-negotiable. If we avoid it, we risk drifting into a future of more devices, more automation, and more invisible tail risk.
If we embrace it, we have a chance to let CGM and AID fulfil their enormous potential while keeping the Black Swan shackled just enough that it cannot do its worst.
This is not about right and wrong, or absolute certainty. It is about working together to balance:
- short-term quality of life,
- the reality of alarm fatigue, and
- the asymmetric, sometimes severe risk of changing insulin doses based on numbers that are never infallible.
I am interested in hearing your thoughts. Please comment below.