CGM Deep Dive
The CGM Black Swan
CGM is one of the most important innovations in diabetes care. Most of the time, it works extraordinarily well. That is precisely why it is worth understanding the rare, asymmetric risks that sit quietly at the tail of the distribution — and what can be done about them.
This content is for educational exploration only. It describes average responses and general principles. It is not medical advice and cannot replace individual clinical guidance from your diabetes care team.
TL;DR
Continuous glucose monitoring (CGM) is the second most important diabetes innovation after insulin itself. It improves glucose outcomes, accelerates learning through real-time feedback on glucose dynamics, enhances quality of life, and makes automated insulin delivery (AID) possible.
But it is not risk-free — especially when sensor readings and their rates of change are used directly for insulin dosing decisions (by people, and particularly by algorithms).
Place side by side:
- the human desire for simplicity and certainty (“just trust the sensor”),
- regulatory pathways that can allow market access with limited publicly visible data,
- non-adjunctive indications — dosing insulin without routine finger-pricks — without robust, publicly available performance evidence,
- increasing automation of insulin delivery via AID systems, and
- regulatory frameworks that differ in their demands for hard safety stops, transparent performance data, mandatory critical alarms, and enforceable post-market reporting.
Together, these create an asymmetric tail risk: rare but severe events whose consequences sit squarely with the person wearing the device.
There is a growing case for:
- minimum safety expectations for non-adjunctive CGM, especially when used in AID systems,
- mandatory transparency of performance data, with standardised testing and minimum performance requirements,
- non-silenceable alarms for very low and very high glucose levels for those using insulin, and
- clarity that finger-prick blood glucose meters meeting the ISO 15197 (15/15) accuracy criteria remain the gold standard for accuracy when doubt arises, and that the skill of effective confirmatory testing deserves to be taught and preserved.
That last point requires that people are taught why this safeguard matters and how to use it effectively.
What is the “CGM Black Swan” problem?
A Black Swan, in Taleb’s framing, is a rare, high-impact event that appears obvious in hindsight but is very hard to predict in advance. These events are systematically underestimated, especially when systems run smoothly most of the time.
CGM has transformed diabetes care. Most of the time it works well; most of the time it helps. That is precisely why it is easy to slide into a mindset of:
- “My sensor is accurate.”
- “The algorithm is safe.”
- “If it’s approved for insulin dosing from age 2, it must be fine.”
But insulin dosing is not a neutral application. Too little insulin, for too long, can lead to DKA. Too much insulin, too quickly, can cause severe hypoglycaemia.
The risk is asymmetric: many days of modest benefit when everything works, and a very small number of days where, if it does not, the consequences can be catastrophic.
“As CGM and AID for insulin dosing become standard of care, do we have enough transparency of data and safety built in — or are we quietly sweeping the tail risk under the carpet?”
Simplicity, regulation, and missing data
The seduction of simplicity
Humans are drawn to simple stories. Educators want clear frameworks. Manufacturers want to meet prospective customers’ needs. Many users quite reasonably prefer guidance over uncertainty — “tell me what to do” rather than “walk me through the probabilistic landscape”.
In that environment, the nuances of comparator choice, traceability, calibration behaviour, mandatory alarms, and dynamic accuracy can easily get flattened into a single reassuring idea: “this sensor is accurate enough”.
The most accurate CGMs are accurate roughly 99% of the time. But the remaining 1% is not neutral: in an insulin-dosing context, it can carry severe consequences.
Non-adjunctive labelling and the missing data problem
A key paper — CGM accuracy: Contrasting CE marking with the governmental controls of the USA (FDA) and Australia (TGA) — highlights how CGM systems can obtain CE marking and be used widely in Europe, the UK and internationally with far less publicly visible clinical evidence than is typically available for US FDA-regulated devices. This applies particularly to detailed accuracy, study design, audit, and post-market surveillance.
The concern is not that systems with less publicly accessible data are inherently unsafe. The concern is that the risk cannot be fully understood — because clinicians, researchers, and users lack access to the evidence base required for independent appraisal, even when they explicitly ask for it.
That lack of transparency is itself a potential Black Swan amplifier. It does not mean disaster is inevitable. It means that uncertainty cannot be interrogated in the way a high-stakes, insulin-dosing system arguably deserves.
Taleb, 2008, and asymmetric risk in CGM and AID
Skin in the game and asymmetric tails
Taleb’s work — especially Fooled by Randomness, The Black Swan, and Skin in the Game — revolves around a core question:
“Are the people making the decisions carrying the downside of those decisions?”
If the answer is “no”, the precautionary principle must take priority whenever the risk is asymmetric.
With insulin dosing, the risk is inherently asymmetric: almost all of the time, CGM helps; occasionally, if things align in the wrong way, the consequences can be very serious.
To make this concrete, consider being offered a game of Russian roulette with a revolver that has a single bullet in one of a thousand chambers, and a prize of £1,000,000 if you survive. You might be tempted to play once. But would you play every day? Would you keep playing for the rest of your life?
That is the nature of asymmetric tail risk: small, repeated gains on one side, and a single irreversible loss on the other.
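The arithmetic behind this analogy is easy to sketch. The 1-in-1000 odds and the daily-play framing are the illustrative numbers from the analogy above, not a claim about real device failure rates:

```python
# Repeated exposure to a small per-play catastrophic risk compounds
# relentlessly. Numbers are illustrative, taken from the analogy above.
p_loss_per_play = 1 / 1000  # one bullet in a thousand chambers

def survival_probability(plays: int) -> float:
    """Probability of never hitting the catastrophic outcome across `plays` rounds."""
    return (1 - p_loss_per_play) ** plays

for label, plays in [("one play", 1),
                     ("daily for a year", 365),
                     ("daily for 50 years", 365 * 50)]:
    print(f"{label:>18}: {survival_probability(plays):.4f}")
```

One play looks almost risk-free (99.9% survival), a year of daily play already drops below 70%, and a lifetime of daily play makes the catastrophic outcome a near-certainty.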
A familiar pattern from 2008
The pattern is not unique to diabetes technology. Before the 2008 financial crisis, institutions created increasingly complex mortgage products and described the risks as manageable. When the system eventually collapsed, it was often framed as an “unforeseeable Black Swan”.
The people most affected were not those who designed the products. They were borrowers who trusted the system, families whose financial foundations collapsed, and ultimately, the public who underwrote the bailout. The structural dynamic was:
- a small upstream group set the parameters,
- the downstream population absorbed the tail risk, and
- regulatory tools and data were not always sufficient to fully anticipate or constrain the downside.
The analogy to CGM and AID is intentionally uncomfortable — not because anyone in diabetes care is acting maliciously, but because the underlying structure is recognisable:
- a small upstream group defines how devices and algorithms behave,
- people living with diabetes and their families carry the physiological tail risk, and
- some regulatory pathways, particularly outside the US, provide less public visibility of detailed performance data.
This is not about blame. It is about recognising how asymmetric risk emerges whenever complexity outpaces transparency.
When the canaries start singing
Some issues with large, established CGM manufacturers have been widely publicised and will continue to be examined through formal channels. At the same time, clinical groups have started to express concern about systems where evidence is less visible. Two examples:
- The Diabetes Technology Network (DTN) UK statement on an AID system available in the NHS, which highlights limited published evidence for safety and efficacy and advises caution.
- The joint ACDC/BSPED position statement, which raises similar concerns about the evidence base in paediatric use.
These documents do not say “this AID system is bad”. They say, in careful professional language, that non-adjunctive insulin dosing and AID system use without transparent performance data is not a sustainable safety model.
The deeper point is not about a single device or brand. It is about the structural problem of hidden asymmetry of risk when data are opaque — a problem that can, in principle, apply anywhere in the CGM and AID ecosystem.
Asymmetric risk: when CGM rare errors meet fat tails
Even high-performing iCGM systems are not perfect. All have at least 0.2% of readings outside the ±40 mg/dL / ±40% error limits, and this percentage increases substantially when glucose is changing rapidly. That is not a criticism; it reflects engineering limits and human physiology.
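To put the scale of "rare" in context, a back-of-envelope calculation. The 5-minute sampling interval (288 readings per day) is an assumption typical of current sensors; the 0.2% figure is the floor quoted above:

```python
# Back-of-envelope scale of "rare" outlier readings, assuming a
# 5-minute sampling interval and the 0.2% outlier floor quoted above.
readings_per_day = 24 * 60 // 5   # 288 readings, one every 5 minutes
outlier_rate = 0.002              # at least 0.2% outside the ±40 mg/dL / ±40% limits

per_day = readings_per_day * outlier_rate
per_year = per_day * 365
print(f"~{per_day:.2f} outlier readings/day, ~{per_year:.0f}/year per wearer")
```

Even at the floor, that is roughly one large-error reading every other day, and a couple of hundred per wearer per year, each one a reading that could in principle feed an insulin dose.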
For most days and most people, those occasional outliers are background noise. But insulin is not a neutral intervention:
- too little, for too long, can lead towards DKA,
- too much, too fast, can cause severe hypoglycaemia.
When you combine:
- a system that can occasionally under-read or over-read,
- no mandatory very-low or very-high alarms,
- AID algorithms with varying guard rails, and
- users trusting the CGM implicitly,
the result is exposure to fat-tailed outcomes — rare but potentially catastrophic events that are hard to predict and impossible to reverse once they occur.
Hypothetically — and this is deliberately blunt — if it were possible to ask someone who had suffered a worst-case tail event:
“Would you have preferred fewer CGM choices, non-silenceable very low and very high alarms, a forced finger-prick at times of doubt, or the inconvenience of an early sensor change or calibration — if it had reduced the chance of this outcome?”
The answer is probably not difficult to guess.
That is the precautionary principle applied to CGM and AID. Short-term quality of life — fewer alarms, less friction — matters deeply. But removing every layer of safety and transparency to achieve that risks trading those gains against an unpriced, asymmetric, and occasionally severe downside.
What the evidence suggests we could do
1. Keep blood glucose meters as the explicit backstop
This is not a nostalgic defence of “old tech”. It is tail-risk mitigation, and a case for preserving the skill of effective finger-prick glucose testing in a CGM context.
Modern blood glucose meters approved for self-monitoring must meet ISO 15197 criteria — typically at least 95% of values within ±15 mg/dL (0.83 mmol/L) of the laboratory reference at lower glucose levels and within ±15% at higher levels. When:
- a CGM reading feels “off”,
- there is no glucose reading or trend arrow,
- ketones or illness are in play, or
- an AID system behaves unexpectedly —
the capillary blood glucose meter tends to be the most accurate device available. Not because CGM is useless — far from it — but because:
- the meter’s accuracy standard is clearly defined,
- its reference traceability is regulated, and
- it does not depend on calibration algorithms that convert interstitial glucose into blood-glucose estimates: algorithms that must compensate for lag, can drift over time, and can be modified through firmware updates.
CGM is the everyday workhorse; finger-prick BG is the safety backstop.
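The 15/15 criterion described above can be expressed as a small check. This is a minimal sketch assuming the ISO 15197:2013 formulation (±15 mg/dL below 100 mg/dL, ±15% at or above it); it is an illustration of the rule, not a conformity-assessment tool:

```python
# Sketch of the ISO 15197:2013 "15/15" system-accuracy criterion.
# The 100 mg/dL threshold and the ±15 mg/dL / ±15% limits follow the
# 2013 edition of the standard; illustrative only.

def within_iso_15_15(meter_mgdl: float, reference_mgdl: float) -> bool:
    """True if a meter result agrees with the reference within the 15/15 limits."""
    if reference_mgdl < 100:
        return abs(meter_mgdl - reference_mgdl) <= 15
    return abs(meter_mgdl - reference_mgdl) <= 0.15 * reference_mgdl

def fraction_within(pairs) -> float:
    """ISO requires at least 95% of paired results to satisfy the criterion."""
    hits = sum(within_iso_15_15(m, r) for m, r in pairs)
    return hits / len(pairs)

# Example paired readings (meter, laboratory reference), in mg/dL:
pairs = [(72, 65), (180, 160), (250, 300), (90, 104)]
print(fraction_within(pairs))
```

The point of the sketch is how explicit and checkable the meter standard is: a fixed comparator, fixed limits, and a fixed pass threshold, which is exactly the clarity the text argues is harder to obtain for some CGM systems.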
2. Some alarms may warrant being non-optional
Alarm fatigue is real. Night-time false lows are miserable. People, families, clinicians, and companies all feel the pressure to reduce alarm burden.
But this collides directly with asymmetric risk if very low and very high alarms can be disabled. If these hard-stop alarms can be silenced, both true severe lows and false lows that signal sensor failure can be ignored until it is too late. The same logic applies to persistent very high readings with possible ketones.
A pragmatic, safety-first position might retain user control and personalisation for many alerts, while maintaining non-silenceable, hard-floor alarms for critical low thresholds and for prolonged very high glucose readings — paired with clear education about what action is needed when an alarm will not stop.
The aim is not to punish people with diabetes. It is to protect against the combination of human fatigue and device failure in a system where the tail risk is not symmetric.
3. Transparency for non-adjunctive and AID use
AID systems are, in many ways, the pinnacle of what CGM can do. They reduce cognitive load, flatten variability, and improve quality of life.
But when an AID system is driven by a CGM with no publicly available paired accuracy data, and is indicated from early childhood, families and clinicians are being asked to trust an insulin-dosing black box without being able to see the supporting evidence.
As CGM and AID markets expand, the landscape is moving towards many non-adjunctive CGMs driving multiple AID systems, with variable or absent public performance data and no global minimum standard for CGM testing or mandatory alarms. This is prime Black Swan territory — not because catastrophe is guaranteed, but because complexity is growing faster than transparency.
4. Open conversation, not defensiveness
The question now is how the field moves forward in a way that:
- acknowledges the asymmetric risk of erroneous CGM readings, especially when coupled with AID and non-adjunctive labelling,
- recognises that CGM is extraordinary but not risk-free,
- keeps blood glucose meters meeting the ISO 15197 (15/15, ≥95%) accuracy criteria as the explicit gold-standard backstop when doubt arises, and
- sets minimum global expectations for transparency, comparator alignment, and safety features such as mandatory alarms.
There is no perfect, final answer. But the conversation itself is non-negotiable. If avoided, the field risks drifting into a future of more devices, more automation, and more invisible tail risk.
If embraced, there is a chance to let CGM and AID fulfil their enormous potential while keeping the Black Swan shackled just enough that it cannot do its worst.
This is about working together to balance:
- short-term quality of life,
- the reality of alarm fatigue, and
- the asymmetric, sometimes severe risk of changing insulin doses based on numbers that are never infallible.
References and further reading
- CGM Regulation: Achieving Standardisation for Outcomes
- CGM accuracy: Contrasting CE marking with the governmental controls of the USA (FDA) and Australia (TGA)
- Diabetes Technology Network UK: statement regarding the use of one AID system
- Nassim Nicholas Taleb, Incerto: Fooled by Randomness, The Black Swan, Skin in the Game