
What might mushroom hunters teach the doctors of tomorrow?

Foraging for wild mushrooms. Photo by RJ Sangosti/The Denver Post/Getty

by Anna Harris & Lisa Herzog


Algorithms and artificial intelligence are helpful aids to doctors. But doctors still need to learn the arts of noticing

Could algorithms and artificial intelligence replace doctors and nurses as diagnostic experts? As the world struggles to bring the COVID-19 pandemic to heel, scientists are using publicly available coronavirus data to build predictive models to triage newly arrived patients. Meanwhile, the US Food and Drug Administration has issued a rare emergency authorisation for a machine-learning algorithm that detects which patients are at risk of needing intubation to help them breathe.

Such technology was on the march before the pandemic. Artificial intelligence systems have become increasingly adept at spotting the early signs of diseases from cancer to kidney failure, and more and more clinical tasks are likely to be performed by algorithms in the near future. Many people celebrate this development as a way of reducing human error, one that also saves time for medical staff and leaves more space for interaction with patients.

Yet training to be a doctor or nurse isn’t just about the acquisition of data points or building a human textbook of facts. It’s also an exercise in sensory and emotional attunement: how to listen well, whether to patients’ stories or with a stethoscope; how to palpate a lump; how to watch a lesion change shape or recognise the smell of a gangrenous limb. These sensory capacities are expanded in medical education, cultivated through lessons in how to listen and how to look.

The US anthropologist Anna Tsing has called such practices ‘arts of noticing’. She saw them at work while spending time with mushroom foragers in Yunnan, Lapland and the US state of Oregon, who knew how to find hidden fungi in forests. They were looking for a particularly elusive mushroom, the matsutake, which grows only in association with particular host trees. The foragers could distinguish these trees not just by their size but also by their smell, and they found the mushroom buttons by feeling for their texture beneath the carpet of forest leaves.

Whether in the forest or in the clinic, noticing means full sensory engagement with the sounds, images, textures and general atmosphere of an encounter. It also means paying attention to, and trying to make sense of, what is not typical, capturing details that might at first seem inconsequential. These skills differ markedly from algorithmic pattern recognition – something we ought to appreciate if we’re not to lose them, and if humans and machines are to collaborate in fruitful ways.

What humans are attentive to translates into what algorithms see and what they’re blind to

Algorithms have their distinctive strengths, at which they can outperform humans. Machine learning is based on the principle that algorithms can learn to spot patterns – sometimes very complex ones – when presented with large datasets. In medicine, this allows them to create predictive models linking symptoms with associated diagnoses, which are then used to analyse fresh cases. The process lets software ‘see’ things that humans can miss, limited as we are by assumptions, biases and restricted processing power. For example, AI models monitoring admission numbers at hospitals in Wuhan, or local news reports and airline ticket data, helped to raise the alarm at the beginning of the COVID-19 outbreak. The futurist Eric Topol sees certain medical fields such as pathology and dermatology as ‘pattern’ disciplines, lending themselves particularly well to such digitisation, modelling and automation.
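
To make the idea of such a predictive model concrete, here is a minimal sketch in Python with scikit-learn. The dataset, the four symptoms and the decision rule are all invented for illustration; no clinical system mentioned above works this simply.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy feature matrix: each row is a patient, each column a binary
# symptom (say: fever, cough, fatigue, shortness of breath).
X = rng.integers(0, 2, size=(200, 4))

# Invented ground truth: the diagnosis depends mostly on the first
# and last symptoms, plus noise. This is the 'pattern' to recover.
y = ((X[:, 0] + X[:, 3] + rng.normal(0, 0.5, size=200)) > 1.2).astype(int)

model = LogisticRegression().fit(X, y)

# A fresh case: the model extrapolates from the patterns it has seen.
new_patient = np.array([[1, 0, 0, 1]])
print(model.predict_proba(new_patient))
```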

Yet algorithms miss things, too. We mustn’t forget that ‘the bridge to the mathematical realm’ has its ‘roots in the social world’, as the economist Peter Spiegler notes in Behind the Model (2015). From a model’s perspective, its objects of analysis are ‘stable, modular, and quantitative, with no qualitative differences among instantiations of each type’, Spiegler writes. But the world isn’t always like that, and so it’s easy for models to lose touch with the reality they’re supposed to describe.

This is what happened in the financial crisis of 2007-08, when most economic models failed to predict the crash. One reason was that the models assumed certain risk distributions to be independent when they were actually interdependent. To avoid such unpleasant – and sometimes very consequential – surprises, Spiegler recommends qualitative, interpretive techniques, such as narrative interviews or ethnography, to establish the categories that become part of formal models.
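
A toy simulation can show how much the independence assumption matters. In this sketch, the two ‘assets’, the shared market factor and the 5 per cent failure rate are all invented; the point is only the gap between the two numbers it prints.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two assets whose returns share a common 'market' factor.
market = rng.normal(size=n)
a = 0.8 * market + 0.6 * rng.normal(size=n)
b = 0.8 * market + 0.6 * rng.normal(size=n)

# Calibrate a loss threshold so each asset fails about 5% of the time.
threshold = np.quantile(a, 0.05)

# Probability that both fail at once.
p_joint = np.mean((a < threshold) & (b < threshold))

print(f"assuming independence: {0.05 * 0.05:.4f}")  # 0.0025
print(f"with a shared factor:  {p_joint:.4f}")      # several times larger
```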

Unexpected events pose another problem for modelling. When, in the 1970s, ground data first offered scientific evidence of a hole in the ozone layer, many scientists remained sceptical, as Cailin O’Connor and James Owen Weatherall note in The Misinformation Age (2019). After all, satellite data hadn’t revealed any ozone anomalies. Yet when those data were analysed afresh, it turned out that the satellites had recorded low ozone levels all along: the software had simply treated the readings as outliers, since they were so much lower than existing theories considered possible.
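
This failure mode is easy to reproduce in miniature. In the sketch below, the readings and the ‘plausible’ cut-off are invented, but the logic of discarding whatever theory says is impossible is the same one that hid the anomaly from the satellite analyses.

```python
import numpy as np

# Invented ozone readings in Dobson units; typical totals are near 300.
readings = np.array([310, 295, 305, 120, 115, 300])

# The quality-control rule encodes a human assumption: anything below
# the theoretically 'plausible' floor must be instrument error.
PLAUSIBLE_FLOOR = 180

kept = readings[readings >= PLAUSIBLE_FLOOR]
flagged = readings[readings < PLAUSIBLE_FLOOR]

print(kept)     # [310 295 305 300]: what reaches the analysts
print(flagged)  # [120 115]: the anomaly, written off as noise
```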

This example shows how much human assumptions about the world affect what an algorithm looks for, including what it considers plausible and what gets written off as ‘noise’. But the future doesn’t always resemble the past, and the meaning of the same pattern can change from context to context. What humans are attentive to translates into what algorithms see and what they’re blind to. That’s also why human biases can all too easily be baked into algorithmic systems: in medicine, the dominance of images of white skin in the data used to train dermatology programmes (one of the so-called ‘pattern’ disciplines) offers a well-known example.
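
The dermatology example can be imitated with synthetic numbers. The sketch below (invented data and groups, nothing clinical) trains a simple classifier on a sample dominated by one group, then evaluates it separately on fresh samples from each group: the under-represented group gets markedly worse accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, shift):
    """Synthetic cases whose feature distribution differs by group."""
    X = rng.normal(loc=shift, size=(n, 5))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: 95% from group A, 5% from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh samples from each group: accuracy collapses for
# the group the training data barely saw.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    print(name, round(model.score(X_test, y_test), 2))
```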

Even when algorithms are ‘free’ to pick up their own patterns in the data, humans still need to make numerous decisions: which data sets to use, how to curate them, which quality standards to put in place, how to measure whether or not there’s a ‘fit’ between model and reality. For this, the software’s developers must grasp the context and meaning of the data; they need to understand what’s ‘normal’ and what’s a deviation. In short, they need to decide what matters. A qualitative understanding of the context is indispensable, and cultivated senses are required to grasp what might matter.

As Teppo Felin argued in Aeon in 2018, ‘knowing what to observe, what might be relevant and what data to gather in the first place is not a computational task – it’s a human one.’ He freely admits that humans often miss ‘obvious’ things, such as the gorilla on the basketball court in one well-known experiment on inattentional blindness. But he insists that this is a strength rather than a weakness: humans can focus, and so distinguish the relevant from the irrelevant. In doing so, they can also shift their attention to new things, instead of being tied to what they’ve been programmed to look for.

Trainee doctors are invited to look closely at paintings, under the guided instruction of a museum curator or art historian

So before algorithms take on an even greater share of medical decision-making, it’s vital to reflect on how we notice. To start with, those educating a new generation of doctors ought to think carefully about how they train observational skills. One strategy could be to turn to ethnography, a method closely tied to anthropology, to help students develop their arts of noticing. Ethnography means engaging all the senses, attuning observational skills to everyday practices, looking for details. It is about being open to the possibilities of what you might find when you spend time with communities – or your patients – rather than coming in with predefined hypotheses.

In medicine, learning how to notice includes weighing up whether to do a certain test in the first place, based on what a doctor or nurse notices about a patient. It also means learning how to communicate this with others through new sensory vocabularies – describing a crackle or a wheeze to a colleague, for example, vividly and meaningfully.

Ethnographies of medical education offer many examples of how teachers cultivate the arts of noticing in their students. Experienced doctors help novices attune to the sounds of the body: listening together with double-headed stethoscopes, as teachers mimic pathologies with their own voices or with fabrics to hand, yanking a curtain to teach the sound of a heart murmur, or pulling the Velcro of their shoes to mimic a specific lung sound. Sensitive touch and palpation are other important skills for the clinical diagnosis of tumours, cysts and other masses; here, educators train their students to notice well by describing textures in new, medical words, as well as by showing them how to determine the edges, volume or discrepancies of a mass.

Some medical educators, aware of the need to train students in how to see well, have even started taking trainee doctors to art museums. Around the world, trainee doctors are invited to look closely at paintings, under the guided instruction of a museum curator or art historian. Research suggests that, after such interventions, medical students become better not only at describing art but also at observation in general: one study found improvements when it tested students’ abilities to describe both artworks and images of eye disease. The method could be extended beyond the art museum – for example, to lessons with anthropologists doing fieldwork, or with mushroom foragers in a forest.

Tsing describes these arts of noticing as ‘looking around rather than ahead’. We should celebrate this kind of observation, since humans turn out to be particularly good at learning how to do it. Instead of trying to out-compete machines at pattern recognition and memorising details, we might instead expand our capacity for curious exploration, finding importance in detail, learning and discovering what matters. This is the ‘human side of medicine’ that even techno-enthusiasts such as Topol concede AI will struggle to replace. Medical students and scientists alike need a better understanding of how the tools they use are noticing too, and of how often they notice poorly, on the basis of biased assumptions and flawed data. The more methods we have at our disposal, and the better we understand what each is good at observing, the less likely we are to get caught in traps of inattention and blindness.


22 February 2021