And now for a return to musings on mental capacity…
The Mental Capacity Act 2005 says that you are unable to make a decision if, among other things, you don’t ‘understand’ the information relevant to the decision ‘because of an impairment of, or a disturbance in the functioning of, the mind or brain’. Lots of other jurisdictions have a similar idea.
A lot turns on this. The Act has three other ways of not being able to make a decision: if you cannot communicate what you want, retain information, or use and weigh the information. The first two are usually interpreted minimally, and the Act even requires this. Communication can be by ‘talking, using sign language or any other means’ [s3(1)(d)]; and you only have to be able to retain information ‘for a short period’ [s3(3)].
‘Using and weighing’ information, in contrast, tends to be an issue when someone talks as though they understand the information relevant to the decision; but then acts in a way incompatible with their verbal performance. For instance, if someone can explain to you, in detail, the importance of looking after their money, then walks out the door and spends everything that they have on lottery tickets, you might doubt whether they can ‘use and weigh’ that information after all. That makes ‘use and weigh’ secondary to ‘understand’ in practice.
If ‘communicate’ and ‘retain’ are minimised, and ‘use and weigh’ is secondary to ‘understand’, then a lot turns on how well we understand what we mean by ‘understand’. That worries me, because I don’t think we understand ‘understand’ much at all.
The cases, and the secondary literature, abound with seemingly innocuous phrases like ‘level of understanding’. I say seemingly innocuous because, hard as they are to avoid, I think they totally misrepresent the practice of mental capacity assessments.
First, an important point that lawyers don’t take seriously enough. Mental capacity assessments weren’t invented by the Act; it just codified the common law. Beyond that, though, they weren’t created by the common law either: it basically approved what was going on anyway. The emphasis in the cases from the eighties and nineties is very clearly on ‘filling’ a gap in the law, so that what doctors were doing anyway wasn’t illegal.
It goes deeper than that, though. Something like capacity assessments are just part of the human condition.
If someone is trying to do something dangerous, and you care about them or are responsible for them, then the question of whether they understand what is happening is usually morally salient. Maybe, to use Mill’s example, they don’t know the bridge is about to collapse. Maybe they’ve been smoking weed all week, are having a serious case of paranoia, and wrongly think that all their friends are conspiring against them. If you really are trying to do the right thing, then the onus is on you to try to assess what the person understands (while being humble enough to acknowledge that they might actually understand things better than you).
Lest I be misunderstood, this doesn’t necessarily justify treating people with mental disabilities differently to anyone else.
So assessing the understanding of the people around us, with a view to influencing their behaviour, perhaps even coercively, is just something that crops up in a host of human situations. It didn’t suddenly appear with the law getting interested, or medicine getting interested. Back in the palaeolithic, if your cousin ate some funny mushrooms, and tried to head-butt a mammoth, the question would still arise. Maybe it would be cast in terms of spirits or gods; we don’t really know what people believed back then. Nevertheless, they were still people, with languages of some sort, and the question of understanding would still arise.
The point of this digression all the way back to the stone age is simple. Mental capacity assessments are not created from the aether by the brilliant rationality of lawyers and doctors. They are a medico-legal bureaucratic representation of an ancient human practice. Nothing is wrong with that. Creating representations of practices helps us to understand them.
What is wrong, however, is the way that ‘understanding’ represents the practice when it appears in ‘level of understanding’ or ‘degree of understanding’. It’s a crap metaphor. It can be crap in at least two ways.
First, there’s the obviously crap way. This is when people talk as though there is a ‘level of understanding’ needed for all capacity assessments, and the process of capacity assessment creates a lasting ‘binary’ between those with capacity and those without. It simply doesn’t: mental capacity is decision-specific. Just because I don’t understand the information for this decision doesn’t mean I won’t understand the information for another decision, or even the same decision tomorrow.
(In contrast, the Act does create a binary between those whose understanding can be assessed and those whose cannot: between those with and without an ‘impairment of, or a disturbance in the functioning of, the mind or brain’. Furthermore, terrible implementation of the Act might mean that capacity assessments are, at least sometimes, treated as though capacity were a once-and-forever deal. That is not the Act’s fault, nor the concept of capacity’s.)
The second way that ‘level of understanding’ and similar metaphors are crap is less obvious. They give the picture of an axis of understanding for a particular decision, as though understanding were one of those big charity thermometers. Then, once you hit a certain level of facts understood, you’re over the line and you understand enough to have capacity.
This is sloppy, lazy drivel; and I don’t think it reflects what is actually going on in a capacity assessment at all.
Let’s say we’re trying to assess whether I have the understanding necessary to live alone in a house. Part of the assessment is likely to involve making sure I understand everyday household risks: simple things like why I need to treat electrical sockets with due respect, or why it’s a bad idea to turn the gas hob on without lighting it. Now, these different components aren’t additive; they work in parallel. If I get the point about sockets, but just don’t understand about gas at all, then I don’t understand the everyday household risks relevant to this particular house. Even if I’m an expert in electricity, and could draw circuit diagrams of the whole house, if I don’t understand gas, I don’t understand the information relevant to the decision. Knowing lots about electricity doesn’t push me up the big imaginary charity thermometer. Understanding household risks simply isn’t one axis. It’s these two and a whole host more.
Beyond that, if what we’re really interested in is whether I understand what I need to understand in order to live alone, then household risks altogether is just one collection of axes (not the chopping things; the plural of axis) among many. If, to live alone, I also need to understand how to manage some basic finances, then my excellent grasp of household risk won’t help.
It works in the opposite direction too. Understanding that I shouldn’t stick forks in the electric sockets, even really well, won’t help if I think it’s a good idea to splash water on them. Even understanding socket safety is more than one axis.
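For readers who like their arguments formal, the contrast between the two pictures can be sketched as a toy model. Everything here is invented for illustration: the criteria, the numerical scores, and the threshold are hypothetical, and of course no real capacity assessment assigns numbers like this. The point is only structural: averaging lets excellence on one axis mask total failure on another, whereas parallel criteria do not.

```python
# Toy contrast between two pictures of 'understanding'.
# All criteria, scores, and thresholds below are hypothetical.

# Hypothetical per-criterion scores, 0.0 (no grasp) to 1.0 (expert).
assessment = {
    "electrical sockets": 1.0,   # could draw circuit diagrams of the house
    "gas hob": 0.0,              # no understanding of gas at all
    "basic finances": 0.8,
}

def thermometer_model(scores, threshold=0.6):
    """The 'charity thermometer' picture: one axis, scores pool together."""
    return sum(scores.values()) / len(scores) >= threshold

def parallel_criteria_model(scores, threshold=0.6):
    """The parallel picture: every criterion must be met on its own."""
    return all(score >= threshold for score in scores.values())

print(thermometer_model(assessment))        # True: electricity expertise masks the gas gap
print(parallel_criteria_model(assessment))  # False: failing one criterion fails the whole
```

The same hypothetical scores yield opposite verdicts, which is the sense in which ‘level of understanding’ misrepresents the practice: the thermometer picture pools everything onto one axis, while the assessment actually tests each axis separately.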
Understanding is not an axis.
We do not have ‘levels of understanding’, ‘degrees of understanding’, or ‘amounts of understanding’.
Understanding is not a charity thermometer, or a sprint to the finish line.
It would be more transparent, and honest, to talk about the ‘criteria’ of understanding. That way people can argue about whether, for instance, I really do need to be able to understand financial decisions to live alone. Talk of ‘levels’ just submerges what is happening below the murky waters of ‘what the experts say’.