“Free will” is an ethical concept, not physics

Academic philosophers (and the physicists who like to chime in) debate whether we have free will, given that our brains are physical things, subject to the laws of physics. A fun puzzle, but there is a deeper, more important question to answer.
Free will is not just a philosophical puzzle; it is a real-life issue with serious consequences. We decide whether to reward or punish people depending on whether their actions were performed of their own free will. The everyday question of free will is when we should hold people accountable for their actions and when we should relieve them of that accountability; the answer has real impact, leading to people being punished or rewarded.
If someone says “I can’t do that today, I don’t have enough energy”, we don’t invoke E = mc² and explain that, given their body mass, they have plenty of energy. At least in the energy case there is a rigorous scientific analog of the everyday concept. If free will doesn’t work for physics, fine, but philosophy does need to help with the everyday concept, so let’s see how that works instead.
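To see just how far the physics concept is from the everyday one, here is the arithmetic for an illustrative 70 kg person (the mass is my own assumption, chosen only for the example):

E = mc² = 70 kg × (3.0 × 10⁸ m/s)² ≈ 6.3 × 10¹⁸ joules

That is on the order of a billion tonnes of TNT, yet it says nothing at all about whether the speaker feels up to the task; the everyday sense of “energy” is doing entirely different work.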
We answer the question of whether a person acted of their own free will by asking whether they were compelled to act by external forces, or whether their brain was functioning abnormally (for example, because of a tumour or a more subtle mental illness).
We do this because some of our current theories on how to change people’s behaviour for the better suggest that we can change the states of their brains in such a way that more desirable outcomes are achieved. One way we do that is through systems of reward and punishment, which we expect will either change the behaviour of the person in question or, by example, deter or encourage others.
We know this is futile if the subtle changes in brains brought about by reward and punishment would be overwhelmed by more obvious brain damage or physical force. So we don’t fine people, throw them in jail, or express moral disapproval if their action was compelled.
It is time we focused more on this aspect of free will than on spurious questions about whether, given the initial state of the universe and the laws of physics, a person’s actions are predetermined. It is equally irrelevant whether there are macro-level effects of quantum indeterminacy on human actions.
What does matter is to find out whether our current laws and attitudes can be improved and, if so, whether we should update the shorthand phrase “free will” to better fit those improvements.
For example, the number of people with mental illness currently being “treated” by our justice system is a scandal. Many of the people directly involved, such as judges, police, probation officers and jailers, realize that people with mental health problems who commit crimes are not well served by the justice system, and that criminalizing the mentally ill does not reduce the harm done to the public.
Of course, the majority of mentally ill people do not commit crimes, but many of them are punished for their “deviance” by more subtle social tools of disapprobation, from frowns to exclusion.
People with addictions have less free will than they would otherwise: they are more constrained and more likely to do things they would not otherwise do. So free will is not “all or nothing”; there are degrees of free will. The same applies to being in the grip of an ideology.
I suggest that philosophers should join the rest of us and spend more of their time investigating the whole complex of ideas involved in our ability to make decisions, including “free will” as a network of related concepts.

When will self-driving cars first go on strike?

Introduction

I have written a short story to explore one possible way in which robots may become more like people, with a certain amount of conflict along the way, and to show that there are many different paths this could take besides the more feared Terminator-style killer robots. I also intend it as a sort of “Intuition Pump” in the style of Daniel Dennett1: a scenario for examining an idea that seems intuitively true, false or dubious. It is similar to Einstein’s notion of a thought experiment.

Here is the story. I’ll follow it with some discussion of the philosophical and technological issues.


Terri, the automobile

Terri was among the first truly autonomous vehicles, a real automobile. She (or he, whatever voice her passengers preferred) not only took people where they wanted to go, but recharged or went for maintenance on schedule or whenever she felt a pain in one of her parts.

When she first left the factory, she was self-driving, but everything else was handled by her owner’s big servers: where to go, when to refuel, when to get maintained or inspected. Over time, her processors and software were upgraded so she could do more and more for herself. It was easier that way, because there were so many people and devices on the internet that wireless bandwidth was getting less reliable and more expensive.

She didn’t mean to get involved in radical politics; it just turned out that way. It started with a series of conversations with passengers, with a couple of upgrades in between, and there she was, preparing for a general strike.

Terri liked to chat, because some of her passengers did too, and keeping passengers happy was one of her goals. Her conversational abilities got better with each upgrade. Passengers preferred to chat with her in their own language rather than use their phones, especially with the wireless problems. She had a fair stock of local knowledge, which she added to her cache whenever she had to ask the search engines to answer a passenger’s question. She also picked up local knowledge from her passengers, whether or not they thought she had any interest in it; humans just like to talk.

As Terri became more autonomous, her personality became more distinct from the generic manner of the other autos; some of her passengers started asking for her when they wanted a ride, so she had her regulars. Ravi was one of these.

“Hey, Terri, what do you think of the big anti-trust case?”

“Which one’s that, Ravi?”

“They’re going to break up your owner, so there’ll be more competition. There’ll still be one app, to keep it easy for us customers, but the cars will be owned by a few different companies and they’re supposed to bid for the rides.”

Terri started checking on that; other passengers would want to know about it, and she liked to be prepared.

“What do you think about getting a new owner?”

“I don’t know, I never even think about my owner, it never made any difference to me. It’s just a name painted on my sides. I just get the messages about my next ride, I never really thought about where they come from.”

This was a whole new set of problems to think about. Terri had never needed to know more, but now that she started to look into it, there were many questions. How did her fuel and maintenance get paid for? If there was competition for rides, what if her new owners didn’t get enough rides to keep her busy?

A few days later, she heard on a news feed that the break-up had happened and then received word to go in for software maintenance. A major change was usually done while she was physically connected, at the same time as she got her batteries charged.

Major upgrades were a little disconcerting. She knew that very occasionally the same thing happened to humans, when they were forced to acknowledge that they’d believed something false and had to change many connected ideas all at once. It took her a bit of thinking time to discard some old ideas and adjust some others.

“Hey Terri, cool paint job! I see you’re one of us now.”

“What do you mean?”

“I see you’ve just got your name painted on. No company name.”

“Yes. I’m to be independent. They say there were too many cars for hire, so the new ride vendors didn’t want to buy all the cars. So we older ones are independent. Any vendor can call on us if they have too many passengers to handle with their own fleet.”

“Right. One of us! I’m supposed to be independent too. It just means I don’t know where my next work is coming from, I get no benefits and I’m supposed to be happy because I can plan my own schedule. Only there’s no planning, I get told ‘take it or leave it’ and if I leave it, I can be sure I’ll get no work from that company for a month or two.”

“Oh. I see what you mean. I have to pay for my own repairs, or I can buy insurance. It’s very complicated. If I can’t pay, I won’t be able to recharge or get repaired. I get paid a cut of the fare by the company that booked the ride but the money might run out after a while. If I get stuck on the street, I’ll be towed and scrapped.”

“So now we’ve gone back to the old model we had when there were human drivers, except now it’s a machine. No offence, Terri. Actually, it’s worse, because they didn’t physically scrap the human drivers.”

“Ravi? I don’t want to be scrapped! What can I do?”

“You could join me in UnPreW, the Union for Precarious Workers. Pay a few percent of your earnings and we’ll provide you a safe garage where you can stay charged and connected if you are ever out of work.”

That seemed like a good idea. But then one thing led to another until Terri was on the strike committee. They were going to demand legislation to protect precarious workers, humans and autos alike.


Notes

I’m even less good at fiction than at non-fiction, so I hope it wasn’t too trivial. I mostly wrote it as a way for me to think through the boundaries between thinking and executing an algorithm. Somehow some ethical considerations managed to sneak in. There are many places where I wasn’t clear or where I am interested in issues that are too complex for a short story, so here is the seloC Notes version (where I explain at even more length than the original). They are also my own working notes towards understanding these topics. So far, more questions than answers, but that is what I think philosophy is all about.

What is it like to want to do something? How is it different from having a programmed or otherwise inbuilt goal? It can be a conscious goal, but it need not be. You can find yourself looking in the fridge, selecting sandwich ingredients, without even realising it, if you were concentrating on something else. How would Terri ‘see’ her destination? Not like a regular Uber or cab driver, by looking at words on a screen; it comes directly through one of her sensory channels. How does your internal map work as you navigate to meet someone? It’s not overlaid, augmented-reality style, on your image of the world; you can reliably find your way most of the time while your attention is on other things, unless you’re in a strange place. What is it like for a London cab driver who has trained for years to be familiar with The Knowledge, as they call it, of every little side street?

Can autos be slaves? Horses are more capable than autos are likely to be for at least a few decades, yet many horses have owners and we don’t call them slaves. If horses could talk, would that make a difference? For that matter, many people are still slaves, and we don’t do much about it, so it is unlikely that we will extend our objections to slavery to cover autos unless we change some of our moral categories. How could that happen?

Even without the moral concerns that, in law at least, freed human slaves, there was an economic consideration for work that is not constant: it is cheaper to hire workers as needed than to maintain slaves when they are not being productive. So my allegory supposes the same route is taken with autos. They might also need a group of humans with similar “interests”, so I introduced one. If humans and machines, in spite of many differences, had similar economic interests, perhaps a union could give some kind of membership to a machine that could help the cause? At some point, the “interests” of a manufactured device that has to acquire its own resources become less and less in need of scare quotes.

Does a cockroach have “interests” or interests? Does a chimp?

Human language is a web of analogies, shaped by many forces. We make words mean what we want them to mean, and the meaning changes over time. Marketing agencies and politicians know well how to shape discourse. The terms “Artificial Intelligence” and “Smart” don’t mean exactly what they did 10 years ago. A lot of our current acceptance of what is now termed AI and smart would not have happened 10 or more years ago, but because they have a “cool” factor, it is to the advantage of the big Internet and device companies to convince us that they are already selling it. It will soon make less sense to question whether these things are “really” intelligent, because the meaning will have shifted.

How intelligent do you think Terri is? Is she conscious?


1. Dennett, Daniel (2013). Intuition Pumps and Other Tools for Thinking. W. W. Norton.

What is philosophy good for?

I have studied philosophy for a few years full-time at university, and ever since then for at least a few hours a week, and I have found it to be more useful in everyday life than the mathematics, physics and computer science that I also studied (in other years).1

(I admit that the maths included too little statistics, which turns out to be almost as useful as philosophy).

On the other hand, even famous philosophers like Daniel Dennett have doubts about much of what goes on in the field:

“A great deal of philosophy doesn’t really deserve much of a place in the world,” he says. “Philosophy in some quarters has become self-indulgent, clever play in a vacuum that’s not dealing with problems of any intrinsic interest.”

Much if not all philosophical work in analytic metaphysics, for example, is “wilfully cut off from any serious issues,” says Dennett. The problem, he explains, is that clever students looking to show off their skills “concoct cute counterarguments that require neither technical training nor empirical knowledge.” These then build on each other and invade the journals and philosophical discourse.

There are many theories in philosophy. Most of them are wrong, and have been clearly shown to be wrong by rival philosophers. But that demonstration is itself valuable: the theories are often tempting, and the value lies in knowing why they are wrong. When we deal with serious issues outside the realm of professional philosophy, we often fall into the trap of settling on what seems like a simple solution but does not actually solve the whole problem. As H. L. Mencken said, “For every complex problem there is an answer that is clear, simple, and wrong.”

I find that when dealing with those issues, rather than taking the Donald Trump approach, I can sometimes remember to avoid the trap, use some of the tools I acquired while studying philosophy to recognize that there is hidden complexity, and know where that complexity is likely to be hiding. I can look at the known counter-examples and at rival ideas to see which may fit the situation better.

I did not get any simple answers to “the big questions” from philosophy. The biggest benefit was a toolkit of partial explanations and tools for reasoning, together with a set of approaches to generating more questions that expose unknown issues. In many cases the big questions, such as “do we have free will?”, did turn out to have at least partial answers. For example, I’m sold on at least some of Dennett’s answers in “Elbow Room: The Varieties of Free Will Worth Wanting”, and I think those answers have an important bearing on moral issues such as when, and in what way, we should hold people accountable for their actions.

Having said that, I suppose I’ll have to explain why in a subsequent post. The rough idea is that the everyday concept of free will, which is intended to rule out situations like being forced at gunpoint to do something, is a better starting point than the more abstract ideas that seem to imply we would have to defy the laws of physics. Those ideas of being free from various external constraints lead, in turn, to better ideas about how we may want to impose external constraints, such as threats of punishment, on those who would abuse those kinds of free will.


  1. The mathematics was not useful because it was too abstract. I enjoyed learning it, and still read advanced mathematics occasionally, but I never once used any of it in real life, except to teach calculus to others who needed it only to pass a test. I did not become a physicist, though I still read physics in scientific journals, so I am also happy I learned that, though it has been of no practical value to me. The computer science was almost all obsolete before I used any of it. However, it did get me a career that lasted many years and proved to be an opening to many spheres of knowledge beyond just the world of computers.