If there’s one thing nearly all people can agree on, it’s that some actions are morally right and others are wrong. But which actions count as right or wrong? That gets a bit more complicated. In some societies, polygamy is normal and proper. In others, taking a second wife can get you imprisoned. Some societies value individual rights and autonomy, while others emphasize collective obligations and hierarchy. In the face of such dazzling differences, how are we supposed to develop a coherent – much less scientific – understanding of “morality”? In a recent paper, a team of researchers at Oxford tried to answer this question by arguing that morality is always about cooperation – and they crunched data from wildly different societies all over the world to make their case.
On Monday, researchers will tell the world’s largest annual meeting of neuroscientists that some scientists working on organoids are “perilously close” to crossing the ethical line, while others may already have done so by creating sentient lumps of brain in the lab.
“If there’s even a possibility of the organoid being sentient, we could be crossing that line,” said Elan Ohayon, the director of the Green Neuroscience Laboratory in San Diego, California. “We don’t want people doing research where there is potential for something to suffer.”
Here we are, living in the “post-truth” era. Every day, the internet inundates us with alternative facts, fake news, doctored film footage, and bizarro anti-science conspiracies. No one seems to agree on what’s really real anymore. How did we, supposedly the most technologically and scientifically advanced civilization in history, get to this point? There are a lot of answers to that question, some of which, I’m sure, must involve fairly potent narcotics. But one of the most useful and informative answers has a lot to do with the social dimensions of cognition. Specifically, it has to do with how people create and then agree on social realities, or what some people call “social constructions.” In turn, the question of social construction drives straight to the heart of the so-called “science wars” – the conflict between postmodernism and science advocates – and evokes the fraught question of how religion relates to these mutual rivals.
This essay won’t invoke your righteous anger at postmodernism, scientism, religion, or any other contemporary bogeyman. I only want to look at how our relationship to “truth” has changed over the past few decades, and to think about what that shift might imply. A lot of people, such as philosopher, public atheist, and Santa Claus impersonator Daniel Dennett, blame our post-truth era on “postmodernism.” Are these critics right? In many ways, I think they are indeed onto something. But I also think that postmodernists have some credible points to make, and science advocates should take seriously what their arguments might imply for real, actual small-s science, as well as for the ideology of scientific progress (big-S Science™). Meanwhile, I think religion, scientism, and postmodernism are tangled in a tensely bound, three-way relationship of shared and opposing convictions and values. I think it’ll be illuminating to explore that triangle.
Back in the 1980s, a wife-husband philosopher team known as “the Churchlands” provoked the ire of their peers with the heretical claim that the best way to understand the mind was to study the brain. That might sound uncontroversial, but in philosophy it was anything but.
The nature of mind and consciousness had been one of the biggest and trickiest issues in philosophy for a century. Neuroscience was developing fast, but most philosophers resisted claims that it was solving the philosophical problems of mind. Scientists who trod on philosophers’ toes were accused of “scientism”: the belief that the only true explanations are scientific explanations and that once you had described the science of a phenomenon there was nothing left to say. Those rare philosophers like the Churchlands, who shared many of the enthusiasms and interests of these scientists, were even more despised. A voice in the head of Patricia Churchland told her how to deal with these often vicious critics: “outlast the bastards.”
It seems indisputable that there are holes. For example, there are keyholes, black holes and sinkholes; and there are holes in things such as sieves, golf courses and doughnuts. We come into the world through holes, and when we die many of us will be put into specially dug holes. But what are these holes and what are they made of? One of the big philosophical questions about holes is whether they are actually things themselves or, as the German-Jewish writer Kurt Tucholsky suggested in ‘The Social Psychology of Holes’ (1931), whether they are just ‘where something isn’t’. To help us investigate this issue, let us first dissect the anatomy of the hole.
At a recent conference on belief and unbelief hosted by the journal Salmagundi, the novelist and essayist Marilynne Robinson confessed to knowing some good people who are atheists, but lamented that she has yet to hear “the good Atheist position articulated.” She explained, “I cannot engage with an atheism that does not express itself.”
She who hath ears to hear, let her hear. One of the most beautifully succinct expressions of secular faith in our bounded life on earth was provided not long after Christ supposedly conquered death, by Pliny the Elder, who called down “a plague on this mad idea that life is renewed by death!” Pliny argued that belief in an afterlife removes “Nature’s particular boon,” the great blessing of death, and merely makes dying more anguished by adding anxiety about the future to the familiar grief of departure. How much easier, he continues, “for each person to trust in himself,” and for us to assume that death will offer exactly the same “freedom from care” that we experienced before we were born: oblivion.
A few years ago, a scientist named Nenad Sestan began throwing around an idea for an experiment so obviously insane, so “wild” and “totally out there,” as he put it to me recently, that at first he told almost no one about it: not his wife or kids, not his bosses in Yale’s neuroscience department, not the dean of the university’s medical school.
Like everything Sestan studies, the idea centered on the mammalian brain. More specifically, it centered on the tree-shaped neurons that govern speech, motor function and thought — the cells, in short, that make us who we are. In the course of his research, Sestan, an expert in developmental neurobiology, regularly ordered slices of animal and human brain tissue from various brain banks, which shipped the specimens to Yale in coolers full of ice. Sometimes the tissue arrived within three or four hours of the donor’s death. Sometimes it took more than a day. Still, Sestan and his team were able to culture, or grow, active cells from that tissue — tissue that was, for all practical purposes, entirely dead. In the right circumstances, they could actually keep the cells alive for several weeks at a stretch.
When I met with Sestan this spring, at his lab in New Haven, he took great care to stress that he was far from the only scientist to have noticed the phenomenon. “Lots of people knew this,” he said. “Lots and lots.” And yet he seems to have been one of the few to take these findings and push them forward: If you could restore activity to individual post-mortem brain cells, he reasoned to himself, what was to stop you from restoring activity to entire slices of post-mortem brain?
Philosophers have spent millennia debating whether we have free will, without reaching a conclusive answer. Neuroscientists optimistically entered the field in the 1980s, armed with tools they were confident could reveal the origin of actions in the brain. Three decades later, they have reached the same conclusion as the philosophers: Free will is complicated.
Now, a new research program spanning 17 universities and backed by more than $7 million from two private foundations hopes to break the impasse by bringing neuroscientists and philosophers together. The collaboration, the researchers say, can help them tackle two important questions: What does it take to have free will? And whatever that is, do we have it?
If we found out next week that neuroscientists had conclusively demonstrated that free will does not exist and that our so-called ‘choices’ are purely the result of automatic brain functions, I think we would be right to take this news badly. But imagine further that, as we continue to develop new ways to alter human brain chemistry and so on, we found a way to design a ‘free will pill’ – something like Prozac or Adderall – which alters our brains so that we can act freely.
It turns out that some recent work on free will makes this speculation more plausible than you might think. And this brings up all sorts of bizarre questions, such as whether we can freely choose to take a free will drug; whether we should take a free will drug; and what kind of effects such a drug would have on us, individually and socially.
Why would anyone spend thousands of dollars on a Prada handbag, an Armani suit, or a Rolex watch? If you really need to know the time, buy a cheap Timex or just look at your phone and send the money you have saved to Oxfam. Certain consumer behaviors seem irrational, wasteful, even evil. What drives people to possess so much more than they need?
Maybe they have good taste. In her wonderful 2003 book The Substance of Style, Virginia Postrel argues that our reaction to many consumer items is “immediate, perceptual, and emotional.” We want these things because of the pleasure we get from looking at and interacting with high-quality products—and there is nothing wrong with this. “Decoration and adornment are neither higher nor lower than ‘real’ life,” she writes. “They are part of it.”
Postrel is pushing back against a more cynical theory held by many sociologists, economists, and evolutionary theorists. Building from the insights of Thorstein Veblen, they argue that we buy such things as status symbols. Though we are often unaware of it and might angrily deny it, we are driven to accumulate ostentatious goods to impress others. Evolutionary psychologist Geoffrey Miller gives this theory an adaptationist twist, arguing that the hunger for these luxury goods is a modern expression of the evolved desire to signal attractive traits—such as intelligence, ambition, and power—to entice mates: Charles Darwin’s sexual selection meets Veblen’s conspicuous consumption.
Does the language you speak influence how you think? This is the question behind the famous linguistic relativity hypothesis: the claim that the grammar or vocabulary of a language imposes on its speakers a particular way of thinking about the world.
The strongest form of the hypothesis is that language determines thought. This version has been rejected by most scholars. A weak form is now thought to be obviously true: if one language has a specific vocabulary item for a concept but another language does not, then speakers of the first language may talk about the concept more frequently or more easily. For example, if someone explained to you, an English speaker, the meaning of the German term Schadenfreude, you could recognize the concept, but you may not have used it as regularly as a comparable German speaker.
Scholars are now interested in whether having a vocabulary item for a concept influences thought in domains far from language, such as visual perception. Consider the case of the "Russian blues." While English has a single word for blue, Russian has two words, goluboy for light blue and siniy for dark blue. These are considered "basic level" terms, like green and purple, since no adjective is needed to distinguish them. Lera Boroditsky and her colleagues displayed two shades of blue on a computer screen and asked Russian speakers to determine, as quickly as possible, whether the two blue colors were different from each other or the same as each other. The fastest discriminations were when the displayed colors were goluboy and siniy, rather than two shades of goluboy or two shades of siniy. The reaction time advantage for lexically distinct blue colors was strongest when the blue hues were perceptually similar.
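To make the comparison concrete, here is a minimal sketch in Python of how such a category-advantage analysis might look. The trial records and reaction times below are invented for illustration; this is not the study’s code or data, just the shape of the contrast it reports.

```python
# A toy sketch (hypothetical data, not the study's) of the comparison:
# mean reaction times for cross-category trials (one goluboy, one siniy)
# versus within-category trials (two shades of the same blue).
from statistics import mean

# Hypothetical reaction times in milliseconds, one record per trial.
trials = [
    {"pair": ("goluboy", "siniy"),   "rt_ms": 410},
    {"pair": ("goluboy", "siniy"),   "rt_ms": 395},
    {"pair": ("goluboy", "goluboy"), "rt_ms": 455},
    {"pair": ("siniy", "siniy"),     "rt_ms": 470},
    {"pair": ("goluboy", "siniy"),   "rt_ms": 402},
    {"pair": ("siniy", "siniy"),     "rt_ms": 448},
]

def is_cross_category(pair):
    """True when the two colors carry different basic-level labels."""
    return pair[0] != pair[1]

cross = [t["rt_ms"] for t in trials if is_cross_category(t["pair"])]
within = [t["rt_ms"] for t in trials if not is_cross_category(t["pair"])]

# The reported effect: cross-category discriminations are faster on average.
print(f"cross-category mean RT:  {mean(cross):.0f} ms")
print(f"within-category mean RT: {mean(within):.0f} ms")
print(f"category advantage:      {mean(within) - mean(cross):.0f} ms")
```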
Forty-seven years ago, the Asian elephant now known as Happy was one of seven calves captured—probably in Thailand, but details are hazy—and sent to the United States. She spent five years at a safari park in Florida, time that in the wild would have been spent by her mother’s side. Then she was moved to the Bronx Zoo in New York City. There Happy remains today, and since the death of an elephant companion in 2006, she has lived alone, her days alternating between a 1.15-acre yard and an indoor stall.
For a member of a species renowned for both intelligence and sociality, the setting is far from natural. In the wild, Happy would share a many-square-mile home range with a lifelong extended family, their bonds so close-knit that witnessing death produces symptoms akin to post-traumatic stress disorder in humans. It would seem that Happy, despite the devotion of the people who care for her, is not living her best life.
In considering Happy’s circumstances and what might be done to improve them, should something more than animal-welfare laws and zoo regulations—which the Bronx Zoo has not violated, but which are arguably inadequate—be invoked? Should Happy be considered, in legal terms, a person? Which is to say, an entity capable of possessing at least some rights historically reserved for humans alone—beginning with a right to be free?
MATTHEW FISHER was wary of how his peers would react to his latest project. In the end he was relieved he wasn’t laughed out of court. “They told me that this is sensible science – I’m not crazy.”
Certainly nothing in Fisher’s CV says crazy. A specialist in the quantum properties of materials, he worked at IBM and then at Microsoft’s Research Station Q developing quantum computers. He is now a professor at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara. This year he won a share of the American Physical Society’s Oliver E. Buckley prize in condensed matter physics, many recipients of which have gone on to win a Nobel.
The thing was, he had broached a subject many physicists would rather simply avoid.
“Does the brain use quantum mechanics? That’s a perfectly legitimate question,” says Fisher. On one level, he is right – and the answer is yes. The brain is composed of atoms, and atoms follow the laws of quantum physics. But Fisher is really asking whether the strange properties of quantum objects – being in two places at once, seeming to instantly influence each other over distance and so on – could explain still-perplexing aspects of human cognition. And that, it turns out, is a very contentious question indeed.
New Stanford Encyclopedia of Philosophy entry on the emergence of first-order logic by William Ewald:
For anybody schooled in modern logic, first-order logic can seem an entirely natural object of study, and its discovery inevitable. It is semantically complete; it is adequate to the axiomatization of all ordinary mathematics; and Lindström’s theorem shows that it is the maximal logic satisfying the compactness and Löwenheim-Skolem properties. So it is not surprising that first-order logic has long been regarded as the “right” logic for investigations into the foundations of mathematics. It occupies the central place in modern textbooks of mathematical logic, with other systems relegated to the sidelines. The history, however, is anything but straightforward, and is certainly not a matter of a sudden discovery by a single researcher. The emergence is bound up with technical discoveries, with differing conceptions of what constitutes logic, with different programs of mathematical research, and with philosophical and conceptual reflection. So if first-order logic is “natural”, it is natural only in retrospect. The story is intricate, and at points contested; the following entry can only provide an overview.
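For readers who want the maximality claim pinned down, here is a standard informal statement of the Lindström result the entry alludes to, written out in LaTeX. This is a textbook-style paraphrase, not text from the entry itself:

```latex
% Lindström's theorem (1969), stated informally. Here
% $\mathcal{L}_{\omega\omega}$ denotes ordinary first-order logic.
Let $\mathcal{L}$ be an abstract logic at least as expressive as
first-order logic. If $\mathcal{L}$ satisfies
(i) \emph{compactness}: a set of $\mathcal{L}$-sentences has a model
whenever every finite subset does, and
(ii) the \emph{downward L\"owenheim--Skolem property}: every satisfiable
$\mathcal{L}$-sentence has a model that is at most countable,
then $\mathcal{L}$ is no more expressive than first-order logic:
$\mathcal{L} \equiv \mathcal{L}_{\omega\omega}$.
```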
[…] What we believe is then of tremendous practical importance. False beliefs about physical or social facts lead us into poor habits of action that in the most extreme cases could threaten our survival. If the singer R Kelly genuinely believed the words of his song ‘I Believe I Can Fly’ (1996), I can guarantee you he would not be around by now.
But it is not only our own self-preservation that is at stake here. As social animals, our agency impacts on those around us, and improper believing puts our fellow humans at risk. As Clifford warns: ‘We all suffer severely enough from the maintenance and support of false beliefs and the fatally wrong actions which they lead to …’ In short, sloppy practices of belief-formation are ethically wrong because – as social beings – when we believe something, the stakes are very high.
Have you ever stood in a field full of cows? It’s obvious that they’re aware of one another, but in a minimal kind of way. They tend to stay loosely clumped together as they graze, and they don’t deliberately knock into other members of the herd. Shouting gets their attention, but it tends to elicit a flickering inspection at most, which subsides into cud-munching indifference when they realise you represent neither a threat nor a treat. Cows don’t gauge how to respond to sights, sounds and smells by carefully studying the subtleties of one another’s reactions (which is why they can startle each other into stampeding). When you’re with a herd of cows, you’re basically alone.
Stand or walk among a herd of elephants, however, and you’ll appreciate how different the experience is. Even the most peaceful group feels electric with communicative action. There’s continuous eye contact, touching, trunk and ear movements to which others attend and respond. Elephants engage in low-frequency vocalisation, most of which you can’t hear, but you can certainly see its effects. If you’re fidgety, for example, all the adult elephants will notice and become uneasy. Typically they take their cues from their female leader, the matriarch. When you’re with a herd of elephants, you’re not alone at all; you’re in a highly charged atmosphere, shimmering with presence and feeling. To an outside observer, elephants appear to have highly responsive minds, with their own autonomous perspectives that yield only to careful, respectful interaction.
A self-driving car is speeding down a busy road when suddenly a group of pedestrians appears in its path. The car has a split second to decide between two horrific options. Should it plow down the unwitting pedestrians or swerve into a concrete barrier, likely killing the occupants of the car?
What if the pedestrian is a woman with a stroller? Does that change the moral calculus? Or what if the occupants of the car are mostly young children while the pedestrian is a single jaywalker breaking the law? Or an elderly man, possibly disoriented?
In what feels like an increasingly polarised world, trying to convince the “other side” to see things differently often feels futile. Psychology has done a great job outlining some of the reasons why, including showing that, regardless of political leanings, most people are highly motivated to protect their existing views.
However, a problem with some of this research is that it is very difficult to concoct opposing real-life arguments of equal validity, which makes it hard to fairly compare how people treat arguments they agree with and arguments they disagree with.
To get around this problem, an elegant new paper in the Journal of Cognitive Psychology has tested people’s ability to assess the logic of formal arguments (syllogisms) structured in the exact same way, but that featured wording that either confirmed or contradicted their existing views on abortion. The results provide a striking demonstration of how our powers of reasoning are corrupted by our prior attitudes.
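To see what “structured in the exact same way” amounts to, consider one classically valid syllogistic form, sketched schematically below. This is an illustration of the general method, not the study’s actual materials: validity turns on the form alone, so the same skeleton can be filled with belief-congruent or belief-incongruent content words without changing the logic at all.

```latex
% One classically valid form. Whatever A, B, and C denote, the
% conclusion follows; swapping in charged content changes nothing
% about the logic, only about how the argument feels to the reader.
\[
\frac{\text{All } A \text{ are } B \qquad \text{All } B \text{ are } C}
     {\therefore\ \text{All } A \text{ are } C}
\]
```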
Caruso: [Dan,] you have famously argued that freedom evolves and that humans, alone among the animals, have evolved minds that give us free will and moral responsibility. I, on the other hand, have argued that what we do and the way we are is ultimately the result of factors beyond our control, and that because of this we are never morally responsible for our actions, in a particular but pervasive sense – the sense that would make us truly deserving of blame and praise, punishment and reward. While these two views appear to be at odds with each other, one of the things I would like to explore in this conversation is how far apart we actually are. I suspect that we may have more in common than some think – but I could be wrong. To begin, can you explain what you mean by ‘free will’ and why you think humans alone have it?
Dennett: A key word in understanding our differences is ‘control’. [Gregg,] you say ‘the way we are is ultimately the result of factors beyond our control’ and that is true of only those unfortunates who have not been able to become autonomous agents during their childhood upbringing. There really are people, with mental disabilities, who are not able to control themselves, but normal people can manage under all but the most extreme circumstances, and this difference is both morally important and obvious, once you divorce the idea of control from the idea of causation. Your past does not control you; for it to control you, it would have to be able to monitor feedback about your behaviour and adjust its interventions – which is nonsense.
Imagine you’re the president of a European country. You’re slated to take in 50,000 refugees from the Middle East this year. Most of them are very religious, while most of your population is very secular. You want to integrate the newcomers seamlessly, minimizing the risk of economic malaise or violence, but you have limited resources. One of your advisers tells you to invest in the refugees’ education; another says providing jobs is the key; yet another insists the most important thing is giving the youth opportunities to socialize with local kids. What do you do?
Well, you make your best guess and hope the policy you chose works out. But it might not. Even a policy that yielded great results in another place or time may fail miserably in your particular country under its present circumstances. If that happens, you might find yourself wishing you could hit a giant reset button and run the whole experiment over again, this time choosing a different policy. But of course, you can’t experiment like that, not with real people.
You can, however, experiment like that with virtual people. And that’s exactly what the Modeling Religion Project does. An international team of computer scientists, philosophers, religion scholars, and others are collaborating to build computer models that they populate with thousands of virtual people, or “agents.” As the agents interact with each other and with shifting conditions in their artificial environment, their attributes and beliefs—levels of economic security, of education, of religiosity, and so on—can change. At the outset, the researchers program the agents to mimic the attributes and beliefs of a real country’s population using survey data from that country. They also “train” the model on a set of empirically validated social-science rules about how humans tend to interact under various pressures.
And then they experiment: Add in 50,000 newcomers, say, and invest heavily in education. How does the artificial society change? The model tells you. Don’t like it? Just hit that reset button and try a different policy.
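As a rough illustration of the mechanics being described, here is a deliberately toy agent-based sketch in Python. Every attribute, update rule, and number below is an assumption invented for illustration; the project’s actual models are far richer and are calibrated against real survey data and validated social-science rules.

```python
# A toy agent-based model in the spirit described above. All rules and
# constants here are illustrative assumptions, not the Modeling Religion
# Project's actual model.
import random

class Agent:
    def __init__(self, religiosity, economic_security, education):
        self.religiosity = religiosity              # 0.0 .. 1.0
        self.economic_security = economic_security  # 0.0 .. 1.0
        self.education = education                  # 0.0 .. 1.0

def make_population(n, survey):
    """Initialize agents to mimic survey averages, plus individual noise."""
    def jitter(x):
        return min(1.0, max(0.0, x + random.gauss(0, 0.1)))
    return [Agent(jitter(survey["religiosity"]),
                  jitter(survey["economic_security"]),
                  jitter(survey["education"])) for _ in range(n)]

def step(population, education_investment):
    """One tick: a stand-in rule where security and education nudge belief."""
    for a in population:
        a.education = min(1.0, a.education + education_investment)
        # Illustrative placeholder for an empirically derived rule:
        # religiosity drifts down as security and education rise.
        drift = 0.05 * (0.5 - (a.economic_security + a.education) / 2)
        a.religiosity = min(1.0, max(0.0, a.religiosity + drift))

def run_experiment(policy, ticks=20, seed=42):
    """The 'reset button': re-seed, rebuild the population, rerun a policy."""
    random.seed(seed)
    pop = make_population(1000, {"religiosity": 0.7,
                                 "economic_security": 0.4,
                                 "education": 0.5})
    for _ in range(ticks):
        step(pop, education_investment=policy["education_investment"])
    return sum(a.religiosity for a in pop) / len(pop)

# Try two policies against identical starting conditions and compare.
print(run_experiment({"education_investment": 0.00}))
print(run_experiment({"education_investment": 0.01}))
```

Because the random seed is fixed before each run, both policies start from the same virtual population, which is exactly the rerun-the-experiment capability the passage describes.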