The modern successor to the Hippocratic oath, called the Declaration of Geneva, was updated and approved by the World Medical Association in 2017. The pledge states that “The health and well-being of my patient will be my first consideration” and “I will not use my medical knowledge to violate human rights and civil liberties, even under threat.”1 Can a physician work in US immigration detention facilities while upholding this pledge?
For the first time, scientists have detected brain waves similar to those of a pre-term baby in miniature, lab-grown brains.
The results, published Thursday in the journal Cell Stem Cell, have big implications for the medical field. Access to human brains is a consistent barrier to studying conditions like Alzheimer’s, autism, or schizophrenia; for obvious reasons, infant brains are even more difficult to obtain. So models that are grown from stem cells like these mini-brains (known to scientists as “cortical organoids”) may offer a solution.
We learn from our personal interaction with the world, and our memories of those experiences help guide our behaviors. Experience and memory are inexorably linked, or at least they seemed to be before a recent report on the formation of completely artificial memories. Using laboratory animals, investigators reverse engineered a specific natural memory by mapping the brain circuits underlying its formation. They then “trained” another animal by stimulating brain cells in the pattern of the natural memory. Doing so created an artificial memory that was retained and recalled in a manner indistinguishable from a natural one.
Memories are essential to the sense of identity that emerges from the narrative of personal experience. This study is remarkable because it demonstrates that by manipulating specific circuits in the brain, memories can be separated from that narrative and formed in the complete absence of real experience. The work shows that brain circuits that normally respond to specific experiences can be artificially stimulated and linked together in an artificial memory. That memory can be elicited by the appropriate sensory cues in the real environment. The research provides some fundamental understanding of how memories are formed in the brain and is part of a burgeoning science of memory manipulation that includes the transfer, prosthetic enhancement and erasure of memory. These efforts could have a tremendous impact on a wide range of individuals, from those struggling with memory impairments to those enduring traumatic memories, and they also have broad social and ethical implications.
Marieme and Ndeye each have a sticker on their faces: a butterfly for Ndeye, and a green smiley face for her twin sister. They giggle as they take them off and stick them back on; then Ndeye decides it’s their dad’s turn, placing the smiley face over his right eye.
“Ndeye is the lively one, she likes attention, and Marieme is a quieter personality – calm and thoughtful,” said Ibrahima Ndiaye, the twins’ father. “Ndeye is fire and Marieme is ice.”
Their behaviour – and their differences – are typical for three-year-old twins, but Marieme and Ndeye are not typical at all. The sisters are conjoined: they have separate brains, hearts and lungs, but share a liver, bladder and digestive system, and have three kidneys between them.
Ndiaye brought his daughters from Senegal to Great Ormond Street hospital (GOSH) in London at the age of eight months after a desperate search for medical help. Over the past two and a half years, he and the hospital have wrestled with an agonising decision about whether to go ahead with a surgical separation that Marieme would not survive, but that could give Ndeye a chance of a reasonable life. Without a separation, both will almost certainly die.
Mathematicians, computer engineers and scientists in related fields should take a Hippocratic oath to protect the public from powerful new technologies under development in laboratories and tech firms, a leading researcher has said.
The ethical pledge would commit scientists to think deeply about the possible applications of their work and compel them to pursue only those that, at the least, do no harm to society.
Hannah Fry, an associate professor in the mathematics of cities at University College London, said an equivalent of the doctor’s oath was crucial given that mathematicians and computer engineers were building the tech that would shape society’s future.
You might be aware that chimpanzees can recognize themselves in a mirror, communicate through sign language, pursue goals creatively and form long-lasting friendships. You might also think that these are the kinds of things that a person can do. However, you might not think of chimpanzees as persons.
The Nonhuman Rights Project does. Since 2013, the group has been working on behalf of two chimpanzees, Kiko and Tommy, currently being held in cages by their “owners” without the company of other chimpanzees. It is asking the courts to rule that Kiko and Tommy have the right to bodily liberty and to order their immediate release into a sanctuary where they can live out the rest of their lives with other chimpanzees.
The problem is that under current United States law, one is either a “person” or a “thing.” There is no third option. If you are a person, you have the capacity for rights, including the right to habeas corpus relief, which protects you from unlawful confinement. If you are a thing, you do not have the capacity for rights. And unfortunately, even though they are sensitive, intelligent, social beings, Kiko and Tommy are considered things under the law.
At a recent conference on belief and unbelief hosted by the journal Salmagundi, the novelist and essayist Marilynne Robinson confessed to knowing some good people who are atheists, but lamented that she has yet to hear “the good Atheist position articulated.” She explained, “I cannot engage with an atheism that does not express itself.”
She who hath ears to hear, let her hear. One of the most beautifully succinct expressions of secular faith in our bounded life on earth was provided not long after Christ supposedly conquered death, by Pliny the Elder, who called down “a plague on this mad idea that life is renewed by death!” Pliny argued that belief in an afterlife removes “Nature’s particular boon,” the great blessing of death, and merely makes dying more anguished by adding anxiety about the future to the familiar grief of departure. How much easier, he continues, “for each person to trust in himself,” and for us to assume that death will offer exactly the same “freedom from care” that we experienced before we were born: oblivion.
A Japanese stem-cell scientist is the first to receive government support to create animal embryos that contain human cells and transplant them into surrogate animals since a ban on the practice was overturned earlier this year.
Hiromitsu Nakauchi, who leads teams at the University of Tokyo and Stanford University in California, plans to grow human cells in mouse and rat embryos and then transplant those embryos into surrogate animals. Nakauchi's ultimate goal is to produce animals with organs made of human cells that can, eventually, be transplanted into people.
Until March, Japan explicitly forbade the growth of animal embryos containing human cells beyond 14 days, as well as the transplant of such embryos into a surrogate uterus. That month Japan’s education and science ministry issued new guidelines allowing the creation of human-animal embryos that can be transplanted into surrogate animals and brought to term.
Fahad Diwan logs in and fills out the details of a person facing a bail hearing. Date of birth. Current charges. Pending charges. Past convictions.
Once his SmartBail program is done, he says, an algorithm trained on a mountain of data will be able to assess whether that suspect is a good candidate for pretrial release. Unlikely to be a flight risk. Unlikely to commit offences. Likely to comply with the conditions of release.
Suspects in custody are “legally innocent people,” said Diwan, 30, who hopes to one day put his software to use in Ontario’s bail courts. “We just want to find a way to make the system better, faster, economical.”
Proponents of this kind of program say machine learning would save time and money by quickly identifying people who should be released, speeding up bail hearings, reducing the number of people in jails and freeing up courts to focus on defendants who should have a full, contested hearing. All that with less bias and without affecting the crime rate.
In this episode, Kathryn Sussman talks with Dr. Andrew Fenton and Dr. Letitia Meynell, authors and associate professors of Philosophy at Dalhousie University in Halifax. We learn from them about the ethics behind animal captivity in zoos and the relationship that such institutions create between humans and other animal species. They also reflect upon the ethical ways of displaying animals, particularly exotic animals such as polar bears in zoos far from their natural habitat, and the justifications of doing that.
The experts unravel the differences between zoos and sanctuaries as depending on who the exhibits are built for – human visitors or the animals themselves. They also explain how zoo professionals and zoo associations are now starting to aim towards a transformation, focusing on the animals’ well-being and giving them a life worth living where their basic needs are met.
A few years ago, a scientist named Nenad Sestan began throwing around an idea for an experiment so obviously insane, so “wild” and “totally out there,” as he put it to me recently, that at first he told almost no one about it: not his wife or kids, not his bosses in Yale’s neuroscience department, not the dean of the university’s medical school.
Like everything Sestan studies, the idea centered on the mammalian brain. More specifically, it centered on the tree-shaped neurons that govern speech, motor function and thought — the cells, in short, that make us who we are. In the course of his research, Sestan, an expert in developmental neurobiology, regularly ordered slices of animal and human brain tissue from various brain banks, which shipped the specimens to Yale in coolers full of ice. Sometimes the tissue arrived within three or four hours of the donor’s death. Sometimes it took more than a day. Still, Sestan and his team were able to culture, or grow, active cells from that tissue — tissue that was, for all practical purposes, entirely dead. In the right circumstances, they could actually keep the cells alive for several weeks at a stretch.
When I met with Sestan this spring, at his lab in New Haven, he took great care to stress that he was far from the only scientist to have noticed the phenomenon. “Lots of people knew this,” he said. “Lots and lots.” And yet he seems to have been one of the few to take these findings and push them forward: If you could restore activity to individual post-mortem brain cells, he reasoned to himself, what was to stop you from restoring activity to entire slices of post-mortem brain?
On Monday, the German Ethics Council made public a 230-page report discussing its current position on human genome manipulation and, in particular, germline editing. According to the press release published on 9 May, a few days before the report, “germline interventions [are] currently too risky, but not ethically out of the question”.
The council, made up of 26 ethicists, legal scholars, scientists, and other experts, unanimously agreed there are no compelling philosophical arguments against altering human germlines, which they write is not “in principle, ethically reprehensible.” […]
The World Health Organization called for the establishment of a global registry of gene-editing research on humans last March. And many scientists would now agree that genome editing in the human germline should be regulated not by the scientific community but by law.
All members agreed “the human germline is not inviolable”, although not all are in favour of pursuing germline interventions – some are concerned the possible benefits may not outweigh the potential downsides.
Ron Posno was diagnosed with mild cognitive impairment—a precursor to dementia—in 2016, and soon after, the London, Ont., resident rewrote his will. He already had a Do Not Resuscitate order in place, and to this he added instructions for the niece who was his substitute decision maker that at a specific point in the progress of his illness, she was to seek medical assistance in dying on his behalf.
The eight conditions that Posno identified as signalling the proper time for his death are like a photographic negative that also reveals what he considers a life worth living. When I am unable to recognize and respond to family and friends; when I frequently experience hallucinations, paranoia or acute depression; when I become routinely incontinent; when I am unable to eat, clean or dress myself without assistance: that is when I want it to be over.
But then Posno’s niece, a lawyer in Toronto, informed him that an advance request like this for medical assistance in dying (MAID) was against the law and she would have no ability to act on it once he could no longer consent.
Posno had assumed that this request was basically an extension of his DNR: a statement of his desires for medical treatment in a given set of circumstances. He found it incomprehensible that he could legally state that he did not want CPR and the instruction would be followed if he were unconscious with a DNR in place, but in the face of an illness that would eventually render him unable to provide informed consent, he couldn’t request MAID on behalf of a carefully delineated future version of himself.
Last week, the Supreme Court agreed to review three lower court decisions posing the important question whether Title VII of the Civil Rights Act of 1964—which makes it unlawful for an employer or prospective employer “to discriminate against any individual . . . because of such individual’s . . . sex”—thereby forbids discrimination on the basis of sexual orientation and gender identity. There is little doubt that few if any of the members of the Congress that originally enacted the statutory language would have thought it had that effect.
However, as the late Justice Antonin Scalia wrote for the Court in a 1998 Title VII case that applied the statute’s sex discrimination prohibition to other circumstances that its drafters likely did not envision, “it is ultimately the provisions of our laws rather than the principal concerns of our legislators by which we are governed.” And there are straightforward reasons to think that discrimination based on sexual orientation or gender identity is sex discrimination.
The pending Title VII cases thus pose a test for the Court’s conservative majority. At one point or another and to varying degrees, all of the Court’s conservatives have embraced some version of the so-called textualist approach to statutory interpretation epitomized by Justice Scalia’s observation in the 1998 case, Oncale v. Sundowner Offshore Services, Inc. If they keep faith with their textualist commitment, they will rule in favor of the plaintiffs.
Human intelligence is one of evolution’s most consequential inventions. It is the result of a sprint that started millions of years ago, leading to ever bigger brains and new abilities. Eventually, humans stood upright, took up the plow, and created civilization, while our primate cousins stayed in the trees.
Now scientists in southern China report that they’ve tried to narrow the evolutionary gap, creating several transgenic macaque monkeys with extra copies of a human gene suspected of playing a role in shaping human intelligence.
“This was the first attempt to understand the evolution of human cognition using a transgenic monkey model,” says Bing Su, a geneticist at the Kunming Institute of Zoology who led the effort.
According to their findings, the modified monkeys did better on a memory test involving colors and block pictures, and their brains also took longer to develop—as those of human children do. There wasn’t a difference in brain size.
A team of scientists in Spain is getting ready to experiment on prisoners. If the scientists get the necessary approvals, they plan to start a study this month that involves placing electrodes on inmates’ foreheads and sending a current into their brains. The electricity will target the prefrontal cortex, a brain region that plays a role in decision-making and social behavior. The idea is that stimulating more activity in that region may make the prisoners less aggressive.
This technique — transcranial direct current stimulation, or tDCS — is a form of neurointervention, meaning it acts directly on the brain. Using neurointerventions in the criminal justice system is highly controversial. In recent years, scientists and philosophers have been debating under what conditions (if any) it might be ethical.
The Spanish team is the first to use tDCS on prisoners. They’ve already done it in a pilot study, publishing their findings in Neuroscience in January, and they were all set to implement a follow-up study involving at least 12 convicted murderers and other inmates this month. On Wednesday, New Scientist broke news of the upcoming experiment, noting that it had approval from the Spanish government, prison officials, and a university ethics committee. The next day, the Interior Ministry changed course and put the study on hold.
HALF MOON BAY, Calif. — When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.
“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.
As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.
But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.
[…] The chairperson of the group drafting the report on mental illness and assisted dying, Kwame McKenzie, made a statement to Canadian news media in support of current government policy that excludes competent people who suffer from refractory mental illness from access to assisted dying. He reportedly cautioned that ‘no one can be completely certain that a mentally ill patient is never going to get better’. Which takes me to the actual topic of this blogpost: certainty as a standard for health policy making. Complete certainty, if that were ever possible in the context of health and disease, where most decision making is based on probability as opposed to certainty, might be a defensible threshold if nobody were harmed by the implementation of such a high standard. If the setting of a high standard were cost neutral, there would be no good reason not to have such a standard.
Forty-seven years ago, the Asian elephant now known as Happy was one of seven calves captured—probably in Thailand, but details are hazy—and sent to the United States. She spent five years at a safari park in Florida, time that in the wild would have been spent by her mother’s side. Then she was moved to the Bronx Zoo in New York City. There Happy remains today, and since the death of an elephant companion in 2006, she has lived alone, her days alternating between a 1.15-acre yard and an indoor stall.
For a member of a species renowned for both intelligence and sociality, the setting is far from natural. In the wild, Happy would share a many-square-mile home range with a lifelong extended family, their bonds so close-knit that witnessing death produces symptoms akin to post-traumatic stress disorder in humans. It would seem that Happy, despite the devotions of the people who care for her, is not living her best life.
In considering Happy’s circumstances and what might be done to improve them, should something more than animal-welfare laws and zoo regulations—which the Bronx Zoo has not violated, but arguably are inadequate—be invoked? Should Happy be considered, in legal terms, a person? Which is to say, an entity capable of possessing at least some rights historically reserved for humans alone—beginning with a right to be free?
Difficult ethical issues arise for patients and professionals in medical genetics, and often relate to the patient’s family or their social context. Tackling these issues requires sensitivity to nuances of communication and a commitment to clarity and consistency. It also benefits from an awareness of different approaches to ethical theory. Many of the ethical problems encountered in genetics relate to tensions between the wishes or interests of different people, sometimes even people who do not (yet) exist or exist as embryos, either in an established pregnancy or in vitro. Concern for the long-term welfare of a child or young person, or possible future children, or for other members of the family, may lead to tensions felt by the patient (client) in genetic counselling. Differences in perspective may also arise between the patient and professional when the latter recommends disclosure of information to relatives and the patient finds that too difficult, or when the professional considers the genetic testing of a child, sought by parents, to be inappropriate. The expectations of a patient’s community may also lead to differences in perspective between patient and counsellor. Recent developments of genetic technology permit genome-wide investigations. These have generated additional and more complex data that amplify and exacerbate some pre-existing ethical problems, including those presented by incidental (additional sought and secondary) findings and the recognition of variants currently of uncertain significance, so that reports of genomic investigations may often be provisional rather than definitive. Experience is being gained with these problems, but substantial challenges are likely to persist in the long term.