MIT CSAIL’s AI can predict the onset of breast cancer 5 years in advance

Breast cancer is the second leading cancer-related cause of death among women in the U.S. It’s estimated that in 2015, 232,000 women were diagnosed with the disease and approximately 40,000 died from it. And while exams like mammography have come into wide practice — in 2014, over 39 million breast cancer screenings were performed in the U.S. alone — they’re not always reliable. About 10% to 15% of women who undergo a mammogram are asked to return following an inconclusive analysis.

Fortunately, with the help of AI, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital are taking steps toward more consistent and reliable screening procedures. In a newly published paper in the journal Radiology, they describe a machine learning model that can predict from a mammogram whether a patient is likely to develop breast cancer as many as five years in the future.

Is Ethical A.I. Even Possible? | NYT

HALF MOON BAY, Calif. — When a news article revealed that Clarifai was working with the Pentagon and some employees questioned the ethics of building artificial intelligence that analyzed video captured by drones, the company said the project would save the lives of civilians and soldiers.

“Clarifai’s mission is to accelerate the progress of humanity with continually improving A.I.,” read a blog post from Matt Zeiler, the company’s founder and chief executive, and a prominent A.I. researcher. Later, in a news media interview, Mr. Zeiler announced a new management position that would ensure all company projects were ethically sound.

As activists, researchers, and journalists voice concerns over the rise of artificial intelligence, warning against biased, deceptive and malicious applications, the companies building this technology are responding. From tech giants like Google and Microsoft to scrappy A.I. start-ups, many are creating corporate principles meant to ensure their systems are designed and deployed in an ethical way. Some set up ethics officers or review boards to oversee these principles.

But tensions continue to rise as some question whether these promises will ultimately be kept. Companies can change course. Idealism can bow to financial pressure. Some activists — and even some companies — are beginning to argue that the only way to ensure ethical practices is through government regulation.

In matters of morality, self-driving cars face a cultural obstacle course - The Globe and Mail

A self-driving car is speeding down a busy road when suddenly a group of pedestrians appears in its path. The car has a split-second to decide between two horrific options. Should it plow down the unwitting pedestrians or swerve into a concrete barrier with the likelihood that occupants in the car may be killed?

What if the pedestrian is a woman with a stroller? Does that change the moral calculus? Or what if the occupants of the car are mostly young children while the pedestrian is a single jaywalker breaking the law? Or an elderly man, possibly disoriented?

Computer Programmers Get New Tech Ethics Code - Scientific American

Computing professionals are on the front lines of almost every aspect of the modern world. They’re involved in the response when hackers steal the personal information of hundreds of thousands of people from a large corporation. Their work can protect—or jeopardize—critical infrastructure like electrical grids and transportation lines. And the algorithms they write may determine who gets a job, who is approved for a bank loan or who gets released on bail.

Technological professionals are the first, and last, lines of defense against the misuse of technology. Nobody else understands the systems as well, and nobody else is in a position to protect specific data elements or ensure the connections between one component and another are appropriate, safe and reliable. As the role of computing continues its decades-long expansion in society, computer scientists are central to what happens next.

That’s why the world’s largest organization of computer scientists and engineers, the Association for Computing Machinery, of which I am president, has issued a new code of ethics for computing professionals. And it’s why ACM is taking other steps to help technologists engage with ethical questions.

Artificial Intelligence Shows Why Atheism Is Unpopular - The Atlantic

Imagine you’re the president of a European country. You’re slated to take in 50,000 refugees from the Middle East this year. Most of them are very religious, while most of your population is very secular. You want to integrate the newcomers seamlessly, minimizing the risk of economic malaise or violence, but you have limited resources. One of your advisers tells you to invest in the refugees’ education; another says providing jobs is the key; yet another insists the most important thing is giving the youth opportunities to socialize with local kids. What do you do? 

Well, you make your best guess and hope the policy you chose works out. But it might not. Even a policy that yielded great results in another place or time may fail miserably in your particular country under its present circumstances. If that happens, you might find yourself wishing you could hit a giant reset button and run the whole experiment over again, this time choosing a different policy. But of course, you can’t experiment like that, not with real people.

You can, however, experiment like that with virtual people. And that’s exactly what the Modeling Religion Project does. An international team of computer scientists, philosophers, religion scholars, and others are collaborating to build computer models that they populate with thousands of virtual people, or “agents.” As the agents interact with each other and with shifting conditions in their artificial environment, their attributes and beliefs—levels of economic security, of education, of religiosity, and so on—can change. At the outset, the researchers program the agents to mimic the attributes and beliefs of a real country’s population using survey data from that country. They also “train” the model on a set of empirically validated social-science rules about how humans tend to interact under various pressures.

And then they experiment: Add in 50,000 newcomers, say, and invest heavily in education. How does the artificial society change? The model tells you. Don’t like it? Just hit that reset button and try a different policy.
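The loop described above — seed agents from survey data, apply empirically grounded interaction rules, run the clock forward, then "reset" and try another policy — can be sketched in miniature. Every attribute, rule, and number below is invented for illustration; the Modeling Religion Project's real models are far richer:

```python
import random

random.seed(0)

class Agent:
    """A virtual citizen with two illustrative attributes."""
    def __init__(self, religiosity, security):
        self.religiosity = religiosity  # 0.0 (secular) .. 1.0 (devout)
        self.security = security        # economic security, 0.0 .. 1.0

def step(population):
    """One toy interaction round: insecurity nudges religiosity up,
    contact with more-secular neighbours nudges it down. These are
    stand-ins for empirically derived rules, not the project's own."""
    for agent in population:
        neighbour = random.choice(population)
        agent.religiosity += 0.05 * (1.0 - agent.security)
        agent.religiosity += 0.05 * (neighbour.religiosity - agent.religiosity)
        agent.religiosity = min(1.0, max(0.0, agent.religiosity))

def run_experiment(n_locals=1000, n_newcomers=50, years=10, invest_in_security=0.0):
    """Build a mostly secular society, add religious newcomers whose
    economic security depends on the chosen policy, run it forward,
    and report mean religiosity."""
    population = [Agent(random.uniform(0.0, 0.3), random.uniform(0.5, 1.0))
                  for _ in range(n_locals)]
    population += [Agent(random.uniform(0.7, 1.0),
                         random.uniform(0.0, 0.4) + invest_in_security)
                   for _ in range(n_newcomers)]
    for _ in range(years):
        step(population)
    return sum(a.religiosity for a in population) / len(population)

# The "reset button": rerun the same society under a different policy.
baseline = run_experiment(invest_in_security=0.0)
with_jobs = run_experiment(invest_in_security=0.3)
```

Hitting the reset button is just calling `run_experiment` again with a different policy parameter — the luxury real policymakers lack.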

'The discourse is unhinged': how the media gets AI alarmingly wrong | The Guardian

[...] A month after this initial research was released, Fast Company published an article titled “AI Is Inventing Language Humans Can’t Understand. Should We Stop It?”. The story focused almost entirely on how the bots occasionally diverged from standard English – which was not the main finding of the paper – and reported that after the researchers “realized their bots were chattering in a new language” they decided to pull the plug on the whole experiment, as if the bots were in some way out of control.

Fast Company’s story went viral and spread across the internet, prompting a slew of content-hungry publications to further promote this new Frankenstein-esque narrative: “Facebook engineers panic, pull plug on AI after bots develop their own language,” one website reported. Not to be outdone, the Sun proposed that the incident “closely resembled the plot of The Terminator in which a robot becomes self-aware and starts waging a war on humans”.

Zachary Lipton, an assistant professor in the machine learning department at Carnegie Mellon University, watched with frustration as this story transformed from “interesting-ish research” to “sensationalized crap”.

Is This the World’s Most Bizarre Scholarly Meeting? - The Chronicle of Higher Education

Start with Noam Chomsky, Deepak Chopra, and a robot that loves you no matter what. Add a knighted British physicist, a renowned French neuroscientist, and a prominent Australian philosopher/occasional blues singer. Toss in a bunch of psychologists, mathematicians, anesthesiologists, artists, meditators, a computer programmer or two, and several busloads of amateur theorists waving self-published manuscripts and touting grand unified solutions. Send them all to a swanky resort in the desert for a week, supply them with lots of free coffee and beer, and ask them to unpack a riddle so confounding that it’s unclear how to make progress or where you’d even begin.

Then just, like, see what happens.

No death and an enhanced life: Is the future transhuman? | Technology | The Guardian

The aims of the transhumanist movement are summed up by Mark O’Connell in his book To Be a Machine, which last week won the Wellcome Book prize. “It is their belief that we can and should eradicate ageing as a cause of death; that we can and should use technology to augment our bodies and our minds; that we can and should merge with machines, remaking ourselves, finally, in the image of our own higher ideals.”

The idea of technologically enhancing our bodies is not new. But the extent to which transhumanists take the concept is. In the past, we made devices such as wooden legs, hearing aids, spectacles and false teeth. In future, we might use implants to augment our senses so we can detect infrared or ultraviolet radiation directly or boost our cognitive processes by connecting ourselves to memory chips. Ultimately, by merging man and machine, science will produce humans who have vastly increased intelligence, strength, and lifespans; a near embodiment of gods.

Opinion | It’s Westworld. What’s Wrong With Cruelty to Robots? - The New York Times

Suppose we had robots perfectly identical to men, women and children and we were permitted by law to interact with them in any way we pleased. How would you treat them?

That is the premise of “Westworld,” the popular HBO series that opened its second season Sunday night. And, plot twists of Season 2 aside, it raises a fundamental ethical question we humans in the not-so-distant future are likely to face.

Is Anyone Home? A Way to Find Out If AI Has Become Self-Aware - Scientific American Blog Network

Every moment of your waking life and whenever you dream, you have the distinct inner feeling of being “you.” When you see the warm hues of a sunrise, smell the aroma of morning coffee or mull over a new idea, you are having conscious experience. But could an artificial intelligence (AI) ever have experience, like some of the androids depicted in Westworld or the synthetic beings in Blade Runner?

The question is not so far-fetched. Robots are currently being developed to work inside nuclear reactors, fight wars and care for the elderly. As AIs grow more sophisticated, they are projected to take over many human jobs within the next few decades. So we must ponder the question: Could AIs develop conscious experience?

The Nature of Consciousness: Sam Harris

In this episode of the Waking Up podcast, Sam Harris speaks with Thomas Metzinger about the scientific and experiential understanding of consciousness. They also talk about the significance of WWII for the history of ideas, the role of intuition in science, the ethics of building conscious AI, the self as a hallucination, how we identify with our thoughts, attention as the root of the feeling of self, the place of Eastern philosophy in Western science, and the limitations of secular humanism.

The rise of AI is sparking an international arms race - Vox

"Artificial intelligence is the future not only of Russia but of all of mankind ... Whoever becomes the leader in this sphere will become the ruler of the world."

Russian President Vladimir Putin made this statement to a group of students two weeks ago. Shortly thereafter, Tesla’s Elon Musk, who has worried publicly about the hazards of artificial intelligence (AI) for years now, posted an ominous tweet in response to Putin’s remarks.

“China, Russia, soon all countries w/ strong computer science,” he wrote. “Competition for AI superiority at national level most likely cause of WW3 in my opinion.”

Teaching robots right from wrong

More than 400 years ago, according to legend, a rabbi knelt by the banks of the Vltava river in what is now known as the Czech Republic. He pulled handfuls of clay out of the water and carefully patted them into the shape of a man. The Jews of Prague, falsely accused of using the blood of Christians in their rituals, were under attack. The rabbi, Judah Loew ben Bezalel, decided that his community needed a protector stronger than any human. He inscribed the Hebrew word for “truth”, emet, onto his creation’s forehead and placed a capsule inscribed with a Kabbalistic formula into its mouth. The creature sprang to life.

The Golem patrolled the ghetto, protecting its citizens and carrying out useful jobs: sweeping the streets, conveying water and splitting firewood. All was harmonious until the day the rabbi forgot to disable the Golem for the Sabbath, as he was required to, and the creature embarked on a murderous rampage. The rabbi was forced to scrub the initial letter from the word on the Golem’s forehead to make met, the Hebrew word for “death”. Life slipped from the Golem and he crumbled into dust.

This cautionary tale about the risks of building a mechanical servant in man’s image has gained fresh resonance in the age of artificial intelligence. Legions of robots now carry out our instructions unreflectively. How do we ensure that these creatures, regardless of whether they’re built from clay or silicon, always work in our best interests? Should we teach them to think for themselves? And if so, how are we to teach them right from wrong?

Creating robots capable of moral reasoning is like parenting | Aeon Essays

Intelligent machines, long promised and never delivered, are finally on the horizon. Sufficiently intelligent robots will be able to operate autonomously from human control. They will be able to make genuine choices. And if a robot can make choices, there is a real question about whether it will make moral choices. But what is moral for a robot? Is this the same as what’s moral for a human?

Philosophers and computer scientists alike tend to focus on the difficulty of implementing subtle human morality in literal-minded machines. But there’s another problem, one that really ought to come first. It’s the question of whether we ought to try to impose our own morality on intelligent machines at all. In fact, I’d argue that doing so is likely to be counterproductive, and even unethical. The real problem of robot morality is not the robots, but us. Can we handle sharing the world with a new type of moral creature?

The Dark Secret at the Heart of AI

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

How humans will lose control of artificial intelligence

This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results — not knowing they've already doomed us all.

Google’s sibling DeepMind artificial intelligence has the potential to learn to become highly aggressive

Humans can be a selfish bunch, often acting out of self-interest instead of consideration for others. Given the circumstances, though, people can also work together towards a greater cause.

Alphabet subsidiary DeepMind recently published a study of how this behaviour would apply to multiple artificial intelligences when they’re placed together in certain situations. In doing so, the company hopes to better understand and control how AI works.
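A stripped-down version of the dynamic the study examines — cooperation when resources are plentiful, aggression when they are scarce — can be reproduced with two bandit-style learners. The payoffs and the epsilon-greedy rule below are illustrative stand-ins for DeepMind's deep reinforcement learning setup, not its actual code:

```python
import random

random.seed(1)

def play_round(action_a, action_b, abundance):
    """Toy payoffs: 'gather' takes a share of the apples; 'zap'
    (aggression) knocks the rival out of the round but earns no
    apples directly, paying off only when apples are scarce.
    All numbers here are invented for illustration."""
    payoffs = {}
    for me, mine, theirs in (("a", action_a, action_b), ("b", action_b, action_a)):
        if theirs == "zap":
            payoffs[me] = 0.0            # knocked out this round
        elif mine == "gather":
            payoffs[me] = abundance      # share of the available apples
        else:                            # I zapped a gatherer: the field is mine
            payoffs[me] = 1.0 - abundance
    return payoffs["a"], payoffs["b"]

def learn(episodes=5000, abundance=0.2):
    """Each agent keeps a running value estimate per action and plays
    epsilon-greedy (a simple bandit, not deep RL)."""
    values = {a: {"gather": 0.0, "zap": 0.0} for a in ("a", "b")}
    counts = {a: {"gather": 0, "zap": 0} for a in ("a", "b")}
    for _ in range(episodes):
        acts = {}
        for agent in ("a", "b"):
            if random.random() < 0.1:    # explore
                acts[agent] = random.choice(["gather", "zap"])
            else:                        # exploit current estimate
                acts[agent] = max(values[agent], key=values[agent].get)
        r_a, r_b = play_round(acts["a"], acts["b"], abundance)
        for agent, reward in (("a", r_a), ("b", r_b)):
            act = acts[agent]
            counts[agent][act] += 1
            values[agent][act] += (reward - values[agent][act]) / counts[agent][act]
    return values

scarce = learn(abundance=0.2)     # few apples: zapping becomes tempting
plentiful = learn(abundance=0.8)  # many apples: gathering stays best
```

With abundant apples, the learned value of gathering stays above zapping; under scarcity the incentive tends to flip, which is the qualitative pattern the study reported in its Gathering game.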

Big law is having its Uber moment

Benjamin Alarie, a law professor at the University of Toronto, was startled to learn, two years ago, that the school’s computer science students planned to “disrupt” his lucrative, centuries-old profession with artificial intelligence, or AI. But the more he thought about it, the more it made sense. Now he’s leading the march of the machines into oak-panelled law offices right across the country—no doubt to the chagrin of thousands of lawyers (and perhaps a few former students) whose jobs could soon be at risk.

In addition to his professorial duties, Alarie spent the past 24 months building a “legal tech” startup that uses machine learning technologies—sophisticated algorithms capable of doing tasks for which they haven’t been specifically programmed—to predict how courts are likely to rule on new tax cases. That startup, Blue J Legal, which Alarie founded along with two other U of T law profs and a former IBM software developer, boasts that its Tax Foresight tool yields results that are more than 90 per cent accurate. “It’s like a flight simulator for new tax law cases,” says Alarie, who is also the firm’s CEO.
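The core idea — predict a new case from how courts decided similar past cases — can be illustrated with a nearest-neighbour sketch. The features, case data, and distance rule below are entirely hypothetical; Blue J Legal's actual models are not public:

```python
# Hypothetical decided cases for a tax-residency dispute. Each entry is
# ((days_in_country, has_local_home, has_local_job), ruling).
past_cases = [
    ((300, 1, 1), "resident"),
    ((280, 1, 0), "resident"),
    ((200, 1, 1), "resident"),
    ((90, 0, 0), "non-resident"),
    ((30, 0, 1), "non-resident"),
    ((120, 0, 0), "non-resident"),
]

def predict(features, k=3):
    """The 'flight simulator' idea reduced to its simplest form:
    find the k decided cases closest to the new fact pattern and
    take a majority vote on their outcomes."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(past_cases, key=lambda case: distance(case[0], features))[:k]
    votes = [outcome for _, outcome in nearest]
    return max(set(votes), key=votes.count)

prediction = predict((250, 1, 1))  # a fact pattern near the decided 'resident' cases
```

A real system would weigh hundreds of features extracted from judgments and report a calibrated probability rather than a bare vote, but the shape of the prediction problem is the same.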

Humans Mourn Loss After Google Is Unmasked as China’s Go Master

It was dramatic theater, and the latest sign that artificial intelligence is peerless in solving complex but defined problems. AI scientists predict computers will increasingly be able to search through thickets of alternatives to find patterns and solutions that elude the human mind.

Master’s arrival has shaken China’s human Go players.

“After humanity spent thousands of years improving our tactics, computers tell us that humans are completely wrong,” Mr. Ke, 19, wrote on Chinese social media platform Weibo after his defeat. “I would go as far as to say not a single human has touched the edge of the truth of Go.”

Apple’s first research paper tries to solve a problem facing every company working on AI

The paper, authored by six of Apple’s researchers, doesn’t focus on AI that someone with an iPhone might interact with, but rather on how to create enough data to effectively train it. Specifically, the research focuses on making realistic fake images—mostly of humans—to train facial recognition AI. It addresses a core problem: training a machine takes a huge amount of data. Moreover, training a machine on matters like faces and body language can take a ton of personal data. The ability to manufacture this kind of training data and still achieve high results could allow Apple to build AI that understands how humans function (the way we move our hands or look around a screen) without needing to use any user data while building the software.
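The underlying strategy — train entirely on manufactured data, then hope the model still works on real data — can be shown with a toy classifier. The 2-D "simulator" below is a stand-in for Apple's image synthesis, and every distribution here is invented; the slight shift between the synthetic and "real" distributions mimics the domain gap that refining synthetic images is meant to close:

```python
import random

random.seed(3)

def synthetic_sample(label):
    """Stand-in 'simulator': draw a 2-D feature vector from a
    label-dependent Gaussian. In the paper this role is played by
    rendered images; no user data is involved."""
    centre = 1.0 if label else -1.0
    return (random.gauss(centre, 0.6), random.gauss(centre, 0.6))

def real_sample(label):
    """'Real' data: the same idea, but slightly shifted — the
    synthetic-to-real domain gap."""
    centre = 1.2 if label else -0.8
    return (random.gauss(centre, 0.6), random.gauss(centre, 0.6))

# Training set manufactured entirely by the simulator.
train = [(synthetic_sample(y), y) for y in (random.random() < 0.5 for _ in range(2000))]

def centroid(samples):
    xs, ys = zip(*samples)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A minimal nearest-centroid classifier fitted on the synthetic data.
pos = centroid([x for x, y in train if y])
neg = centroid([x for x, y in train if not y])

def classify(point):
    def dist(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return dist(pos) < dist(neg)

# Evaluate on held-out "real" data the model never trained on.
test = [(real_sample(y), y) for y in (random.random() < 0.5 for _ in range(1000))]
accuracy = sum(classify(x) == y for x, y in test) / len(test)
```

When the synthetic distribution is close enough to the real one, accuracy on real data stays high; Apple's paper is about making synthetic images realistic enough for that to hold with faces rather than Gaussians.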