This link has been bookmarked by 174 people. It was first bookmarked privately on 11 Apr 2017.
-
18 Feb 20
-
27 Mar 19
-
04 Mar 19
-
28 Sep 18
-
10 Aug 18
-
03 May 18
Juan David Correa Toro
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. via Pocket https://ift.tt/2rhiD6b
-
02 Mar 18
alexcouture0
This source talks about how deep learning neural networks are being used today and what computer scientists currently understand about them. It explains that the algorithm for the neural networks constantly modifies itself and that it is impossible to understand how the algorithm is making decisions.
This source is both recent and credible. It was published in April of 2017 by MIT Technology Review, which is a very credible magazine. The article also references several studies done about artificial intelligence by credible sources such as Google. MIT Technology Review has also been named the most credible science and technology magazine, ahead of National Geographic, Scientific American, Harvard Business Review, and Smithsonian.
This source will be helpful in describing where artificial intelligence is today and how it is being used. It will also help in explaining how neural networks and deep learning work. -
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will.
-
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job.
-
There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.
-
Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
-
-
22 Jan 18
-
05 Dec 17
-
30 Nov 17
Robert Martinez
"Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself."
"This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant."
"He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”" -
07 Oct 17
-
10 Sep 17
-
29 Jul 17
logisticalred
"banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable."
-
22 Jul 17
-
April 11, 2017
-
But what if one day it did something unexpected—crashed into a tree, or sat at a green light
-
And you can’t ask it
-
mysterious mind
-
a looming issue
-
do countless other things to transform whole industries.
-
unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users
-
Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will
-
mathematical models
-
it would be possible to understand their reasoning
-
that could make automated decision-making altogether inscrutable
-
you don’t want to just rely on a ‘black box’ method.
-
Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach
-
and they have done it in ways we cannot understand
-
We’ve never before built machines that operate in ways their creators don’t understand.
-
it proved incredibly good at predicting disease
-
He still doesn’t know
-
“We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
-
Artificial intelligence hasn’t always been this way
-
that reasoned according to rules and logic,
-
who cared to examine some code
-
The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.
-
But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception.
-
It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.
-
But by its nature, deep learning is a particularly dark black box.
-
A network’s reasoning is embedded in the behavior of thousands of simulated neurons
-
it would generate or modify them.
-
There is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
-
the program uses to recognize, say, a bird or building
-
But the images
-
has employed the AI equivalent of optical illusions to test deep neural networks
-
His tool targets any neuron in the middle of the network and searches for the image that activates it the most
-
highlighting the mysterious nature of the machine’s perceptual abilities
-
We need more than a glimpse of AI’s thinking, however, and there is no easy solution
-
but those calculations are a quagmire of mathematical functions and variables
-
But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable
-
who is determined to apply machine learning to medicine
-
cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment
-
She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
-
develop a system capable of mining pathology reports to identify patients with specific clinical characteristics
-
they aim to give this system some ability to explain its reasoning
-
has identified explainability as a key stumbling block
-
Explainable Artificial Intelligence program
-
Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data
-
Many autonomous ground vehicles and aircraft are being developed and tested
-
will be reluctant to act
-
It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made
-
This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program.
-
provide a rationale for their outputs
-
is that the explanations provided will always be simplified
-
We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain
-
We’re a long way from having truly interpretable AI.
-
Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives.
-
explainability
-
you’ll want to know what the reasoning was
-
If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.
-
explainability as the core of the evolving relationship between humans and intelligent machines
-
It might just be part of the nature of intelligence that only part of it is exposed to rational explanation
-
then at some stage we may have to simply trust AI’s judgment or do without using it
-
robot tanks and other killing machines
-
a renowned philosopher and cognitive scientist who studies consciousness and the mind
-
suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do
-
I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,
-
-
03 Jul 17
-
30 Jun 17
bittkomk
"The Dark Secret at the Heart of AI
No one really knows how the most advanced algorithms do what they do. That could be a problem.
" -
22 Jun 17
June Breivik
How well can we get along with machines that are unpredictable and inscrutable? https://t.co/iWkShqmHue
-
Beth Ritter-Guth
How well can we get along with machines that are unpredictable and inscrutable? https://t.co/qg0koGBVPt
-
Lee Brentnell
How well can we get along with machines that are unpredictable and inscrutable? https://t.co/iWkShqmHue
-
16 Jun 17
-
15 Jun 17
-
04 Jun 17
-
03 Jun 17
-
01 Jun 17
-
Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
-
As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand.
-
Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.
-
-
27 May 17
-
“It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
-
-
18 May 17
howdee
We don't understand how AI makes its decisions; we can't learn from AI.
-
“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
-
-
13 May 17
Francois Guite
No one really knows how the most advanced algorithms do what they do. That could be a problem.
-
08 May 17
-
05 May 17
-
it isn’t completely clear how the car makes its decisions
-
explainability as a key stumbling block.
-
explanations provided will always be simplified, meaning some vital information may be lost along the way
-
We’re a long way from having truly interpretable AI.
-
It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.
-
society is built upon a contract of expected behavior
-
-
04 May 17
-
01 May 17
-
27 Apr 17
mathew lowry
there is no obvious way to design such a system so that it could always explain why it did what it did... How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? ... a journey to the bleeding edge of research on AI algorithms... including a meeting with one of the great philosophers of our time....
the system extracts and highlights snippets of text that are representative of a pattern it has discovered... a loop where the machine and the human collaborate... the nature of these machine-learning systems ... produce a lot of false alarms... system designed to classify an e-mail message as coming from a terrorist... could highlight certain keywords ...
we may have to simply trust AI’s judgment or do without ... that judgment will have to incorporate social intelligence... fit with our social norms... consistent with our ethical judgments... we should be as cautious of AI explanations as we are of each other’s -
23 Apr 17
-
20 Apr 17
-
19 Apr 17
-
Oliver Kurz
“As the technology advances, we might soon cross some threshold beyond which using #AI requires a leap of faith.” https://t.co/fM9HAUX05p
-
sunil-joglekar
"No one really knows how the most advanced algorithms do what they do. That could be a problem."
-
18 Apr 17
-
Muzaffaruddin Alvi
via All News on 'The Twitter Times: Muzaffar69/corpgov' http://ift.tt/1MszafE
#CorpGov All News on 'The Twitter Times: Muzaffar69_corpgov'
-
17 Apr 17
Yee Sian Ng
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
-
Mike McEvoy
Truth. This is part of the AI story, if we’re being honest. https://t.co/ria0Bx3aQt
-
travisjamison
There’s a big problem with AI: even its creators can’t explain how it works https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
-
-
there is no obvious way to design such a system so that it could always explain why it did what it did.
-
-
16 Apr 17
-
nikerym
via Instapaper: Unread http://ift.tt/UvJzr1
The Dark Secret of #AI: No one really knows how the most advanced #algorithms do what they do! https://t.co/h5Hg8LECEh #MachineLearning
— Kirk Borne (@KirkDBorne) September 10, 2017 -
15 Apr 17
-
Charles Gambrell
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
-
14 Apr 17
-
13 Apr 17
-
Carol Furchner
In neural-network-based machine learning, the processes used for making a decision are somewhat opaque, and the machines can't describe how they arrived at a decision. Much like humans.
LAI AI machine-learning pattern-recognition decision-making neural-network artificial-neural-network artificial-intelligence
-
12 Apr 17
occams razors
"Intelligent Machines
The Dark Secret at the Heart of AI
No one really knows how the most advanced algorithms do what they do. That could be a problem.
by Will Knight April 11, 2017
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.
But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental." -
Jorge Barba
There’s a big problem with AI: even its creators can’t explain how it works https://t.co/E5IYNRUWxM https://t.co/vEzIYw56UW
-
V W
He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
-
Robert Best
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey.
-
Mark Brian
There’s a big problem with AI: even its creators can’t explain how it works https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
-
Kamil Śliwowski
The Dark Secret at the Heart of AI http://ift.tt/2p1wr2R
Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. via Pocket -
Wessel van Rensburg
"We might cross a threshold beyond which using AI requires a leap of faith" https://t.co/ErUM0FC3CR
Already there: https://t.co/wpjiyih4Ts -
11 Apr 17
-
Weiye Loh
"As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time. "
-
-
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”
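The paragraph above sketches the workflow only in words: fit a model to historical records, then check how well it predicts disease in records it has never seen. The snippet below is an illustrative stand-in for that train-then-test idea, not the actual Deep Patient pipeline; the synthetic data, model choice, and hyperparameters are all assumptions.

```python
# Illustrative train/test sketch, not the real Deep Patient system.
# Synthetic data stands in for the hundreds of per-patient variables.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Stand-in for patient records: rows are patients, columns are test results etc.
X, y = make_classification(n_samples=5000, n_features=100, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
model.fit(X_train, y_train)          # the network finds its own patterns in the data

# Evaluate on records the model has never seen
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```

The "we don't know how they work" problem arrives immediately: nothing in the fitted model says which combination of variables drove any particular prediction.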
-
At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
-
Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
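The paragraph describes Yosinski's probe only at a high level. The sketch below shows the general activation-maximization idea it alludes to: start from noise and gradient-ascend the image until a chosen unit deep in the network fires strongly. It is not his actual tool; the model (a stock torchvision VGG-16), the layer index, and the unit number are arbitrary assumptions.

```python
# Activation-maximization sketch: find an input image that most excites one unit
# deep inside a trained network. Model, layer, and unit are arbitrary choices.
import torch
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
layer = model.features[14]          # an arbitrary mid-network convolutional layer
unit = 42                           # an arbitrary channel in that layer

captured = {}
layer.register_forward_hook(lambda module, inputs, output: captured.update(out=output))

image = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    model(image)
    loss = -captured["out"][0, unit].mean()   # maximize that channel's activation
    loss.backward()
    optimizer.step()

# `image` now holds the abstract, dream-like pattern that most strongly activates
# the chosen unit, hinting at what that part of the network looks for.
```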
-
We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables.
-
In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”
-
After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.
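The step Barzilay added is described here only as "extract and highlight snippets." As a loose illustration of that idea (not the MIT/MGH system's actual method), one could score each sentence of a report with the trained classifier and surface the sentences that push the prediction hardest; the `clf` and `vectorize` arguments below are hypothetical stand-ins for a fitted model and its text featurizer.

```python
# Loose illustration of snippet extraction, not the actual MIT/MGH method.
# `clf` and `vectorize` are hypothetical: a fitted classifier and its featurizer.
def top_snippets(report_text, clf, vectorize, k=3):
    """Return the k sentences that, on their own, score highest for the target class."""
    sentences = [s.strip() for s in report_text.split(".") if s.strip()]
    scored = [(clf.predict_proba(vectorize([s]))[0, 1], s) for s in sentences]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [s for _, s in scored[:k]]   # the snippets a clinician could inspect
```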
-
The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.
-
David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.
-
This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
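The Washington approach is described above only as "highlight certain keywords." A crude way to get that kind of word-level rationale from any text classifier is a leave-one-word-out perturbation, sketched below. This illustrates the flavor of the idea rather than Guestrin's exact technique (his group's published LIME method instead fits a small interpretable surrogate model around each prediction); `predict_proba` here is a hypothetical stand-in for the classifier's scoring function.

```python
# Crude leave-one-word-out rationale, for illustration only (not LIME itself).
# `predict_proba` is a hypothetical stand-in: list of texts in, class probabilities out.
def keyword_importance(message, predict_proba, target_class=1):
    """Rank words by how much removing each one lowers the target-class score."""
    words = message.split()
    base = predict_proba([message])[0][target_class]
    importance = {}
    for i, word in enumerate(words):
        without_word = " ".join(words[:i] + words[i + 1:])
        importance[word] = base - predict_proba([without_word])[0][target_class]
    # the words whose removal hurts the score most are the ones worth highlighting
    return sorted(importance.items(), key=lambda kv: kv[1], reverse=True)
```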
-
One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”
-
It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue.
-
it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.
-
Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”
-
To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind.
-
But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”
-
-
Doug Peterson
The Dark Secret at the Heart of AI https://t.co/xbmjhpsUYU via @flipboard
— Doug Peterson (@dougpete) April 11, 2017