Posted On: August 2017 - AppFerret
Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.
This particular AI has been trained, using a vast sum of labeled images, to recognize objects as varied as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”
No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”
Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?
That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”
Opening up the black box
Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.
Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counter-factual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
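The perturbation idea behind LIME can be sketched in a few lines. The "black box" below is a toy keyword-based sentiment scorer standing in for a real neural network, and the delete-one-word probe is a simplified illustration of the approach, not the actual LIME library:

```python
# A minimal sketch of LIME-style counterfactual probing. The black_box function
# is an assumed toy stand-in for a trained network; real LIME fits a local
# linear model over many perturbed samples rather than ranking raw score deltas.

def black_box(review: str) -> float:
    """Toy classifier: returns a positivity score for a movie review."""
    positive, negative = {"great", "brilliant", "masterful"}, {"horrible", "dull"}
    words = review.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

def important_words(review: str):
    """Delete one word at a time and rank words by how much the score moves."""
    base = black_box(review)
    words = review.split()
    impact = {}
    for i, w in enumerate(words):
        variant = " ".join(words[:i] + words[i + 1:])  # the review without word i
        impact[w] = abs(black_box(variant) - base)
    return sorted(impact, key=impact.get, reverse=True)

ranking = important_words("a brilliant film with a horrible ending")
print(ranking[:2])  # the sentiment-bearing words rank first
```

On this toy input, the probe correctly surfaces "brilliant" and "horrible" as the words the score depends on, which is exactly the kind of singular, local explanation the article describes.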
New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.
Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
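Sundararajan's step-by-step transition from a blank reference (published under the name "integrated gradients") can be sketched as follows. The model here is an assumed toy scoring function, and the gradients are approximated by finite differences for illustration:

```python
# A sketch of the baseline-interpolation idea: walk from a blank reference
# toward the real input, average each feature's gradient along the path, and
# scale by how far the feature moved. The model and numbers are illustrative.

def model(x):
    """Toy 'network': score depends strongly on feature 0, weakly on feature 1."""
    return 3.0 * x[0] + 0.5 * x[1] * x[1]

def attributions(x, baseline, steps=50):
    grads = [0.0] * len(x)
    eps = 1e-4
    for s in range(1, steps + 1):
        # Interpolated point between the blank baseline and the input.
        point = [b + (xi - b) * s / steps for xi, b in zip(x, baseline)]
        for i in range(len(x)):
            bumped = point[:]
            bumped[i] += eps
            grads[i] += (model(bumped) - model(point)) / eps  # finite-difference gradient
    # Attribution = (input - baseline) * average gradient along the path.
    return [(xi - b) * g / steps for xi, b, g in zip(x, baseline, grads)]

attr = attributions([1.0, 1.0], baseline=[0.0, 0.0])
print(attr)  # feature 0 receives roughly six times the credit of feature 1
```

A useful property of this scheme is that the attributions sum to the difference between the model's output at the input and at the blank reference, so all of the prediction is accounted for.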
The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”
Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”
Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.
At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
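The compare-and-reweight loop described above can be shown in miniature. Below, a single sigmoid neuron stands in for a deep network; the data, learning rate, and squared-error update rule are illustrative assumptions, not details from the article:

```python
# A toy version of the training loop: forward pass, compare with the label,
# send the error backward, reweight. One neuron instead of a deep network.
import math, random

random.seed(0)
w, b = random.random(), random.random()          # the neuron's adjustable "triggers"
data = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]  # labeled examples: feature -> class

def forward(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # neuron "fires" between 0 and 1

for _ in range(5000):                # repeat many times, as the article describes
    for x, label in data:
        pred = forward(x)
        err = pred - label           # compare the output with the label...
        # ...and propagate the error backward to reweight the neuron.
        w -= 0.5 * err * pred * (1 - pred) * x
        b -= 0.5 * err * pred * (1 - pred)

print([round(forward(x)) for x, _ in data])  # the trained neuron reproduces the labels
```

Even in this tiny case the learned weights are just two opaque numbers; at the scale of millions of weights across dozens of layers, that opacity is the black box the article describes.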
Complete original article here.
For developers, worrying about infrastructure is a chore they can do without. Serverless computing relieves that burden.
It’s always unfortunate to start the definition of a phrase by calling it a misnomer, but that’s where you have to begin with serverless computing: Of course there will always be servers. Serverless computing merely adds another layer of abstraction atop cloud infrastructure, so developers no longer need to worry about servers, including virtual ones in the cloud.
To explore this idea, I spoke with one of serverless computing’s most vocal proponents: Chad Arimura, CEO of the startup Iron.io, which develops software for microservices workload management. Arimura says serverless computing is all about the modern developer’s evolving frame of reference:
What we’ve seen is that the atomic unit of scale has been changing from the virtual machine to the container, and if you take this one step further, we’re starting to see something called a function … a single-purpose block of code. It’s something you can conceptualize very easily: Process an image. Transform a piece of data. Encode a piece of video.
To me this sounded a lot like microservices architecture, where instead of building a monolithic application, you assemble an application from single-purpose services. What then is the difference between a microservice and a function?
A service has a common API that people can access. You don’t know what’s going on under the hood. That service may be powered by functions. So functions are the building block code aspect of it; the service itself is like the interface developers can talk to.
Just as developers use microservices to assemble applications and call services from functions, they can grab functions from a library to build the services themselves — without having to consider server infrastructure as they create an application.
AWS Lambda is the best-known example of serverless computing. As an Amazon instructional video explains, “once you upload your code to Lambda, the service handles all the capacity, scaling, patching, and administration of the infrastructure to run your code.” Both AWS Lambda and Iron.io offer function libraries to further accelerate development. Provisioning and autoscaling are on demand.
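The "function" unit Arimura describes is easiest to see in the shape of an AWS Lambda-style Python handler. The `handler(event, context)` signature follows Lambda's Python convention; the event fields and the transformation itself are illustrative assumptions, not a fixed schema:

```python
# A single-purpose function in the serverless sense: it transforms a piece of
# data and returns a result. The platform, not the developer, worries about
# provisioning, scaling, and patching the servers that run it.

def handler(event, context):
    """Transform a piece of data (here: uppercase a string from the event)."""
    text = event.get("text", "")
    return {"statusCode": 200, "body": text.upper()}

# Locally, the platform's role is easy to emulate: deliver an event, invoke
# the function, collect the result.
print(handler({"text": "process this"}, context=None))
```

Note how little of this code is about infrastructure: the entire unit of deployment is one function, which is precisely the abstraction serverless platforms sell.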
Keep in mind all of this is above the level of service orchestration — of the type offered by Mesos, Kubernetes, or Docker Swarm. Although Iron.io offers its own orchestration layer, which predated those solutions being generally available, it also plugs into them, “but we really play at the developer/API layer,” says Arimura.
In fact, it’s fair to view the core of Iron.io’s functionality as roughly equivalent to that of AWS Lambda, only deployable on all major public and private cloud platforms. Arimura sees the ability to deploy on premises as a particular Iron.io advantage because the hybrid cloud is becoming more and more essential to the enterprise approach to cloud computing. Think of the consistency and application portability enabled by the same serverless computing environment across public and private clouds.
Arimura even goes as far as to use the controversial term “no-ops,” coined by former Netflix cloud architect Adrian Cockcroft. Just as there will always be servers, there will always be ops to run them. But no-ops and serverless computing take the developer’s point of view: Someone else worries about that stuff while I create software.
Serverless computing, then, represents yet another leap in developer efficiency, where even virtual infrastructure concerns melt away and libraries of services and functions reduce once again the amount of code developers need to write from scratch.
Enterprise dev shops have been slow to adopt agile, CI/CD, devops, and the like. But as we move up the stack to serverless computing levels of abstraction, the palpable benefits of modern development practices become more and more enticing.
Original article here.
A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on the battlefield, is about to begin.
Both scientists and industry are worried.
The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.
An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.
Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.
The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.
In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.
“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.
“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”
Signatories of the 2017 letter include:
- Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
- Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
- Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
- Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
- Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
- Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)
Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.
The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.
“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.
“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.
“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.
Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.
“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.
“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”
An Open Letter to the United Nations Convention on Certain Conventional Weapons
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.
We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.
We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.
FULL LIST OF SIGNATORIES (by country):
- Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
- Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
- Charles Gretton, founder of Hivery, Australia.
- Brad Lorge, founder & CEO of , Australia.
- Brenton O’Brien, founder & CEO of Microbric, Australia.
- Samir Sinha, founder & CEO of Robonomics AI, Australia.
- Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
- Peter Turner, founder & MD of Tribotix, Australia.
- Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
- Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
- James Chow, founder & CEO of UBTECH Robotics, China.
- Robert Li, founder & CEO of Sankobot, China.
- Marek Rosa, founder & CEO of GoodAI, Czech Republic.
- Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
- Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
- Markus Järve, founder & CEO of Krakul, Estonia.
- Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
- Raul Bravo, founder & CEO of DIBOTICS, France.
- Raphael Cherrier, founder & CEO of Qucit, France.
- Jerome Monceaux, founder & CEO of , founder & CCO of Aldebaran Robotics, France.
- Charles Ollion, founder & Head of Research at Heuritech, France.
- Anis Sahbani, founder & CEO of Enova Robotics, France.
- Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
- Marcus Frei, founder & CEO of NEXT.robotics, Germany
- Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
- Fahad Azad, founder of Robosoft Systems, India.
- Debashis Das, Ashish Tupate, Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
- Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
- Pranay Kishore, founder & CEO of Phi Robotics Research, India.
- Shahid Memon, founder & CTO of Vanora Robots, India.
- Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
- Achu Wilson, founder & CTO of Sastra Robotics, India.
- Neill Gernon, founder & MD of Atrovate, founder of , Ireland.
- Parsa Ghaffari, founder & CEO of Aylien, Ireland.
- Alan Holland, founder & CEO of Keelvar Systems, Ireland.
- Alessandro Prest, founder & CTO of LogoGrab, Ireland.
- Alessio Bonfietti, founder & CEO of MindIT, Italy.
- Angelo Sudano, founder & CTO of ICan Robotics, Italy.
- Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
- Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
- Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
- Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
- Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
- Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
- Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
- Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
- Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
- Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
- Jasper Horrell, founder of DeepData, South Africa.
- Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
- José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
- Victor Martin, founder & CEO of Macco Robotics, Spain.
- Timothy Llewellynn, founder & CEO of nViso, Switzerland.
- Francesco Mondada, founder of K-Team, Switzerland.
- Jürgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
- Satish Ramachandran, founder of AROBOT, United Arab Emirates.
- Silas Adekunle, founder & CEO of Reach Robotics, UK.
- Steve Allpress, founder & CTO of FiveAI, UK.
- Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
- Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
- Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
- Daniel Hulme, founder & CEO of Satalia, UK.
- Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
- Geoff Pegman, founder & MD of R U Robots, UK.
- Mustafa Suleyman, founder & Head of Applied AI, DeepMind, UK.
- Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
- Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
- Brian Gerkey, founder & CEO of Open Source Robotics, USA.
- Ryan Hickman & Soohyun Bae, founders, CEO & CTO of , USA.
- Henry Hu, founder & CEO of Cafe X Technologies, USA.
- Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
- Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
- Brian Mingus, founder & CTO of Latently, USA.
- Mohammad Musa, founder & CEO at Deepen AI, USA.
- Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motors, USA.
- Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
- Erik Nieves, founder & CEO of PlusOne Robotics, USA.
- Steve Omohundro, founder & President of Possibility Research, USA.
- Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
- Dan Reuter, founder & CEO of Electric Movement, USA.
- Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
- Dan Rubins, founder & CEO of Legal Robot, USA.
- Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
- Andrew Schroeder, founder of WeRobotics, USA.
- Gabe Sibley & Alex Flint, founders, CEO & CPO of , USA.
- Martin Spencer, founder & CEO of GeckoSystems, USA.
- Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
- Michael Stuart, founder & CEO of Lucid Holdings, USA.
- Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.
Original article here.
IBM Research announced, with the help of Sony Storage Media Solutions, that it has achieved a capacity breakthrough in tape storage. IBM was able to fit 201 Gb/in^2 (gigabits per square inch) of areal density on a prototype sputtered magnetic tape. This marks the fifth capacity record IBM has set since 2006.
The current buzz in storage typically goes to faster media, like those that leverage the NVMe interface. StorageReview is guilty of focusing on these new emerging technologies without spending much time on tape, namely because tape is a fairly well-known and not terribly exciting storage medium. However, tape remains the most secure, energy-efficient, and cost-effective solution for storing enormous amounts of backup and archival data. And the deluge of unstructured data now being seen everywhere will need to go on something that has the capacity to store it.
This newly announced record for tape capacity would be 20 times the areal density of state of the art commercial tape drives such as the IBM TS1155 enterprise tape drive. The technology allows for 330TB of uncompressed data to be stored on a single tape cartridge. According to IBM this is the equivalent of having the texts of 330 million books in the palm of one’s hand.
Technologies used to hit this new density include:
- Innovative signal-processing algorithms for the data channel, based on noise-predictive detection principles, which enable reliable operation at a linear density of 818,000 bits per inch with an ultra-narrow 48nm wide tunneling magneto-resistive (TMR) reader.
- A set of advanced servo control technologies that, when combined, enable head positioning with an accuracy of better than 7 nanometers. This, combined with a 48nm-wide TMR hard disk drive read head, enables a track density of 246,200 tracks per inch, a 13-fold increase over a state-of-the-art TS1155 drive.
- A novel low-friction tape head technology that permits the use of very smooth tape media.
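The two density figures above multiply out to the headline record, which is a quick sanity check worth doing:

```python
# Cross-checking the figures above: linear density (bits per inch along the
# tape) times track density (tracks per inch across it) gives areal density.
linear_density_bpi = 818_000        # bits per inch (from the signal-processing bullet)
track_density_tpi = 246_200         # tracks per inch (from the servo-control bullet)

areal_gb_per_sq_inch = linear_density_bpi * track_density_tpi / 1e9
print(f"{areal_gb_per_sq_inch:.1f} Gb/in^2")  # ~201.4, matching the 201 Gb/in^2 record
```

The product comes out to roughly 201.4 Gb/in^2, consistent with the announced 201 Gb/in^2 areal density.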
This new technology extends a long list of tape storage innovations for IBM stretching back 60 years. Today’s capacity is 165 million times that of IBM’s first tape product.
Original article here.