AI Archives - AppFerret

AI in 2019 (video)

2019-01-03

2018 has been an eventful year for AI to say the least! We’ve seen advances in generative models, the AlphaGo victory, several data breach scandals, and so much more. I’m going to briefly review AI in 2018 before giving 10 predictions on where the space is going in 2019. Prepare yourself: my predictions range from more Kubernetes-infused ML pipelines to the first business use case of generative modeling of 3D worlds. Happy New Year and enjoy!

Original video here.


Google’s AutoML is a Machine Learning Game-Changer

2018-05-24

Google’s AutoML is a new up-and-coming (alpha stage) cloud software suite of Machine Learning tools. It’s based on Google’s state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is basically an algorithm that, given your specific dataset, searches for the optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that will allow one to easily train high-performance deep networks, without requiring the user to have any knowledge of deep learning or AI; all you need is labelled data! Google then uses NAS to find the best network for your specific dataset and task. They’ve already shown how their methods can achieve performance that is far better than that of hand-designed networks.

AutoML totally changes the whole machine learning game because for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire 5 machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and in general the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value is coming from their final put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state of the art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental-research part of the product pipeline and just use transfer learning for their task. Because there is less experimental research, more business resources can be spent on product design, development, and the all-important data.
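
To make the transfer-learning route concrete, here is a minimal sketch that uses a pretrained NASNet variant from tf.keras.applications as a frozen feature extractor with a small task-specific head on top. The folder layout, image size, and five-class setup are assumptions for illustration, not part of AutoML itself.

```python
# Minimal transfer-learning sketch (assumptions: TensorFlow 2.x, labelled
# images under "data/train", and 5 task-specific categories).
import tensorflow as tf

# Pretrained NASNet feature extractor with frozen ImageNet weights.
base = tf.keras.applications.NASNetMobile(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False

# Small classification head trained on the business's own labelled data.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
model.fit(train_ds, epochs=5)
```

The point is that the expensive architecture search has already been done elsewhere; the business only supplies labelled images and a thin layer of glue code.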

Speaking of which…

It becomes more about product

Following from the first point: since more time is being spent on product design and development, companies will have faster product iteration. The main value of the company will become less about how great and cutting edge their research is and more about how well their product/technology is engineered. Is it well designed? Easy to use? Is their data pipeline set up in such a way that they can quickly and easily improve their models? These will be the new key questions for optimising their products and being able to iterate faster than their competition. Cutting-edge research will also become less of a main driver of improvements in the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and, as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean and diverse yet task-targeted data you have (i.e. both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this partially signifies us moving away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field, and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t just stop making them better even though now they’re quite easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time-consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but does only fairly well, sometimes even poorly (depending on task complexity), on things like localisation. We don’t yet even fully understand deep networks internally.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools rather than create new ones, since they don’t have a need to. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep up with the fast-paced tech field.

Such a shift has happened many times before. Programming computers started with assembly-level coding! We later moved on to things like C. Many people today consider C too complicated, so they use C++. Much of the time, we don’t even need something as complex as C++, so we just use super-high-level languages like Python or R! We use the tool that is most appropriate for the task at hand. If you don’t need something super low-level, then you don’t have to use it (e.g. C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g. Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies is critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.


Google’s Duplex AI Demo Just Passed the Turing Test (video)

2018-05-11

Yesterday, at I/O 2018, Google showed off a new digital assistant capability that’s meant to improve your life by making simple boring phone calls on your behalf. The new Google Duplex feature is designed to pretend to be human, with enough human-like functionality to schedule appointments or make similarly inane phone calls. According to Google CEO Sundar Pichai, the phone calls the company played were entirely real. You can make an argument, based on these audio clips, that Google actually passed the Turing Test.

If you haven’t heard the audio of the two calls, you should give the clip a listen. We’ve embedded the relevant part of Pichai’s presentation below.

I suspect the calls were edited to remove the place of business, but apart from that, they sound like real phone calls. If you listen to both segments, the male voice booking the restaurant sounds a bit more like a person than the female does, but the gap isn’t large and the female voice is still noticeably better than a typical AI. The female speaker has a rather robotic “At 12PM” at one point that pulls the overall presentation down, but past that, Google has vastly improved AI speech. I suspect the same technologies at work in Google Duplex are the ones we covered about six weeks ago.

So what’s the Turing Test and why is passing it a milestone? The British computer scientist, mathematician, and philosopher Alan Turing devised the Turing test as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to or indistinguishable from that of a human. This broad formulation allows for the contemplation of many such tests, though the general test case presented in discussion is a conversation between a researcher and a computer in which the computer responds to questions. A third person, the evaluator, is tasked with determining which individual in the conversation is human and which is a machine. If the evaluator cannot tell, the machine has passed the Turing test.

The Turing test is not intended to be the final word on whether an AI is intelligent and, given that Turing conceived it in 1950, obviously doesn’t take into consideration later advances or breakthroughs in the field. There have been robust debates for decades over whether passing the Turing test would represent a meaningful breakthrough. But what sets Google Duplex apart is its excellent mimicry of human speech. The original Turing test supposed that any discussion between computer and researcher would take place in text. Managing to create a voice facsimile close enough to ordinary human speech to avoid suspicion and rejection from the business in question is a significant feat.

As of right now, Duplex is intended to handle rote responses, like asking to speak to a representative, or simple, formulaic social interactions. Even so, the program’s demonstrated capability to deal with confusion (as on the second call) is still a significant step forward for these kinds of voice interactions. As artificial intelligence continues to improve, voice quality will improve and the AI will become better at answering more and more types of questions. We’re obviously still a long way from creating a conscious AI, but we’re getting better at the tasks our systems can handle — and faster than many would’ve thought possible.

 

Original article here.

 


The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning

2018-04-08

Simple explanations of Artificial Intelligence, Machine Learning, and Deep Learning and how they’re all different. Plus, how AI and IoT are inextricably connected.

We’re all familiar with the term “Artificial Intelligence.” After all, it’s been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina (a personal favorite of mine). But you may have recently been hearing about other terms like “Machine Learning” and “Deep Learning,” sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

I’ll begin by giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they’re different. Then, I’ll share how AI and the Internet of Things are inextricably intertwined, with several technological advances all converging at once to set the foundation for an AI and IoT explosion.

So what’s the difference between AI, ML, and DL?

The term was first coined in 1956 by John McCarthy; AI involves machines that can perform tasks that are characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

At its core, machine learning is simply a way of achieving AI.

Arthur Samuel coined the phrase not long after AI, in 1959, defining it as “the ability to learn without being explicitly programmed.” You see, you can get AI without using machine learning, but this would require building millions of lines of code with complex rules and decision trees.

So instead of hard coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how. “Training” involves feeding huge amounts of data to the algorithm and allowing the algorithm to adjust itself and improve.

To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). You gather hundreds of thousands or even millions of pictures and then have humans tag them. For example, the humans might tag pictures that have a cat in them versus those that do not. Then, the algorithm tries to build a model that can tag a picture as containing a cat or not as accurately as a human can. Once the accuracy level is high enough, the machine has “learned” what a cat looks like.
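
As a toy illustration of that training loop, the sketch below fits a simple classifier to labelled feature vectors and checks its accuracy on held-out examples. The random features and labels merely stand in for real image data; scikit-learn and the feature representation are assumptions for the sketch, not something the article prescribes.

```python
# Toy supervised-learning sketch of the cat/not-cat example (assumptions:
# images already converted to fixed-length feature vectors X, with
# human-provided labels y where 1 = "contains a cat", 0 = "no cat").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # stand-in feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in "cat" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # "training": the model adjusts itself

# Once accuracy on unseen examples is high enough, the machine has "learned".
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```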

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain.

In ANNs, there are “neurons” which have discrete layers and connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves/edges in image recognition. It’s this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer.
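
A minimal sketch of such a layered network, assuming TensorFlow/Keras and arbitrary input size and layer widths, might look like this:

```python
# Minimal sketch of a deep (multi-layer) network: depth comes from stacking
# layers. The input shape and layer sizes are assumptions for illustration.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),              # e.g. a flattened 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),    # early layer: simple features
    tf.keras.layers.Dense(64, activation="relu"),     # deeper layer: more abstract features
    tf.keras.layers.Dense(10, activation="softmax"),  # final layer: class scores
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```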

AI and IoT are Inextricably Intertwined

I think of the relationship between AI and IoT much like the relationship between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and turning sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data of what’s going on in the world. Artificial intelligence is like our brain, making sense of that data and deciding what actions to perform. And the connected devices of IoT are again like our bodies, carrying out physical actions or communicating to others.

Unleashing Each Other’s Potential

The value and the promises of both AI and IoT are being realized because of the other.

Machine learning and deep learning have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that are continuing to come online in the Internet of Things. IoT makes better AI.

Improving AI will also drive adoption of the Internet of Things, creating a virtuous cycle in which both areas will accelerate drastically. That’s because AI makes IoT useful.

On the industrial side, AI can be applied to predict when machines will need maintenance or analyze manufacturing processes to make big efficiency gains, saving millions of dollars.

On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.).

Converging Technological Advancements Have Made this Possible

Shrinking computer chips and improved manufacturing techniques mean cheaper, more powerful sensors.

Quickly improving battery technology means those sensors can last for years without needing to be connected to a power source.

Wireless connectivity, driven by the advent of smartphones, means that data can be sent in high volume at cheap rates, allowing all those sensors to send data to the cloud.

And the birth of the cloud has allowed for virtually unlimited storage of that data and virtually infinite computational ability to process it.

Of course, there are one or two concerns about the impact of AI on our society and our future. But as advancements and adoption of both AI and IoT continue to accelerate, one thing is certain: the impact is going to be profound.

 

Original article here.


CEOs should do 3 things to help their workforce fully embrace AI

2018-02-07

There’s no denying it: the era of the intelligent enterprise is upon us. As technologies like AI, cognitive computing, and predictive analytics become hot topics in the corporate boardroom, sleek startups and centuries-old companies alike are laying plans for how to put these exciting innovations to work.

Many organizations, however, are in the nascent days of AI implementation. Of the three stages of adoption—education, prototyping, and application at scale—most executives are still taking a tentative approach to exploring AI’s true potential. They’re primarily using the tech to drive small-scale efficiencies.

But AI’s real opportunity lies in tapping completely new areas of value. AI can help established businesses expand their product offerings and infiltrate (or even invent) entirely new markets, as well as streamline internal processes. The rewards are ripe: According to Accenture projections, fully committing to AI could boost global profits by a whopping $4.8 trillion by 2022. For the average S&P 500 company, this would mean an additional $7.5 billion in revenue over the next four years.


Faster and more compelling innovations will be driven by the intersection of intelligent technology and human ingenuity—yes, the workforce will see sweeping changes. It’s already happening: Nearly all leaders report they’ve redesigned jobs to at least some degree to account for disruption. What’s more, while three-quarters of executives plan to use AI to automate tasks, virtually all intend to use it to augment the capabilities of their people.

Still, many execs report they’re not sure how to pull off what Accenture calls Applied Intelligence—the ability to implement intelligent technology and human ingenuity to drive growth.

Here are three steps the C-suite can take to elevate their organizations into companies that are fully committed to creating new value with AI/human partnerships.

1) Reimagine what it means to work

The most significant impact of AI won’t be on the number of jobs, but on job content. But skepticism about employees’ willingness to embrace AI is unfounded. Though nearly 25% of executives cite “resistance by the workforce” as one of the obstacles to integrating AI, 62% of workers—both low-skilled and high-skilled—are optimistic about the changes AI will bring. In line with that optimism, 67% of employees say it is important to develop the skills to work with intelligent technologies.

Yes, ambiguities about exactly how automation fits into the future workplace persist. Business leaders should look for specific skills and tasks to supplement instead of thinking about replacing generalized “jobs”. AI is less about replacing humans than it is about augmenting their roles.

While AI takes over certain repetitive or routine tasks, it opens doors for project-based work. For example, if AI steps into tasks like sorting email or analyzing inventory, it can free up employees to develop more in-depth customer service tactics, like targeted conversations with key clients. Interestingly, data suggests that greater investment in AI could actually increase employment by as much as 10% in the next three years alone.

2) Pivot the workforce to your company’s unique value proposition

Today, AI and human-machine collaboration are beginning to change how enterprises conduct business. But they have yet to transform what business enterprises choose to pursue. Few companies are creating entirely new revenue streams or customer experiences. Still, 72% of execs agree that adopting intelligent technologies will be critical to their organization’s ability to differentiate in the market.

One of the challenges facing companies is how to make a business case for that pivot to new opportunities without disrupting today’s core business. A key part is turning savings generated by automation into the fuel for investing in the new business models and workforces that will ultimately take a company into new markets.

Take Accenture: the company puts 60% of the money it saves through AI investments into training programs. That’s resulted in the retraining of tens of thousands of people whose roles were automated. Those workers can now focus on more high-value projects, working with AI and other technologies to offer better services to clients.

3) Scaling new skilling: Don’t choose between hiring a human or a machine—hire both

Today, most people already interact with machines in the workplace—but humans still run the show. CEOs still value a number of decidedly human skills—resource management, leadership, communication skills, complex problem solving, and judgment—but in the future, human ingenuity will not suffice. Working in tandem, smarter machines and better skilled humans will likely drive swifter and more compelling innovations.

To scale up new skilling, employers may want to consider these three steps:

  1. Prioritize skills to be honed. While hard skills like data analytics, engineering, or coding are easy to define, innately “human” skills like ethical decision-making and complex problem-solving need to be considered carefully.
  2. Provide targeted training programs. Employees’ level of technical expertise, willingness to learn new technologies, and specific skill sets will determine how training programs should be developed across the organization.
  3. Use digital solutions for training. Taking advantage of cutting-edge technologies like virtual reality or augmented reality can teach workers how to interact with smart machinery via realistic simulations.

While it is natural for businesses to exploit AI to drive efficiencies in the short term, their long-term growth depends on using AI far more creatively. It will take new forms of leadership and imagination to prepare the future workforce to truly partner with intelligent machines. If they succeed, it will be a case of humans helping AI help humans.

Original article here.

 


Why 2018 Will be The Year of AI

2017-12-13

Artificial Intelligence, more commonly known as AI, isn’t a new topic and has been around for years; however, those who have tracked its progress have noted that it accelerated more in 2017 than in previous years. This hot topic has made its way into the media, into boardrooms, and into government. One reason is that things which hadn’t worked for decades suddenly began to work; AI is moving beyond embedded functions and mere tools, and expectations are high for 2018.

There are several reasons why this year has seen so much progress in AI. Before going into the four preconditions that have allowed AI to advance over the past five years, it is important to understand what Artificial Intelligence means. Then we can take a closer look at each of the four preconditions and how they will shape what is to come next year.

What is Artificial Intelligence?

Basically, Artificial Intelligence is defined as the science of making computers do things that require intelligence when done by humans. However, five decades have gone by since AI was born, and progress in the field has been slow; this has created an appreciation for the profound difficulty of the problem. Fortunately, this year has seen considerable progress and has raised the likelihood of further advancement in 2018.

Preconditions That Allowed AI to Succeed in 2017 and Beyond

The first precondition is that everything is becoming connected with our devices: you can start a project on your desktop PC and then finish the work on a connected smartphone or tablet. Ray Kurzweil believes that humans will eventually be able to use sensors that connect our brains to the cloud. The internet originally connected computers and has since advanced to connecting our mobile devices; sensors that already link buildings, homes, and even our clothes to the internet could, in the near future, be used to connect our minds to the cloud.

The second precondition is that computing is becoming nearly free. Previously, new chips arrived roughly every eighteen months at twice the speed; Marc Andreessen argues the dynamic has shifted, with new chips arriving at the same speed but at half the cost. The expectation is that processors will become so inexpensive that they will be in everything, and that computing capacity will be able to solve problems that had no solution five years ago.

The third precondition is that data is becoming the new oil, with vast amounts having become available digitally over the past decade. Because data can be collected through our mobile devices and tracked through sensors, new sources of data have appeared in video, social media, and digital images. Conditions that could only be modeled at a high level in the past can now be described far more accurately thanks to the almost infinite set of real data available; this means accuracy will keep improving.

Finally, the fourth precondition contributing to AI’s advancement is that machine learning is becoming the new combustion engine: it uses mathematical models and algorithms to discover patterns that are implicit in data. Machines use these complex patterns on their own to decide whether new data is similar, whether it fits, and to predict future outcomes. Virtual assistants such as Siri and Cortana use AI to solve equations and predict outcomes every day with great accuracy; they will continue to be used in 2018 and beyond, and what they can accomplish will grow as AI continues to evolve.

Artificial Intelligence has seen far more improvement recently than in previous decades. This year, many experts were amazed and excited by how much AI has progressed in the decades since its birth. Now, we can expect 2018 to bring advances at work, in school, and possibly in self-driving cars that could result in up to ninety percent fewer car accidents; welcome to the future of Artificial Intelligence.

Original article here.


AI-Generated Celebrity Faces Look Real (video)

2017-10-31

Researchers from NVIDIA published work with artificial intelligence algorithms, or more specifically, generative adversarial networks, to produce celebrity faces in high detail. Watch the results below.

Original article here.  Research PDF here.

 


The Dark Secret at the Heart of AI

2017-10-09

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
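
To see those two steps in miniature, here is a toy numpy sketch of a single forward pass and one backpropagation update for a two-layer network. The sizes, data, and learning rate are arbitrary assumptions; real networks repeat this process over millions of examples.

```python
# Toy sketch of a forward pass and one backpropagation update for a
# two-layer network (all sizes and values are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 8))        # input, e.g. pixel intensities
y = np.array([[1.0]])              # desired output

W1 = rng.normal(size=(8, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1

# Forward pass: each layer computes a signal and feeds the next layer.
h = np.tanh(x @ W1)
y_hat = 1.0 / (1.0 + np.exp(-(h @ W2)))    # sigmoid output

# Backpropagation: send the error backward and reweight each connection.
err = y_hat - y
grad_W2 = h.T @ err
grad_W1 = x.T @ ((err @ W2.T) * (1.0 - h ** 2))

lr = 0.1
W2 -= lr * grad_W2
W1 -= lr * grad_W1
print("prediction before the update:", float(y_hat[0, 0]))
```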

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
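
The gradient-ascent idea behind both Deep Dream and probes like Yosinski’s can be sketched in a few lines: start from noise and repeatedly nudge the input image toward whatever most excites a chosen unit. The pretrained model, channel index, and step size below are placeholder assumptions, not the researchers’ actual tools.

```python
# Hedged sketch of activation maximization: gradient ascent on the input
# image to excite one arbitrary channel of a pretrained network.
import tensorflow as tf

# A generic pretrained convolutional network stands in for the probed model.
probe = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
unit = 42                                             # arbitrary channel to maximize
img = tf.Variable(tf.random.uniform((1, 224, 224, 3)))  # start from noise

for _ in range(100):
    with tf.GradientTape() as tape:
        activation = tf.reduce_mean(probe(img)[..., unit])
    grad = tape.gradient(activation, img)
    img.assign_add(0.1 * grad / (tf.norm(grad) + 1e-8))  # step toward higher activation

# `img` now approximates the pattern that most excites the chosen unit.
```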

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Original article here.

 


AI will be Bigger than the Internet

2017-09-19

AI will be the next general purpose technology (GPT), according to experts. Beyond the disruption of business and data – things that many of us don’t have a need to care about – AI is going to change the way most people live, as well.

As a GPT, AI is predicted to integrate within our entire society in the next few years, and become entirely mainstream — like electricity and the internet.

The field of AI research has the potential to fundamentally change more technologies than, arguably, anything before it. While electricity brought illumination and the internet revolutionized communication, machine-learning is already disrupting finance, chemistry, diagnostics, analytics, and consumer electronics – to name a few. This is going to bring efficiency to a world with more data than we know what to do with.

It’s also going to disrupt your average person’s day — in a good way — just as other GPTs did before AI.

Anyone who lived in a world before Google and the internet may recall a time when people would actually have arguments about simple facts. There wasn’t an easy way, while riding in a car, to determine which band sang a song that was on the radio. If the DJ didn’t announce the name of the artist and track before a song played, you could be subject to anywhere from three to seven minutes of heated discussion over whether “The Four Tops” or “The Temptations” sang a particular song, for example.

Today we’re used to looking up things, and for many of us it’s almost second nature. We’re throwing cookbooks out, getting rid of encyclopedias, and libraries are mostly meeting places for fiction enthusiasts these days. This is what a general purpose technology does — it changes everything.

If your doctor told you they didn’t believe in the internet you’d get a new doctor. Imagine a surgeon who chose not to use electricity — would you let them operate on you?

The AI that truly changes the world beyond simply augmenting humans, like assisted steering does, is the one that starts removing other technology from our lives, like the internet did. With the web we’ve shrunken millions of books and videos down to the size of a single iPhone, at least as far as consumers are concerned.

AI is being layered into our everyday lives, as a general purpose technology, like electricity and the internet. And once it reaches its early potential we’ll be getting back seconds of time at first, then minutes, and eventually we’ll have devices smart enough to no longer need us to direct them at every single step, giving us back all the time we lost when we started splitting our reality between people and computers.

Siri and Cortana won’t need to be told what to do all the time, for example, once AI learns to start paying attention to the world outside of the smart phone.

Now, if only I could convince the teenager in my house to do the same …

Original article here.


AI detectives are cracking open the black box of deep learning (video)

2017-08-30

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.

This particular AI has been trained, using a vast collection of labeled images, to recognize objects as varied as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”
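
A rough sketch of that kind of “zooming in”, assuming a generic pretrained Keras network rather than Yosinski’s own model: extract an intermediate feature map for a real image and see where one chosen channel responds most strongly. The image path and channel index are hypothetical.

```python
# Hedged sketch: inspect what one unit of a pretrained network responds to
# in a real image. Model, image file, and channel index are assumptions.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))

img = tf.keras.utils.load_img("webcam_frame.jpg", target_size=(224, 224))  # hypothetical file
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), 0))

features = base.predict(x)          # shape (1, 7, 7, 1280): one map per "neuron"
channel = 123                       # arbitrary unit to inspect
heatmap = features[0, :, :, channel]
print("strongest response at", np.unravel_index(heatmap.argmax(), heatmap.shape))
```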

This video provides a high-level overview of the problem:

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Opening up the black box

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counter-factual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
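
In the spirit of such counter-factual probes (a from-scratch sketch, not LIME’s actual implementation), one can delete a single word at a time and measure how much the black box’s “positive” score drops; predict_proba below is a stand-in for any sentiment classifier.

```python
# Hedged sketch of a LIME-style word-deletion probe. `predict_proba` is any
# black-box function mapping text to a positivity score (an assumption).
import numpy as np

def word_importance(review, predict_proba):
    words = review.split()
    base = predict_proba(" ".join(words))
    scores = {}
    for i, w in enumerate(words):
        variant = " ".join(words[:i] + words[i + 1:])    # drop one word
        scores[w] = base - predict_proba(variant)        # drop in positivity
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Toy stand-in classifier: rewards "great", penalizes "horrible".
def toy_model(text):
    return 0.5 + 0.4 * ("great" in text) - 0.4 * ("horrible" in text)

print(word_importance("the acting was great but the plot was horrible", toy_model))
```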

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, another computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
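The sketch below illustrates that blank-reference idea in Python, using a toy stand-in model. Google’s actual technique also uses the network’s gradients along the same path; this sketch omits that and simply reads off the jumps in confidence as the blank image is stepped toward the real one.

```python
# Minimal sketch of the blank-baseline probe described above: start from an
# all-black image, step toward the real input, and record how the model's
# confidence jumps at each step. The model here is a stand-in linear scorer;
# in practice you would plug in your own network's predict function.

import numpy as np

def model_confidence(image):
    # Stand-in "network": confidence driven by brightness in one region.
    weights = np.zeros_like(image)
    weights[8:16, 8:16] = 0.02
    return float(1 / (1 + np.exp(-(image * weights).sum() + 2.0)))

def confidence_trajectory(image, steps=20):
    """Step from an all-black baseline toward the real input and record
    the model's confidence at each step along the way."""
    baseline = np.zeros_like(image)
    trajectory = []
    for k in range(steps + 1):
        alpha = k / steps
        trajectory.append(model_confidence(baseline + alpha * (image - baseline)))
    return np.array(trajectory)

img = np.random.default_rng(0).random((32, 32))
traj = confidence_trajectory(img)
jumps = np.diff(traj)
biggest = int(np.argmax(jumps)) + 1
print("confidence along the path:", np.round(traj, 3))
print(f"largest jump happens at step {biggest}/{len(jumps)}")
```

The steps where the confidence jumps most sharply point to the parts of the input the probe would flag as important.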

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
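As a rough illustration of that loop, the sketch below trains a tiny two-layer network with backpropagation. The data is invented; the toy vectors and labels stand in for the millions of labeled dog photos.

```python
# Tiny two-layer network trained with backpropagation on toy data, as a
# sketch of the loop described above: forward pass, compare with the label,
# send the error backward, reweight, repeat. Toy vectors stand in for photos.

import numpy as np

rng = np.random.default_rng(42)
X = rng.random((200, 8))                      # 200 "images", 8 features each
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)   # invented rule standing in for labels

W1 = rng.normal(0, 0.5, (8, 16))              # input -> hidden weights
W2 = rng.normal(0, 0.5, (16, 1))              # hidden -> output weights
lr = 0.5

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for epoch in range(2000):
    # Forward pass: each layer reacts to the previous layer's activations.
    hidden = sigmoid(X @ W1)
    pred = sigmoid(hidden @ W2)[:, 0]

    # Backpropagation: push the prediction error back through the layers
    # and use it to reweight each connection.
    err_out = (pred - y)[:, None] * (pred * (1 - pred))[:, None]
    grad_W2 = hidden.T @ err_out / len(X)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ err_hid / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

accuracy = ((pred > 0.5) == y.astype(bool)).mean()
print(f"training accuracy after backprop: {accuracy:.2%}")
```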

Complete original article here.

 


standard

Gartner’s Hype Cycle: AI for Marketing

2017-07-24 - By 

Gartner’s 2017 Hype Cycle for Marketing and Advertising is out (subscription required) and, predictably, AI for Marketing has appeared as a new dot making a rapid ascent toward the Peak of Inflated Expectations. I say “rapid” but some may be surprised to see us projecting that it will take more than 10 years for AI in Marketing to reach the Plateau of Productivity. Indeed, the timeframe drew some skepticism and we deliberated on this extensively, as have many organizations and communities.

AI for Marketing on the 2017 Hype Cycle for Marketing and Advertising

First, let’s be clear about one thing: a long journey to the plateau is not a recommendation to ignore a transformational technology. However, it does raise questions of just what to expect in the nearer term.

Skeptics of a longer timeframe rightly point out the velocity with which digital leaders from Google to Amazon to Baidu and Alibaba are embracing these technologies today, and the impact they’re likely to have on marketing and advertising once they’ve cracked the code on predicting buying behavior and customer satisfaction and acting accordingly.

There’s no point in debating the seriousness of the leading digital companies when it comes to AI. The impact that AI will have on marketing is perhaps more debatable – some breakthrough benefits are already being realized, but – to use some AI jargon here – many problems at the heart of marketing exhibit high enough dimensionality to suggest they’re AI-complete. In other words, human behavior is influenced by a large number of variables which makes it hard to predict unless you’re human. On the other hand, we’ve seen dramatic lifts in conversion rates from AI-enhanced campaigns and the global scale of markets means that even modest improvements in matching people with products could have major effects. Net-net, we do believe that AI will have a transformational effect on marketing and that some of these transformational effects will be felt in fewer than ten years – in fact, they’re being felt already.

Still, in the words of Paul Saffo, “Never mistake a clear view for a short distance.” The magnitude of a technology’s impact is, if anything, a sign it will take longer than expected to reach some sort of equilibrium. Just look at the Internet. I still vividly recall the collective expectation that many of us held in 1999 that business productivity was just around the corner. The ensuing descent into the Trough of Disillusionment didn’t diminish the Internet’s ultimate impact – it just delayed it. But the delay was significant enough to give a few companies that kept the faith, like Google and Amazon, an insurmountable advantage when the Internet at last plateaued, about 10 years later.

Proponents of faster impact point out that AI has already been through a Trough of Disillusionment maybe ten times as long as the Internet – the “AI Winter” that you can trace to the 1980s. By this reckoning, productivity is long overdue. This may be true for a number of domains – such as natural language processing and image recognition – but it’s hardly the case for the kinds of applications we’re considering in AI for Marketing. Before we could start on those we needed massive data collection on the input side, a cloud-based big data machine learning infrastructure, and real-time operations on the output side to accelerate the learning process to the point where we could start to frame the optimization problem in AI. Some of the algorithms may be quite old, but their real-time marketing context is certainly new.

More importantly, consider the implications of replacing the way marketing works today with true lights-out AI-driven operations. Even when machines do outperform human counterparts in making the kinds of judgments marketers pride themselves on, the organizational and cultural resistance they will face from the enterprise is profound – with important exceptions: disruptive start-ups and the digital giants who are developing these technologies and currently dominate digital media.

And enterprises aren’t the only source of resistance. The data being collected in what’s being billed as “people-based marketing” – the kind that AI will need to predict and influence behavior – is the subject of privacy concerns that stem from the “people’s” notable lack of an AI ally in the data collection business. See more comments here.

Then consider this: In 2016, P&G spent over $4B on media. Despite their acknowledgment of the growing importance of the Internet to their marketing (20 years in), they still spend orders of magnitude more on TV (see Ad Age, subscription required). As we know, Marc Pritchard, P&G’s global head of brands, doesn’t care much for the Internet’s way of doing business and has demanded fundamental changes in what he calls its “corrupt and non-transparent media supply chain.”

Well, if Marc and his colleagues don’t like the Internet’s media supply chain, wait until they get a load of the emerging AI marketing supply chain. Here’s a market where the same small group of gatekeepers own the technology, the data, the media, the infrastructure – even some key physical distribution channels – and their business models are generally based on extracting payment from suppliers, not consumers who enjoy their services for “free.” The business impulses of these companies are clear: just ask Alexa. What they haven’t perfected yet is that shopping concierge that gets you exactly what you want, but they’re working on it. If their AI can solve that, then two of P&G’s most valuable assets – its legacy media-based brand power and its retail distribution network – will be neutralized. Does this mean the end of consumer brands? Not necessarily, but our future AI proxies may help us cultivate different ideas about brand loyalty.

This brings us to the final argument against putting AI for Marketing too far out on the hype cycle: it will encourage complacency in companies that need to act. By the time established brands recognize what’s happened, it will be too late.

Business leaders have told me they use Gartner’s Hype Cycles in two ways. One is to help build urgency behind initiatives that are forecast to have a large, near-term impact, especially ones tarnished by disillusionment. The second is to subdue people who insist on drawing attention to seductive technologies on the distant horizon. Neither use is appropriate for AI for Marketing. In this case, the long horizon is neither a cause for contentment nor a reason to go shopping.

First, brands need a plan. And the plan has to anticipate major disruptions, not just in marketing, but in the entire consumer-driven, AI-mediated supply chain in which brands – or their AI agents – will find themselves negotiating with a lot of very smart algorithms. I feel confident in predicting that this will take a long time. But that doesn’t mean it’s time to ignore AI. On the contrary, it’s time to put learning about and experiencing AI at the top of the strategic priority list, and to consider what role your organization will play when these technologies are woven into our markets and brand experiences.

Original article here.


standard

McKinsey Analysis rates machine learning

2017-06-13 - By 

McKinsey outline the range of opportunities for applying artificial intelligence in their article. They say:

‘For companies, successful adoption of these evolving technologies will significantly enhance performance. Some of the gains will come from labor substitution, but automation also has the potential to enhance productivity, raise throughput, improve predictions, outcomes, accuracy, and optimization, as well as expand the discovery of new solutions in massively complex areas such as synthetic biology and material science’.

At Smart Insights, we’ve been looking beyond the hype to identify specific practical applications for AI in marketing. Our recommendation is that the best marketing applications are in machine learning, where predictive analytics is applied to learn from historical data and deliver more relevant personalisation, both on site, through email automation, and off-site in programmatic advertising. This high potential is also clear from the chart from McKinsey (see top of page).

You can see that ‘personalize advertising’ is rated highly, and this relates to the different forms of personalised messaging mentioned above. ‘Optimize merchandising strategy’ is a related retail application.
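As a concrete, simplified sketch of that kind of predictive personalization, the example below fits a scikit-learn logistic regression to invented historical campaign data and then scores two offer variants for a new contact. Every feature name, number and response rule here is made up for illustration.

```python
# Sketch of the predictive personalization idea described above: learn from
# historical campaign data which offer a contact is most likely to respond to.
# The dataset, features and response rule here are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
# Invented historical features: recency of last visit (days), past purchases,
# email opens in the last month, and which offer variant was shown (0 or 1).
X = np.column_stack([
    rng.integers(0, 90, n),      # recency
    rng.poisson(2, n),           # past purchases
    rng.integers(0, 10, n),      # email opens
    rng.integers(0, 2, n),       # offer variant shown
])
# Invented response rule: engaged, recent customers respond more, variant 1 helps.
logits = -2 + 0.4 * X[:, 1] + 0.2 * X[:, 2] - 0.02 * X[:, 0] + 0.8 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression(max_iter=1000).fit(X, y)

# Personalization step: for a new contact, score both offer variants and
# send whichever the model predicts they are more likely to respond to.
contact = np.array([5, 4, 6])                     # recency, purchases, opens
scores = [model.predict_proba(np.append(contact, v).reshape(1, -1))[0, 1]
          for v in (0, 1)]
print(f"P(respond | variant 0) = {scores[0]:.2f}, P(respond | variant 1) = {scores[1]:.2f}")
print("send variant", int(np.argmax(scores)))
```

In practice the same pattern (train on past responses, score each candidate message, deliver the highest-scoring one) sits behind on-site recommendations, email automation and programmatic bidding.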

Original article here.

 


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

FEI-FEI LI IS a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s the path from research to product the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now make use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.

 


standard

Tech trends for 2017: more AI, machine intelligence, connected devices and collaboration

2016-12-30 - By 

The end of year or beginning of year is always a time when we see many predictions and forecasts for the year ahead. We often publish a selection of these to show how tech-based innovation and economic development will be impacted by the major trends.

A number of trends reports and articles have been published – ranging from investment houses, to research firms, and even innovation agencies. In this article we present headlines and highlights of some of these trends – from Gartner, GP Bullhound, Nesta and Ovum.

Artificial intelligence will have the greatest impact

GP Bullhound released its 52-page research report, Technology Predictions 2017, which says artificial intelligence (AI) is poised to have the greatest impact on the global technology sector. It will see widespread consumer adoption, particularly as virtual personal assistants such as Apple Siri and Amazon Alexa grow in popularity, as well as enterprise adoption through the automation of repetitive data-driven tasks.

Online streaming and e-sports are also significant market opportunities in 2017 and there will be a marked growth in the development of content for VR/AR platforms. Meanwhile, automated vehicles and fintech will pose longer-term growth prospects for investors.

The report also examines the growth of Europe’s unicorn companies. It highlights the potential for several firms to reach a $10 billion valuation and become ‘decacorns’, including BlaBlaCar, Farfetch, and HelloFresh.

Alec Dafferner, partner, GP Bullhound, commented, “The technology sector has faced up to significant challenges in 2016, from political instability through to greater scrutiny of unicorns. This resilience and the continued growth of the industry demonstrate that there remain vast opportunities for investors and entrepreneurs.”

Big data and machine learning will be disruptors

Advisory firm Ovum says big data continues to be the fastest-growing segment of the information management software market. It estimates the big data market will grow from $1.7bn in 2016 to $9.4bn by 2020, comprising 10 percent of the overall market for information management tooling. Its 2017 Trends to Watch: Big Data report highlights that while the breakout use case for big data in 2017 will be streaming, machine learning will be the factor that disrupts the landscape the most.

Key 2017 trends:

  • Machine learning will be the biggest disruptor for big data analytics in 2017.
  • Making data science a team sport will become a top priority.
  • IoT use cases will push real-time streaming analytics to the front burner.
  • The cloud will sharpen Hadoop-Spark ‘co-opetition’.
  • Security and data preparation will drive data lake governance.

Intelligence, digital and mesh

In October, Gartner issued its top 10 strategic technology trends for 2017, and recently outlined the key themes – intelligent, digital, and mesh – in a webinar.  It said that autonomous cars and drone transport will have growing importance in the year ahead, alongside VR and AR.

“It’s not about just the IoT, wearables, mobile devices, or PCs. It’s about all of that together,” said David Cearley, vice president and Gartner fellow, according to hiddenwires magazine. “We need to put the person at the center. Ask yourself what devices and service capabilities they have available to them,” he said, explaining how ‘intelligence everywhere’ will put the consumer in charge.

“We need to then look at how you can deliver capabilities across multiple devices to deliver value. We want systems that shift from people adapting to technology to having technology and applications adapt to people.  Instead of using forms or screens, I tell the chatbot what I want to do. It’s up to the intelligence built into that system to figure out how to execute that.”

Gartner’s view is that the following will be the key trends for 2017:

  • Artificial intelligence (AI) and machine learning: systems that learn, predict, adapt and potentially operate autonomously.
  • Intelligent apps: using AI, there will be three areas of focus — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces.
  • Intelligent things, as they evolve, will shift from stand-alone IoT devices to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks.
  • Virtual and augmented reality: VR can be used for training scenarios and remote experiences. AR will enable businesses to overlay graphics onto real-world objects, such as hidden wires on the image of a wall.
  • Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
  • Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming industry operating models in industries such as music distribution, identity verification and title registry.
  • Conversational systems will shift from a model where people adapt to computers to one where the computer ‘hears’ and adapts to a person’s desired outcome.
  • Mesh and app service architecture is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs (application programming interfaces) and events to deliver modular, flexible and dynamic solutions.
  • Digital technology platforms: every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the internet of things and business ecosystems.
  • Adaptive security architecture: multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

The real-world vision of these tech trends

UK innovation agency Nesta also offers a vision for the year ahead, a mix of the plausible and the more aspirational, based on real-world examples of areas that will be impacted by these tech trends:

  • Computer says no: the backlash: the next big technological controversy will be about algorithms and machine learning, which increasingly make decisions that affect our daily lives; in the coming year, the backlash against algorithmic decisions will begin in earnest, with technologists being forced to confront the effects of aspects like fake news, or other events caused directly or indirectly by the results of these algorithms.
  • The Splinternet: 2016’s seismic political events and the growth of domestic and geopolitical tensions, means governments will become wary of the internet’s influence, and countries around the world could pull the plug on the open, global internet.
  • A new artistic approach to virtual reality: as artists blur the boundaries between real and virtual, the way we create and consume art will be transformed.
  • Blockchain powers a personal data revolution: there is growing unease at the way many companies like Amazon, Facebook and Google require or encourage users to give up significant control of their personal information; 2017 will be the year when the blockchain-based hardware, software and business models that offer a viable alternative reach maturity, ensuring that it is not just companies but individuals who can get real value from their personal data.
  • Next generation social movements for health: we’ll see more people uniting to fight for better health and care, enabled by digital technology, and potentially leading to stronger engagement with the system; technology will also help new social movements to easily share skills, advice and ideas, building on models like Crohnology where people with Crohn’s disease can connect around the world to develop evidence bases and take charge of their own health.
  • Vegetarian food gets bloodthirsty: the past few years have seen growing demand for plant-based food to mimic meat; the rising cost of meat production (expected to hit $5.2 billion by 2020) will drive kitchens and laboratories around the world to create a new wave of ‘plant butchers’, who develop vegan-friendly meat substitutes that would fool even the most hardened carnivore.
  • Lifelong learners: adult education will move from the bottom to the top of the policy agenda, driven by the winds of automation eliminating many jobs from manufacturing to services and the professions; adult skills will be the keyword.
  • Classroom conundrums, tackled together: there will be a future-focused rethink of mainstream education, with collaborative problem solving skills leading the charge, in order to develop skills beyond just coding – such as creativity, dexterity and social intelligence, and the ability to solve non-routine problems.
  • The rise of the armchair volunteer: volunteering from home will become just like working from home, and we’ll even start ‘donating’ some of our everyday data to citizen science to improve society as well; an example of this trend was when British Red Cross volunteers created maps of the Ebola crisis in remote locations from home.

In summary

It’s clear that there is an expectation that the use of artificial intelligence and machine learning platforms will proliferate in 2017 across multiple business, social and government spheres. This will be supported with advanced tools and capabilities like virtual reality and augmented reality. Together, there will be more networks of connected devices, hardware, and data sets to enable collaborative efforts in areas ranging from health to education and charity. The Nesta report also suggests that there could be a reality check, with a possible backlash against the open internet and the widespread use of personal data.

Original article here.


standard

Microsoft Researchers Predict What’s Coming in AI for the Next Decade

2016-12-06 - By 

Microsoft Research’s female contingent makes its calls for AI breakthroughs to come.

Seventeen Microsoft researchers—all of whom happen to be women this year—have made their calls for what will be hot in the burgeoning realm of artificial intelligence (AI) in the next decade.

Ripping a page out of the IBM 5 for 5 playbook, Microsoft likes to use these annual predictions to showcase the work of its hotshot research brain trust. Some of the picks are already familiar. One is about how advances in deep learning—which endows computers with human-like thought processes—will make computers or other smart devices more intuitive and easier to use. This is something we’ve all heard before, but the work is not done, I guess.

 

For example, “the search box” most of us use on Google or Bing search engines will disappear, enabling people to search for things based on spoken commands, images, or video, according to Susan Dumais, distinguished scientist and deputy managing director of Microsoft’s Redmond, Wash. research lab. That’s actually already happening with products like Google Now, Apple Siri, and Microsoft Cortana—but there’s more to do.

Dumais says the box will go away. She explains:

That is more ubiquitous, embedded and contextually sensitive. We are seeing the beginnings of this transformation with spoken queries, especially in mobile and smart home settings. This trend will accelerate with the ability to issue queries consisting of sound, images or video, and with the use of context to proactively retrieve information related to the current location, content, entities or activities without explicit queries.

Virtual reality will become more ubiquitous as researchers develop better “body tracking” capabilities, says Mar Gonzalez Franco, a researcher at the Redmond research lab. That will enable rich, multi-sensorial experiences that could actually cause subjects to hallucinate. That doesn’t sound so great to some, but that capability could help people with disabilities “retrain” their perceptual systems, she notes.

There’s but one mention on this list of the need for ethical or moral guidelines for the use of AI. That comes from Microsoft distinguished scientist Jennifer Chayes.

Chayes, who is also managing director of Microsoft’s New England and New York City research labs, thinks AI can be used to police the ethical application of AI.

Our lives are being enhanced tremendously by artificial intelligence and machine learning algorithms. However, current algorithms often reproduce the discrimination and unfairness in our data and, moreover, are subject to manipulation by the input of misleading data. One of the great algorithmic advances of the next decade will be the development of algorithms which are fair, accountable and much more robust to manipulation.

Microsoft experienced the misuse of AI’s power first-hand earlier this year when its experimental Tay chatbot offended many Internet users with racist and sexist slurs that the program was taught by others. Microsoft chose to focus on its female researchers to underscore how underrepresented women remain in computer science.

Women and girls comprise 50% of the world’s population but account for less than 20 percent of computer science graduates, according to the Organization for Economic Cooperation and Development. The fact that the U.S. Bureau of Labor Statistics expects there to be fewer than 400,000 qualified applicants for 1.4 million computing jobs in 2020 means there is great opportunity for women in technology going forward.

Original article here.


standard

How Economists View the Rise of Artificial Intelligence

2016-11-25 - By 

Machine learning will drop the cost of making predictions, but raise the value of human judgement.

To really understand the impact of artificial intelligence in the modern world, it’s best to think beyond the mega-research projects like those that helped Google recognize cats in photos.

According to professor Ajay Agrawal of the University of Toronto, humanity should be pondering how the ability of cutting edge A.I. techniques like deep learning—which has boosted the ability for computers to recognize patterns in enormous loads of data—could reshape the global economy.

Making his comments at the Machine Learning and the Market for Intelligence conference, hosted this week by the Rotman School of Management at the University of Toronto, Agrawal likened the current boom of A.I. to 1995, when the Internet went mainstream. Having gained enough mainstream traction, the Internet ceased to be seen as a new technology. Instead, it was a new economy where businesses could emerge online.

However, one group of people refused to call the Internet a new economy: economists. For them, the Internet didn’t usher in a new economy per se; instead, it simply altered the existing economy by introducing a new way to purchase goods like shoes or toothbrushes at a cheaper rate than brick-and-mortar stores offered.

“Economists think of technology as drops in the cost of particular things,” Agrawal said.

Likewise, the advent of calculators or rudimentary computers lowered the cost for people to perform basic arithmetic, which aided workers at the census bureau who previously slaved away for hours manually crunching data without the help of those tools.

Similarly, with the rise of digital cameras, improvements in software and hardware helped manufacturers run better internal calculations within the device that could help users capture and improve their digital photos. Researchers essentially applied calculations to the old-school field of photography, something previous generations probably never believed would be touched by math, he explained.

Agrawal added that “we shifted to an arithmetic solution” to help improve digital cameras; their cost went up as more people wanted them, as opposed to traditional film cameras that require film and chemical baths to produce good photos. “Those went down,” said Agrawal, in terms of both cost and demand.

Artificial Intelligence and the future | André LeBlanc | TEDxMoncton

All this takes us back to the rise of machine learning and its ability to learn from data and make predictions based on the information.

The rise of machine learning will lead to “a drop in the cost of prediction,” he said. However, this drop will cause certain other things to go up in value, he explained.

For example, a doctor who works on a patient with a hurt leg will probably have to take an x-ray of the limb and ask questions to gather information so that he or she can make a prediction on what to do next. Advanced data analytics, however, would presumably make it easier to predict the best course of treatment, but it will be up to the doctor to follow through or not.

So while “machine intelligence is a substitute for human prediction,” it can also be “a complement to human judgment, so the value of human judgment increases,” Agrawal said.
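A toy worked example makes the complement concrete: the machine supplies only a probability, the human supplies the payoffs for each outcome, and the decision falls out of combining the two. All of the numbers below are invented.

```python
# Toy illustration of prediction as a complement to judgment: the machine
# supplies a probability, the human supplies the payoffs, and the decision
# comes from combining the two. All numbers here are invented.

machine_prob_fracture = 0.18          # cheap machine prediction from the x-ray

# Human judgment: how good or bad each outcome is, which the model cannot decide.
payoffs = {
    ("treat", True): -1,              # treated a real fracture: small cost, good outcome
    ("treat", False): -3,             # unnecessary treatment
    ("send_home", True): -20,         # missed fracture: very bad
    ("send_home", False): 0,          # correctly sent home
}

def expected_value(action, p):
    return p * payoffs[(action, True)] + (1 - p) * payoffs[(action, False)]

best = max(("treat", "send_home"), key=lambda a: expected_value(a, machine_prob_fracture))
for a in ("treat", "send_home"):
    print(f"{a:>10}: expected value {expected_value(a, machine_prob_fracture):+.2f}")
print("decision:", best)
```

Even though the model says a fracture is unlikely, the doctor’s judgment about how costly a miss would be is what tips the decision toward treating, which is exactly the kind of human value Agrawal argues will rise.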

In some ways, Agrawal’s comments call to mind a recent research paper in which researchers developed an A.I. system that correctly predicted the outcome of roughly 600 human rights cases heard by the European Court of Human Rights 79% of the time. The report’s authors explained that while the tool could help discover patterns in the court cases, “they do not believe AI will be able to replace human judgement,” as reported by the Verge.

The authors of that research paper don’t want A.I. powered computers to replace humans as new, futuristic cyber judges. Instead, they want the tool to help humans to make more thoughtful judgements that can ultimately improve human rights.

Original article here.


standard

Artificial Intelligence Will Grow 300% in 2017

2016-11-06 - By 

Insights matter. Businesses that use artificial intelligence (AI), big data and the Internet of Things (IoT) technologies to uncover new business insights “will steal $1.2 trillion per annum from their less informed peers by 2020.” So says Forrester in a new report, “Predictions 2017: Artificial Intelligence Will Drive The Insights Revolution.”

Across all businesses, there will be a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. Through the use of cognitive interfaces into complex systems, advanced analytics, and machine learning technology, AI will provide business users access to powerful insights never before available to them. It will help, says Forrester, “drive faster business decisions in marketing, ecommerce, product management and other areas of the business by helping close the gap from insights to action.”

The combination of AI, Big data, and IoT technologies will enable businesses investing in them and implementing them successfully to overcome barriers to data access and to mining useful insights. In 2017 these technologies will increase businesses’ access to data, broaden the types of data that can be analyzed, and raise the level of sophistication of the resulting insight. As a result, Forrester predicts an acceleration in the trend towards democratization of data analysis. While in 2015 it found that only 51% of data and analytics decision-makers said that they were able to easily obtain data and analyze it without the help of technologists, Forrester expects this figure to rise to around 66% in 2017.

Big data technologies will mature, and vendors will increasingly integrate them with their traditional analytics platforms, which will facilitate their incorporation into existing analytics processes in a wide range of organizations. The use of a single architecture for big data convergence with agile and actionable insights will become more widespread.

The third set of technologies supporting insight-driven businesses, those associated with IoT, will also become integrated with more traditional analytics offerings and Forrester expects the number of digital analytics vendors offering IoT insights capabilities to double in 2017. This will encourage their customers to invest in networking more devices and exploring the data they produce. For example, Forrester has found that 67% of telecommunications decision-makers are considering or prioritizing developing IoT or M2M initiatives in 2017.

The increased investment in IoT will lead to new types of analytics, which in turn will lead to new business insights. Currently, much of the data that is generated by edge devices such as mobile phones, wearables, or cars goes unused as “immature data and analytics practices cause most firms to squander these insights opportunities,” says Forrester. In 2016, less than 50% of data and analytics decision-makers had adopted location analytics, but Forrester expects the adoption of location analytics to grow to over two-thirds of businesses by the end of 2017. The resulting new insights will enable firms to optimize their customers’ experiences as they engage in the physical world with products, services and support.

In general, Forrester sees encouraging signs that more companies are investing in initiatives to get rid of existing silos of customer knowledge so they can coordinate better and drive insights throughout the entire enterprise. Specifically, Forrester sees three such initiatives becoming prominent in 2017:

Organizations with Chief Data Officers (CDOs) will become the majority in 2017, up from a global average of 47% in 2016. But to become truly insights-driven, says Forrester, “firms must eventually assign data responsibilities to CIOs and CMOs, and even CEOs, in order to drive swift business action based on data driven insights.”

Customer data management projects will increase by 75%. In 2016, for the first time, 39% of organizations have embarked on a big data initiative to support cross-channel tracking and attribution, customer journey analytics, and better segmentation. And nearly one-third indicated plans to adopt big data technologies and solutions in the next twelve months.

Forrester expects to see a marked increase in the adoption of enterprise-wide insights-driven practices as firms digitally transform their business in 2017. Leading customer intelligence practices and strategies will become “the poster child for business transformation,” says Forrester.

Longer term, according to Forrester’s “The Top Emerging Technologies To Watch: 2017 To 2021,” artificial intelligence-based services and applications will eventually change most industries and redistribute the workforce.

Original article here.


standard

IBM Research Takes Watson to Hollywood with the First “Cognitive Movie Trailer”

2016-10-03 - By 

How do you create a movie trailer about an artificially enhanced human?

You turn to the real thing – artificial intelligence.

20th Century Fox has partnered with IBM Research to develop the first-ever “cognitive movie trailer” for its upcoming suspense/horror film, “Morgan”. Fox wanted to explore using artificial intelligence (AI) to create a horror movie trailer that would keep audiences on the edge of their seats.

Movies, especially horror movies, are incredibly subjective. Think about the scariest movie you know (for me, it’s the 1976 movie, “The Omen”). I can almost guarantee that if you ask the person next to you, they’ll have a different answer. There are patterns and types of emotions in horror movies that resonate differently with each viewer, and the intricacies and interrelation of these are what an AI system would have to identify and understand in order to create a compelling movie trailer. Our team was faced with the challenge of not only teaching a system to understand, “what is scary”, but then to create a trailer that would be considered “frightening and suspenseful” by a majority of viewers.

As with any AI system, the first step was training it to understand a subject area. Using machine learning techniques and experimental Watson APIs, our Research team trained a system on the trailers of 100 horror movies by segmenting out each scene from the trailers. Once each trailer was segmented into “moments”, the system completed the following:

1)   A visual analysis and identification of the people, objects and scenery. Each scene was tagged with an emotion from a broad bank of 24 different emotions and labels from across 22,000 scene categories, such as eerie, frightening and loving;

2)   An audio analysis of the ambient sounds (such as the character’s tone of voice and the musical score), to understand the sentiments associated with each of those scenes;

3)   An analysis of each scene’s composition (such as the location of the shot, the image framing and the lighting), to categorize the types of locations and shots that traditionally make up suspense/horror movie trailers.

The analysis was performed on each area separately and in combination with each other using statistical approaches. The system now “understands” the types of scenes that categorically fit into the structure of a suspense/horror movie trailer.

Then, it was time for the real test. We fed the system the full-length feature film, “Morgan”. After the system “watched” the movie, it identified 10 moments that would be the best candidates for a trailer. In this case, these happened to reflect tender or suspenseful moments. If we were working with a different movie, perhaps “The Omen”, it might have selected different types of scenes. If we were working with a comedy, it would have a different set of parameters to select different types of moments.
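The article does not spell out the Watson APIs involved, so the sketch below uses hypothetical placeholder functions (tag_emotions, analyze_audio and analyze_composition are invented names returning toy scores) purely to show the shape of the selection step: score every segmented moment on the signals described above and shortlist the top ten for the human editor.

```python
# Sketch of the scene-selection step described above. The Watson APIs used by
# the team aren't spelled out in the article, so tag_emotions, analyze_audio
# and analyze_composition below are hypothetical placeholders that return toy
# scores; the point is the shape of the pipeline, not the real implementation.

import random

random.seed(1)

def tag_emotions(scene):        # placeholder for the visual/emotion analysis
    return {"suspense": random.random(), "tenderness": random.random()}

def analyze_audio(scene):       # placeholder for the ambient-sound analysis
    return {"eerie_score": random.random()}

def analyze_composition(scene): # placeholder for the shot/lighting analysis
    return {"trailer_likeness": random.random()}

def score_scene(scene):
    """Combine the three analyses into one suspense/horror-trailer score."""
    emotions = tag_emotions(scene)
    audio = analyze_audio(scene)
    composition = analyze_composition(scene)
    return (0.4 * emotions["suspense"] + 0.2 * emotions["tenderness"]
            + 0.2 * audio["eerie_score"] + 0.2 * composition["trailer_likeness"])

# Pretend the feature film has been segmented into 120 "moments".
scenes = [f"scene_{i:03d}" for i in range(120)]
ranked = sorted(scenes, key=score_scene, reverse=True)
shortlist = ranked[:10]          # the 10 candidate moments handed to the editor
print("candidate moments for the trailer:", shortlist)
```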

It’s important to note that there is no “ground truth” with creative projects like this one. Neither our team nor the Fox team knew exactly what we were looking for before we started the process. Based on our training and testing of the system, we knew that tender and suspenseful scenes would be short-listed, but we didn’t know which ones the system would pick to create a complete trailer. As most creative projects go, we thought, “we’ll know it when we see it.”

Our system could select the moments, but it’s not an editor. We partnered with a resident IBM filmmaker to arrange and edit each of the moments together into a comprehensive trailer. You’ll see his expertise in the addition of black title cards, the musical overlay and the order of moments in the trailer.

Not surprisingly, our system chose some moments in the movie that were not included in other “Morgan” trailers. The system allowed us to look at moments in the movie in different ways – moments that might not have traditionally made the cut were now short-listed as candidates. On the other hand, when we reviewed all the scenes that our system selected, one didn’t seem to fit with the bigger story we were trying to tell – so we decided not to use it. Even Watson sometimes ends up with footage on the cutting room floor!

Traditionally, creating a movie trailer is a labor-intensive, completely manual process. Teams have to sort through hours of footage and manually select each and every potential candidate moment. This process is expensive and time consuming – taking anywhere between 10 and 30 days to complete.

From a 90-minute movie, our system provided our filmmaker a total of six minutes of footage. From the moment our system watched “Morgan” for the first time, to the moment our filmmaker finished the final editing, the entire process took about 24 hours.

Reducing the time of a process from weeks to hours – that is the true power of AI.

The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.

AI is being put to work across a variety of industries; helping scientists discover promising treatment pathways to fight diseases or helping law experts discover connections between cases. Film making is just one more example of how cognitive computing systems can help people make new discoveries.

Original article here.


standard

Investing in AI offers more rewards than risks

2016-09-27 - By 

It’s difficult to predict how artificial intelligence technology will change over the next 10 to 20 years, but there are plenty of gains to be made. By 2018, robots will supervise more than 3 million human workers; by 2020, smart machines will be a top investment priority for more than 30 percent of CIOs.

Everything from journalism to customer service is already being replaced by AI that’s increasingly able to replicate the experience and ability of humans. What was once seen as the future of technology is already here, and the only question left is how it will be implemented in the mass market.

Over time, the insights gleaned from the industries currently taking advantage of AI — and improving the technology along the way — will make it ever more robust and useful within a growing range of applications. Organizations that can afford to invest heavily in AI are now creating the momentum for even more to follow suit; those that can’t risk seeing their own niches left behind.

Risk versus reward

While some may argue it’s impossible to predict whether the risks of AI applications to business are greater than the rewards (or vice versa), analysts predict that by 2020, 5 percent of all economic transactions will be handled by autonomous software agents.

The future of AI depends on companies willing to take the plunge and invest, no matter the challenge, to research the technology and fund its continued development. Some are even doing it by accident, like the company that paid a programmer more than half a million dollars over six years, only to learn he automated his own job.

Many AI advancements are coming from the military. The U.S. government alone has requested $4.6 billion in drone funding for next year, as automated drones are set to replace the manned drones currently used in the field. AI drones simply need to be given a destination; they can dodge air defenses and reach it on their own, while any lethal decisions are still made by humans.

On the academic side, institutions like the Massachusetts Institute of Technology and the University of Oxford are hard at work mapping the human brain and attempting to emulate it. This provides two different pathways — creating an AI that replicates the complexities of the human brain and emulating an actual human brain, which comes with a slew of ethical questions and concerns. For example, what rights does an AI have? And what happens if the server storing your emulated loved one is shut down?

While these questions remain unanswered, the proven benefits of AI systems will eventually spur major players from all sectors of the economy to engage with the technology. Just as information technology is now indispensable to practically every industry, artificial intelligence will be as well.

The future of computation

Until now, AI has mostly been about crafting preprogrammed tools for specific functions, and these have been markedly rigid. Such AI-based computing strategies have become commonplace. The future of AI will depend on true learning; in other words, AI will no longer have to rely on direct commands to understand what it’s being asked to do.

Currently, we use GPS systems that depend on automated perception and learning, mobile devices that can interpret speech, and search engines that are learning to interpret our intentions. Learning, rather than explicit programming, is what makes developments like Google’s DeepMind and IBM’s Watson the next step in AI.

DeepMind wasn’t programmed with knowledge — there are no handcrafted programs or specific modules for given tasks. DeepMind is designed to learn automatically. The system is specifically crafted for generality, so that the end result will be emergent properties. Emergent properties, such as the ability to beat grandmaster-level Go players, are all the more impressive when you realize that no one programmed DeepMind to do it.
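To make that distinction concrete, here is a deliberately tiny sketch (in no way DeepMind’s architecture): a hand-written rule sits next to a perceptron that learns an equivalent decision purely from labelled examples. The spam scenario, the features, and the training loop are all illustrative assumptions.

```python
# Illustrative contrast, not any vendor's implementation.

# 1. Preprogrammed: the decision logic is written by hand and stays rigid.
def is_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# 2. Learned: the same kind of decision emerges from labelled data instead of rules.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs with label in {0, 1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # 0 when correct, ±1 when wrong
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy dataset: features are [contains "free", contains "money"]; label 1 means spam.
data = [([1, 1], 1), ([1, 0], 0), ([0, 1], 0), ([0, 0], 0)]
weights, bias = train_perceptron(data)  # the "rule" is now learned, not hand-coded
```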

Traditional AI is narrow and can only do what it is programmed to know, but Olli, an automated car powered by Watson, learns from monitoring and interacting with passengers. Each time a new passenger requests a recommendation or destination, it stores this information for use with the next person. New sensors are constantly added, and the vehicle (like a human driver) continuously becomes more intelligent as it does its job.
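As a hedged sketch of that learning loop (not the actual Watson or Olli code), the vehicle could simply accumulate passenger requests and recommend the most frequently requested destinations to the next rider; the class and method names below are hypothetical.

```python
# Hypothetical sketch: accumulate rider requests so each trip informs the next one.

from collections import Counter

class ShuttleRecommender:
    def __init__(self):
        self.request_history = Counter()  # destination -> how often it was requested

    def record_request(self, destination: str) -> None:
        """Called each time a passenger asks for a destination or recommendation."""
        self.request_history[destination] += 1

    def recommend(self, k: int = 3) -> list:
        """Suggest the destinations most requested by previous passengers."""
        return [dest for dest, _ in self.request_history.most_common(k)]

# Usage: every ride makes the next recommendation better informed.
shuttle = ShuttleRecommender()
for stop in ["museum", "cafe", "museum", "park"]:
    shuttle.record_request(stop)
print(shuttle.recommend())  # ['museum', 'cafe', 'park']
```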

But will these AI systems be able to do what companies like Google really want them to do, like predict the buying habits of end users better than existing recommendation software? Or optimize supply chain transactions dynamically by relating patterns from the past? That’s where the real money is, and it’s a significantly more complex problem than playing games, driving and completing repetitive tasks.

The current proof points from various AI platforms — like finding fashion mistakes or predicting health problems — clearly indicate that AI is expanding, and these more complicated tasks will become a reality in the near term.

Soon, AI will be able to mimic complex human decision-making processes, such as giving investment advice or providing prescriptions to patients. In fact, with continuous improvement in true learning, first-tier support positions and more dangerous jobs (such as truck driving) will be completely taken over by robotics, leading to a new Industrial Revolution where humans will be freed up to solve problems instead of doing repetitious business processes.

The price of not investing in AI

The benefits and risks of investment are nebulous, uncertain and a matter for speculation. The one known risk common to all things new in business is uncertainty itself. So the risks mainly come in the form of making a bad investment, which is nothing new to the world of finance.

So as with all things strange and new, the prevailing wisdom is that the risk of being left behind is far greater, and far grimmer, than the benefits of playing it safe.

Original article here.

 

