Posted In: Artificial Intelligence Archives - AppFerret


AI in 2019 (video)

2019-01-03 - By 

2018 has been an eventful year for AI to say the least! We’ve seen advances in generative models, the AlphaGo victory, several data breach scandals, and so much more. I’m going to briefly review AI in 2018 before giving 10 predictions on where the space is going in 2019. Prepare yourself, my predictions range from more Kubernetes infused ML pipelines to the first business use case of generative modeling of 3D worlds. Happy New Year and enjoy!

Original video here.



A Quick Introduction to Text Summarization in Machine Learning

2018-09-19 - By 

Text summarization refers to the technique of shortening long pieces of text. The intention is to create a coherent and fluent summary having only the main points outlined in the document.

Automatic text summarization is a common problem in machine learning and natural language processing (NLP).

Skyhoshi, a U.S.-based machine learning expert with 13 years of experience who now teaches the skill to others, says that “the technique has proved to be critical in quickly and accurately summarizing voluminous texts, something which could be expensive and time consuming if done without machines.”

Machine learning models are usually trained to understand documents and distill the useful information before outputting the required summarized texts.

What’s the need for text summarization?

Propelled by modern technological innovations, data is to this century what oil was to the previous one. Today, our world is awash in the gathering and dissemination of huge amounts of data.

In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world will grow from 4.4 zettabytes in 2013 to 180 zettabytes in 2025. That’s a lot of data!

With such a big amount of data circulating in the digital space, there is a need to develop machine learning algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages.

Furthermore, applying text summarization reduces reading time, accelerates the process of researching for information, and increases the amount of information that can fit in an area.

What are the main approaches to automatic summarization?

There are two main approaches to summarizing text in NLP:

  • Extraction-based summarization

The extractive text summarization technique involves pulling keyphrases from the source document and combining them to make a summary. The extraction is made according to the defined metric without making any changes to the texts.

Here is an example:

Source text: Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. In the city, Mary gave birth to a child named Jesus.

Extractive summary: Joseph and Mary attend event Jerusalem. Mary birth Jesus.

As you can see above, key words and phrases have been extracted from the source text and joined to create a summary — although sometimes the summary can be grammatically strange.

  • Abstraction-based summarization

The abstraction technique entails paraphrasing and shortening parts of the source document. When abstraction is applied for text summarization in deep learning problems, it can overcome the grammar inconsistencies of the extractive method.

The abstractive text summarization algorithms create new phrases and sentences that relay the most useful information from the original text — just like humans do.

Therefore, abstraction performs better than extraction. However, the text summarization algorithms required to do abstraction are more difficult to develop; that’s why the use of extraction is still popular.

Here is an example:

Abstractive summary: Joseph and Mary came to Jerusalem where Jesus was born.
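For illustration only, here is a minimal sketch of abstractive summarization using the Hugging Face transformers library; the library, its default pretrained model, and the parameters shown are assumptions added for this example, not something the article prescribes.

```python
# Illustrative sketch only: abstractive summarization with a pretrained
# sequence-to-sequence model from the transformers library (an assumption,
# not part of the original article). The model is downloaded on first use.
from transformers import pipeline

summarizer = pipeline("summarization")
text = ("Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. "
        "In the city, Mary gave birth to a child named Jesus.")
print(summarizer(text, max_length=20, min_length=5)[0]["summary_text"])
```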

How does a text summarization algorithm work?

Usually, text summarization in NLP is treated as a supervised machine learning problem (where future outcomes are predicted based on provided data).

Typically, here is how using the extraction-based approach to summarize texts can work (a minimal code sketch follows the steps below):

1. Introduce a method to extract the merited keyphrases from the source document. For example, you can use part-of-speech tagging, words sequences, or other linguistic patterns to identify the keyphrases.

2. Gather text documents with positively-labeled keyphrases. The keyphrases should be compatible with the stipulated extraction technique. To increase accuracy, you can also create negatively-labeled keyphrases.

3. Train a binary machine learning classifier to make the text summarization. Some of the features you can use include:

  • Length of the keyphrase
  • Frequency of the keyphrase
  • The most recurring word in the keyphrase
  • Number of characters in the keyphrase

4. Finally, in the test phase, generate all candidate keyphrases and sentences and classify each one.
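To make those steps concrete, here is a minimal, hypothetical sketch of the classifier step in Python. The candidate phrases, labels, and feature values are invented for illustration; a real system would derive them from a labelled corpus as described above.

```python
# Hypothetical sketch of steps 1-4: hand-picked candidate phrases and labels
# stand in for a real labelled corpus.
from sklearn.linear_model import LogisticRegression

def features(phrase, document):
    """Compute the simple keyphrase features listed above."""
    words = phrase.split()
    return [
        len(words),                             # length of the keyphrase
        document.count(phrase),                 # frequency of the keyphrase
        max(document.count(w) for w in words),  # count of its most recurring word
        len(phrase),                            # number of characters
    ]

document = ("Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. "
            "In the city, Mary gave birth to a child named Jesus.")
candidates = ["Joseph and Mary", "annual event", "a donkey", "the city"]
labels = [1, 1, 0, 0]  # 1 = keyphrase worth keeping, 0 = not (toy labels)

X = [features(c, document) for c in candidates]
clf = LogisticRegression().fit(X, labels)

# Test phase: generate candidate phrases from a new document and classify them.
print(clf.predict([features("gave birth", document)]))
```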

Summary

Text summarization is an interesting machine learning field that is increasingly gaining traction. As research in this area continues, we can expect to see breakthroughs that will assist in fluently and accurately shortening long text documents.

Original article here.

 



Communicate with Alexa Devices Using Sign Language

2018-07-16 - By 

Many have found Amazon’s Alexa devices to be helpful in their homes, but if you can’t physically speak, it’s a challenge to communicate with these things. So, Abhishek Singh used TensorFlow to train a program to recognize sign language and communicate with Alexa without voice.

Nice.

 



Google’s AutoML is a Machine Learning Game-Changer

2018-05-24 - By 

Google’s AutoML is a new up-and-coming (alpha stage) cloud software suite of Machine Learning tools. It’s based on Google’s state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is basically an algorithm that, given your specific dataset, searches for the optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that allows one to easily train high-performance deep networks without requiring the user to have any knowledge of deep learning or AI; all you need is labelled data! Google then uses NAS to find the best network for your specific dataset and task. They’ve already shown how their methods can achieve performance that is far better than that of hand-designed networks.

AutoML totally changes the whole machine learning game because for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire 5 machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and in general the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value is coming from their final put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state-of-the-art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental-research part of the product pipeline and just use transfer learning for their task. Because there is less experimental research, more business resources can be spent on product design, development, and the all-important data.
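As a rough illustration of that transfer-learning shortcut (not Google’s AutoML itself), the sketch below reuses the pretrained NASNet model that ships with TensorFlow and trains only a small classification head; the class count and the dataset are placeholders, not part of the original article.

```python
# Transfer-learning sketch: reuse a pretrained NASNet image model and retrain
# only a small head on your own labelled data. Class count and dataset are
# placeholder assumptions.
import tensorflow as tf

base = tf.keras.applications.NASNetMobile(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # keep the searched architecture's pretrained weights fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 = your number of classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) pairs built from your labelled data:
# model.fit(train_ds, epochs=5)
```

The engineering effort then shifts from network design toward the product itself and the data that feeds it.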

Speaking of which…

It becomes more about product

Connecting from the first point, since more time is being spent on product design and development, companies will have faster product iteration. The main value of the company will become less about how great and cutting edge their research is and more about how well their product/technology is engineered. Is it well designed? Easy to use? Is their data pipeline set up in such a way that they can quickly and easily improve their models? These will be the new key questions for optimising their products and being able to iterate faster than their competition. Cutting edge research will also become less of a main driver of increasing the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and, as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean, diverse, yet task-targeted data you have (i.e., both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this partially signifies us moving away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t stop making them better even though they’re now quite easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but does only fairly well, sometimes even poorly (depending on task complexity), on things like localisation. We don’t yet even fully understand deep networks internally.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools rather than create new ones, since they don’t have a need to. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep up with the fast-paced tech field.

Such a shift has happened many times before. Programming computers started with assembly-level coding! We later moved on to things like C. Many people today consider C too complicated, so they use C++. Much of the time, we don’t even need something as complex as C++, so we just use the super high-level languages of Python or R! We use the tool most appropriate for the task at hand. If you don’t need something super low-level, then you don’t have to use it (e.g. C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g. Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies is critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.



Google’s Duplex AI Demo Just Passed the Turing Test (video)

2018-05-11 - By 

Yesterday, at I/O 2018, Google showed off a new digital assistant capability that’s meant to improve your life by making simple boring phone calls on your behalf. The new Google Duplex feature is designed to pretend to be human, with enough human-like functionality to schedule appointments or make similarly inane phone calls. According to Google CEO Sundar Pichai, the phone calls the company played were entirely real. You can make an argument, based on these audio clips, that Google actually passed the Turing Test.

If you haven’t heard the audio of the two calls, you should give the clip a listen. We’ve embedded the relevant part of Pichai’s presentation below.

I suspect the calls were edited to remove the place of business, but apart from that, they sound like real phone calls. If you listen to both segments, the male voice booking the restaurant sounds a bit more like a person than the female does, but the gap isn’t large and the female voice is still noticeably better than a typical AI. The female speaker has a rather robotic “At 12PM” at one point that pulls the overall presentation down, but past that, Google has vastly improved AI speech. I suspect the same technologies at work in Google Duplex are the ones we covered about six weeks ago.

So what’s the Turing Test and why is passing it a milestone? The British computer scientist, mathematician, and philosopher Alan Turing devised the Turing test as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to or indistinguishable from that of a human. This broad formulation allows for the contemplation of many such tests, though the general test case presented in discussion is a conversation between a researcher and a computer in which the computer responds to questions. A third person, the evaluator, is tasked with determining which individual in the conversation is human and which is a machine. If the evaluator cannot tell, the machine has passed the Turing test.

The Turing test is not intended to be the final word on whether an AI is intelligent and, given that Turing conceived it in 1950, obviously doesn’t take into consideration later advances or breakthroughs in the field. There have been robust debates for decades over whether passing the Turing test would represent a meaningful breakthrough. But what sets Google Duplex apart is its excellent mimicry of human speech. The original Turing test supposed that any discussion between computer and researcher would take place in text. Managing to create a voice facsimile close enough to standard human to avoid suspicion and rejection from the company in question is a significant feat.

As of right now, Duplex is intended to handle rote responses, like asking to speak to a representative, or simple, formulaic social interactions. Even so, the program’s demonstrated capability to deal with confusion (as on the second call) is still a significant step forward for these kinds of voice interactions. As artificial intelligence continues to improve, voice quality will improve and the AI will become better at answering more and more types of questions. We’re obviously still a long way from creating a conscious AI, but we’re getting better at the tasks our systems can handle — and faster than many would’ve thought possible.

 

Original article here.

 



The Difference Between Artificial Intelligence, Machine Learning, and Deep Learning

2018-04-08 - By 

Simple explanations of Artificial Intelligence, Machine Learning, and Deep Learning and how they’re all different. Plus, how AI and IoT are inextricably connected.

We’re all familiar with the term “Artificial Intelligence.” After all, it’s been a popular focus in movies such as The Terminator, The Matrix, and Ex Machina (a personal favorite of mine). But you may have recently been hearing about other terms like “Machine Learning” and “Deep Learning,” sometimes used interchangeably with artificial intelligence. As a result, the difference between artificial intelligence, machine learning, and deep learning can be very unclear.

I’ll begin by giving a quick explanation of what Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) actually mean and how they’re different. Then, I’ll share how AI and the Internet of Things are inextricably intertwined, with several technological advances all converging at once to set the foundation for an AI and IoT explosion.

So what’s the difference between AI, ML, and DL?

The term was first coined in 1956 by John McCarthy; AI involves machines that can perform tasks that are characteristic of human intelligence. While this is rather general, it includes things like planning, understanding language, recognizing objects and sounds, learning, and problem solving.

We can put AI in two categories, general and narrow. General AI would have all of the characteristics of human intelligence, including the capacities mentioned above. Narrow AI exhibits some facet(s) of human intelligence, and can do that facet extremely well, but is lacking in other areas. A machine that’s great at recognizing images, but nothing else, would be an example of narrow AI.

At its core, machine learning is simply a way of achieving AI.

Arthur Samuel coined the phrase not too long after AI, in 1959, defining it as “the ability to learn without being explicitly programmed.” You see, you can get AI without using machine learning, but this would require building millions of lines of code with complex rules and decision trees.

So instead of hard coding software routines with specific instructions to accomplish a particular task, machine learning is a way of “training” an algorithm so that it can learn how. “Training” involves feeding huge amounts of data to the algorithm and allowing the algorithm to adjust itself and improve.

To give an example, machine learning has been used to make drastic improvements to computer vision (the ability of a machine to recognize an object in an image or video). You gather hundreds of thousands or even millions of pictures and then have humans tag them. For example, the humans might tag pictures that have a cat in them versus those that do not. Then, the algorithm tries to build a model that can tag a picture as containing a cat or not as accurately as a human can. Once the accuracy level is high enough, the machine has now “learned” what a cat looks like.

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep learning was inspired by the structure and function of the brain, namely the interconnecting of many neurons. Artificial Neural Networks (ANNs) are algorithms that mimic the biological structure of the brain.

In ANNs, there are “neurons” which have discrete layers and connections to other “neurons”. Each layer picks out a specific feature to learn, such as curves/edges in image recognition. It’s this layering that gives deep learning its name: depth is created by using multiple layers as opposed to a single layer.
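As a minimal sketch of that layered idea (the layer sizes, the two-class “cat vs. not cat” setup, and the random toy data are assumptions added for illustration), a small ANN can be stacked like this in Keras:

```python
# Minimal layered ANN sketch: each Dense layer is a set of "neurons" connected
# to the previous layer. Sizes and toy data are illustrative assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(64,)),  # first layer: low-level features
    tf.keras.layers.Dense(16, activation="relu"),                     # deeper layer: more abstract features
    tf.keras.layers.Dense(1, activation="sigmoid"),                   # output: e.g. "cat" vs. "not cat"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy stand-in for human-tagged training examples.
X = np.random.rand(200, 64)
y = np.random.randint(0, 2, size=200)
model.fit(X, y, epochs=3, verbose=0)
```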

AI and IoT are Inextricably Intertwined

I think of the relationship between AI and IoT much like the relationship between the human brain and body.

Our bodies collect sensory input such as sight, sound, and touch. Our brains take that data and make sense of it, turning light into recognizable objects and turning sounds into understandable speech. Our brains then make decisions, sending signals back out to the body to command movements like picking up an object or speaking.

All of the connected sensors that make up the Internet of Things are like our bodies: they provide the raw data of what’s going on in the world. Artificial intelligence is like our brain, making sense of that data and deciding what actions to perform. And the connected devices of IoT are again like our bodies, carrying out physical actions or communicating to others.

Unleashing Each Other’s Potential

The value and the promises of both AI and IoT are being realized because of the other.

Machine learning and deep learning have led to huge leaps for AI in recent years. As mentioned above, machine learning and deep learning require massive amounts of data to work, and this data is being collected by the billions of sensors that are continuing to come online in the Internet of Things. IoT makes better AI.

Improving AI will also drive adoption of the Internet of Things, creating a virtuous cycle in which both areas will accelerate drastically. That’s because AI makes IoT useful.

On the industrial side, AI can be applied to predict when machines will need maintenance or analyze manufacturing processes to make big efficiency gains, saving millions of dollars.

On the consumer side, rather than having to adapt to technology, technology can adapt to us. Instead of clicking, typing, and searching, we can simply ask a machine for what we need. We might ask for information like the weather or for an action like preparing the house for bedtime (turning down the thermostat, locking the doors, turning off the lights, etc.).

Converging Technological Advancements Have Made this Possible

Shrinking computer chips and improved manufacturing techniques mean cheaper, more powerful sensors.

Quickly improving battery technology means those sensors can last for years without needing to be connected to a power source.

Wireless connectivity, driven by the advent of smartphones, means that data can be sent in high volume at cheap rates, allowing all those sensors to send data to the cloud.

And the birth of the cloud has allowed for virtually unlimited storage of that data and virtually infinite computational ability to process it.

Of course, there are one or two concerns about the impact of AI on our society and our future. But as advancements and adoption of both AI and IoT continue to accelerate, one thing is certain: the impact is going to be profound.

 

Original article here.



CEOs should do 3 things to help their workforce fully embrace AI

2018-02-07 - By 

There’s no denying it: the era of the intelligent enterprise is upon us. As technologies like AI, cognitive computing, and predictive analytics become hot topics in the corporate boardroom, sleek startups and centuries-old companies alike are laying plans for how to put these exciting innovations to work.

Many organizations, however, are in the nascent days of AI implementation. Of the three stages of adoption—education, prototyping, and application at scale—most executives are still taking a tentative approach to exploring AI’s true potential. They’re primarily using the tech to drive small-scale efficiencies.

But AI’s real opportunity lies in tapping completely new areas of value. AI can help established businesses expand their product offerings and infiltrate (or even invent) entirely new markets, as well as streamline internal processes. The rewards are ripe: According to Accenture projections, fully committing to AI could boost global profits by a whopping $4.8 trillion by 2022. For the average S&P 500 company, this would mean an additional $7.5 billion in revenue over the next four years.

Faster and more compelling innovations will be driven by the intersection of intelligent technology and human ingenuity—yes, the workforce will see sweeping changes. It’s already happening: Nearly all leaders report they’ve redesigned jobs to at least some degree to account for disruption. What’s more, while three-quarters of executives plan to use AI to automate tasks, virtually all intend to use it to augment the capabilities of their people.

Still, many execs report they’re not sure how to pull off what Accenture calls Applied Intelligence—the ability to implement intelligent technology and human ingenuity to drive growth.

Here are three steps the C-suite can take to elevate their organizations into companies that are fully committed to creating new value with AI/human partnerships.

1) Reimagine what it means to work

The most significant impact of AI won’t be on the number of jobs, but on job content. But skepticism about employees’ willingness to embrace AI is unfounded. Though nearly 25% of executives cite “resistance by the workforce” as one of the obstacles to integrating AI, 62% of workers—both low-skilled and high-skilled—are optimistic about the changes AI will bring. In line with that optimism, 67% of employees say it is important to develop the skills to work with intelligent technologies.

Yes, ambiguities about exactly how automation fits into the future workplace persist. Business leaders should look for specific skills and tasks to supplement instead of thinking about replacing generalized “jobs”. AI is less about replacing humans than it is about augmenting their roles.

While AI takes over certain repetitive or routine tasks, it opens doors for project-based work. For example, if AI steps into tasks like sorting email or analyzing inventory, it can free up employees to develop more in-depth customer service tactics, like targeted conversations with key clients. Interestingly, data suggests that greater investment in AI could actually increase employment by as much as 10% in the next three years alone.

2) Pivot the workforce to your company’s unique value proposition

Today, AI and human-machine collaboration is beginning to change how enterprises conduct business. But it has yet to transform what business they choose to pursue. Few companies are creating entirely new revenue streams or customer experiences. Still, 72% of execs agree that adopting intelligent technologies will be critical to their organization’s ability to differentiate in the market.

One of the challenges facing companies is how to make a business case for that pivot to new opportunities without disrupting today’s core business. A key part is turning savings generated by automation into the fuel for investing in the new business models and workforces that will ultimately take a company into new markets.

Take Accenture: the company puts 60% of the money it saves through AI investments into training programs. That’s resulted in the retraining of tens of thousands of people whose roles were automated. Those workers can now focus on more high-value projects, working with AI and other technologies to offer better services to clients.

3) Scaling new skilling: Don’t choose between hiring a human or a machine—hire both

Today, most people already interact with machines in the workplace—but humans still run the show. CEOs still value a number of decidedly human skills—resource management, leadership, communication skills, complex problem solving, and judgment—but in the future, human ingenuity will not suffice. Working in tandem, smarter machines and better skilled humans will likely drive swifter and more compelling innovations.

To scale up new skilling, employers may want to consider these three steps:

  1. Prioritize skills to be honed. While hard skills like data analytics, engineering, or coding are easy to define, innately “human” skills like ethical decision-making and complex problem-solving need to be considered carefully.
  2. Provide targeted training programs. Employees’ level of technical expertise, willingness to learn new technologies, and specific skill sets will determine how training programs should be developed across the organization.
  3. Use digital solutions for training. Taking advantage of cutting-edge technologies like virtual reality or augmented reality can teach workers how to interact with smart machinery via realistic simulations.

While it is natural for businesses to exploit AI to drive efficiencies in the short term, their long-term growth depends on using AI far more creatively. It will take new forms of leadership and imagination to prepare the future workforce to truly partner with intelligent machines. If they succeed, it will be a case of humans helping AI help humans.

Original article here.

 



MIT is aiming for AI moonshots with Intelligence Quest

2018-02-01 - By 

Artificial intelligence has long been a focus for MIT. The school’s been researching the space since the late ’50s, giving rise (and lending its name) to the lab that would ultimately become known as CSAIL. But the Cambridge university thinks it can do more to elevate the rapidly expanding field.

This week, the school announced the launch of the MIT Intelligence Quest, an initiative aimed at leveraging its AI research into something it believes could be game-changing for the category. The school has divided its plan into two distinct categories: “The Core” and “The Bridge.”

“The Core is basically reverse-engineering human intelligence,” dean of the MIT School of Engineering Anantha Chandrakasan tells TechCrunch, “which will give us new insights into developing tools and algorithms, which we can apply to different disciplines. And at the same time, these new computer science techniques can help us with the understanding of the human brain. It’s very tightly linked between cognitive science, neuroscience and computer science.”

The Bridge, meanwhile, is designed to provide access to AI and ML tools across its various disciplines. That includes research from both MIT and other schools, made available to students and staff.

“Many of the products are moonshots,” explains James DiCarlo, head of the Department of Brain and Cognitive Sciences. “They involve teams of scientists and engineers working together. It’s essentially a new model and we need folks and resources behind that.”

Funding for the initiative will be provided by a combination of philanthropic donations and partnerships with corporations. But while the school has had blanket partnerships in the past, including, notably, the MIT-IBM Watson AI Lab, the goal here is not to become beholden to any single company. Ideally the school will be able to work alongside a broad range of companies to achieve its large-scale goals.

“Imagine if we can build machine intelligence that grows the way a human does,” adds professor of Cognitive Science and Computation, Josh Tenenbaum. “That starts like a baby and learns like a child. That’s the oldest idea in AI and it’s probably the best idea… But this is a thing we can only take on seriously now and only by combining the science and engineering of intelligence.”

Original article here.

 



Why 2018 Will be The Year of AI

2017-12-13 - By 

Artificial Intelligence, more commonly known as AI, isn’t a new topic; it has been around for years. However, those who have tracked its progress have noted that 2017 is the year it accelerated faster than in years past. This hot topic has made its way into the media, into boardrooms, and into government. One reason is that things that hadn’t functioned for decades suddenly began to work; AI is moving beyond embedded functions or mere tools, and expectations are high for 2018.

There are several reasons why this year has seen the most progress in AI, and they come down to four preconditions that have allowed AI to advance over the past five years. Before going into them, it is important to understand what Artificial Intelligence means. Then we can take a closer look at each of the four preconditions and how they will shape what comes next year.

What is Artificial Intelligence?

Basically, Artificial Intelligence is defined as the science of making computers do things that require intelligence when done by humans. However, in the five decades since AI was born, progress in the field has moved very slowly, which has created an appreciation of the profound difficulty of the problem. Fortunately, this year has seen considerable progress and has raised the likelihood of further advancement in 2018.

Preconditions That Allowed AI to Succeed in 2017 and Beyond

The first is that everything is becoming connected: you can start a project on your desktop PC and then finish the work on a connected smartphone or tablet. Ray Kurzweil believes that eventually humans will be able to use sensors that connect our brains to the cloud. The internet originally connected computers and then advanced to connecting our mobile devices; the sensors that already link buildings, homes, and even our clothes to the internet could, in the near future, extend to connecting our minds to the cloud.

Another component of AI’s advance is computing becoming nearly free. Previously, new chips came out every eighteen months or so at twice the speed; however, Marc Andreessen claims that new chips now arrive at the same speed but at only half the cost. The expectation is that processors will become so inexpensive that there will be one in everything, and computing capacity will be able to solve problems that had no solution five years prior.

A third component of AI’s advancement is that data has become the new oil, made available digitally over the past decade. Since data can be captured through our mobile devices and tracked through sensors, new sources of data have appeared in video, social media, and digital images. Conditions that could only be modeled at a high level in the past can now be described far more accurately thanks to the almost infinite set of real data available, and accuracy will only keep increasing.

Finally, the fourth component contributing to AI’s advancement is that machine learning is becoming the new combustion engine: it uses mathematical models and algorithms to discover the patterns implicit in data. Machines use these complex patterns to decide on their own whether new data is similar, whether it fits, or how it can be used to predict future outcomes. Virtual assistants such as Siri and Cortana use AI to solve problems and predict outcomes every day with great accuracy; they will continue to be used in 2018 and beyond, and they will be able to accomplish more as AI continues to grow and evolve.

Artificial Intelligence has seen far more improvement this year than in previous decades. Many experts were amazed and excited by how far AI has progressed in the decades since its birth. Now we can expect 2018 to bring advancements at work, in school, and possibly in self-driving cars that could result in up to ninety percent fewer car accidents; welcome to the future of Artificial Intelligence.

Original article here.



AI-Generated Celebrity Faces Look Real (video)

2017-10-31 - By 

Researchers from NVIDIA published work with artificial intelligence algorithms, or more specifically, generative adversarial networks, to produce celebrity faces in high detail. Watch the results below.

Original article here.  Research PDF here.

 



The Dark Secret at the Heart of AI

2017-10-09 - By 

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
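For a rough sense of the mechanics described above (the layer sizes here are arbitrary, and the back-propagation training step is omitted), a single forward pass through such a stack of layers can be sketched in a few lines of NumPy:

```python
# Forward pass through a toy "deep" network: each layer takes weighted sums of
# the previous layer's outputs and applies a nonlinearity. Weights are random
# here; in a real network they would be learned via back-propagation.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    w = rng.normal(size=(x.shape[0], n_out))  # connection weights into this layer
    return np.maximum(0, w.T @ x)             # weighted sum + ReLU activation

pixels = rng.random(64)    # input layer: e.g. pixel intensities
h1 = layer(pixels, 32)     # lower layer: simple features (edges, colours)
h2 = layer(h1, 16)         # higher layer: more abstract features (fur, eyes)
score = layer(h2, 1)       # topmost layer: overall output (e.g. a "dog" score)
print(score)
```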

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
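As a toy stand-in for that kind of keyword-based rationale (this is not the Washington team’s actual method; the messages, labels, and linear model below are invented for illustration), one could surface the words a simple text classifier weights most heavily:

```python
# Toy explanation sketch: train a linear text classifier, then "explain" a
# prediction by listing the words in the message with the largest learned
# weights. All data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["wire the funds tonight", "meeting notes attached",
         "send money to this account", "lunch on friday?"]
labels = [1, 0, 1, 0]  # 1 = flagged, 0 = benign (toy labels)

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

def explain(message, top_k=3):
    """Return the words in the message that push its score up the most."""
    weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
    words = [w for w in message.lower().split() if w in weights]
    return sorted(set(words), key=lambda w: weights[w], reverse=True)[:top_k]

print(explain("please wire the funds to this account"))
```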

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Original article here.

 


standard

State Of Machine Learning And AI, 2017

2017-10-01 - By 

AI is receiving major R&D investment from tech giants including Google, Baidu, Facebook and Microsoft.

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute also published an article summarizing the findings titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. The current rate of AI investment is 3X the external investment growth since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly and financial services industries. The graphic below illustrates the trends the study team found during their analysis.
  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that only a small percentage (up to 9%) of AI investment came from Venture Capital (VC), Private Equity (PE), and other external funding. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.
  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors favor machine learning startups because code-based start-ups can scale up and add new features quickly; software-based machine learning startups are therefore preferred over their more cost-intensive, hardware-based robotics counterparts, which cannot scale as fast. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.
  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups have also gotten their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of the methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and key asset, usage, and labor category.
  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors leading in AI adoption today and those that intend to grow their investments the most in the next three years.
  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that combine senior management support for AI initiatives with investment in the infrastructure needed to scale and with clear business goals achieve profit margins 3 to 15 percentage points higher. Of the more than 3,000 business leaders interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.
  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up 90 seconds after searching for a movie. By improving search results, Netflix projects that they have avoided canceled subscriptions that would reduce its revenue by $1B annually.

 

Original article here.


standard

New Theory Cracks Open the Black Box of Deep Learning

2017-09-22 - By 

A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. “It’s extremely interesting,” Hinton wrote. “I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

The Bottleneck

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep?

“This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.”

Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”

Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors Fernando Pereira, now at Google, and William Bialek, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.
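
Stated a little more formally, the standard information bottleneck objective seeks a compressed representation T of X that keeps as much information about Y as possible; the notation below is the commonly cited form rather than a quotation from the 1999 paper:

```latex
% Information bottleneck: compress X into a representation T while preserving
% information about Y. The multiplier beta trades off compression (small I(X;T))
% against prediction (large I(T;Y)).
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```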

“I’ve been thinking along these lines in various contexts for 30 years,” Tishby said. “My only luck was that deep neural networks became so important.”

Eyeballs on Faces on People on Scenes

Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors. Tishby recognized their potential connection to the information bottleneck principle in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta.

The duo discovered that a deep-learning algorithm invented by Hinton called the “deep belief net” works, in a particular case, exactly like renormalization, a technique used in physics to zoom out on a physical system by coarse-graining over its details and calculating its overall state. When Schwab and Mehta applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. It was a stunning indication that, as the biophysicist Ilya Nemenman said at the time, “extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

The only problem is that, in general, the real world isn’t fractal. “The natural world is not ears on ears on ears on ears; it’s eyeballs on faces on people on scenes,” Cranmer said. “So I wouldn’t say [the renormalization procedure] is why deep learning on natural images is working so well.” But Tishby, who at the time was undergoing chemotherapy for pancreatic cancer, realized that both deep learning and the coarse-graining procedure could be encompassed by a broader idea. “Thinking about science and about the role of my old ideas was an important part of my healing and recovery,” he said.

In 2015, he and his student Noga Zaslavsky hypothesized that deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent. Tishby and Shwartz-Ziv’s new experiments with deep neural networks reveal how the bottleneck procedure actually plays out. In one case, the researchers used small networks that could be trained to label input data with a 1 or 0 (think “dog” or “no dog”) and gave their 282 neural connections random initial strengths. They then tracked what happened as the networks engaged in deep learning with 3,000 sample input data sets.

The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called “stochastic gradient descent”: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons. When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image — 1 or 0, “dog” or “no dog.” Any differences between this firing pattern and the correct pattern are “back-propagated” down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network layer better at producing the correct output signal. Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.
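
As a concrete (and deliberately tiny) illustration of that loop, the sketch below trains a two-layer network with per-example stochastic gradient descent and backpropagation on made-up binary labels; the random data, network size, and learning rate are placeholder assumptions, not the setup used in Tishby and Shwartz-Ziv’s experiments.

```python
# A minimal sketch (not the paper's actual experiments) of the training loop
# described above: a tiny feed-forward network labeling inputs 1 or 0, trained
# with per-example stochastic gradient descent and backpropagation.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 12))                  # 3,000 sample inputs, 12 features each
y = (X[:, :3].sum(axis=1) > 0).astype(float)     # stand-in binary label ("dog" / "no dog")

W1 = rng.normal(scale=0.1, size=(12, 8))         # input -> hidden connections
W2 = rng.normal(scale=0.1, size=(8, 1))          # hidden -> output connections
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(20):
    for i in rng.permutation(len(X)):            # stochastic: one example at a time
        x = X[i:i + 1]
        h = sigmoid(x @ W1)                      # firing activity sweeps up the layers
        p = sigmoid(h @ W2)                      # top-layer prediction
        err = p - y[i]                           # compare with the correct label
        grad_W2 = h.T @ err                      # gradient for hidden -> output weights
        grad_W1 = x.T @ ((err @ W2.T) * h * (1 - h))   # error back-propagated one layer
        W2 -= lr * grad_W2                       # strengthen or weaken each connection
        W1 -= lr * grad_W1

accuracy = ((sigmoid(sigmoid(X @ W1) @ W2) > 0.5).ravel() == y).mean()
print(f"training accuracy after 20 epochs: {accuracy:.2f}")
```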

In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.
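
A crude version of that measurement can be made by discretizing a unit’s activations into bins and estimating mutual information from the joint histogram, as sketched below for a single activation and a binary label; the bin count and synthetic data are assumptions, and the real experiments track whole layers (and I(T;X) as well) rather than one unit.

```python
# A rough sketch of how mutual information between a layer's activations T and the
# labels Y can be estimated by discretizing activations into bins and counting.
import numpy as np

def mutual_information(t, y, n_bins=30):
    """Estimate I(T; Y) in bits for 1-D activations t and small integer labels y."""
    t_binned = np.digitize(t, np.linspace(t.min(), t.max(), n_bins))
    joint = np.zeros((n_bins + 2, int(y.max()) + 1))
    for ti, yi in zip(t_binned, y.astype(int)):
        joint[ti, yi] += 1                        # joint histogram of (binned T, Y)
    joint /= joint.sum()
    pt = joint.sum(axis=1, keepdims=True)         # marginal over T bins
    py = joint.sum(axis=0, keepdims=True)         # marginal over labels
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pt @ py)[nz])).sum())

# Example: an "activation" that is a noisy copy of a binary label carries some,
# but not all, of the label's one bit of information.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=5000)
t = y + 0.8 * rng.normal(size=5000)
print(f"I(T;Y) is roughly {mutual_information(t, y):.2f} bits")
```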

Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it. Some experts have compared this phase to memorization.

Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features — those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a random walk. This randomization is effectively the same as compressing the system’s representation of the input data. As an example, some photos of dogs might have houses in the background, while others don’t. As a network cycles through these training photos, it might “forget” the correlation between houses and dogs in some photos as other photos counteract it. It’s this forgetting of specifics, Tishby and Shwartz-Ziv argue, that enables the system to form general concepts. Indeed, their experiments revealed that deep neural networks ramp up their generalization performance during the compression phase, becoming better at labeling test data. (A deep neural network trained to recognize dogs in photos might be tested on new photos that may or may not include dogs, for instance.)

It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression. Some AI experts see Tishby’s idea as one of many important theoretical insights about deep learning to have emerged recently. Andrew Saxe, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don’t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.

Tishby argues that the network models analyzed by Saxe and his colleagues differ from standard deep neural network architectures, but that nonetheless, the information bottleneck theoretical bound defines these networks’ generalization performance better than other methods. Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv’s most recent experiments, not included in their preliminary paper, in which they train much larger, 330,000-connection-deep neural networks to recognize handwritten digits in the 60,000-image Modified National Institute of Standards and Technology database, a well-known benchmark for gauging the performance of deep-learning algorithms. The scientists saw the same convergence of the networks to the information bottleneck theoretical bound; they also observed the two distinct phases of deep learning, separated by an even sharper transition than in the smaller networks. “I’m completely convinced now that this is a general phenomenon,” Tishby said.

Humans and Machines

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility. Still, as their thinking machines achieve ever greater feats — even stoking fears that AI could someday pose an existential threat — many researchers hope these explorations will uncover general insights about learning and intelligence.

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. Lake and his colleagues’ models suggest the brain may deconstruct the new letter into a series of strokes — previously existing mental constructs — allowing the conception of the letter to be tacked onto an edifice of prior knowledge. “Rather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features” as in standard machine-learning algorithms, Lake explained, “instead I aim to build a simple causal model of the letter,” a shorter path to generalization.

Such brainy ideas might hold lessons for the AI community, furthering the back-and-forth between the two fields. Tishby believes his information bottleneck theory will ultimately prove useful in both disciplines, even if it takes a more general form in human learning than in AI. One immediate insight that can be gleaned from the theory is a better understanding of which kinds of problems can be solved by real and artificial neural networks. “It gives a complete characterization of the problems that can be learned,” Tishby said. These are “problems where I can wipe out noise in the input without hurting my ability to classify. This is natural vision problems, speech recognition. These are also precisely the problems our brain can cope with.”

Meanwhile, both real and artificial neural networks stumble on problems in which every detail matters and minute differences can throw off the whole result. Most people can’t quickly multiply two large numbers in their heads, for instance. “We have a long class of problems like this, logical problems that are very sensitive to changes in one variable,” Tishby said. “Classifiability, discrete problems, cryptographic problems. I don’t think deep learning will ever help me break cryptographic codes.”

Generalizing — traversing the information bottleneck, perhaps — means leaving some details behind. This isn’t so good for doing algebra on the fly, but that’s not a brain’s main business. We’re looking for familiar faces in the crowd, order in chaos, salient signals in a noisy world.

Original article here.


standard

AI will be Bigger than the Internet

2017-09-19 - By 

AI will be the next general purpose technology (GPT), according to experts. Beyond the disruption of business and data – things that many of us don’t have a need to care about – AI is going to change the way most people live, as well.

As a GPT, AI is predicted to integrate within our entire society in the next few years, and become entirely mainstream — like electricity and the internet.

The field of AI research has the potential to fundamentally change more technologies than, arguably, anything before it. While electricity brought illumination and the internet revolutionized communication, machine-learning is already disrupting finance, chemistry, diagnostics, analytics, and consumer electronics – to name a few. This is going to bring efficiency to a world with more data than we know what to do with.

It’s also going to disrupt your average person’s day — in a good way — like earlier GPTs did before it.

Anyone who lived in a world before Google and the internet may recall a time when people would actually have arguments about simple facts. There wasn’t an easy way, while riding in a car, to determine which band sang a song that was on the radio. If the DJ didn’t announce the name of the artist and track before a song played, you could be subject to anywhere from three to seven minutes of heated discussion over whether “The Four Tops” or “The Temptations” sang a particular song, for example.

Today we’re used to looking up things, and for many of us it’s almost second nature. We’re throwing cookbooks out, getting rid of encyclopedias, and libraries are mostly meeting places for fiction enthusiasts these days. This is what a general purpose technology does — it changes everything.

If your doctor told you they didn’t believe in the internet you’d get a new doctor. Imagine a surgeon who chose not to use electricity — would you let them operate on you?

The AI that truly changes the world beyond simply augmenting humans, like assisted steering does, is the one that starts removing other technology from our lives, like the internet did. With the web we’ve shrunken millions of books and videos down to the size of a single iPhone, at least as far as consumers are concerned.

AI is being layered into our everyday lives, as a general purpose technology, like electricity and the internet. And once it reaches its early potential we’ll be getting back seconds of time at first, then minutes, and eventually we’ll have devices smart enough to no longer need us to direct them at every single step, giving us back all the time we lost when we started splitting our reality between people and computers.

Siri and Cortana won’t need to be told what to do all the time, for example, once AI learns to start paying attention to the world outside of the smart phone.

Now, if only I could convince the teenager in my house to do the same …

Original article here.


standard

AI detectives are cracking open the black box of deep learning (video)

2017-08-30 - By 

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”

This video provides a high-level overview of the problem:

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Opening up the black box

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counter-factual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
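
The heart of that procedure can be conveyed with a much-simplified, single-deletion variant: drop one word at a time, re-run the black box, and rank words by how much the score moves. This is not the actual LIME package, which fits a local linear model over thousands of random perturbations, and the keyword-counting “classifier” below is a stand-in invented purely for the sketch.

```python
# A hand-rolled, much-simplified variant of the perturbation idea behind LIME:
# delete one word at a time, re-run the black box, and rank words by how much
# the "positive" score moves. The classifier here is a toy stand-in.
def black_box_sentiment(text):
    """Pretend black-box classifier: probability that a review is positive."""
    score = 0.5
    score += 0.3 * text.lower().count("brilliant")
    score -= 0.4 * text.lower().count("horrible")
    return max(0.0, min(1.0, score))

def word_importance(review, classifier):
    words = review.split()
    base = classifier(review)
    importance = {}
    for i, w in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])    # drop a single word
        importance[(i, w)] = base - classifier(perturbed)  # how much the score falls
    return sorted(importance.items(), key=lambda kv: -abs(kv[1]))

review = "Daniel Day Lewis is brilliant but the plot is horrible"
for (i, word), delta in word_importance(review, black_box_sentiment)[:3]:
    print(f"{word!r}: {delta:+.2f}")
```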

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, another computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
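
A compact sketch of that blank-reference recipe, often called integrated gradients, is shown below for a toy differentiable model: step from a zeroed-out baseline toward the real input, average the model’s gradients along the path, and scale by the input difference. The one-layer logistic “model”, its hand-written gradient, and the 50-step path are assumptions made for the illustration, not Sundararajan’s production code.

```python
# A compact sketch of the blank-reference probe (integrated gradients): step from a
# zeroed-out baseline toward the real input, average gradients along the path, and
# attribute the prediction to input features.
import numpy as np

w = np.array([2.0, -1.0, 0.0, 0.5])               # toy model weights

def model(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))          # a one-layer "network"

def model_grad(x):
    p = model(x)
    return p * (1.0 - p) * w                       # d model / d x for this model

def integrated_gradients(x, baseline=None, steps=50):
    if baseline is None:
        baseline = np.zeros_like(x)                # the blank reference
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([model_grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)     # per-feature attributions

x = np.array([1.0, 1.0, 1.0, 1.0])
attributions = integrated_gradients(x)
print("attributions:", np.round(attributions, 3))
print("sum:", round(attributions.sum(), 3), "vs model(x) - model(baseline):",
      round(model(x) - model(np.zeros_like(x)), 3))
```

A handy sanity check, which the sketch prints, is that the attributions approximately sum to the difference between the model’s output at the input and at the blank reference.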

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.

Complete original article here.

 


standard

World’s top AI Companies Plead for Ban on Killer Robots

2017-08-21 - By 

A revolution in warfare where killer robots, or autonomous weapons systems, are common in battlefields is about to start.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.

The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.

“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for an UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.

“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

The letter:

An Open Letter to the United Nations Convention on Certain Conventional Weapons 
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

FULL LIST OF SIGNATORIES (by country):

  • Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
  • Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
  • Charles Gretton, founder of Hivery, Australia.
  • Brad Lorge, founder & CEO of Premonition.io, Australia.
  • Brenton O’Brien, founder & CEO of Microbric, Australia.
  • Samir Sinha, founder & CEO of Robonomics AI, Australia.
  • Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
  • Peter Turner, founder & MD of Tribotix, Australia.
  • Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
  • Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
  • James Chow, founder & CEO of UBTECH Robotics, China.
  • Robert Li, founder & CEO of Sankobot, China.
  • Marek Rosa, founder & CEO of GoodAI, Czech Republic.
  • Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
  • Markus Järve, founder & CEO of Krakul, Estonia.
  • Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
  • Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
  • Raul Bravo, founder & CEO of DIBOTICS, France.
  • Raphael Cherrier, founder & CEO of Qucit, France.
  • Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
  • Charles Ollion, founder & Head of Research at Heuritech, France.
  • Anis Sahbani, founder & CEO of Enova Robotics, France.
  • Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
  • Marcus Frei, founder & CEO of NEXT.robotics, Germany
  • Kirstinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
  • Fahad Azad, founder of Robosoft Systems, India.
  • Debashis Das, Ashish Tupate, Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
  • Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
  • Pranay Kishore, founder & CEO of Phi Robotics Research, India.
  • Shahid Memon, founder & CTO of Vanora Robots, India.
  • Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
  • Achu Wilson, founder & CTO of Sastra Robotics, India.
  • Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
  • Parsa Ghaffari, founder & CEO of Aylien, Ireland.
  • Alan Holland, founder & CEO of Keelvar Systems, Ireland.
  • Alessandro Prest, founder & CTO of LogoGrab, Ireland.
  • Alessio Bonfietti, founder & CEO of MindIT, Italy.
  • Angelo Sudano, founder & CTO of ICan Robotics, Italy.
  • Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
  • Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
  • Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
  • Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
  • Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
  • Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
  • Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
  • Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
  • Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
  • Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
  • Jasper Horrell, founder of DeepData, South Africa.
  • Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
  • José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
  • Victor Martin, founder & CEO of Macco Robotics, Spain.
  • Timothy Llewellynn, founder & CEO of nViso, Switzerland.
  • Francesco Mondada, founder of K-Team, Switzerland.
  • Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
  • Satish Ramachandran, founder of AROBOT, United Arab Emirates.
  • Silas Adekunle, founder & CEO of Reach Robotics, UK.
  • Steve Allpress, founder & CTO of FiveAI, UK.
  • Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
  • Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
  • Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
  • Daniel Hulme, founder & CEO of Satalia, UK.
  • Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
  • Geoff Pegman, founder & MD of R U Robots, UK.
  • Mustafa Suleyman, founder & Head of Applied AI, DeepMind, UK.
  • Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
  • Antoine Biondeau, founder & CEO of Sentient Technologies, USA.
  • Brian Gerkey, founder & CEO of Open Source Robotics, USA.
  • Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
  • Henry Hu, founder & CEO of Cafe X Technologies, USA.
  • Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
  • Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
  • Brian Mingus, founder & CTO of Latently, USA.
  • Mohammad Musa, founder & CEO at Deepen AI, USA.
  • Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motors, USA.
  • Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
  • Erik Nieves, founder & CEO of PlusOne Robotics, USA.
  • Steve Omohundro, founder & President of Possibility Research, USA.
  • Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
  • Dan Reuter, founder & CEO of Electric Movement, USA.
  • Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
  • Dan Rubins, founder & CEO of Legal Robot, USA.
  • Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
  • Andrew Schroeder, founder of WeRobotics, USA.
  • Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
  • Martin Spencer, founder & CEO of GeckoSystems, USA.
  • Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
  • Michael Stuart, founder & CEO of Lucid Holdings, USA.
  • Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.

Original article here.


standard

Gartner’s Hype Cycle: AI for Marketing

2017-07-24 - By 

Gartner’s 2017 Hype Cycle for Marketing and Advertising is out (subscription required) and, predictably, AI for Marketing has appeared as a new dot making a rapid ascent toward the Peak of Inflated Expectations. I say “rapid” but some may be surprised to see us projecting that it will take more than 10 years for AI in Marketing to reach the Plateau of Productivity. Indeed, the timeframe drew some skepticism and we deliberated on this extensively, as have many organizations and communities.

AI for Marketing on the 2017 Hype Cycle for Marketing and Advertising

First, let’s be clear about one thing: a long journey to the plateau is not a recommendation to ignore a transformational technology. However, it does raise questions of just what to expect in the nearer term.

Skeptics of a longer timeframe rightly point out the velocity with which digital leaders from Google to Amazon to Baidu and Alibaba are embracing these technologies today, and the impact they’re likely to have on marketing and advertising once they’ve cracked the code on predicting buying behavior and customer satisfaction and acting accordingly.

There’s no point in debating the seriousness of the leading digital companies when it comes to AI. The impact that AI will have on marketing is perhaps more debatable – some breakthrough benefits are already being realized, but – to use some AI jargon here – many problems at the heart of marketing exhibit high enough dimensionality to suggest they’re AI-complete. In other words, human behavior is influenced by a large number of variables, which makes it hard to predict unless you’re human. On the other hand, we’ve seen dramatic lifts in conversion rates from AI-enhanced campaigns, and the global scale of markets means that even modest improvements in matching people with products could have major effects. Net-net, we do believe that AI will have a transformational effect on marketing and that some of these effects will be felt in fewer than ten years – in fact, they’re being felt already.

Still, in the words of Paul Saffo, “Never mistake a clear view for a short distance.” The magnitude of a technology’s impact is, if anything, a sign it will take longer than expected to reach some sort of equilibrium. Just look at the Internet. I still vividly recall the collective expectation that many of us held in 1999 that business productivity was just around the corner. The ensuing descent into the Trough of Disillusionment didn’t diminish the Internet’s ultimate impact – it just delayed it. But the delay was significant enough to give a few companies that kept the faith, like Google and Amazon, an insurmountable advantage when the Internet at last plateaued, about 10 years later.

Proponents of faster impact point out that AI has already been through a Trough of Disillusionment maybe ten times as long as the Internet – the “AI Winter” that you can trace to the 1980s. By this reckoning, productivity is long overdue. This may be true for a number of domains – such as natural language processing and image recognition – but it’s hardly the case for the kinds of applications we’re considering in AI for Marketing. Before we could start on those we needed massive data collection on the input side, a cloud-based big data machine learning infrastructure, and real-time operations on the output side to accelerate the learning process to the point where we could start to frame the optimization problem in AI. Some of the algorithms may be quite old, but their real-time marketing context is certainly new.

More importantly, consider the implications of replacing the way marketing works today with true lights-out AI-driven operations. Even when machines do outperform their human counterparts in making the kinds of judgments marketers pride themselves on, the organizational and cultural resistance they will face from the enterprise is profound, with important exceptions: disruptive start-ups and the digital giants who are developing these technologies and currently dominate digital media.

And enterprises aren’t the only source of resistance. The data being collected in what’s being billed as “people-based marketing” – the kind that AI will need to predict and influence behavior – is the subject of privacy concerns that stem from the “people’s” notable lack of an AI ally in the data collection business. See more comments here.

Then consider this: In 2016, P&G spent over $4B on media. Despite their acknowledgment of the growing importance of the Internet to their marketing (20 years in), they still spend orders of magnitude more on TV (see Ad Age, subscription required). As we know, Marc Pritchard, P&G’s global head of brands, doesn’t care much for the Internet’s way of doing business and has demanded fundamental changes in what he calls its “corrupt and non-transparent media supply chain.”

Well, if Marc and his colleagues don’t like the Internet’s media supply chain, wait until they get a load of the emerging AI marketing supply chain. Here’s a market where the same small group of gatekeepers own the technology, the data, the media, the infrastructure – even some key physical distribution channels – and their business models are generally based on extracting payment from suppliers, not consumers who enjoy their services for “free.” The business impulses of these companies are clear: just ask Alexa. What they haven’t perfected yet is that shopping concierge that gets you exactly what you want, but they’re working on it. If their AI can solve that, then two of P&G’s most valuable assets – its legacy media-based brand power and its retail distribution network – will be neutralized. Does this mean the end of consumer brands? Not necessarily, but our future AI proxies may help us cultivate different ideas about brand loyalty.

This brings us to the final argument against putting AI for Marketing too far out on the hype cycle: it will encourage complacency in companies that need to act. By the time established brands recognize what’s happened, it will be too late.

Business leaders have told me they use Gartner’s Hype Cycles in two ways. One is to help build urgency behind initiatives that are forecast to have a large, near-term impact, especially ones tarnished by disillusionment. The second is to subdue people who insist on drawing attention to seductive technologies on the distant horizon. Neither use is appropriate for AI for Marketing. In this case, the long horizon is neither a cause for contentment nor is a reason to go shopping.

First, brands need a plan. And the plan has to anticipate major disruptions, not just in marketing, but in the entire consumer-driven, AI-mediated supply chain in which brands – or their AI agents – will find themselves negotiating with a lot of very smart algorithms. I feel confident in predicting that this will take a long time. But that doesn’t mean it’s time to ignore AI. On the contrary, it’s time to put learning about and experiencing AI at the top of the strategic priority list, and to consider what role your organization will play when these technologies are woven into our markets and brand experiences.

Original article here.


standard

The Internet of Things Is the Next Digital Evolution — What Will It Mean? (video)

2017-06-28 - By 

As digital technology infuses everyday life, it will change human behavior—raising new challenges about equality and fairness.

In a single generation, this has become the new normal: Nearly all adult Americans use the internet, with three-fourths of them having broadband access in their homes. And the internet travels with them in their pockets—95 percent have a cellphone, 81 percent have a smartphone. This ability to constantly connect has changed how people interact, especially in their social networks—more than two-thirds of adults are on Facebook or Twitter or another social media platform.

Digital innovations have made it easier for people to find more information than ever before, and made it easier to create and share material with others. From smartphone-delivered directions to voice-driven queries to on-demand news, people’s lives have been transformed by these technologies. Yet today’s inventions and innovations mark only the start, and tomorrow’s digital disruption, which is already underway, will probably dwarf them in impact.

The next digital evolution is the rise of the internet of things—sometimes now called the “internet on things.” This refers to the growing phenomenon of building connectivity into vehicles, wearable devices, appliances and other household items such as thermostats, as well as goods moving through business supply chains. It also covers the rapid spread of data-emitting or tracking sensors in the physical environment that give readouts on everything from crop conditions to pollution levels to where there are open parking spaces to babies’ breathing rates in their cribs.

The Pew Research Center and Elon University in North Carolina invited hundreds of technology experts in 2014 to predict the future of the internet by the year 2025, and the overriding theme of their answers addressed this reality. They predicted that the growth of the internet of things will soon make the internet like electricity—less visible, yet more deeply embedded in people’s lives, for good and for ill.

The internet of things will have literally life-changing impact on innovation and the application of knowledge in the coming years. Here are four major developments to anticipate.

The emergence of the ‘datacosm’

The spread of the internet of things will accelerate the digitization of data, spawning creation of record amounts of information. Data and connectivity will be ubiquitous in an environment sometimes called the “datacosm”— a term used to describe the importance of data, analytics, and algorithms in technology’s evolution. As previous information revolutions have taught us, once people—and things—get more connected, their very nature changes.

“When we are connected, power shifts. It changes who we are, what we might expect, how we might be manipulated, attacked, or enriched,” writes Joshua Cooper Ramo in his new book, The Seventh Sense. Networks of constant connection “destroy the nature of even the most solid-looking objects.” Connected things and connected people become more useful, more powerful, but also more hair-trigger and more destructive because their power is multiplied by a networking effect. The more connections they have, the more capacity they have for good and harmful purposes.

On the human level, the datacosm arising from the internet of things could function like a “fifth limb,” an extra brain lobe, and another layer of “skin” because it will be enveloping and omnipresent. People will have unparalleled self-awareness via their “lifestreams”: their genome, their current physical condition, their memories, and other trackable aspects of their well-being. Data ubiquity will allow reality to be augmented in helpful—and creepy—ways.

For instance, people will be able to look at others and, thanks to facial recognition and digital profiling, simultaneously browse their digital dossiers through an app that could display the data on “smart” contact lenses or a nearby wall surface. They will gaze at artifacts such as paintings or movies and be able to download material about how the art was created and the life story of the creator. They will take in landscapes and cityscapes and be able to learn quickly what transpired in these places long ago or what kinds of environmental problems threaten them. They will size up buildings and have an overlay of insight about what takes place inside them.

Part of the reason that data will be infused into so much is that the interfaces of connectivity and the ability to summon data will be radically enhanced. Human voices, haptic interfaces that can be manipulated by finger movements (think of the movie “Minority Report”), real-time language translators, data dashboards that give readouts on a user’s personally designed webpage, even, eventually, brain-initiated commands will make it possible for people to bring data into whatever surroundings they find themselves. Not only will this allow people to apply knowledge of all kinds to their immediate circumstances, but it will also advance analysts’ understanding of entire populations as their “data exhaust” is captured by their GPS-enabled devices and web clickstream activity.

Many experts in the Pew Research Center’s canvassings expect major benefits to emerge from this growth and spread of data, starting with the fact that knowledge will be ever-easier to apply to real-time decisions such as which custom-designed medicine a person should receive, or which commuting route to take to work. Beyond that, this data overlay and growing analytic power will allow swifter interventions when public health problems arise, weather emergencies threaten, environmental stressors mount, educational programs are introduced, and products are brought to the market.

This new reality will also cause major hardships. When information is superabundant, what is the best way to find the best knowledge and apply it to decisions? When so much personal data is captured, how can people retain even a sliver of privacy? What mechanisms can be created to overcome polarizing propaganda that can weaken societies? What are the right ways to avoid “fake news,” disinformation, and distracting sideshows in a world of info-glut?

Struggles over people’s “right relationship” to information will be one of the persistent realities of the 21st century.

Growing reliance on algorithms

The explosion of data has given prominence to algorithms as tools for finding meaning in data and using it to shape decisions, predict humans’ behavior, and anticipate their needs. Analysts such as Aneesh Aneesh of the University of Wisconsin, Milwaukee, foresee algorithms taking over public and private activities in a new era of “algocratic governance” that supplants the way current “bureaucratic hierarchies” make government decisions. Others, like Harvard University’s Shoshana Zuboff, describe the emergence of “surveillance capitalism” that gains profits from monetizing data captured through surveillance and organizes economic behavior in an “information civilization.”

The experts’ views compiled by the Pew Research Center and Elon University offer several broad predictions about the algorithmic age. They predicted that algorithms will continue to spread everywhere and agreed that the benefits of computer codes can lead to greater human insights into the world, less waste, and major safety advantages. A share of respondents said data-driven approaches to problem-solving will often improve on human approaches to addressing issues because the computer codes will be refined at much greater speeds. Many predicted that algorithms will be effective tools to make up for human shortcomings.

But respondents also expressed concerns about algorithms.

They worried that humanity and human judgment are lost when data and predictive modeling become paramount. These experts argued that algorithms are primarily created in pursuit of profits and efficiencies and that this can be a threat; that algorithms can manipulate people and outcomes; that a somewhat flawed yet inescapable “logic-driven society” could emerge; that code will supplant humans in decision-making and that, in the process, humans will lose skills and specialized, local intelligence in a world where decisions are based on more homogenized algorithms; and that respect for individuals could diminish.

Just as grave a concern is that biases exist in algorithmically organized systems that could worsen social divisions. Many in the expert sampling said that algorithms reflect the biases of programmers and that the data sets they use are often limited, deficient, or incorrect. This can deepen societal divides. Those who are disadvantaged could be even more so in an algorithm-organized future, especially if algorithms are shaped by corporate data collectors. That could limit people’s exposure to a wider range of ideas and eliminate serendipitous encounters with information.

A new relationship with machines and complementary intelligence

As data and algorithms permeate daily life, people will have to renegotiate the way they use and think about machines, which now are in a state of accelerating learning. Many experts see a new equilibrium emerging as people take advantage of artificial intelligence that can be consulted in an instant, context-aware gadgets that “read” a situation and assemble relevant information, robotic devices that serve their needs, smart assistants or bots (possibly in the form of holograms) that help people navigate the world or help represent them to others, and device-based enhancements to their bodies and brains. “Basically, it is the Metaverse from Snow Crash,” predicts futurist Stowe Boyd, referring to Neal Stephenson’s sci-fi vision of a world where people and their avatars seamlessly interact with other people, their avatars, and independent artificial intelligence agents developed by third parties, including corporations.

Even if it does not fully reach that state, there will be a great re-sorting of the roles people play in the world and the functions machines assume. Now that IBM’s Deep Blue has beaten the world’s best chess players, its Watson supercomputer has won at “Jeopardy,” and Google’s AI system has vanquished the world’s Go champion, there is strong incentive to bring these masterful machines into hospital operating rooms and have them help assess radiology readouts; to outsource them to stock trading and insurance risk analysts; to use them in self-driving cars and drones; to let them aid people’s capacity to move around smart homes and smart cities.

The creation and application of all this knowledge has vast implications for basic human activity—starting with cognition. The very act of thinking is already undergoing significant change as people learn how to tap into all this information and cope with processing it. That impact will expand in the future. The quality of “being” will change as people are able to be “with” each other via lifelike telepresence. People’s capacities are likely to expand as digital devices, prostheses, and brain-enhancing chips become available. Human behavior itself could change as an overlay of data gives people enhanced situational and self-awareness. The way people allocate their time and attention will be restructured as options proliferate. For instance, the manner in which they spend their leisure time is likely to be radically recast as people are able to amuse themselves in compelling new virtual worlds and enrich themselves with vivid new learning experiences.

Greater innovation in social norms, collective action, credentials, and laws

With so much upheaval ahead, people, groups, and organizations will be forced to adjust. At the level of social norms, it is easy to envision social environments in which people must constantly negotiate what information can be shared, what kinds of interruptions are tolerable, what balance of fact-checking and gossip is acceptable, and what personal multitasking is harmful. In other words, much of what constitutes civil behavior will be up for grabs.

At a more formal level, some primary aspects of collective action and power are already altered as social networks become a societal force, both as pathways of knowledge sharing and as mechanisms for mobilizing others to do something. There are new ways for people to collaborate and solve problems. Moreover, there are a growing number of group structures that address problems ranging from microniche matters (my neighbors and I respond to a local issue) to macroglobal wicked problems (multinational alliances tackle climate change and pandemics).

Shifts in labor markets in the knowledge economy, which are constantly pressing workers to acquire new skills, will probably refashion some of the features of higher education and prompt change in work-related training efforts. Fully 87 percent of current U.S. workers believe it will be important or essential for them to pursue new skills during their work lives. Not many believe the existing certification and licensing systems are up to that job. A notable number of experts in another Pew Research Center-Elon University canvassing are convinced that the training system will begin breaking into several parts: one that specializes in basic work preparation education to coach students in lifelong learning strategies; another that upgrades the capacity of workers inside their existing fields; and yet another that is more designed to handle the elaborate work of schooling those whose skills are obsolete.

At the most structured level, new laws and court battles are inevitable. They are likely to address questions such as: Who owns what information and can use it and profit from it? When something goes wrong with an information-processing system (say, a self-driving car propels itself off a bridge), who is responsible? Where is the right place to draw the line between data capture—that is, surveillance—and privacy? Can a certain level of privacy be maintained as an equal right for all, or is it no longer possible? What kinds of personal information are legitimate to consider in assessing someone’s employment, creditworthiness, or insurance status? Where should libel laws apply in an age when everyone can be a “publisher” or “broadcaster” via social media and when people’s reputations can rise and fall depending on the tone of a tweet? Can information transparency regimes be applied to those who amass data and create profiles from it? Who’s overseeing the algorithms that will be making so many decisions about what happens in society? (Several experts in the Pew Research Center canvassing called for new governmental regulations relating to the development and deployment of algorithms.) Which entities should define what is appropriate out-of-bounds speech for a community, a culture, a nation?

The information revolution in the digital age is magnitudes faster than those of previous ages. Much greater movement is occurring in technology innovation than in social innovation—and this potentially dangerous gap seems to be expanding. As we grapple with this, it would be useful to keep in mind the Enlightenment sensibility of Thomas Jefferson. He wrote in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

We are likely to have to depend on our machines to help us figure out how to avoid being crushed by this avalanche.

 

Original article here.


standard

Top 15 Python Libraries for Data Science in 2017

2017-06-18 - By 

As Python has gained a lot of traction in the data science industry in recent years, I wanted to outline some of its most useful libraries for data scientists and engineers, based on recent experience.

And, since all of the libraries are open source, we have added commit counts, contributor counts, and other metrics from GitHub, which can serve as proxy metrics for library popularity.

 

Core Libraries.

1. NumPy (Commits: 15980, Contributors: 522)

When tackling scientific tasks in Python, one inevitably turns to Python’s SciPy Stack, a collection of software specifically designed for scientific computing in Python (not to be confused with the SciPy library, which is part of this stack, or with the community around the stack). That is where we want to start. The stack is pretty vast, with more than a dozen libraries in it, so we will put the focus on the core packages (particularly the most essential ones).

The most fundamental package, around which the scientific computation stack is built, is NumPy (short for Numerical Python). It provides an abundance of useful features for operations on n-dimensional arrays and matrices in Python. The library provides vectorization of mathematical operations on the NumPy array type, which improves performance and speeds up execution.
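
As a minimal illustration of that vectorization (the array values below are made up for the example), element-wise arithmetic on NumPy arrays replaces explicit Python loops:

import numpy as np

# Element-wise arithmetic runs in compiled code rather than a Python-level loop.
prices = np.array([19.99, 4.50, 7.25, 12.00])
quantities = np.array([3, 10, 2, 5])

revenue = prices * quantities                                 # vectorized multiply
total = revenue.sum()                                         # fast reduction
discounted = np.where(revenue > 30, revenue * 0.9, revenue)   # conditional without a loop
print(total, discounted)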

 

2. SciPy (Commits: 17213, Contributors: 489)

SciPy is a library of software for engineering and science. Again, keep in mind the difference between the SciPy Stack and the SciPy library. SciPy contains modules for linear algebra, optimization, integration, and statistics. The main functionality of the SciPy library is built upon NumPy, so it makes substantial use of NumPy arrays. It provides efficient numerical routines, such as numerical integration and optimization, via its specific submodules. The functions in all submodules of SciPy are well documented — another coin in its pot.
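
A small sketch of two of those submodules in action (the integrand, the quadratic, and the starting point are arbitrary examples chosen for illustration):

import numpy as np
from scipy import integrate, optimize

# Numerical integration: area under sin(x) from 0 to pi (the exact answer is 2).
area, abs_error = integrate.quad(np.sin, 0, np.pi)

# Optimization: minimize a simple quadratic starting from x = 0.
result = optimize.minimize(lambda x: (x[0] - 3.0) ** 2 + 1.0, x0=[0.0])

print(area, result.x)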

See the full article here.


standard

Robots are doing the work of $326,000-a-year Goldman Sachs employees

2017-06-14 - By 

Goldman Sachs is automating the work previously done by associates earning $326,000 a year, and Bloomberg reports that half of the tasks needed to prepare for an IPO can be done by algorithms as well.

According to a recent study by the bank, half of the 127 tasks done to prepare for initial public offerings of stock can be automated.

Why it matters: Investment banking is another job on a growing list of high-paid work that is being automated, along with legal services and medicine.

Why it may not (technically) kill jobs: Goldman swears that this technology won’t reduce headcount, but will instead free bankers to focus on tasks like shaping marketing strategy and spending time with clients. But it also says that it has eliminated thousands of hours of human work, which will reduce the need to increase its headcount going forward.

Original article here.

 


standard

McKinsey Analysis rates machine learning

2017-06-13 - By 

McKinsey outline the range of opportunities for applying artificial intelligence in their article. They say:

“For companies, successful adoption of these evolving technologies will significantly enhance performance. Some of the gains will come from labor substitution, but automation also has the potential to enhance productivity, raise throughput, improve predictions, outcomes, accuracy, and optimization, as well as expand the discovery of new solutions in massively complex areas such as synthetic biology and material science.”

At Smart Insights, we’ve been looking beyond the hype at specific practical applications for AI in marketing. Our recommendation is that the best marketing applications are in machine learning, where predictive analytics is applied to learn from historic data and deliver more relevant personalization, both on site, in email automation, and off site in programmatic advertising. This high potential is also clear from the chart from McKinsey (see top of page).

You can see that ‘personalize advertising’ is rated highly; this relates to the different forms of personalized messaging mentioned above. ‘Optimize merchandising strategy’ is a related retail application.
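
To make the personalization idea concrete, here is a minimal sketch of the kind of propensity model involved, assuming scikit-learn and a small, entirely hypothetical table of historic behavioural data; it illustrates the approach rather than a production recipe:

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historic data: column names and values are illustrative only.
df = pd.DataFrame({
    "pages_viewed":     [3, 12, 1, 8, 15, 2, 9, 4],
    "emails_opened":    [0, 5, 1, 2, 7, 0, 3, 1],
    "days_since_visit": [30, 2, 60, 5, 1, 45, 3, 20],
    "converted":        [0, 1, 0, 1, 1, 0, 1, 0],
})

X, y = df.drop(columns="converted"), df["converted"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Learn from historic outcomes, then score new visitors.
model = LogisticRegression().fit(X_train, y_train)

# The predicted propensities decide which offer, email variant, or programmatic bid each visitor gets.
print(model.predict_proba(X_test)[:, 1])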

Original article here.

 


standard

When Will AI Exceed Human Performance?

2017-06-07 - By 

Advances in artificial intelligence (AI) will have massive social consequences. Self-driving technology might replace millions of driving jobs over the coming decade. In addition to possible unemployment, the transition will bring new challenges, such as rebuilding infrastructure, protecting vehicle cyber-security, and adapting laws and regulations [5]. New challenges, both for AI developers and policy-makers, will also arise from applications in law enforcement, military technology, and marketing [6]. To prepare for these challenges, accurate forecasting of transformative AI would be invaluable.

Several sources provide objective evidence about future AI advances: trends in computing hardware [7], task performance [8], and the automation of labor [9]. The predictions of AI experts provide crucial additional information. We survey a larger and more representative sample of AI experts than any study to date [10, 11]. Our questions cover the timing of AI advances (including both practical applications of AI and the automation of various human jobs), as well as the social and ethical impacts of AI.

Time Until Machines Outperform Humans

AI would have profound social consequences if all tasks were more cost-effectively accomplished by machines. Our survey used the following definition: “High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers. Each individual respondent estimated the probability of HLMI arriving in future years. Taking the mean over each individual, the aggregate forecast gave a 50% chance of HLMI occurring within 45 years and a 10% chance of it occurring within 9 years. Figure 1 displays the probabilistic predictions for a random subset of individuals, as well as the mean predictions. There is large inter-subject variation: Figure 3 shows that Asian respondents expect HLMI in 30 years, whereas North Americans expect it in 74 years.
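
The aggregation idea can be illustrated with a toy sketch: represent each respondent by a cumulative forecast curve, average the curves, and read the 10% and 50% years off the average. The respondent curves below are randomly generated stand-ins, not the survey’s actual data or fitting procedure:

import numpy as np

years = np.arange(2017, 2117)
rng = np.random.default_rng(0)

# Stand-in respondents: each gets a logistic CDF with a random midpoint and slope.
midpoints = rng.uniform(2030, 2100, size=200)
scales = rng.uniform(5, 25, size=200)
cdfs = 1.0 / (1.0 + np.exp(-(years[None, :] - midpoints[:, None]) / scales[:, None]))

aggregate = cdfs.mean(axis=0)                     # mean forecast across respondents
year_10 = years[np.searchsorted(aggregate, 0.10)]
year_50 = years[np.searchsorted(aggregate, 0.50)]
print(year_10, year_50)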

Original PDF article here: https://arxiv.org/pdf/1705.08807.pdf


standard

New Leader, Trends, and Surprises in Analytics, Data Science, Machine Learning Software Poll

2017-05-24 - By 

Python caught up with R and (barely) overtook it; Deep Learning usage surges to 32%; RapidMiner remains top general Data Science platform; Five languages of Data Science.

The 18th annual KDnuggets Software Poll again drew huge participation from the analytics and data science community and vendors, attracting about 2,900 voters, almost exactly the same as last year. Here is the initial analysis, with more detailed results to be posted later.

Python, whose share has been growing faster than R for the last several years, has finally caught up with R, and (barely) overtook it, with 52.6% share vs 52.1% for R.

The biggest surprise is probably the phenomenal share of Deep Learning tools, now used by 32% of all respondents, while only 18% used DL in 2016 and 9% in 2015. Google TensorFlow rapidly became the leading Deep Learning platform with a 20.2% share, up from only 6.8% in the 2016 poll, and entered the top 10 tools.

While in 2014 I wrote about the four main languages for Analytics, Data Mining, and Data Science being R, Python, SQL, and SAS, the five main languages of Data Science in 2017 appear to be Python, R, SQL, Spark, and TensorFlow.

RapidMiner remains the most popular general platform for data mining/data science, with about 33% share, almost exactly the same as in 2016.

We note that many vendors have encouraged their users to vote, but all vendors had equal chances, so this does not violate KDnuggets guidelines. We have not seen any bot voting or direct links to vote for only one tool this year.

Spark grew to about 23% and kept its place in top 10 ahead of Hadoop.

Besides TensorFlow, another new tool in the top tier is Anaconda, with 22% share.

Top Analytics/Data Science Tools

Fig 1: KDnuggets Analytics/Data Science 2017 Software Poll: top tools in 2017, and their share in the 2015-6 polls

See original full article here.


standard

Top 10 Hot Artificial Intelligence (AI) Technologies

2017-02-11 - By 

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

 

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
  2. Speech Recognition: Transcribe and transform human speech into format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, H2O.ai, Microsoft, SAS, Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors: Alluviate, Cray, Google, IBM, Intel, Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems and used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors: Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd, Synapsify.

 

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:

  • There is no defined business case – 42%
  • Not clear what AI can be used for – 39%
  • Don’t have the required skills – 33%
  • Need first to invest in modernizing the data management platform – 29%
  • Don’t have the budget – 23%
  • Not certain what is needed for implementing an AI system – 19%
  • AI systems are not proven – 14%
  • Do not have the right processes or governance – 13%
  • AI is a lot of hype with little substance – 11%
  • Don’t own or have access to the required data – 8%
  • Not sure what AI means – 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.

Original article here.


standard

10 new AWS cloud services you never expected

2017-01-27 - By 

From data scooping to facial recognition, Amazon’s latest additions give devs new, wide-ranging powers in the cloud

In the beginning, life in the cloud was simple. Type in your credit card number and—voilà—you had root on a machine you didn’t have to unpack, plug in, or bolt into a rack.

That has changed drastically. The cloud has grown so complex and multifunctional that it’s hard to jam all the activity into one word, even a word as protean and unstructured as “cloud.” There are still root logins on machines to rent, but there are also services for slicing, dicing, and storing your data. Programmers don’t need to write and install as much as subscribe and configure.

Here, Amazon has led the way. That’s not to say there isn’t competition. Microsoft, Google, IBM, Rackspace, and Joyent are all churning out brilliant solutions and clever software packages for the cloud, but no company has done more to create feature-rich bundles of services for the cloud than Amazon. Now Amazon Web Services is zooming ahead with a collection of new products that blow apart the idea of the cloud as a blank slate. With the latest round of tools for AWS, the cloud is that much closer to becoming a concierge waiting for you to wave your hand and give it simple instructions.

Here are 10 new services that show how Amazon is redefining what computing in the cloud can be.

Glue

Anyone who has done much data science knows it’s often more challenging to collect data than it is to perform analysis. Gathering data and putting it into a standard data format is often more than 90 percent of the job.

Glue is a new collection of Python scripts that automatically crawls your data sources to collect data, apply any necessary transforms, and stick it in Amazon’s cloud. It reaches into your data sources, snagging data using all the standard acronyms, like JSON, CSV, and JDBC. Once it grabs the data, it can analyze the schema and make suggestions.

The Python layer is interesting because you can use it without writing or understanding Python—although it certainly helps if you want to customize what’s going on. Glue will run these jobs as needed to keep all the data flowing. It won’t think for you, but it will juggle many of the details, leaving you to think about the big picture.
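
A rough sketch of how driving Glue from code might look with boto3 (the crawler, role, database, and bucket names are hypothetical, and the calls assume AWS credentials and a suitable IAM role already exist):

import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Point a crawler at a hypothetical S3 prefix; Glue infers the schema into its data catalog.
glue.create_crawler(
    Name="orders-crawler",
    Role="GlueServiceRole",                # hypothetical IAM role
    DatabaseName="analytics_catalog",      # hypothetical catalog database
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/orders/"}]},
)
glue.start_crawler(Name="orders-crawler")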

FPGA

Field Programmable Gate Arrays have long been a secret weapon of hardware designers. Anyone who needs a special chip can build one out of software. There’s no need to build custom masks or fret over fitting all the transistors into the smallest amount of silicon. An FPGA takes your software description of how the transistors should work and rewires itself to act like a real chip.

Amazon’s new AWS EC2 F1 brings the power of FPGAs to the cloud. If you have highly structured and repetitive computing to do, an EC2 F1 instance is for you. With EC2 F1, you can create a software description of a hypothetical chip and compile it down to a tiny number of gates that will compute the answer in the shortest amount of time. The only thing faster is etching the transistors in real silicon.

Who might need this? Bitcoin miners compute the same cryptographically secure hash function a bazillion times each day, which is why many bitcoin miners use FPGAs to speed up the search. If you have a similar compact, repetitive algorithm that can be written into silicon, an FPGA instance lets you rent machines to do it now. The biggest winners are those who need to run calculations that don’t map easily onto standard instruction sets—for example, when you’re dealing with bit-level functions and other nonstandard, nonarithmetic calculations. If you’re simply adding a column of numbers, the standard instances are better for you. But for some, EC2 with FPGAs might be a big win.

Blox

As Docker eats its way into the stack, Amazon is trying to make it easier for anyone to run Docker instances anywhere, anytime. Blox is designed to juggle the clusters of instances so that the optimum number are running—no more, no less.

Blox is event driven, so it’s a bit simpler to write the logic. You don’t need to constantly poll the machines to see what they’re running. They all report back, so the right number can run. Blox is also open source, which makes it easier to reuse Blox outside of the Amazon cloud, if you should need to do so.

X-Ray

Monitoring the efficiency and load of your instances used to be simply another job. If you wanted your cluster to work smoothly, you had to write the code to track everything. Many people brought in third parties with impressive suites of tools. Now Amazon’s X-Ray is offering to do much of the work for you. It’s competing with many third-party tools for watching your stack.

When a website gets a request for data, X-Ray traces it as it flows through your network of machines and services. Then X-Ray will aggregate the data from multiple instances, regions, and zones so that you can stop in one place to flag a recalcitrant server or a wedged database. You can watch your vast empire with only one page.
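
For Python services, a minimal sketch with the X-Ray SDK looks roughly like this (the service and subsegment names are hypothetical, and the code assumes the X-Ray daemon is available to receive the trace data):

from aws_xray_sdk.core import xray_recorder

xray_recorder.configure(service="checkout-service")    # hypothetical service name

segment = xray_recorder.begin_segment("handle_request")
try:
    xray_recorder.begin_subsegment("query_inventory_db")
    # ... call the database here; timing and errors are recorded on the subsegment ...
    xray_recorder.end_subsegment()
finally:
    xray_recorder.end_segment()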

Rekognition

Rekognition is a new AWS tool aimed at image work. If you want your app to do more than store images, Rekognition will chew through images searching for objects and faces using some of the best-known and tested machine vision and neural-network algorithms. There’s no need to spend years learning the science; you simply point the algorithm at an image stored in Amazon’s cloud, and voilà, you get a list of objects and a confidence score that indicates how likely each answer is to be correct. You pay per image.

The algorithms are heavily tuned for facial recognition. The algorithms will flag faces, then compare them to each other and to reference images to help you identify them. Your application can store the meta information about the faces for later processing. Once you put a name to the metadata, your app will find people wherever they appear. Identification is only the beginning. Is someone smiling? Are their eyes closed? The service will deliver the answer, so you don’t need to get your fingers dirty with pixels. If you want to use impressive machine vision, Amazon will charge you not by the click but by the glance at each image.
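
Calling it from Python is a short exercise with boto3; the bucket and object key below are hypothetical, and the sketch assumes the image is already stored in S3:

import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
image = {"S3Object": {"Bucket": "example-bucket", "Name": "photos/storefront.jpg"}}

# Object and scene labels, each with a confidence score.
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=80)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Face analysis: attributes such as smiles and open or closed eyes, per detected face.
faces = rekognition.detect_faces(Image=image, Attributes=["ALL"])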

Athena

Working with Amazon’s S3 has always been simple. If you want a data structure, you request it and S3 looks for the part you want. Amazon’s Athena now makes it much simpler. It will run the queries on S3, so you don’t need to write the looping code yourself. Yes, we’ve become too lazy to write loops.

Athena uses SQL syntax, which should make database admins happy. Amazon will charge you for every byte that Athena churns through while looking for your answer. But don’t get too worried about the meter running out of control, because the price is only $5 per terabyte. That’s about half a billionth of a cent per byte. It makes the penny candy stores look expensive.
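
A minimal sketch of that workflow with boto3 (the database, table, and bucket names are hypothetical; the query runs asynchronously, so the code polls until it finishes):

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",   # hypothetical table
    QueryExecutionContext={"Database": "analytics"},                       # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

# Athena runs the SQL directly over the files in S3; wait for the run to finish.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state not in ("QUEUED", "RUNNING"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]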

Lambda@Edge

The original idea of a content delivery network was to speed up the delivery of simple files like JPG images and CSS files by pushing out copies to a vast array of content servers parked near the edges of the Internet. Amazon is taking this a step further by letting us push Node.js code out to these edges, where it will run and respond. Your code won’t sit on one central server waiting for requests to poke along the backbone from people around the world. It will clone itself, so it can respond in microseconds without being impeded by all that network latency.

Amazon will bill your code only when it’s running. You won’t need to set up separate instances or rent out full machines to keep the service up. It is currently in a closed test, and you must apply to get your code in their stack.

Snowball Edge

If you want some kind of physical control of your data, the cloud isn’t for you. The power and reassurance that comes from touching the hard drive, DVD-ROM, or thumb drive holding your data isn’t available to you in the cloud. Where is my data exactly? How can I get it? How can I make a backup copy? The cloud makes anyone who cares about these things break out in cold sweats.

The Snowball Edge is a box filled with data that can be delivered anywhere you want. It even has a shipping label that’s really an E-Ink display exactly like Amazon puts on a Kindle. When you want a copy of massive amounts of data that you’ve stored in Amazon’s cloud, Amazon will copy it to the box and ship the box to wherever you are. (The documentation doesn’t say whether Prime members get free shipping.)

Snowball Edge serves a practical purpose. Many developers have collected large blocks of data through cloud applications and downloading these blocks across the open internet is far too slow. If Amazon wants to attract large data-processing jobs, it needs to make it easier to get large volumes of data out of the system.

If you’ve accumulated an exabyte of data that you need somewhere else for processing, Amazon has a bigger version called Snowmobile that’s built into an 18-wheel truck complete with GPS tracking.

Oh, it’s worth noting that the boxes aren’t dumb storage boxes. They can run arbitrary Node.js code too so you can search, filter, or analyze … just in case.

Pinpoint

Once you’ve amassed a list of customers, members, or subscribers, there will be times when you want to push a message out to them. Perhaps you’ve updated your app or want to convey a special offer. You could blast an email to everyone on your list, but that’s a step above spam. A better solution is to target your message, and Amazon’s new Pinpoint tool offers the infrastructure to make that simpler.

You’ll need to integrate some code with your app. Once you’ve done that, Pinpoint helps you send out the messages when your users seem ready to receive them. Once you’re done with a so-called targeted campaign, Pinpoint will collect and report data about the level of engagement with your campaign, so you can tune your targeting efforts in the future.

Polly

Who gets the last word? Your app can, if you use Polly, the latest generation of speech synthesis. In goes text and out comes sound—sound waves that form words that our ears can hear, all the better to make audio interfaces for the internet of things.
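
A minimal sketch with boto3 (the voice, the message, and the output file are arbitrary choices for the example):

import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Text in, audio out.
response = polly.synthesize_speech(
    Text="Your package has shipped and should arrive on Thursday.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("notification.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())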

Original article here.


standard

IoT 2016 in review: The 8 Biggest IoT developments of the year

2017-01-16 - By 

As we go into 2017 our IoT Analytics team is again evaluating the main IoT developments of the past year in the global “Internet of Things” arena. This article highlights some general IoT 2016 observations as well as our top 8 news stories, with a preview for the new year of opportunities and challenges for global IoT businesses. (For your reference, here is our 2015 IoT year in review article.)

In 2016 the main theme for IoT was the shift from hype to reality. While in 2015, most people only heard about IoT in the media or consumed some marketing blogs, 2016 was different. Many consumers and enterprises went out and started their own IoT endeavors or bought their own IoT devices. Both consumer IoT and enterprise IoT enjoyed record uptake, but also saw some major setbacks.

A. General IoT 2016 observations

A1. Consumer IoT

Millions of consumers bought their first IoT Device in 2016. For many of them this was Amazon Echo (see below for more details).

Image 1: The Amazon Echo Dot was a consumer IoT 2016 success (left hand side) while other devices didn’t always convince (e.g., Nest thermostat – right hand side)

Unfortunately many consumers also realized that marketing promises and reality are often still disparate. Cases of disappointed users are increasing (for example, a smart thermostat user who discovered that his thermostat was disconnected for a day).

Some companies were dissolved in 2016 (like the Smart Home Hub Revolv in April – causing many angry customers), others went bankrupt (like the smart watch maker Pebble in December) or didn’t even come to life at all (such as the augmented reality helmet startup Skully that enjoyed a lot of publicity, but filed for bankruptcy in August without having sold a single product).

 

A2. Enterprise IoT

On the enterprise/industrial side of things, IoT 2016 will go down as the year many firms got real about their first IoT pilot projects.

A general wake-up call came in October when a massive cybersecurity attack that involved IoT devices (mainly CCTV cameras) shut down DNS provider Dyn and with it its customers’ websites (e.g., AirBnB, Netflix and Twitter). While this kind of attack didn’t directly affect most IoT companies, its implications scared many IT and IoT decision-makers. As a result, many IoT discussions have now shifted towards cybersecurity solutions.

 

B. Top 8 IoT 2016 Stories

For us at IoT Analytics, the IoT Security Attack on Dyn servers qualifies as the #1 story of the year. Here are our top takeaways from IoT 2016:

1.    Biggest overall story: IoT Security attack on Dyn servers

The Dyn DDoS attack was the first large-scale cybersecurity attack that involved IoT devices – Dyn estimates that 100,000 infected IoT devices were involved. As a first-of-a-kind, it sent shockwaves through corporate IT and IoT.

Chinese CCTV system manufacturer, Hangzhou Xiongmai Technology Company, was at the core of the attack.  Its cameras (among others) were infected with the so-called Mirai malware. This allowed the hackers to connect to the infected IoT devices and launch a flood of well-timed massive requests on Dyn servers – which led to the shutdown of their services.

2.    Biggest Consumer IoT Success: Amazon Echo

Launched in June 2015, the Amazon Echo Smart Home Voice Control was undoubtedly the consumer IoT success story of the year. Recent data provided by Amazon reveals that device sales exploded by 9x (year-on-year vs. last Christmas).

Amazon sold more than 1 million Echo devices in December 2016

Our app-based Smart Home models confirm this trend suggesting that Amazon sold more than 1 million Echo devices in December 2016 and close to 4 million devices throughout the whole of 2016.

With these gains, Amazon has suddenly become the #1 Smart Home Hub and is leading the paradigm shift towards a voice-controlled automated home. Google jumped on the same train in October by releasing Google Home; Microsoft Home Hub is expected to follow in 2017.

3.    Most overcrowded space: IoT Platforms

When we launched our coverage of IoT Platforms in early 2015, little did we know that the topic would soon become the hottest IoT area. Our count of platform providers in May 2016 showed 360 platforms. Our internal research is now well over 400. IoT Platforms is also well placed in the Gartner Hype Cycle 2016.

Companies have realized that the value of IoT lies in the data and that those that manage this data will be the ones capturing a large chunk of this value. Hence, everyone is building an IoT platform.

The frightening part is not necessarily the number but rather the fact that the sales pitches of the platform providers all sound like this: “We are the only true end-2-end platform which is device-agnostic and completely secure”.

4.    Largest M&A Deal: Qualcomm/NXP

While we can see a massive expansion of global IoT software/analytics and platform offerings, we are also witnessing a consolidation among larger IoT hardware providers – notably in the chip sector. In October 2016, US-based chipmaker Qualcomm announced it would buy the leader in connected car chips NXP for $39B, making it the biggest-ever deal in the semiconductor industry.

Other large hardware/semiconductor acquisitions and mergers during IoT 2016 include Softbank/ARM ($32B) and TDK/Invensense ($1.3B).

5.    Most discussed M&A Deal: Cisco/Jasper

In February, Cisco announced that it would buy IoT Platform provider Jasper Technologies for $1.4B. Journalists celebrated the acquisition as a logical next step for Cisco’s “Internet of Everything” story – combining Cisco’s enterprise routers with Jasper’s backend software for network operators and hopefully helping Cisco put an end to declining hardware sales.

6.    Largest startup funding: Sigfox

Sigfox already made it into our 2015 IoT news list with their $100M Series D round. Their momentum and the promise of a global Low Power Wide Area Network led to an even larger funding round in 2016. In November, the French-based company received a record $160M in a Series E that involved Intel Capital and Air Liquide among others.

Another notable startup funding during IoT 2016 involved the IoT Platform C3IoT. The Redwood City based company received $70M in their Series D funding.

7.    Investment story of the year: IoT Stocks

For the first time IoT stocks outperformed the Nasdaq significantly. The IoT & Industry 4.0 stock fund (Traded in Germany under ISIN: DE000LS9GAC8) is up 17.5% year-on-year, beating the Nasdaq which is up 9.6% in the same time frame. Cloud service providers Amazon and Microsoft are up 14% for the year, IoT Platform provider PTC is up 35%. Even communication hardware firm Sierra Wireless started rebounding in Q4/2016.

Some of the IoT 2016 outperformance is due to an increasing number of IoT acquisitions (e.g., TDK/Invensense). At the beginning of 2016 we asked if the underperformance of IoT stocks in 2015 was an opportunity in 2016. In hindsight, the answer to that question is “Yes”. Will the trend continue in 2017?

8.    Most important government initiative: EU Data Protection policy

In May, the European Union passed the General Data Protection Regulation (“GDPR”) which will come into effect on 25 May 2018. The new law has a wide range of implications for IoT technology vendors and users. Among other aspects:

  • Security breaches must be reported
  • Each IoT user must provide explicit consent that their data may be processed
  • Each user must be given the right to object to automated decision making
  • Data coming from IoT Devices used by children may not be processed

From a security and privacy policy point of view the law is a major step forward. IoT technology providers working in Europe and around the world now need to revisit their data governance, data privacy and data security practices and policies.

C. What to expect in 2017:

  • War for IoT platform leadership. The large IoT platform providers are gearing up for the war for IoT (platform) leadership. After years of organic development, several larger vendors started buying smaller platform providers in 2016, mainly to close existing technology gaps (e.g., GE-Bitstew, SAP-Plat.one, Microsoft-Solair)
  • War for IoT connectivity leadership. NB-IoT will finally be introduced in 2017. The new low-power standard that is heavily backed by major telco technology providers will go head-to-head with existing LPWAN technology such as Sigfox or LoRa.
  • AR/VR becoming mainstream. IoT Platform providers PTC (Vuforia) and Microsoft (Hololens) have already showcased a vast range of Augmented Reality / Virtual Reality use cases. We should expect the first real-life use cases emerging in 2017.
  • Even more reality and less hype. The attention is shifting from vendor/infrastructure topics, such as what the next generation of platforms or connectivity standards will look like, towards actual implementations and use cases. While there are still major developments, the general IoT audience will start taking some of these technology advancements for granted and focus on where the value lies. We continue to follow that story and will update our list of IoT projects.

Our IoT coverage in 2017: Subscribe to our newsletter for continued coverage and updates. In 2017, we will keep our focus on important IoT topics such as IoT Platforms, Security and Industry 4.0 with plenty of new reports due in Q1/2017. If you are interested in a comprehensive IoT coverage you may contact us for an enterprise subscription to our complete IoT research content.

Much success for 2017 from our IoT Analytics Team to yours!

 

Original article here.


standard

Worried AI taking your job? It’s already happening in Japan

2017-01-08 - By 

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers’ voices during complaints. The software typically takes the customer’s words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
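
At its simplest, the scoring idea behind such tools can be sketched with a toy word-list approach; this is only an illustration of the concept and is far cruder than the IBM system described:

# Toy positive/negative word lists; real systems use trained models, not hand-built lists.
POSITIVE = {"thank", "great", "helpful", "resolved", "happy"}
NEGATIVE = {"frustrated", "angry", "waiting", "broken", "cancel"}

def sentiment_score(transcribed_text: str) -> int:
    words = [w.strip(".,!?") for w in transcribed_text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# A negative score could flag a call for escalation to a human agent.
print(sentiment_score("I am frustrated, I have been waiting and the product is broken."))   # prints -3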

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

Artificial intelligence systems like IBM’s are poised to upend knowledge-based professions, like insurance and financial services, according to the Harvard Business Review, due to the fact that many jobs can be “composed of work that can be codified into standard steps and of decisions based on cleanly formatted data.” But whether that means augmenting workers’ ability to be productive, or replacing them entirely remains to be seen.

“Almost all jobs have major elements that—for the foreseeable future—won’t be possible for computers to handle,” HBR writes. “And yet, we have to admit that there are some knowledge-work jobs that will simply succumb to the rise of the robots.”

Original article here.


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

FEI-FEI LI IS a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others to access vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s the path from research to product the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data, acting more like a coach than a player.
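
As a toy illustration of that shift, compare a hand-coded rule with the supervised-learning workflow: instead of writing explicit if/else logic for, say, separating spam from legitimate mail, an engineer feeds labelled examples to a model and lets it infer the rule. The sketch below uses scikit-learn with a tiny invented dataset, purely to show the shape of the workflow; it is not drawn from any of the companies discussed here.

# 'Coaching' a model with labelled data instead of hand-coding rules.
# The four example messages are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["win a free prize now", "meeting moved to 3pm",
         "cheap pills online", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# No explicit rules about words like "free" or "cheap": the model
# learns which terms matter from the labelled examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["free pills", "see you at the meeting"]))  # likely [1 0]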

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.

 


standard

124 Hidden Amazon Alexa Easter Eggs Every Alexa User Should Know [Infographic]

2016-12-31 - By 

Amazon’s Alexa is one of the most popular virtual assistants and connected-home hubs out there.

Amazon Echo and Dot, the main Alexa-enabled devices, are so popular that they are almost impossible to get hold of online. Even Amazon’s spokesperson recommends that if you happen to see one for sale, you grab it right away.

There’s no doubt that the secret to these devices’ success lies in Amazon’s connected-home system, Alexa.

From serving as the central hub of your Mark Zuckerberg-style smart home to simply giving you the right answers when you need them, Alexa is one of the best connected-home assistants on the market.

To get the most out of your Alexa device, you can enable third-party skills from the Amazon Alexa store (like News Feed, for example), or you can use the built-in skills that come natively with the device.

But wait, there’s more.

In addition to the popular uses and familiar skills, there are a bunch of hidden invocations and not-so-well-known questions you can ask it and get great answers to.

We’ve collected 124 Amazon Alexa must-know voice commands that you might not know of.

Some will make you more productive.
Some will make you laugh.
Some will get you playing with your Echo all day long.
In any case, you’ve got a long day ahead of you.

So go ahead and dive into the 124 Alexa commands.

Original article here.


standard

Tech trends for 2017: more AI, machine intelligence, connected devices and collaboration

2016-12-30 - By 

The end of one year or the beginning of the next is always a time when we see many predictions and forecasts for the year ahead. We often publish a selection of these to show how tech-based innovation and economic development will be impacted by the major trends.

A number of trend reports and articles have been published, ranging from investment houses to research firms and even innovation agencies. In this article we present headlines and highlights of some of these trends, from Gartner, GP Bullhound, Nesta and Ovum.

Artificial intelligence will have the greatest impact

GP Bullhound released its 52-page research report, Technology Predictions 2017, which says artificial intelligence (AI) is poised to have the greatest impact on the global technology sector. It will see widespread consumer adoption, particularly as virtual personal assistants such as Apple Siri and Amazon Alexa grow in popularity, as well as enterprise adoption through the automation of repetitive data-driven tasks.

Online streaming and e-sports are also significant market opportunities in 2017, and there will be marked growth in the development of content for VR/AR platforms. Meanwhile, automated vehicles and fintech offer longer-term growth prospects for investors.

The report also examines the growth of Europe’s unicorn companies. It highlights the potential for several firms to reach a $10 billion valuation and become ‘decacorns’, including BlaBlaCar, Farfetch, and HelloFresh.

Alec Dafferner, partner, GP Bullhound, commented, “The technology sector has faced up to significant challenges in 2016, from political instability through to greater scrutiny of unicorns. This resilience and the continued growth of the industry demonstrate that there remain vast opportunities for investors and entrepreneurs.”

Big data and machine learning will be disruptors

Advisory firm Ovum says big data continues to be the fastest-growing segment of the information management software market. It estimates the big data market will grow from $1.7bn in 2016 to $9.4bn by 2020, comprising 10 percent of the overall market for information management tooling. Its 2017 Trends to Watch: Big Data report highlights that while the breakout use case for big data in 2017 will be streaming, machine learning will be the factor that disrupts the landscape the most.

Key 2017 trends:

  • Machine learning will be the biggest disruptor for big data analytics in 2017.
  • Making data science a team sport will become a top priority.
  • IoT use cases will push real-time streaming analytics to the front burner.
  • The cloud will sharpen Hadoop-Spark ‘co-opetition’.
  • Security and data preparation will drive data lake governance.

Intelligence, digital and mesh

In October, Gartner issued its top 10 strategic technology trends for 2017, and recently outlined the key themes of intelligent, digital and mesh in a webinar. It said that autonomous cars and drone transport will have growing importance in the year ahead, alongside VR and AR.

“It’s not about just the IoT, wearables, mobile devices, or PCs. It’s about all of that together,” said David Cearley, vice president and Gartner fellow, according to hiddenwires magazine, on how ‘intelligence everywhere’ will put the consumer in charge. “We need to put the person at the center. Ask yourself what devices and service capabilities do they have available to them.”

“We need to then look at how you can deliver capabilities across multiple devices to deliver value. We want systems that shift from people adapting to technology to having technology and applications adapt to people.  Instead of using forms or screens, I tell the chatbot what I want to do. It’s up to the intelligence built into that system to figure out how to execute that.”

Gartner’s view is that the following will be the key trends for 2017:

  • Artificial intelligence (AI) and machine learning: systems that learn, predict, adapt and potentially operate autonomously.
  • Intelligent apps: using AI, there will be three areas of focus: advanced analytics, AI-powered and increasingly autonomous business processes, and AI-powered immersive, conversational and continuous interfaces.
  • Intelligent things: as they evolve, intelligent things will shift from stand-alone IoT devices to a collaborative model in which they communicate with one another and act in concert to accomplish tasks.
  • Virtual and augmented reality: VR can be used for training scenarios and remote experiences. AR will enable businesses to overlay graphics onto real-world objects, such as hidden wires on the image of a wall.
  • Digital twins: digital twins of physical assets, combined with digital representations of facilities and environments as well as people, businesses and processes, will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
  • Blockchain and distributed ledgers: these concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry.
  • Conversational systems: these will shift from a model where people adapt to computers to one where the computer ‘hears’ and adapts to a person’s desired outcome.
  • Mesh app and service architecture: a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs (application programming interfaces) and events to deliver modular, flexible and dynamic solutions.
  • Digital technology platforms: every organization will have some mix of five digital technology platforms: information systems, customer experience, analytics and intelligence, the internet of things, and business ecosystems.
  • Adaptive security architecture: multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

The real-world vision of these tech trends

UK innovation agency Nesta also offers a vision for the year ahead, a mix of the plausible and the more aspirational, based on real-world examples of areas that will be impacted by these tech trends:

  • Computer says no: the backlash: the next big technological controversy will be about algorithms and machine learning, which increasingly make decisions that affect our daily lives; in the coming year, the backlash against algorithmic decisions will begin in earnest, with technologists being forced to confront the effects of issues like fake news and other events caused directly or indirectly by the results of these algorithms.
  • The Splinternet: 2016’s seismic political events and the growth of domestic and geopolitical tensions mean governments will become wary of the internet’s influence, and countries around the world could pull the plug on the open, global internet.
  • A new artistic approach to virtual reality: as artists blur the boundaries between real and virtual, the way we create and consume art will be transformed.
  • Blockchain powers a personal data revolution: there is growing unease at the way many companies like Amazon, Facebook and Google require or encourage users to give up significant control of their personal information; 2017 will be the year when the blockchain-based hardware, software and business models that offer a viable alternative reach maturity, ensuring that it is not just companies but individuals who can get real value from their personal data.
  • Next generation social movements for health: we’ll see more people uniting to fight for better health and care, enabled by digital technology, and potentially leading to stronger engagement with the system; technology will also help new social movements to easily share skills, advice and ideas, building on models like Crohnology where people with Crohn’s disease can connect around the world to develop evidence bases and take charge of their own health.
  • Vegetarian food gets bloodthirsty: the past few years have seen growing demand for plant-based food to mimic meat; the rising cost of meat production (expected to hit $5.2 billion by 2020) will drive kitchens and laboratories around the world to create a new wave of ‘plant butchers’, who develop vegan-friendly meat substitutes that would fool even the most hardened carnivore.
  • Lifelong learners: adult education will move from the bottom to the top of the policy agenda, driven by the winds of automation eliminating many jobs from manufacturing to services and the professions; adult skills will be the keyword.
  • Classroom conundrums, tackled together: there will be a future-focused rethink of mainstream education, with collaborative problem solving skills leading the charge, in order to develop skills beyond just coding – such as creativity, dexterity and social intelligence, and the ability to solve non-routine problems.
  • The rise of the armchair volunteer: volunteering from home will become just like working from home, and we’ll even start ‘donating’ some of our everyday data to citizen science to improve society as well; an example of this trend was when British Red Cross volunteers created maps of the Ebola crisis in remote locations from home.

In summary

It’s clear that there is an expectation that the use of artificial intelligence and machine learning platforms will proliferate in 2017 across multiple business, social and government spheres. This will be supported with advanced tools and capabilities like virtual reality and augmented reality. Together, there will be more networks of connected devices, hardware, and data sets to enable collaborative efforts in areas ranging from health to education and charity. The Nesta report also suggests that there could be a reality check, with a possible backlash against the open internet and the widespread use of personal data.

Original article here.

