Machine Learning Archives - AppFerret


AI in 2019 (video)

2019-01-03

2018 has been an eventful year for AI to say the least! We’ve seen advances in generative models, the AlphaGo victory, several data breach scandals, and so much more. I’m going to briefly review AI in 2018 before giving 10 predictions on where the space is going in 2019. Prepare yourself, my predictions range from more Kubernetes infused ML pipelines to the first business use case of generative modeling of 3D worlds. Happy New Year and enjoy!

Original video here.



A Quick Introduction to Text Summarization in Machine Learning

2018-09-19

Text summarization refers to the technique of shortening long pieces of text. The intention is to create a coherent and fluent summary having only the main points outlined in the document.

Automatic text summarization is a common problem in machine learning and natural language processing (NLP).

Skyhoshi, who is a U.S.-based machine learning expert with 13 years of experience and currently teaches people his skills, says that “the technique has proved to be critical in quickly and accurately summarizing voluminous texts, something which could be expensive and time consuming if done without machines.”

Machine learning models are usually trained to understand documents and distill the useful information before outputting the required summarized texts.

What’s the need for text summarization?

Propelled by modern technological innovation, data is to this century what oil was to the previous one. Today, our world is awash in the gathering and dissemination of huge amounts of data.

In fact, the International Data Corporation (IDC) projects that the total amount of digital data circulating annually around the world will grow from 4.4 zettabytes in 2013 to 180 zettabytes in 2025. That’s a lot of data!

With such a big amount of data circulating in the digital space, there is a need for machine learning algorithms that can automatically shorten longer texts and deliver accurate summaries that fluently convey the intended messages.

Furthermore, applying text summarization reduces reading time, accelerates the process of researching for information, and increases the amount of information that can fit in an area.

What are the main approaches to automatic summarization?

There are two main approaches to text summarization in NLP:

  • Extraction-based summarization

The extractive text summarization technique involves pulling keyphrases from the source document and combining them to make a summary. The extraction is performed according to a defined metric, without making any changes to the text.

Here is an example:

Source text: Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. In the city, Mary gave birth to a child named Jesus.

Extractive summary: Joseph and Mary attend event Jerusalem. Mary birth Jesus.

As you can see above, certain words have been extracted from the source and joined to create a summary, although sometimes the result can be grammatically strange.
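As an illustration of the idea, a minimal frequency-based extractive summarizer can be written in a few lines of Python. This toy scorer is only one possible "defined metric", not the method any particular system uses:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; real systems use much larger ones.
STOPWORDS = {"a", "an", "the", "to", "on", "in", "of", "and", "named"}

def extractive_summary(text, max_sentences=1):
    """Score each sentence by the average frequency of its non-stopword
    tokens, then keep the highest-scoring sentences in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower())
                  if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:max_sentences]
    return " ".join(s for s in sentences if s in ranked)

text = ("Joseph and Mary rode on a donkey to attend the annual event in Jerusalem. "
        "In the city, Mary gave birth to a child named Jesus.")
print(extractive_summary(text))  # keeps the highest-scoring sentence verbatim
```

Note that, true to the extractive approach, the output is copied verbatim from the source; nothing is rephrased.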

  • Abstraction-based summarization

The abstraction technique entails paraphrasing and shortening parts of the source document. When abstraction is applied for text summarization in deep learning problems, it can overcome the grammar inconsistencies of the extractive method.

The abstractive text summarization algorithms create new phrases and sentences that relay the most useful information from the original text — just like humans do.

Therefore, abstraction performs better than extraction. However, the text summarization algorithms required to do abstraction are more difficult to develop; that’s why the use of extraction is still popular.

Here is an example:

Abstractive summary: Joseph and Mary came to Jerusalem where Jesus was born.

How does a text summarization algorithm work?

Usually, text summarization in NLP is treated as a supervised machine learning problem (where future outcomes are predicted based on provided data).

Typically, here is how using the extraction-based approach to summarize texts can work:

1. Introduce a method to extract the merited keyphrases from the source document. For example, you can use part-of-speech tagging, word sequences, or other linguistic patterns to identify the keyphrases.

2. Gather text documents with positively-labeled keyphrases. The keyphrases should be compatible with the stipulated extraction technique. To increase accuracy, you can also create negatively-labeled keyphrases.

3. Train a binary machine learning classifier to make the text summarization. Some of the features you can use include:

  • Length of the keyphrase
  • Frequency of the keyphrase
  • The most recurring word in the keyphrase
  • Number of characters in the keyphrase

4. Finally, in the test phase, create all the candidate keyphrase words and sentences and carry out classification on them.
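As a hedged sketch of the feature-computation part of step 3, the four features listed above might be computed like this (the feature names and the sample document are invented for illustration):

```python
import re
from collections import Counter

def keyphrase_features(phrase, document):
    """Compute the four classifier features listed above
    for one candidate keyphrase."""
    words = phrase.lower().split()
    doc_tokens = re.findall(r"[a-z']+", document.lower())
    doc_freq = Counter(doc_tokens)
    return {
        "length_in_words": len(words),                        # length of the keyphrase
        "frequency": document.lower().count(phrase.lower()),  # frequency of the keyphrase
        "max_word_count": max(doc_freq[w] for w in words),    # most recurring word in it
        "num_characters": len(phrase),                        # number of characters
    }

doc = ("Text summarization shortens long texts. "
       "Automatic text summarization is a common NLP problem.")
features = keyphrase_features("text summarization", doc)
print(features)
```

Feature vectors like this one would then be fed, along with the positive/negative labels from step 2, to any standard binary classifier.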

Summary

Text summarization is an interesting machine learning field that is increasingly gaining traction. As research in this area continues, we can expect to see breakthroughs that will assist in fluently and accurately shortening long text documents.

Original article here.

 



Google’s AutoML is a Machine Learning Game-Changer

2018-05-24

Google’s AutoML is a new up-and-coming (alpha stage) cloud software suite of Machine Learning tools. It’s based on Google’s state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is basically an algorithm that, given your specific dataset, searches for the most optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that will allow one to easily train high-performance deep networks, without requiring the user to have any knowledge of deep learning or AI; all you need is labelled data! Google will use NAS to then find the best network for your specific dataset and task. They’ve already shown how their methods can achieve performance that is far better than that of hand-designed networks.

AutoML totally changes the whole machine learning game because for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire 5 machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and in general the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value comes from their final put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state of the art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental-research part of the product pipeline and just use transfer learning for their task. Because there is less experimental research, more business resources can be spent on product design, development, and the all-important data.

Speaking of which…

It becomes more about product

Connecting from the first point, since more time is being spent on product design and development, companies will have faster product iteration. The main value of the company will become less about how great and cutting edge their research is and more about how well their product/technology is engineered. Is it well designed? Easy to use? Is their data pipeline set up in such a way that they can quickly and easily improve their models? These will be the new key questions for optimising their products and being able to iterate faster than their competition. Cutting edge research will also become less of a main driver of increasing the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean and diverse yet task-targeted data you have (i.e. both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this partially signifies us moving away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field, and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t just stop making them better even though now they’re quite easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason, or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but does only fairly well, sometimes even poorly (depending on task complexity), on things like localisation. We don’t yet even fully understand deep networks internally.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools, rather than create new ones, since they don’t have a need for it. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep up with the fast-paced tech field.

Such a shift has happened many times before. Programming computers started with assembly level coding! We later moved on to things like C. Many people today consider C too complicated so they use C++. Much of the time, we don’t even need something as complex as C++, so we just use the super high level languages of Python or R! We use the tool most appropriate for the task at hand. If you don’t need something super low-level, then you don’t have to use it (e.g. C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g. Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies are critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.



IBM Fueling 2018 Cloud Growth With 1,900 Cloud Patents Plus Blazingly Fast AI-Optimized Chip

2018-01-17

CLOUD WARS — Investing in advanced technology to stay near the top of the savagely competitive enterprise-cloud market, IBM earned more than 1,900 cloud-technology patents in 2017 and has just released an AI-optimized chip said to have 10 times more IO and bandwidth than its nearest rival.

IBM is coming off a year in which it stunned many observers by establishing itself as one of the world’s top three enterprise-cloud providers—along with Microsoft and Amazon—by generating almost $16 billion in cloud revenue for the trailing 12 months ended Oct. 31, 2017.

While that $16-billion cloud figure pretty much matched the cloud-revenue figures for Microsoft and Amazon, many analysts and most media observers continue—for reasons I cannot fathom—to fail to acknowledge IBM’s stature as a broad-based enterprise-cloud powerhouse whose software capabilities position the company superbly for the next wave of cloud growth in hybrid cloud, PaaS, and SaaS.

And IBM, which announces its Q4 and annual earnings results on Thursday, Jan. 18, is displaying its full commitment to remaining among the top ranks of cloud vendors by earning almost 2,000 patents for cloud technologies in 2017, part of a companywide total of 9,043 patents received last year.

Noting that almost half of those 9,043 patents came from “pioneering advancements in AI, cloud computing, cybersecurity, blockchain and quantum computing,” IBM CEO Ginni Rometty said this latest round of advanced-technology innovation is “aimed at helping our clients create smarter businesses.”

In those cloud-related areas, IBM said its new patents include the following:

  • 1,400 AI patents, including one for an AI system that analyzes and can mirror a user’s speech patterns to make it easier for humans and AI to understand one another.
  • 1,200 cybersecurity patents, “including one for technology that enables AI systems to turn the table on hackers by baiting them into email exchanges and websites that expend their resources and frustrate their attacks.”
  • In machine learning, a system for autonomous vehicles that transfers control of the vehicle to humans “as needed, such as in an emergency.”
  • In blockchain, a method for reducing the number of steps needed to settle transactions among multiple business parties, “even those that are not trusted and might otherwise require a third-party clearinghouse to execute.”

For IBM, the pursuit of new cloud technologies is particularly important because a huge portion of its approximately $16 billion in cloud revenue comes from outside the standard cloud-revenue stream of IaaS, PaaS and SaaS and instead is generated by what I call IBM’s “cloud-conversion” business—an approach unique to IBM.

While IBM rather aridly defines that business as “hardware, software and services to enable IBM clients to implement comprehensive cloud solutions,” the concept comes alive when viewed through the perspective of what those offerings mean to big corporate customers. To understand how four big companies are tapping into IBM’s cloud conversion business, please check out my recent article called Inside IBM’s $7-Billion Cloud-Solutions Business: 4 Great Digital-Transformation Stories.

IBM’s most-recent batch of cloud-technology patents—and IBM has now received more patents per year than any other U.S. company for 25 straight years—includes a patent that an IBM blog post describes this way: “a system that monitors data sources including weather reports, social networks, newsfeeds and network statistics to determine the best uses of cloud resources to meet demand. It’s one of numerous examples of how using unstructured data can help organizations work more efficiently.”

That broad-based approach to researching and developing advanced technology also led to the launch last month of a microchip that IBM says is specifically optimized for artificial-intelligence workloads.

A TechCrunch article about IBM’s new Power9 chip said it will be used not only in the IBM Cloud but also the Google Cloud: “The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it’s releasing a new computer powered by the Power9 chip, the AC922 and it intends to offer the chips in a service on the IBM cloud.”

How does the new IBM chip stack up? The TechCrunch article offered this breathless endorsement of the Power9’s performance from analyst Patrick Moorhead of Moor Insights & Strategy: “Power9 is a chip which has a new systems architecture that is optimized for accelerators used in machine learning. Intel makes Xeon CPUs and Nervana accelerators and NVIDIA makes Tesla accelerators. IBM’s Power9 is literally the Swiss Army knife of ML acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today.”

It’s shaping up to be a very interesting year for IBM in the cloud, and I’ll be reporting later this week on Thursday’s earnings release.

As businesses jump to the cloud to accelerate innovation and engage more intimately with customers, my Cloud Wars series analyzes the major cloud vendors from the perspective of business customers.

 

Original article here.

 



AI detectives are cracking open the black box of deep learning (video)

2017-08-30

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”

This video provides a high-level overview of the problem:

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Opening up the black box

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counter-factual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
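The perturb-and-observe idea behind LIME can be sketched with a toy example; the "black box" below is a deliberately trivial stand-in for a real sentiment model, not LIME's actual implementation:

```python
# Toy black box: scores a review by counting sentiment-laden words.
POSITIVE = {"great", "wonderful", "masterful"}
NEGATIVE = {"horrible", "dull"}

def black_box(words):
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def word_importance(words):
    """LIME-style probe: delete each word in turn and record how much the
    black box's score drops, attributing that change to the deleted word."""
    base = black_box(words)
    return {w: base - black_box([x for x in words if x != w]) for w in words}

review = "a masterful film with a horrible ending".split()
scores = word_importance(review)
print(scores)
```

Running this shows "masterful" pushing the score up and "horrible" pulling it down, while neutral words contribute nothing; a real LIME run does the same kind of probing across thousands of perturbed variants.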

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, a computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
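Sundararajan's blank-reference procedure can be sketched numerically; the linear "network" below is a trivial stand-in chosen so the result is easy to check, not a real model:

```python
def network(x):
    """Stand-in model: a fixed linear scorer over three input features."""
    weights = [0.5, 2.0, 0.1]
    return sum(w * v for w, v in zip(weights, x))

def attribute(x, steps=100):
    """Walk from an all-zeros baseline toward x in small steps, crediting
    each feature with the score change caused by moving it at each step."""
    baseline = [0.0] * len(x)
    attributions = [0.0] * len(x)
    prev = list(baseline)
    for k in range(1, steps + 1):
        point = [b + (v - b) * k / steps for b, v in zip(baseline, x)]
        for i in range(len(x)):
            # Score change attributable to moving feature i alone this step.
            moved = prev[:i] + [point[i]] + prev[i + 1:]
            attributions[i] += network(moved) - network(prev)
        prev = point
    return attributions

attr = attribute([1.0, 1.0, 1.0])
print(attr)  # for this linear model and input of ones, equals the weights
```

The attributions sum to the total score change from the blank reference to the input, so every bit of the model's decision is accounted for by some feature.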

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.
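Reduced to a single weight and a squared-error loss, the train-compare-reweight loop described above looks like this minimal sketch (the data and learning rate are invented for illustration):

```python
# Minimal backpropagation sketch: one weight, squared-error loss,
# learning y = 2x from labeled examples by repeated gradient updates.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, label) pairs
w = 0.0     # the single "neuron" weight, initially wrong
lr = 0.05   # learning rate

for _ in range(200):          # the process repeats many times
    for x, y in data:
        pred = w * x          # forward pass
        error = pred - y      # compare with the labeled answer
        w -= lr * error * x   # backward pass: reweight the trigger

print(round(w, 3))  # converges toward 2.0
```

A real deep network repeats exactly this pattern, but over millions of weights spread across many layers, with the error signal propagated backward through each layer in turn.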

Complete original article here.

 



Top 10 Hot Artificial Intelligence (AI) Technologies

2017-02-11

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

 

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
  2. Speech Recognition: Transcribe and transform human speech into format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, H2O.ai, Microsoft, SAS, Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors: Alluviate, Cray, Google, IBM, Intel, Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems and used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors: Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd, Synapsify.

 

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:

  • There is no defined business case: 42%
  • Not clear what AI can be used for: 39%
  • Don’t have the required skills: 33%
  • Need first to invest in modernizing data management platform: 29%
  • Don’t have the budget: 23%
  • Not certain what is needed for implementing an AI system: 19%
  • AI systems are not proven: 14%
  • Do not have the right processes or governance: 13%
  • AI is a lot of hype with little substance: 11%
  • Don’t own or have access to the required data: 8%
  • Not sure what AI means: 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.

Original article here.


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

FEI-FEI LI IS a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not entirely successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now make use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.



standard

Gartner’s Top 10 Strategic Technology Trends for 2017

2016-12-05 - By 

Artificial intelligence, machine learning, and smart things promise an intelligent future.

Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.

Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.

Intelligent

AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application. Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply execute predefined instructions, is the primary battleground for technology vendors through at least 2020.

Trend No. 1: AI & Advanced Machine Learning

AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs.  The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.

In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios.
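As a toy illustration of the fraud-scoring idea above, the sketch below scores a transaction with a logistic model over a few hand-picked features. The feature names and weights are hypothetical; a real system would learn them from labeled transaction data rather than hard-code them.

```python
import math

# Hypothetical feature weights; in practice these are learned from data
WEIGHTS = {"amount_zscore": 1.4, "is_foreign": 0.9, "odd_hour": 0.7}
BIAS = -3.0  # baseline: most transactions are legitimate

def fraud_score(txn):
    # Logistic model: squashes the weighted sum into a 0..1 risk score
    z = BIAS + sum(WEIGHTS[f] * txn.get(f, 0.0) for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

routine = {"amount_zscore": 0.1, "is_foreign": 0, "odd_hour": 0}
suspect = {"amount_zscore": 3.0, "is_foreign": 1, "odd_hour": 1}

print(round(fraud_score(routine), 3))  # low risk
print(round(fraud_score(suspect), 3))  # high risk, worth flagging for review
```

In a real deployment the score would feed a decisioning step (block, challenge, or allow) with a threshold tuned to the bank’s tolerance for false positives.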

Trend No. 2: Intelligent Apps

Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and its users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category from security tooling to enterprise applications such as marketing or ERP will be infused with AI enabled capabilities.  Using AI, technology providers will focus on three areas — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.

Trend No. 3: Intelligent Things

New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business, but these represent only one facet of intelligent things. Existing things, including IoT devices, will become intelligent things, delivering the power of AI-enabled systems everywhere, including the home, office, factory floor and medical facility.

As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.

Digital

The lines between the digital and physical world continue to blur creating new opportunities for digital businesses.  Look for the digital world to be an increasingly detailed reflection of the physical world and the digital world to appear as part of the physical world creating fertile ground for new business models and digitally enabled ecosystems.

Trend No. 4: Virtual & Augmented Reality

Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems, creating an immersive environment. For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall. Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models. Over time, AR and VR will expand beyond visual immersion to include all human senses. Enterprises should look for targeted applications of VR and AR through 2020.

Trend No. 5: Digital Twin

Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals. Digital twins of physical assets, combined with digital representations of facilities and environments as well as people, businesses and processes, will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
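A digital twin of the kind described above can be reduced to a small sketch: an object that mirrors sensor readings from a physical asset and runs simple simulations against them. The asset name, thresholds and linear load model here are illustrative stand-ins for real physics data.

```python
class DigitalTwin:
    """Toy twin of a pump; names, thresholds and the load model are illustrative."""

    def __init__(self, asset_id, max_temp_c=80.0):
        self.asset_id = asset_id
        self.max_temp_c = max_temp_c
        self.state = {"temp_c": None, "rpm": None}

    def ingest(self, reading):
        # Mirror the physical asset's latest sensor values
        self.state.update(reading)

    def simulate_load_increase(self, pct):
        # Crude physics stand-in: temperature rises linearly with load
        return self.state["temp_c"] * (1 + 0.4 * pct / 100)

    def needs_maintenance(self):
        t = self.state["temp_c"]
        return t is not None and t > self.max_temp_c

twin = DigitalTwin("pump-17")
twin.ingest({"temp_c": 72.0, "rpm": 1450})
print(twin.needs_maintenance())         # currently within limits
print(twin.simulate_load_increase(50))  # projected temperature at +50% load
```

The value comes from asking the twin “what if” questions (here, a load increase) before touching the physical asset.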

Trend No. 6: Blockchain

Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or another token) are sequentially grouped into blocks. Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry. They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain. While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases, and significant technology challenges exist.
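The core mechanism, transactions sequentially grouped into blocks where each block commits to its predecessor’s hash, can be shown in a few lines. This is a minimal sketch of the ledger structure only; it omits consensus, mining and networking, which are where the real technology challenges lie.

```python
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    # A block groups transactions and commits to the previous block's hash
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def is_valid(chain):
    # Tampering with any block breaks every later prev_hash link
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

genesis = make_block([], prev_hash="0" * 64)
chain = [genesis,
         make_block([{"from": "alice", "to": "bob", "amount": 5}],
                    genesis["hash"])]
print(is_valid(chain))  # True until any block is altered
```

Because each hash covers the previous one, verifying the chain end-to-end is cheap, which is what lets untrusting parties share the ledger transparently.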

Mesh

The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems.  As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.

Trend No. 7: Conversational Systems

Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect.  Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.  Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).

Trend No. 8: Mesh App and Service Architecture

The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs and events to deliver modular, flexible and dynamic solutions.  Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long term architectural shift that requires significant changes to development tooling and best practices.

Trend No. 9: Digital Technology Platforms

Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the Internet of Things and business ecosystems. In particular new platforms and services for IoT, AI and conversational systems will be a key focus through 2020.   Companies should identify how industry platforms will evolve and plan ways to evolve their platforms to meet the challenges of digital business.

Trend No. 10: Adaptive Security Architecture

The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions.  Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.
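A minimal sketch of the user and entity behavior analytics mentioned above: score how far a new observation falls from a user’s historical baseline. The baseline data and the interpretation of the scores are illustrative; production systems combine many signals, not a single feature.

```python
import statistics

def anomaly_score(history, new_value):
    # Standard-score distance of the new observation from the baseline
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
    return abs(new_value - mean) / spread

# Hypothetical baseline: a user's login hour over recent days
login_hours = [9, 9, 10, 8, 9, 10, 9]

print(anomaly_score(login_hours, 9))  # small: typical behavior
print(anomaly_score(login_hours, 3))  # large: 3 a.m. login, flag for review
```

An adaptive security architecture would feed such scores, across many entities and features, into risk-based responses rather than fixed perimeter rules.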

Original article here.

