

World’s largest hedge fund to replace managers with AI

2016-12-26

Bridgewater Associates has a team of engineers working on a project to automate decision-making to save time and eliminate human emotional volatility

The world’s largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making.

Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he’s not there, the Wall Street Journal reported.

“The role of many remaining humans at the firm wouldn’t be to make individual choices but to design the criteria by which the system makes decisions, intervening when something isn’t working,” wrote the Journal, which spoke to five former and current employees.

The firm, which manages $160bn, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM’s development of Watson, the supercomputer that beat humans at Jeopardy! in 2011.

The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called “dots”. The Systematized Intelligence Lab has built a tool that incorporates these ratings into “Baseball Cards” that show employees’ strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through.

These tools are early applications of PriOS, the over-arching management software that Dalio wants to make three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there’s a disagreement about how to proceed.

The machine will make decisions according to a set of principles laid out by Dalio about the company’s vision.

“It’s ambitious, but it’s not unreasonable,” said Devin Fidler, research director at the Institute for the Future, who has built a prototype management system called iCEO. “A lot of management is basically information work, the sort of thing that software can get very good at.”


Automated decision-making is appealing to businesses as it can save time and eliminate human emotional volatility.

“People have a bad day and it then colors their perception of the world and they make different decisions. In a hedge fund that’s a big deal,” he added.

Will people happily accept orders from a robotic manager? Fidler isn’t so sure. “People tend not to accept a message delivered by a machine,” he said, pointing to the need for a human interface.

“In companies that are really good at data analytics very often the decision is made by a statistical algorithm but the decision is conveyed by somebody who can put it in an emotional context,” he explained.

Futurist Zoltan Istvan, founder of the Transhumanist party, disagrees. “People will follow the will and statistical might of machines,” he said, pointing out that people already outsource way-finding to GPS or the flying of planes to autopilot.

However, the period in which people will need to interact with a robot manager will be brief.

“Soon there just won’t be any reason to keep us around,” Istvan said. “Sure, humans can fix problems, but machines in a few years time will be able to fix those problems even better.

“Bankers will become dinosaurs.”

It’s not just the banking sector that will be affected. According to a report by Accenture, artificial intelligence will free people from the drudgery of administrative tasks in many industries. The company surveyed 1,770 managers across 14 countries to find out how artificial intelligence would impact their jobs.

“AI will ultimately prove to be cheaper, more efficient, and potentially more impartial in its actions than human beings,” wrote the authors, summarizing the survey’s results in Harvard Business Review.

However, they didn’t think there was too much cause for concern. “It just means that their jobs will change to focus on things only humans can do.”

The authors say that machines would be better at administrative tasks like writing earnings reports and tracking schedules and resources while humans would be better at developing messages to inspire the workforce and drafting strategy.

Fidler disagrees. “There’s no reason to believe that a lot of what we think of as strategic work or even creative work can’t be substantially overtaken by software.”

However, he said, that software will need some direction. “It needs human decision making to set objectives.”

Bridgewater Associates did not respond to a request for comment.

Original article here.



IoT + Big Data Means 92% Of Everything We Do Will Be In The Cloud

2016-12-24

You don’t need Sherlock Holmes to tell you that cloud computing is on the rise and that cloud traffic keeps going up. What is enlightening is the degree of the increase: cloud traffic is set to nearly quadruple in the next few years. By then, 92% of workloads will be processed by cloud data centers, versus only 8% by traditional data centers.

Cisco, which does a decent job of measuring such things, just released estimates showing that cloud traffic is likely to rise 3.7-fold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1 ZB per year by 2020.

Big data and the associated Internet of Things are a big part of this growth, the study’s authors state. By 2020, database, analytics and IoT workloads will account for 22% of total business workloads, compared to 20% in 2015. The total volume of data generated by IoT will reach 600 ZB per year by 2020: 275 times higher than projected traffic going from data centers to end users/devices (2.2 ZB), and 39 times higher than total projected data center traffic (15.3 ZB).
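For perspective, those ratios are easy to sanity-check with a few lines of arithmetic; this quick sketch uses only the figures cited above:

# Cisco's 2020 projections, in zettabytes (ZB) per year, as cited above.
iot_data_generated = 600.0   # total data generated by IoT devices and sensors
dc_to_user_traffic = 2.2     # projected traffic from data centers to end users/devices
total_dc_traffic = 15.3      # total projected data center traffic

print(round(iot_data_generated / dc_to_user_traffic))  # ~273, the "275 times" figure
print(round(iot_data_generated / total_dc_traffic))    # ~39, the "39 times" figure

The gap is the point of the comparison: far more data will be generated at the edge than will ever traverse a data center.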

Public cloud is growing faster than private cloud, the survey also finds. By 2020, 68% (298 million) of cloud workloads will be in public cloud data centers, up from 49% (66.3 million) in 2015. During the same period, 32% (142 million) of cloud workloads will be in private cloud data centers, down from 51% (69.7 million) in 2015.

As the Cisco team explains it, much of the shift to public cloud will likely be part of hybrid cloud strategies. For example, “cloud bursting is an example of hybrid cloud where daily computing requirements are handled by a private cloud, but for sudden spurts of demand the additional traffic demand — bursting — is handled by a public cloud.”
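The routing logic behind bursting is conceptually simple. Here is a minimal, hypothetical sketch; the capacity figure and the two dispatch functions are illustrative assumptions, not any vendor’s API:

PRIVATE_CAPACITY = 1000  # requests/sec the private cloud can absorb (assumed figure)

def dispatch_to_private(request):
    return f"private-cloud handled {request}"

def dispatch_to_public(request):
    return f"public-cloud handled {request}"

def route(request, current_load):
    """Daily demand stays on the private cloud; sudden spurts burst to public."""
    if current_load <= PRIVATE_CAPACITY:
        return dispatch_to_private(request)
    return dispatch_to_public(request)

print(route("req-1", current_load=900))   # normal day: private cloud
print(route("req-2", current_load=4200))  # demand spike: burst to public cloud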

The Cisco estimates also show that while Software as a Service (SaaS, for online applications) will keep soaring, there will be less interest in Infrastructure as a Service (IaaS, for online servers, capacity, storage). By 2020, 74% of total cloud workloads will be SaaS workloads, up from 65% in 2015. Platform as a Service (PaaS, for development tools, databases, middleware) will also grow in absolute terms, though its share will slip slightly: 8% of total cloud workloads by 2020, down from 9% in 2015. IaaS workloads, meanwhile, will total 17% of cloud workloads, down from 26% in 2015.

The Cisco analysts explain that the lower percentage growth for IaaS may be attributable to the growing shift away from private cloud to public cloud providers. For starters, IaaS was far less disruptive to the business — a rearrangement of data center resources, if you will. As SaaS offerings gain in sophistication, those providers may offer IaaS support behind the scenes. “In the private cloud, initial deployments were predominantly IaaS. Test and development types of cloud services were the first to be used in the enterprise; cloud was a radical change in deploying IT services, and this use was a safe and practical initial use of private cloud for enterprises. It was limited, and it did not pose a risk of disrupting the workings of IT resources in the enterprise. As trust in adoption of SaaS or mission-critical applications builds over time with technology enablement in processing power, storage advancements, memory advancements, and networking advancements, we foresee the adoption of SaaS type applications to accelerate over the forecast period, while shares of IaaS and PaaS workloads decline.”

On the consumer side, video and social networking will lead the increase in consumer workloads. By 2020, consumer cloud storage traffic per user will be 1.7 GB per month, compared to 513 MB per month in 2015. By 2020, video streaming workloads will account for 34% of total consumer workloads, compared to 29% in 2015. Social networking workloads will account for 24% of total consumer workloads, up from 20% in 2015. In the next four years, 59% (2.3 billion users) of the consumer Internet population will use personal cloud storage, up from 47% (1.3 billion users) in 2015.

Original article here.



Artificial Intelligence, Hybrid Cloud & IoT Are High on Amazon’s Agenda

2016-12-24

At the AWS re:Invent event, Amazon announced a host of new services that highlight its commitment to enterprises. Andy Jassy, CEO of AWS, emphasized innovation in the areas of artificial intelligence, analytics, and hybrid cloud.

Amazon has been using deep learning and artificial intelligence in its retail business to enhance the customer experience. The company claims that it has thousands of engineers working on artificial intelligence to improve search and discovery, fulfillment and logistics, product recommendations, and inventory management. Amazon is now bringing the same expertise to the cloud, exposing APIs that developers can consume to build intelligent applications. Dubbed Amazon AI, the new service offers powerful AI capabilities such as image analysis, text-to-speech conversion, and natural language processing.

Amazon Rekognition is a rich image analysis service that can identify various attributes of an image. Amazon Polly is a service that accepts a text string and returns an MP3 audio file containing the speech. With support for 47 voices in 23 languages, the service exposes rich cognitive speech capabilities. Amazon Lex is the new service for natural language processing and automatic speech recognition; it is the same technology that powers Alexa and Amazon Echo. The service converts text or voice into a set of actions that developers can parse and act on.
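For a sense of the developer experience, both services are called through the standard AWS SDKs. A minimal sketch with Python’s boto3 follows; the bucket and file names are placeholders, and configured AWS credentials are assumed:

import boto3

# Amazon Polly: turn a string into an MP3 audio stream.
polly = boto3.client("polly", region_name="us-east-1")
speech = polly.synthesize_speech(
    Text="Dinner is ready.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # one of the service's built-in voices
)
with open("speech.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())

# Amazon Rekognition: label the contents of an image stored in S3.
rekognition = boto3.client("rekognition", region_name="us-east-1")
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photo.jpg"}},
)
for label in labels["Labels"]:
    print(label["Name"], label["Confidence"])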

Amazon is also investing in MXNet, a deep learning framework that can run in a variety of environments. Apart from this, Amazon is also optimizing EC2 images to run popular deep learning frameworks including CNTK, TensorFlow, and Theano.

In the last decade, Amazon has added many services and features to its platform. While customers appreciate the pace of innovation, first-time users often complain about the overwhelming number of options and choices. Even to launch a simple virtual machine that runs a blog or a development environment in EC2, users may have to choose from a variety of options. To simplify the experience of launching non-mission-critical workloads in EC2, AWS has announced a new service called Amazon Lightsail. Customers can launch a VM with just a few clicks without worrying about the complex choices that they would otherwise need to make. When they get familiar with EC2, they can start integrating with other services such as EBS and Elastic IP. Starting at $5 a month, this is the cheapest compute service available in AWS. Amazon calls Lightsail the express mode for EC2, as it dramatically reduces the time it takes to launch a VM.

Amazon Lightsail competes with VPS providers such as DigitalOcean and Linode. The sweet spot of these vendors has been developers and non-technical users who need a virtual private server to run a workload in the cloud. With Amazon Lightsail, AWS wants to attract developers, small and medium businesses, and digital agencies that typically use a VPS service for their needs.

On the analytics front, Amazon is adding a new interactive, serverless query service called Amazon Athena that can be used to retrieve data stored in Amazon S3. The service supports complex SQL queries, including joins, to return data from Amazon S3. Customers can use custom metadata to perform complex queries. Amazon Athena’s pricing is based on a per-query model.
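Because Athena is serverless, running a query is just an API call carrying standard SQL. A hedged boto3 sketch, where the database, table, and bucket names are hypothetical:

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Standard SQL, including joins, against files already sitting in S3.
response = athena.start_query_execution(
    QueryString="""
        SELECT o.order_id, c.name, o.total
        FROM orders o
        JOIN customers c ON o.customer_id = c.customer_id
        WHERE o.total > 100
    """,
    QueryExecutionContext={"Database": "sales_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])  # poll get_query_execution() for completion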

Last month, AWS and VMware partnered to bring hybrid cloud capabilities to customers. With this, customers can run and manage workloads in the cloud seamlessly from existing VMware tools.

Amazon claims that customers will be able to use VMware’s virtualization and management software to seamlessly deploy and manage VMware workloads across all of their on-premises and AWS environments. This offering allows customers to leverage their existing investments in VMware skills and tooling while taking advantage of the flexibility of the AWS Cloud.

Pat Gelsinger, CEO of VMware, was on stage with Andy Jassy talking about the value that this partnership brings to customers.

In a surprising move, Amazon is making its serverless computing framework, AWS Lambda, available outside of its cloud environment. Extending Lambda to connected devices, AWS has announced AWS Greengrass, an embedded Lambda compute environment that can be installed in IoT devices and hubs. It delivers local compute, storage, and messaging infrastructure in environments that demand offline access. Developers can use the simple Lambda programming model to develop applications for both offline and online scenarios. Greengrass Core is designed to run on hubs and gateways, while the Greengrass runtime will power low-end, resource-constrained devices.
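The programming model Greengrass reuses is the ordinary Lambda handler. A minimal hypothetical function that could run on a device and react to a local sensor event even while offline might look like this; the event fields are illustrative, not a published schema:

# A plain Lambda handler; under Greengrass it runs on the device itself,
# so it keeps working when the connection to the cloud is down.
def handler(event, context):
    temperature = event.get("temperature")  # hypothetical sensor field
    if temperature is not None and temperature > 75.0:
        return {"action": "open_vent", "reading": temperature}
    return {"action": "none", "reading": temperature}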

Extending the hybrid scenarios to industrial IoT, Amazon has also announced a new appliance called Snowball Edge that runs Greengrass Core. This appliance is expected to be deployed in environments that generate extensive offline data. It exposes an S3-compatible endpoint, so developers can use the same ingestion API as the cloud. Since the device runs Lambda, developers can create functions that respond to events locally. Amazon Snowball Edge ships with 100TB of capacity, high-speed Ethernet, Wi-Fi, and 3G cellular connectivity. When the ingestion process is complete, customers can ship the appliance back to AWS, where the data is uploaded.

Pushing the limits of data migration to the cloud, Amazon is also launching a specialized truck called AWS Snowmobile that can move exabytes of data to AWS. The truck carries a 48-foot-long container that can hold up to 100 petabytes of data. Customers must call AWS to open the vestibule of the truck and start ingesting data; they just need to plug in the fiber and power cables to begin loading. Amazon estimates that the loading and unloading process takes about three months on each side.

Apart from these services, Andy Jassy also announced a slew of enhancements to Amazon EC2 and RDS.

Original article here.



IBM Watson Analytics Beta Now Open

2016-12-20

IBM announced that Watson Analytics, a breakthrough natural language-based cognitive service that can provide instant access to powerful predictive and visual analytic tools for businesses, is available in beta. See this Vine (vine.co/v/Ov6uvi1m7lT) for a sneak peek.

I’m pleased to announce that I have my access, and it’s amazing. Uploading raw CSV data and playing with it is a great shortcut to finding insights. It works really well and really quickly.

IBM Watson Analytics automates once time-consuming tasks such as data preparation, predictive analysis, and visual storytelling for business professionals. Offered as a cloud-based freemium service, Watson Analytics is now accessible to all business users from any desktop or mobile device. Since being announced on September 16, more than 22,000 people have already registered for the beta. The Watson Analytics Community, a user group for sharing news, best practices, technical support and training, is also accessible starting today.

This news follows IBM’s recently announced global partnership with Twitter, which includes plans to offer Twitter data as part of IBM Watson Analytics.

Learn more about how IBM Watson Analytics works:

As part of its effort to equip all professionals with the tools needed to do their jobs better, Watson Analytics provides business professionals with a unified experience and natural language dialogue so they can better understand data and more quickly reach business goals. For example, a marketing, HR or sales rep can quickly source data, cleanse and refine it, discover insights, predict outcomes, visualize results, create reports and dashboards and explain results in familiar business terms.

To view today’s news and access a video to be shared, visit the Watson Analytics Storybook: https://ibm.biz/WAStorybook.

Original article here.



Capita to replace staff with robots to save money

2016-12-10

Outsourcing giant to axe 2,000 jobs and use ‘proprietary robotic solutions’ after clients cut spending following Brexit vote.

A British outsourcing company whose contracts include collecting the BBC licence fee is to replace staff with robots as it slashes costs.

Capita, a FTSE 100-listed firm that also runs the London congestion charge, said it needed to axe 2,000 jobs as part of a cost-cutting drive in response to poor trading.


It said it would use the money it saved from sacking thousands of staff to fund investment in automated technology across all of the company’s divisions. The announcement will fuel growing fears that human workers will have to make way for robots, as companies turn to technology to boost profits.

The Apple and Samsung supplier Foxconn was reported to have replaced 60,000 workers with robots earlier this year, while the former chief executive of McDonald’s suggested a similar tactic in response to low-paid workers’ demands for better pay and conditions.

In a gloomy statement that sent its shares to a 10-year low at one stage, Capita said it had been hit by “headwinds” as its corporate clients reined in their spending. The company unveiled plans to shore up its finances, saving £50m a year via austerity measures, including greater use of “proprietary robotic solutions” and moving around 200 jobs to India.

The chief executive, Andy Parker, said Capita, which made a pre-tax profit of £186m in the first six months of this year, would use robots to help eliminate human error and make decisions faster. The company employs 78,000 people.

“It doesn’t remove the need for an individual but it speeds up how they work, which means you need less [sic] people to do it.”

He said a human assisted by automated robotic technology could do a 40-minute job in much less time.

“They [human staff members] can then do 10 times the amount they used to, so you need less [sic] people to do the same amount of work.”

Parker said this would make the company more efficient by “taking away some of the decision-making and cutting down potential errors”.

Capita, which provides services ranging from electronic tagging for offenders to store card services for retailers, will also move some of its IT operation abroad. Parker said this would involve “a couple of hundred” jobs being shifted to India.


The company’s decisions on staffing are part of an attempt to reduce costs without causing shareholders any financial pain. Parker said the cost cuts – coupled with asset sales – would allow Capita to avoid reducing its annual dividend, which was worth £200m last year and £180m the year before.

But despite the effort to protect investors, shares in the company finished down more than 4%, having fallen more than 14% during the day, as investors were left stunned by the company’s pessimistic outlook.

Parker said he “would have thought there’d be a more positive reaction”.

Rehana Azam, the national secretary for public services at the GMB union, said: “Public services are predominantly delivered by people so it’s hard to see how they’re going to provide a cost-efficient service from call centres in another country.

“We’d want to sit down with Capita and make sure people are treated fairly in any process that ends with them losing jobs.”

Azam cast doubt on whether using robots to automate some of its systems would work. “We’ve never had a good track record with private providers delivering computerised systems. I’d like to see where there have been good examples of that kind of automation.”

Capita has struggled as its clients, which include O2, M&S, John Lewis and Dixons Carphone, have looked to cut costs in areas such as corporate travel and recruitment.

The company refused to blame the Brexit vote for the disappointing update but said earlier this year that uncertainty over the UK’s relationship with the European Union had hit its business, delaying key contracts.

Capita is predominantly UK-based, unlike bigger rivals, such as G4S and Serco, which have been sheltered to a large degree from the Brexit-related fallout by their bigger geographical footprint.

Original article here.



Microsoft Researchers Predict What’s Coming in AI for the Next Decade

2016-12-06

Microsoft Research’s female contingent makes its calls for AI breakthroughs to come.

Seventeen Microsoft researchers—all of whom happen to be women this year—have made their calls for what will be hot in the burgeoning realm of artificial intelligence (AI) in the next decade.

Ripping a page out of the IBM 5 in 5 playbook, Microsoft likes to use these annual predictions to showcase the work of its hotshot research brain trust. Some of the picks are already familiar. One is about how advances in deep learning—which endows computers with human-like thought processes—will make computers or other smart devices more intuitive and easier to use. This is something we’ve all heard before, but the work is not done, I guess.


For example, “the search box” most of us use on Google or Bing search engines will disappear, enabling people to search for things based on spoken commands, images, or video, according to Susan Dumais, distinguished scientist and deputy managing director of Microsoft’s Redmond, Wash. research lab. That’s actually already happening with products like Google Now, Apple’s Siri, and Microsoft Cortana—but there’s more to do.

Dumais says the box will go away. She explains:

[Search will become] more ubiquitous, embedded and contextually sensitive. We are seeing the beginnings of this transformation with spoken queries, especially in mobile and smart home settings. This trend will accelerate with the ability to issue queries consisting of sound, images or video, and with the use of context to proactively retrieve information related to the current location, content, entities or activities without explicit queries.

Virtual reality will become more ubiquitous as researchers develop better “body tracking” capabilities, says Mar Gonzalez Franco, a researcher at the Redmond research lab. That will enable rich, multi-sensory experiences so convincing they could actually cause subjects to hallucinate. That doesn’t sound so great to some, but the capability could help people with disabilities “retrain” their perceptual systems, she notes.


There’s but one mention on this list of the need for ethical or moral guidelines for the use of AI. That comes from Microsoft distinguished scientist Jennifer Chayes.

Chayes, who is also managing director of Microsoft’s New England and New York City research labs, thinks AI can be used to police the ethical application of AI.

Our lives are being enhanced tremendously by artificial intelligence and machine learning algorithms. However, current algorithms often reproduce the discrimination and unfairness in our data and, moreover, are subject to manipulation by the input of misleading data. One of the great algorithmic advances of the next decade will be the development of algorithms which are fair, accountable and much more robust to manipulation.

Microsoft experienced the misuse of AI’s power first-hand earlier this year when its experimental Tay chatbot offended many Internet users with racist and sexist slurs that the program was taught by others.

Microsoft chose to focus on female researchers this year to highlight the field’s gender gap: women and girls comprise 50% of the world’s population but account for less than 20% of computer science graduates, according to the Organization for Economic Cooperation and Development. The fact that the U.S. Bureau of Labor Statistics expects fewer than 400,000 qualified applicants for 1.4 million computing jobs in 2020 means there is great opportunity for women in technology going forward.

Original article here.



Gartner’s Top 10 Strategic Technology Trends for 2017

2016-12-05

Artificial intelligence, machine learning, and smart things promise an intelligent future.

Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.

Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.

Intelligent

AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application. Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply executing predefined instructions, is the primary battleground for technology vendors through at least 2020.

Trend No. 1: AI & Advanced Machine Learning

AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs.  The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.

In banking, you could use AI and machine-learning techniques to model current transactions in real time, as well as build predictive models that score transactions by their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios.
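To make the fraud scenario concrete, here is a deliberately tiny scikit-learn sketch that learns to score transactions by fraud likelihood; the features and data are made up for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy transaction features: [amount, hour_of_day, is_foreign]; labels: 1 = fraud.
X = np.array([[12.0, 14, 0], [900.0, 3, 1], [25.5, 10, 0],
              [1500.0, 2, 1], [40.0, 18, 0], [760.0, 4, 1]])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Score an incoming transaction in real time.
incoming = np.array([[820.0, 3, 1]])
print(model.predict_proba(incoming)[0, 1])  # estimated probability of fraud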

Trend No. 2: Intelligent Apps

Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and their users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category, from security tooling to enterprise applications such as marketing or ERP, will be infused with AI-enabled capabilities. Using AI, technology providers will focus on three areas: advanced analytics, AI-powered and increasingly autonomous business processes, and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.

Trend No. 3: Intelligent Things

New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business, but these represent only one facet of intelligent things. Existing things, including IoT devices, will become intelligent things that deliver the power of AI-enabled systems everywhere: the home, the office, the factory floor and the medical facility.

As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.

Digital

The lines between the digital and physical worlds continue to blur, creating new opportunities for digital businesses. Look for the digital world to become an increasingly detailed reflection of the physical world, and for the digital to appear as part of the physical, creating fertile ground for new business models and digitally enabled ecosystems.

Trend No. 4: Virtual & Augmented Reality

Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems, creating an immersive environment. For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall. Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models. Over time, AR and VR will expand beyond visual immersion to include all human senses. Enterprises should look for targeted applications of VR and AR through 2020.

Trend No. 5: Digital Twin

Within three to five years, billions of things will be represented by digital twins, dynamic software models of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals. Digital twins of physical assets, combined with digital representations of facilities and environments as well as people, businesses and processes, will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
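Stripped to its essentials, a digital twin is a software object kept in sync with sensor readings from its physical counterpart, which can then be analyzed in place of the real asset. A minimal hypothetical sketch, with an assumed pressure spec for illustration:

class PumpTwin:
    """A toy digital twin of a pump, updated from field sensor readings."""

    MAX_SAFE_PRESSURE = 8.0  # bar; an assumed spec for illustration

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.pressure = 0.0
        self.history = []

    def ingest(self, reading):
        """Apply a sensor reading to keep the twin in sync with the real pump."""
        self.pressure = reading
        self.history.append(reading)

    def needs_maintenance(self):
        """Analyze the model instead of the physical asset."""
        return self.pressure > self.MAX_SAFE_PRESSURE

twin = PumpTwin("pump-017")
twin.ingest(8.4)
print(twin.needs_maintenance())  # True: flag the real pump before it fails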

Trend No. 6: Blockchain

Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or another token) are sequentially grouped into blocks. Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry. They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain. While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases, and significant technology challenges exist.
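The underlying data structure is compact enough to sketch: each block commits to the hash of its predecessor, so tampering with any earlier block invalidates everything after it. A toy illustration in Python, not a production ledger:

import hashlib
import json

def make_block(transactions, prev_hash):
    """Group transactions into a block that commits to the previous block."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
second = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])

# Any edit to the genesis block changes its hash and invalidates 'second'.
print(second["prev_hash"] == genesis["hash"])  # True for the untampered chain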

Mesh

The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems.  As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.

Trend No. 7: Conversational Systems

Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect.  Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.  Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).

Trend No. 8: Mesh App and Service Architecture

The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs and events to deliver modular, flexible and dynamic solutions.  Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long term architectural shift that requires significant changes to development tooling and best practices.

Trend No. 9: Digital Technology Platforms

Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: information systems, customer experience, analytics and intelligence, the Internet of Things, and business ecosystems. In particular, new platforms and services for IoT, AI and conversational systems will be a key focus through 2020. Companies should identify how industry platforms will evolve and plan ways to evolve their own platforms to meet the challenges of digital business.

Trend No. 10: Adaptive Security Architecture

The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions.  Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

Original article here.



Pay a universal income because robots will take all our jobs, says Elon Musk

2016-12-03

Elon Musk has said that there’s a “pretty good chance” that automation will entirely replace workers in the future, meaning governments will have to make up for lost wages by paying people.

The billionaire founder of Tesla and SpaceX said that rapid changes to the workforce from automation are likely to force us to introduce a “universal basic income”, in which people will be supported by a stipend instead of working for a living.

Musk has been vocal in his warnings about the potential downside of the rise of the robots. He has invested millions in OpenAI, a project to ensure that artificial intelligence benefits mankind, rather than destroys it, and last week he said that it was only a matter of time before AI was used to take down the internet.

“There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” Musk told CNBC. “I am not sure what else one would do. I think that is what would happen.”

The idea of a universal income – an unconditional government payment – has gained traction in recent years amid growing concerns about the effect that robots will have on employment.

Customer service agents, builders and leisure industry workers are just a few jobs where opportunities may be erased or severely diminished over the coming decades, and some economists and researchers believe the majority of jobs that exist today will be done by robots in 30 years.

The robots are taking over… or are they?

10 professions that will almost certainly be automated…

  • Telemarketing
  • Loan officers
  • Umpires
  • Bank tellers
  • Engravers
  • Credit analysts
  • Office clerks
  • Legal secretaries
  • Estate agents
  • Chefs

…and 10 professions that probably won’t

  • Therapists
  • Social workers
  • Human resources managers
  • Fashion designers
  • Teachers
  • Pharmacists
  • Engineers
  • Public relations
  • Computer scientists
  • Health and safety engineers

Musk said this presented an opportunity rather than a threat, however. “People will have time to do other things, more complex things, more interesting things. Certainly more leisure time,” he said.

The billionaire has previously said we need to augment our brains with artificial intelligence to avoid being left behind by robots. He has floated the idea that humans should become cyborgs if we don’t want to become the equivalent of a pet for robots, backing the idea of a “neural lace”, a new electronic layer of the brain.

Original article here.



How Economists View the Rise of Artificial Intelligence

2016-11-25

Machine learning will drop the cost of making predictions, but raise the value of human judgement.

To really understand the impact of artificial intelligence in the modern world, it’s best to think beyond the mega-research projects like those that helped Google recognize cats in photos.

According to professor Ajay Agrawal of the University of Toronto, humanity should be pondering how the ability of cutting edge A.I. techniques like deep learning—which has boosted the ability for computers to recognize patterns in enormous loads of data—could reshape the global economy.

Making his comments at the Machine Learning and the Market for Intelligence conference, hosted this week by the Rotman School of Management at the University of Toronto, Agrawal likened the current A.I. boom to 1995, when the Internet went mainstream. Having gained mainstream traction, the Internet ceased to be seen as a new technology; instead, it was seen as a new economy where businesses could emerge online.

However, one group of people refused to call the Internet a new economy: economists. For them, the Internet didn’t usher in a new economy per se; instead, it simply altered the existing economy by introducing a new way to purchase goods like shoes or toothbrushes at a cheaper rate than brick-and-mortar stores offered.

“Economists think of technology as drops in the cost of particular things,” Agrawal said.

Likewise, the advent of calculators or rudimentary computers lowered the cost for people to perform basic arithmetic, which aided workers at the census bureau who previously slaved away for hours manually crunching data without the help of those tools.

Similarly, with the rise of digital cameras, improvements in software and hardware helped manufacturers run better internal calculations within the device that could help users capture and improve their digital photos. Researchers essentially applied calculations to the old-school field of photography, something previous generations probably never believed would be touched by math, he explained.

As people, “we shifted to an arithmetic solution” to help improve photos, he added, and the cost of digital cameras went up as more people wanted them, as opposed to traditional film cameras, which require film and chemical baths to produce good photos. “Those went down,” said Agrawal, in terms of both cost and demand.

[Video: Artificial Intelligence and the future | André LeBlanc | TEDxMoncton]

All this takes us back to the rise of machine learning and its ability to learn from data and make predictions based on the information.

The rise of machine learning will lead to “a drop in the cost of prediction,” he said. However, this drop will cause certain other things to go up in value, he explained.

For example, a doctor treating a patient with a hurt leg will probably have to take an x-ray of the limb and ask questions to gather information so that he or she can make a prediction about what to do next. Advanced data analytics would presumably make it easier to predict the best course of treatment, but it will be up to the doctor to follow through or not.

So while “machine intelligence is a substitute for human prediction,” it can also be “a complement to human judgment, so the value of human judgment increases,” Agrawal said.

In some ways, Agrawal’s comments call to mind a recent research paper in which researchers developed an A.I. system that correctly predicted the outcome of roughly 600 human rights cases heard by the European Court of Human Rights 79% of the time. The report’s authors explained that while the tool could help discover patterns in the court cases, “they do not believe AI will be able to replace human judgement,” as reported by the Verge.

The authors of that research paper don’t want A.I. powered computers to replace humans as new, futuristic cyber judges. Instead, they want the tool to help humans to make more thoughtful judgements that can ultimately improve human rights.

Original article here.



How to profit from the IoT: 4 quick successes and 4 bigger ideas

2016-11-24

During the past few years, much has been made of the billions of sensors, cameras, and other devices being connected at an exponential pace in the “Internet of Things” (IoT)—and the trillions of dollars in potential economic value that is expected to come of it. Yet as exciting as the IoT future may be, a lot of the industry messaging has gone right over the heads of the people who operate plants, run businesses and are responsible for implementing IoT-based solutions today. Investors find themselves wondering what is real, and what is a hyped-up vision of a future that is still years away.

Over the past decade, I have met with dozens of organizations in all corners of the globe, talking with people about IoT. I’ve worked with traditional industrial companies struggling to change outmoded manufacturing processes, and I’ve worked with innovative young startups that are redefining long-held assumptions and roles. And I can tell you that the benefits of IoT are not in some far-off future scenario. They are here and now—and growing. The question is not whether companies should begin deploying IoT—the benefits of IoT are clear—but how.

So, how do companies get started on the IoT journey? It’s usually best to begin with a small, well-defined project that improves efficiency and productivity around existing processes. I’ve seen countless organizations, large and small, enjoy early success in their IoT journey by taking one of the following “fast paths” to IoT payback:

  • Connected operations. By connecting key processes and devices in its production process on a single network, iconic American motorcycle maker Harley-Davidson increased productivity by 80%, reduced its build-to-order cycle from 18 months to two weeks, and grew overall profitability by 3%-4%.
  • Remote operations. A dairy company in India began remotely monitoring the freezers in its 150 ice cream stores, providing alerts in case of power outages. The company began realizing a payback within a month and saw a five-fold return on its investment within 13 months.
  • Predictive analytics. My employer Cisco has deployed sensors and used energy analytics software in manufacturing plants, reducing energy consumption by 15% to 20%.
  • Predictive maintenance. Global mining company Rio Tinto uses sensors to monitor the condition of its vehicles, identifying maintenance needs before they become problems—and saves $2 million a day every time it avoids a breakdown.

These four well-proven scenarios are ideal candidates to get started on IoT projects. Armed with an early success, companies can then build momentum and begin to tackle more transformative IoT solutions. Here, IoT provides rich opportunities across many domains, including:

  • New business opportunities and revenue streams. Connected operations combined with 3D printing, for example, are making personalization and mass customization possible in ways not imagined a few years ago.
  • New business models. IoT enables equipment manufacturers to adopt service-oriented business models. By gathering data from devices installed at a customer site, manufacturers like Japanese industrial equipment maker Fanuc can offer remote monitoring, analytics and predictive maintenance services to reduce costs and improve uptime.
  • New business structures. In many traditional industries, customers have typically looked to a single vendor for a complete end-to-end solution, often using closed, proprietary technologies. Today IoT, with its flexibility, cost, and time-to-market advantages, is driving a shift to an open technology model where solution providers form an ecosystem of partners. As a result, each participant provides its best-in-class capabilities to contribute to a complete IoT solution for their customers.
  • New value propositions for consumers. IoT is helping companies provide new hyper-relevant customer experiences and faster, more accurate services than ever before. Just think of the ever-increasing volume of holiday gift orders placed online on “Cyber Monday.” IoT is speeding up the entire fulfillment process, from ordering to delivery. Connected robots and Radio Frequency Identification (RFID) tags in the warehouse make the picking and packing process faster and more accurate. Real-time preventive maintenance systems keep delivery vehicles up and running. Telematic sensors record temperature and humidity throughout the process. So, not only can you track your order to your doorstep, your packages are delivered on time—and they arrive in optimal condition.


So, yes, IoT is real today and is already having a tremendous impact. It is gaining traction in industrial segments, logistics, transportation, and smart cities. Other industries, such as healthcare, retail, and agriculture, are following closely.

We are just beginning to understand IoT’s potential. But if you are an investor wondering where the smart money is going, one thing is certain: 10 years from now, you’ll have to look hard to find an industry that has not been transformed by IoT.

Original article here.




Google, Facebook, and Microsoft Are Remaking Themselves Around AI

2016-11-24

Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others to access vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move quicker still. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not entirely successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.
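The difference is easy to see in miniature: rather than hand-coding the rule y = 2x, a coder provides examples and lets a model recover the rule. A toy sketch of that “coaching” loop, illustrative rather than any company’s actual training code:

import numpy as np

# Examples generated by a rule the programmer never writes down: y = 2x.
X = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0                  # a single learnable weight
learning_rate = 0.01
for _ in range(500):
    error = w * X - y    # how wrong the current model is on the data
    w -= learning_rate * (2 * error * X).mean()  # nudge w to reduce the error

print(round(w, 3))  # ~2.0: the rule was learned from data, not hand-coded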

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.



Amazon Echo now talks you through 60,000 recipes

2016-11-21

Allrecipes’ Alexa skill helps you cook, even if you’re not sure what you want to make.

Believe it or not, there hasn’t really been a comprehensive recipe skill for Amazon Echo speakers. Campbell’s skill is focused on the soup brand, IFTTT integration is imperfect and Jamie Oliver’s skill won’t read cooking instructions aloud. Allrecipes might just save the day, though. It just launched an Alexa skill that guides you through cooking 60,000 meals — and importantly, helps you find something to cook in the first place. You can ask what’s possible with the ingredients you have on hand, find a quick-to-make dish or check on measurements.

When you’re in the middle of cooking, you can pause, repeat or advance steps.
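Under the hood, an Alexa skill is a web service that receives intent requests and returns JSON telling the device what to say. A generic, hypothetical handler for a “next step” intent, using the standard Alexa Skills Kit response shape (this is not Allrecipes’ actual code, and the steps are made up):

RECIPE_STEPS = ["Preheat the oven to 350F.", "Whisk the eggs.", "Fold in the flour."]

def handle_next_step(step_index):
    """Build an Alexa Skills Kit response that reads the next recipe step aloud."""
    done = step_index >= len(RECIPE_STEPS)
    text = "You're done!" if done else RECIPE_STEPS[step_index]
    return {
        "version": "1.0",
        "sessionAttributes": {"step": step_index + 1},
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": done,
        },
    }

print(handle_next_step(0)["response"]["outputSpeech"]["text"])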

The skill is free to use, and works with any device that supports Alexa skills (including Fire TV). If it works as well as promised, it might be a crucial addition. The Echo is already the quintessential kitchen speaker for many people — it’s that much more useful if it can save you from flipping through a cookbook (or a recipe app on your phone) with your flour-covered hands.

Original article here.




A $2 Billion Chip to Accelerate Artificial Intelligence

2016-11-10

A new chip design from Nvidia will allow machine-learning researchers to marshal larger collections of simulated neurons.

The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images and speech, and at learning new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

On Tuesday Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning. This technique has produced recent major advances such as the Google software AlphaGo that defeated the world’s top Go player last month (see “Five Lessons from AlphaGo’s Historic Victory”).

Deep learning involves passing data through large collections of crudely simulated neurons. The P100 could help deliver more breakthroughs by making it possible for computer scientists to feed more data to their artificial neural networks or to create larger collections of virtual neurons.
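As a rough picture of what “passing data through simulated neurons” means, the sketch below computes a single dense layer of activations in Python with NumPy. Real networks stack many such layers and learn the weights from data; chips like the P100 exist largely to make this kind of matrix arithmetic fast at scale. The sizes and random weights here are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # one input example with 4 features
W = rng.standard_normal((8, 4))   # 8 simulated neurons, each weighing the 4 inputs
b = np.zeros(8)                   # one bias term per neuron

h = np.maximum(0, W @ x + b)      # ReLU: each "neuron" either fires or stays at 0
print(h)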

Artificial neural networks have been around for decades, but deep learning only became relevant in the last five years, after researchers figured out that chips originally designed to handle video-game graphics made the technique much more powerful. Graphics processors remain crucial for deep learning, but Nvidia CEO Jen-Hsun Huang says that it is now time to make chips customized for this use case.

At a company event in San Jose, he said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang. It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia’s previous best chip.

Deep-learning researchers from Facebook, Microsoft, and other companies that Nvidia granted early access to the new chip said they expect it to accelerate their progress by allowing them to work with larger collections of neurons.

“I think we’re going to be able to go quite a bit larger than we have been able to in the past, like 30 times bigger,” said Bryan Catanzaro, who works on deep learning at the Chinese search company Baidu. Increasing the size of neural networks has previously enabled major jumps in the smartness of software. For example, last year Microsoft managed to make software that beats humans at recognizing objects in photos by creating a much larger neural network.

Huang of Nvidia said that the new chip is already in production and that he expects cloud-computing companies to start using it this year. IBM, Dell, and HP are expected to sell it inside servers starting next year.

He also unveiled a special computer for deep-learning researchers that packs together eight P100 chips with memory chips and flash hard drives. Leading academic research groups, including ones at the University of California, Berkeley, Stanford, New York University, and MIT, are being given models of that computer, known as the DGX-1, which will also be sold for $129,000.

Original article here.


standard

Artificial Intelligence Will Grow 300% in 2017

2016-11-06 - By 

Insights matter. Businesses that use artificial intelligence (AI), big data and the Internet of Things (IoT) technologies to uncover new business insights “will steal $1.2 trillion per annum from their less informed peers by 2020.” So says Forrester in a new report, “Predictions 2017: Artificial Intelligence Will Drive The Insights Revolution.”

Across all businesses, there will be a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. Through the use of cognitive interfaces into complex systems, advanced analytics, and machine learning technology, AI will provide business users access to powerful insights never before available to them. It will help, says Forrester, “drive faster business decisions in marketing, ecommerce, product management and other areas of the business by helping close the gap from insights to action.”

The combination of AI, big data, and IoT technologies will enable businesses investing in them and implementing them successfully to overcome barriers to data access and to mining useful insights. In 2017 these technologies will increase businesses’ access to data, broaden the types of data that can be analyzed, and raise the level of sophistication of the resulting insight. As a result, Forrester predicts an acceleration in the trend towards democratization of data analysis. While in 2015 it found that only 51% of data and analytics decision-makers said they were able to easily obtain data and analyze it without the help of technologists, Forrester expects this figure to rise to around 66% in 2017.

Big data technologies will mature, and vendors will increasingly integrate them with their traditional analytics platforms, which will facilitate their incorporation into existing analytics processes in a wide range of organizations. The use of a single architecture for big data convergence with agile and actionable insights will become more widespread.

The third set of technologies supporting insight-driven businesses, those associated with IoT, will also become integrated with more traditional analytics offerings and Forrester expects the number of digital analytics vendors offering IoT insights capabilities to double in 2017. This will encourage their customers to invest in networking more devices and exploring the data they produce. For example, Forrester has found that 67% of telecommunications decision-makers are considering or prioritizing developing IoT or M2M initiatives in 2017.

The increased investment in IoT will lead to new types of analytics, which in turn will lead to new business insights. Currently, much of the data generated by edge devices such as mobile phones, wearables, or cars goes unused, as “immature data and analytics practices cause most firms to squander these insights opportunities,” says Forrester. In 2016, fewer than 50% of data and analytics decision-makers had adopted location analytics, but Forrester expects the adoption of location analytics to grow to over two-thirds of businesses by the end of 2017. The resulting new insights will enable firms to optimize their customers’ experiences as they engage in the physical world with products, services and support.

In general, Forrester sees encouraging signs that more companies are investing in initiatives to get rid of existing silos of customer knowledge so they can coordinate better and drive insights throughout the entire enterprise. Specifically, Forrester sees three such initiatives becoming prominent in 2017:

Organizations with Chief Data Officers (CDOs) will become the majority in 2017, up from a global average of 47% in 2016. But to become truly insights-driven, says Forrester, “firms must eventually assign data responsibilities to CIOs and CMOs, and even CEOs, in order to drive swift business action based on data driven insights.”

Customer data management projects will increase by 75%. In 2016, for the first time, 39% of organizations have embarked on a big data initiative to support cross-channel tracking and attribution, customer journey analytics, and better segmentation. And nearly one-third indicated plans to adopt big data technologies and solutions in the next twelve months.

Forrester expects to see a marked increase in the adoption of enterprise-wide insights-driven practices as firms digitally transform their business in 2017. Leading customer intelligence practices and strategies will become “the poster child for business transformation,” says Forrester.

Longer term, according to Forrester’s “The Top Emerging Technologies To Watch: 2017 To 2021,” artificial intelligence-based services and applications will eventually change most industries and redistribute the workforce.

Original article here.


standard

The Race For AI: Google, Twitter, Intel, Apple In A Rush To Grab Artificial Intelligence Startups

2016-10-10 - By 

Nearly 140 private companies working to advance artificial intelligence technologies have been acquired since 2011, with over 40 acquisitions taking place in 2016 alone (as of 10/7/2016). Corporate giants like Google, IBM, Yahoo, Intel, Apple and Salesforce are competing in the race to acquire private AI companies, with Samsung emerging as a new entrant this month with its acquisition of startup Viv Labs, which is developing a Siri-like AI assistant.

Google has been the most prominent global player, with 11 acquisitions in the category under its belt (follow all of Google’s M&A activity here through our real-time Google acquisitions tracker).

In 2013, the corporate giant picked up deep learning and neural network startup DNNresearch from the computer science department at the University of Toronto. This acquisition reportedly helped Google make major upgrades to its image search feature. In 2014 Google acquired British company DeepMind Technologies for some $600M (Google DeepMind’s program recently beat a human world champion in the board game “Go”). This year, it acquired visual search startup Moodstocks and bot platform Api.ai.

Intel and Apple are tied for second place. The former acquired 3 startups this year alone: Itseez, Nervana Systems, and Movidius, while Apple acquired Turi and Tuplejump recently.

Twitter ranks third, with 4 major acquisitions, the most recent being image-processing startup Magic Pony.

Salesforce, which joined the race last year with the acquisition of Tempo AI, has already made two major acquisitions this year: Khosla Ventures-backed MetaMind and open-source machine-learning server PredictionIO.

We updated this timeline on 10/7/2016 to include acquirers who have made at least 2 acquisitions since 2011.

Major Acquirers In Artificial Intelligence Since 2011
Company | Date of Acquisition | Acquirer
Hunch | 11/21/2011 | eBay
Cleversense | 12/14/2011 | Google
Face.com | 5/29/2012 | Facebook
DNNresearch | 3/13/2013 | Google
Netbreeze | 3/20/2013 | Microsoft
Causata | 8/7/2013 | NICE
Indisys | 8/25/2013 | Yahoo
IQ Engines | 9/13/2013 | Intel
LookFlow | 10/23/2013 | Yahoo
SkyPhrase | 12/2/2013 | Yahoo
Gravity | 1/23/2014 | AOL
DeepMind | 1/27/2014 | Google
Convertro | 5/6/2014 | AOL
Cogenea | 5/20/2014 | IBM
Desti | 5/30/2014 | Nokia
Medio Systems | 6/12/2014 | Nokia
Madbits | 7/30/2014 | Twitter
Emu | 8/6/2014 | Google
Jetpac | 8/16/2014 | Google
Dark Blue Labs | 10/23/2014 | DeepMind
Vision Factory | 10/23/2014 | DeepMind
Wit.ai | 1/5/2015 | Facebook
Equivio | 1/20/2015 | Microsoft
Granata Decision Systems | 1/23/2015 | Google
AlchemyAPI | 3/4/2015 | IBM
Explorys | 4/13/2015 | IBM
TellApart | 4/28/2015 | Twitter
Timeful | 5/4/2015 | Google
Tempo AI | 5/29/2015 | Salesforce
Sociocast | 6/9/2015 | AOL
Whetlab | 6/17/2015 | Twitter
Orbeus | 10/1/2015 | Amazon
Vocal IQ | 10/2/2015 | Apple
Perceptio | 10/6/2015 | Apple
Saffron | 10/26/2015 | Intel
Emotient | 1/7/2016 | Apple
Nexidia | 1/11/2016 | NICE
PredictionIO | 2/19/2016 | Salesforce
MetaMind | 4/4/2016 | Salesforce
Crosswise | 4/14/2016 | Oracle
Expertmaker | 5/5/2016 | eBay
Itseez | 5/27/2016 | Intel
Magic Pony | 6/20/2016 | Twitter
Moodstocks | 7/6/2016 | Google
SalesPredict | 7/11/2016 | eBay
Turi | 8/5/2016 | Apple
Nervana Systems | 8/9/2016 | Intel
Genee | 8/22/2016 | Microsoft
Movidius | 9/6/2016 | Intel
Palerra | 9/19/2016 | Oracle
Api.ai | 9/19/2016 | Google
Angel.ai | 9/20/2016 | Amazon
tuplejump | 9/22/2016 | Apple

Original article here.


standard

IBM Research Takes Watson to Hollywood with the First “Cognitive Movie Trailer”

2016-10-03 - By 

How do you create a movie trailer about an artificially enhanced human?

You turn to the real thing – artificial intelligence.

20th Century Fox has partnered with IBM Research to develop the first-ever “cognitive movie trailer” for its upcoming suspense/horror film, “Morgan”. Fox wanted to explore using artificial intelligence (AI) to create a horror movie trailer that would keep audiences on the edge of their seats.

Movies, especially horror movies, are incredibly subjective. Think about the scariest movie you know (for me, it’s the 1976 movie, “The Omen”). I can almost guarantee that if you ask the person next to you, they’ll have a different answer. There are patterns and types of emotions in horror movies that resonate differently with each viewer, and the intricacies and interrelation of these are what an AI system would have to identify and understand in order to create a compelling movie trailer. Our team was faced with the challenge of not only teaching a system to understand “what is scary”, but of then creating a trailer that a majority of viewers would consider “frightening and suspenseful”.

As with any AI system, the first step was training it to understand a subject area. Using machine learning techniques and experimental Watson APIs, our Research team trained a system on the trailers of 100 horror movies by segmenting out each scene from the trailers. Once each trailer was segmented into “moments”, the system completed the following:

1)   A visual analysis and identification of the people, objects and scenery. Each scene was tagged with an emotion from a broad bank of 24 different emotions and labels from across 22,000 scene categories, such as eerie, frightening and loving;

2)   An audio analysis of the ambient sounds (such as the character’s tone of voice and the musical score), to understand the sentiments associated with each of those scenes;

3)   An analysis of each scene’s composition (such as the location of the shot, the image framing and the lighting), to categorize the types of locations and shots that traditionally make up suspense/horror movie trailers.

The analysis was performed on each area separately and in combination with each other using statistical approaches. The system now “understands” the types of scenes that categorically fit into the structure of a suspense/horror movie trailer.
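The structure of that three-part analysis can be sketched in code. Everything below is schematic, with stub implementations: IBM has not published the Watson models behind this project, so the function names and returned labels are hypothetical placeholders for the real feature extractors.

from dataclasses import dataclass

@dataclass
class Moment:
    start: float  # seconds into the film
    end: float

def tag_visuals(m):
    return "eerie"                 # stub for step 1: people, objects, scenery -> emotion label

def tag_audio(m):
    return "tense"                 # stub for step 2: score and tone of voice -> sentiment

def tag_composition(m):
    return "close-up, low light"   # stub for step 3: location, framing, lighting

def analyze(moments):
    # Combine the three per-scene analyses, mirroring steps 1-3 above.
    return [{"moment": m,
             "emotion": tag_visuals(m),
             "sentiment": tag_audio(m),
             "composition": tag_composition(m)} for m in moments]

print(analyze([Moment(0.0, 4.2), Moment(4.2, 9.8)]))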

Then, it was time for the real test. We fed the system the full-length feature film, “Morgan”. After the system “watched” the movie, it identified 10 moments that would be the best candidates for a trailer. In this case, these happened to reflect tender or suspenseful moments. If we were working with a different movie, perhaps “The Omen”, it might have selected different types of scenes. If we were working with a comedy, it would have a different set of parameters to select different types of moments.

It’s important to note that there is no “ground truth” with creative projects like this one. Neither our team nor the Fox team knew exactly what we were looking for before we started the process. Based on our training and testing of the system, we knew that tender and suspenseful scenes would be short-listed, but we didn’t know which ones the system would pick to create a complete trailer. As most creative projects go, we thought, “we’ll know it when we see it.”

Our system could select the moments, but it’s not an editor. We partnered with a resident IBM filmmaker to arrange and edit each of the moments together into a comprehensive trailer. You’ll see his expertise in the addition of black title cards, the musical overlay and the order of moments in the trailer.

Not surprisingly, our system chose some moments in the movie that were not included in other “Morgan” trailers. The system allowed us to look at moments in the movie in different ways – moments that might not have traditionally made the cut were now short-listed as candidates. On the other hand, when we reviewed all the scenes that our system selected, one didn’t seem to fit with the bigger story we were trying to tell – so we decided not to use it. Even Watson sometimes ends up with footage on the cutting room floor!

Traditionally, creating a movie trailer is a labor-intensive, completely manual process. Teams have to sort through hours of footage and manually select each and every potential candidate moment. This process is expensive and time-consuming, taking anywhere between 10 and 30 days to complete.

From a 90-minute movie, our system provided our filmmaker a total of six minutes of footage. From the moment our system watched “Morgan” for the first time, to the moment our filmmaker finished the final editing, the entire process took about 24 hours.

Reducing the time of a process from weeks to hours – that is the true power of AI.

The combination of machine intelligence and human expertise is a powerful one. This research investigation is simply the first of many into what we hope will be a promising area of machine and human creativity. We don’t have the only solution for this challenge, but we’re excited about pushing the possibilities of how AI can augment the expertise and creativity of individuals.

AI is being put to work across a variety of industries; helping scientists discover promising treatment pathways to fight diseases or helping law experts discover connections between cases. Film making is just one more example of how cognitive computing systems can help people make new discoveries.

Original article here.


standard

IBM, Google, Facebook, Microsoft, Amazon form enormous AI partnership

2016-09-29 - By 

On Wednesday, the world learned of a new industry association called the Partnership on Artificial Intelligence, and it includes some of the biggest tech companies in the world. IBM, Google, Facebook, Microsoft, and Amazon have all signed on as marquee members, though the group hopes to expand even further over time. The goal is to create a body that can provide a platform for discussions among stakeholders and work out best practices for the artificial intelligence industry. Not directly mentioned, but easily seen on the horizon, is its place as the primary force lobbying for smarter legislation on AI and related future-tech issues.

Best practices can be boring or important, depending on the context, and in this case they are very, very important. Best practices could provide a framework for accurate safety testing, which will be important as researchers ask people to put more and more of their lives in the hands of AI and AI-driven robots. This sort of effort might also someday work toward a list of inherently dangerous and illegitimate actions or AI “thought” processes. One of its core goals is to produce thought leadership on the ethics of AI development.

So, this could end up being the bureaucracy that produces our earliest laws of robotics, if not the one that enforces them. The word “law” is usually used metaphorically in robotics. But with access to the lobbying power of companies like Google and Microsoft, we should expect the Partnership on AI to wade into discussions of real laws soon enough. For instance, the specifics of regulations governing self-driving car technology could still determine which would-be software standard will hit the market first. With the founding of this group, Google has put itself in a position to perhaps direct that regulation for its own benefit.

But, boy, is that ever not how they want you to see it. The group is putting in a really ostentatious level of effort to assure the world it’s not just a bunch of technology super-corps determining the future of mankind, like some sort of cyber-Bilderberg Group. The group’s website makes it clear that it will have “equal representation for corporate and non-corporate members on the board,” and that it “will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts.”

Well, it’s one thing to say that, and quite another to live it. It remains to be seen whether the group will actually comport itself as it will need to if it wants real support from the best minds in open source development. The Elon Musk-associated non-profit research company OpenAI, for its part, responded to the announcement with a rather passive-aggressive word of encouragement.

The effort to include non-profits and other non-corporate bodies makes perfect sense. There aren’t many areas in software engineering where you can claim to be the definitive authority if you don’t have the public on board. Microsoft, in particular, is painfully aware of how hard it is to push a proprietary standard without the support of the open-source community. Not only will its own research be stronger and more diverse for incorporating the “crowd,” but any recommendations it makes will carry more weight with government and far more weight with the public.

That’s why it’s so notable that some major players are absent from this early roll call — most notably Apple and Intel. Apple has long been known to be secretive about its AI research, even to the point of hurting its own competitiveness, while Intel has a history of treating AI as an unwelcome distraction. Neither approach is going to win the day, though there is an argument to be made that by remaining outside the group, Apple can still selfishly consume any insights it releases to the public.

Leaving such questions of business ethics aside, robot ethics remains a pressing problem. Self-driving cars illustrate exactly why. The classic thought experiment involves a crowded freeway tunnel with no time to brake: seeing a crash ahead, your car must decide whether to swerve left and crash you into a pillar, or swerve right and save itself while forcing the car beside you off the road. What is moral in this situation? Would your answer change if the other car was carrying a family of five?

Right now these questions are purely academic. The formation of groups like this shows they might not remain so for long.

Original article here.


standard

Investing in AI offers more rewards than risks

2016-09-27 - By 

It’s difficult to predict how artificial intelligence technology will change over the next 10 to 20 years, but there are plenty of gains to be made. By 2018, robots will supervise more than 3 million human workers; by 2020, smart machines will be a top investment priority for more than 30 percent of CIOs.

Everything from journalism to customer service is already being replaced by AI that’s increasingly able to replicate the experience and ability of humans. What was once seen as the future of technology is already here, and the only question left is how it will be implemented in the mass market.

Over time, the insights gleaned from the industries currently taking advantage of AI — and improving the technology along the way — will make it ever more robust and useful within a growing range of applications. Organizations that can afford to invest heavily in AI are now creating the momentum for even more to follow suit; those that can’t will find their niches in AI at risk of being left behind.

Risk versus reward

While some may argue it’s impossible to predict whether the risks of AI applications to business are greater than the rewards (or vice versa), analysts predict that by 2020, 5 percent of all economic transactions will be handled by autonomous software agents.

The future of AI depends on companies willing to take the plunge and invest, no matter the challenge, to research the technology and fund its continued development. Some are even doing it by accident, like the company that paid a programmer more than half a million dollars over six years, only to learn he automated his own job.

Many of the AI advancements are coming from the military. The U.S. government alone has requested $4.6 billion in drone funding for next year, as autonomous drones are set to replace the human-piloted drones currently used in the field. AI drones simply need to be given a destination and they’ll be able to dodge air defenses and reach their destinations on their own, while any lethal decisions are still made by humans.

On the academic side, institutions like the Massachusetts Institute of Technology and the University of Oxford are hard at work mapping the human brain and attempting to emulate it. This provides two different pathways — creating an AI that replicates the complexities of the human brain and emulating an actual human brain, which comes with a slew of ethical questions and concerns. For example, what rights does an AI have? And what happens if the server storing your emulated loved one is shut down?

While these questions remain unanswered, eventually, the proven benefits of AI systems for all industries will spur major players from all sectors of the economy to engage with it. It should be obvious to anyone that, just as current information technology is now indispensable to practically every industry in existence, artificial intelligence will be, as well.

The future of computation

Until now, AI has mostly been about crafting preprogrammed tools for specific functions, and these have been markedly rigid. These kinds of AI-based computing strategies have become commonplace. The future of AI will depend on true learning. In other words, AI will no longer have to rely on being given direct commands to understand what it’s being told to do.

Currently, we use GPS systems that depend on automated perception and learning, mobile devices that can interpret speech and search engines that are learning to interpret our intentions. True learning, specifically, is what makes developments like Google’s DeepMind and IBM’s Watson the next step in AI.

DeepMind wasn’t programmed with knowledge — there are no handcrafted programs or specific modules for given tasks. DeepMind is designed to learn automatically. The system is specifically crafted for generality so that the end result will be emergent properties. Emergent properties, such as the ability to beat grandmaster-level Go players, are incalculably more impressive when you realize no one programmed DeepMind to do any of it.

Traditional AI is narrow and can only do what it is programmed to know, but Olli, an automated car powered by Watson, learns from monitoring and interacting with passengers. Each time a new passenger requests a recommendation or destination, it stores this information for use with the next person. New sensors are constantly added, and the vehicle (like a human driver) continuously becomes more intelligent as it does its job.

But will these AI systems be able to do what companies like Google really want them to do, like predict the buying habits of end users better than existing recommendation software? Or optimize supply chain transactions dynamically by relating patterns from the past? That’s where the real money is, and it’s a significantly more complex problem than playing games, driving and completing repetitive tasks.

The current proof points from various AI platforms — like finding fashion mistakes or predicting health problems — clearly indicate that AI is expanding, and these more complicated tasks will become a reality in the near-term horizon.

Soon, AI will be able to mimic complex human decision-making processes, such as giving investment advice or providing prescriptions to patients. In fact, with continuous improvement in true learning, first-tier support positions and more dangerous jobs (such as truck driving) will be completely taken over by robotics, leading to a new Industrial Revolution where humans will be freed up to solve problems instead of doing repetitious business processes.

The price of not investing in AI

The benefits and risks of investment are nebulous, uncertain and a matter for speculation. The one known risk common to all things new in business is uncertainty itself. So the risks mainly come in the form of making a bad investment, which is nothing new to the world of finance.

So as with all things strange and new, the prevailing wisdom is that the risk of being left behind is far greater, and far grimmer, than the benefits of playing it safe.

Original article here.

 


standard

5 ways artificial intelligence will change enterprise IT

2016-09-12 - By 

It’s been a busy summer in the artificial intelligence (A.I.) space, but the most interesting A.I. opportunities may not come from the biggest names.

You may have heard about Tesla’s self-driving cars that made headlines twice, for vastly different reasons — a fatal crash in Florida in which the driver was using the Autopilot software, and claims by a Missouri man that the feature drove him 20 miles to a hospital after he suffered a heart attack, saving his life.

Or you might have heard of Apple spending $200 million to acquire machine learning and A.I. startup Turi. A smart drone defeated an experienced Air Force pilot in flight simulation tests. IBM’s Watson diagnosed a 60-year-old woman’s rare form of leukemia within 10 minutes, after doctors had been stumped for months.

 

But believe it or not, enterprise IT is also a fertile ground for A.I. In fact, some of the most immediate and profound use cases for A.I. will come as companies increasingly integrate it into their data centers and development organizations to automate processes that have been done manually for decades.

Here are five examples:

1. Predicting software failures

Research at Harvard University’s T.H. Chan School of Public Health shows it may one day be possible to use A.I. algorithms to evaluate a person’s risk of cardiovascular disease. The study is testing whether these algorithms can draw connections between how patients describe their symptoms and the likelihood that they have the disease. The algorithms could lead to the development of a lexicon to interpret symptoms better and make more accurate diagnoses faster.

In a similar way, A.I. algorithms will be able to review and understand log files throughout the IT infrastructure in ways that are impossible with traditional methods, and they could predict a system crash minutes or hours before a human notices anything is wrong.
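One plausible shape for such a system is unsupervised anomaly detection over simple per-minute log statistics. The sketch below uses scikit-learn’s IsolationForest; the three features and the synthetic numbers are invented for illustration, not drawn from any production monitor.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Per-minute features mined from logs: [error lines, mean response ms, total lines]
normal_minutes = rng.normal(loc=[2, 120, 500], scale=[1, 15, 40], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_minutes)          # learn what "healthy" minutes look like

latest = np.array([[40, 900, 3200]])  # a sudden spike in errors and latency
print(detector.predict(latest))       # -1 flags an anomaly: time to page an engineer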

2. Detecting cybersecurity issues

The list of high-profile cyberattacks over the last couple of years keeps growing and includes the theft of the confidential data of tens of millions of Target customers during the height of the 2013 holiday shopping season, the server breach at the U.S. Office of Personnel Management that compromised the sensitive personal information of about 21.5 million people, and the recent infiltration of the computer network of the Democratic National Committee by Russian government hackers.

Artificial intelligence holds great promise with its ability to learn patterns of networks, devices, and systems and decode deviations that could reveal in-progress attacks. A crop of startups is focused on these approaches, including StackPath — founded by entrepreneur Lance Crosby, who sold his previous company, cloud infrastructure startup SoftLayer, to IBM in 2013 for $2 billion.

The Defense Advanced Research Projects Agency, or DARPA (the agency that helped create the internet), recently sponsored a contest in which seven fully autonomous A.I. bots found security vulnerabilities hiding in vast amounts of code. The next day, the winning bot was invited to compete against the best human hackers in the world. It beat some of the human teams at various points in the competition.

A.I. just might hold the key to finally beating the hackers.

3. Creating super-programmers

The fictional superhero Tony Stark in the “Iron Man” movies relies on a powered suit of armor to protect the world. A.I. could offer similar capabilities to just-out-of-college software developers.

We all know Siri. Under the hood, she’s an A.I. neural network trained on vast amounts of human language. When we ask her directions to McDonald’s (not that I would admit that I do that sort of thing), she “understands” the intention behind the English words.

Imagine a neural network trained to understand all the source code stored in GitHub. That’s tens of millions of lines of code. Or what about the entire history of projects as robust as the Linux operating system? What if Siri could “understand” the intention behind any piece of code?

Just as Tony Stark depends on technology to get his job done, ordinary programmers will be able to turn to A.I. to help them do their jobs far better than they could on their own.

4. Making sense of the Internet of Things

Recent research forecasts that A.I. and machine learning in Big Data and IoT will reach $18.5 billion by 2021. It’s no wonder. The idea of devices, buildings, and everyday objects being interconnected to make them smarter and more responsive brings with it unprecedented complexity.

It’s a data problem. As IoT progresses, the amount of unstructured machine data will far exceed our ability to make sense of it with traditional methods.

Organizations will have to turn to A.I. for help in culling these billions of data points for actionable insights.

5. Robots in data centers

Ever seen this cool video of robots working in an Amazon distribution center? The same is coming to large corporate data centers. Yes, physical robots will handle maintenance tasks such as swapping out server racks.

According to a news report, companies such as IBM and EMC are already using iRobot Create, a customizable version of Roomba, to deploy robots that zoom around data centers and keep track of environmental factors such as temperature, humidity, and airflow.

Self-driving cars are far from the only advances pushing A.I. boundaries. The innovations in enterprise IT may be happening behind the scenes, but they’re no less dramatic.


standard

Intel targets IoT machine vision firm Movidius

2016-09-06 - By 

Intel has moved to buy machine vision technology developer Movidius.

The attraction for Intel is Movidius’ capability to add low-power vision processing to IoT-enabled devices and autonomous machines.

Movidius has European design centres in Dublin and Romania.

Josh Walden, general manager of Intel’s New Technology Group, writes:

“The ability to track, navigate, map and recognize both scenes and objects using Movidius’ low-power and high-performance SoCs opens opportunities in areas where heat, battery life and form factors are key. Specifically, we will look to deploy the technology across our efforts in augmented, virtual and merged reality (AR/VR/MR), drones, robotics, digital security cameras and beyond.”

Its Myriad 2 family of Vision Processor Units (VPUs) is based on a sub-1W processing architecture, backed by a memory subsystem capable of feeding the processor array, as well as hardware acceleration to support large-scale operations.

Remi El-Ouazzane, CEO of Movidius, writes:

“As part of Intel, we’ll remain focused on this mission, but with the technology and resources to innovate faster and execute at scale. We will continue to operate with the same eagerness to invent and the same customer-focus attitude that we’re known for, and we will retain Movidius talent and the start-up mentality that we have demonstrated over the years.”

Its customers include DJI, FLIR, Google and Lenovo, which use its IP and devices in drones, security cameras and AR/VR headsets.

“When computers can see, they can become autonomous and that’s just the beginning. We’re on the cusp of big breakthroughs in artificial intelligence. In the years ahead, we’ll see new types of autonomous machines with more advanced capabilities as we make progress on one of the most difficult challenges of AI: getting our devices not just to see, but also to think,” said El-Ouazzane.

The terms of the deal have not been published.

Original article here.


standard

Scientists look at how A.I. will change our lives by 2030

2016-09-05 - By 

Transportation, education, home security will all be affected by smart machines

By the year 2030, artificial intelligence (A.I.) will have changed the way we travel to work and to parties, how we take care of our health and how our kids are educated.

That’s the consensus from a panel of academic and technology experts taking part in Stanford University’s One Hundred Year Study on Artificial Intelligence.

Focused on trying to foresee the advances coming to A.I., as well as the ethical challenges they’ll bring, the panel yesterday released its first study.

The 28,000-word report, “Artificial Intelligence and Life in 2030,” looks at eight categories — from employment to healthcare, security, entertainment, education, service robots, transportation and poor communities — and tries to predict how smart technologies will affect urban life.

“We believe specialized A.I. applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts, said in a written statement. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of A.I. are broadly shared.”

A.I. has taken it on the chin in recent years, with industry figures like physicist Stephen Hawking and high-tech entrepreneur Elon Musk decrying the societal dangers of the technology.

Unlike Musk, who equated developing A.I. with summoning a demon, the A.I. report issued this week shows that scientists anticipate some problems but also numerous benefits with advancing the technology.

“A.I. technologies can be reliable and broadly beneficial,” said Barbara Grosz, a Harvard computer scientist and chair of the AI100 committee. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

In the study, researchers said that when it comes to A.I. and transportation, autonomous vehicles and even aerial delivery drones could change both travel and life patterns in cities. The study also notes that home service robots won’t just clean but will offer security, while smart sensors will monitor people’s blood sugar and organ functions, robotic tutors will augment human instruction and A.I. will lead to new ways to deliver media in more interactive ways.

And while the report also notes that A.I. could improve services like food distribution in poor neighborhoods and analyze crime patterns, it’s not all positive.

For instance, A.I. could lead to significant changes in the workforce as robots and smart systems take over base-level jobs, like moving inventory in warehouses, scheduling meetings and even offering some financial advice.

However, A.I. also will open up new jobs, such as data analysts who will be needed to make sense of all the new information computers are amassing.

The study noted that work should begin now on figuring out how to help people adapt as the economy undergoes rapid changes and existing jobs are lost and new ones created.

“Until now, most of what is known about A.I. comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how A.I. technologies are likely to affect society.”

Original article here.

 

standard

Open-source CognizeR lets data scientists easily access IBM’s Watson tools.

2016-08-12 - By 

Data scientists using the programming language R will soon find it much easier to leverage the tools and services offered by IBM Watson. On Thursday, IBM Watson and Columbus Collaboratory partnered on the release of CognizeR, an open-source R extension that makes it easier to use Watson from a native development environment.

Typically, Watson services like Watson Language Translation or Visual Recognition are leveraged via API calls that have to be coded in Java or Python. Now, using the CognizeR extension, data scientists can tap into Watson directly from their R environment.

“CognizeR offers easier access to a variety of Watson’s artificial intelligence services that can enhance the performance of predictive models developed in R, an open-source language widely used by data scientists for statistical and analytics applications,” a press release said.

Watson, of course, is IBM’s cognitive computing arm, and Columbus Collaboratory is a team of companies that work together on analytics and cybersecurity solutions. According to the press release, a big reason for the development of CognizeR was to help with the adoption of cognitive computing in the founding companies of Columbus Collaboratory, which include Battelle, Cardinal Health, Nationwide, American Electric Power, OhioHealth, Huntington, and L Brands.

In the press release announcing the extension, IBM cited IDC figures that said that less than 1% of all data, especially unstructured data like chats, emails, social media, and images, is properly analyzed. The team is hoping that CognizeR can help improve predictive models and analyze more data.

Currently, data scientists will be able to use Watson Language Translation, Personality Insights, Tone Analyzer, Speech to Text, Text to Speech, and Visual Recognition through CognizeR. More services will be added based on user feedback.

“As we collect feedback, we’ll be able to continually improve the experience by adding the cognitive services that data scientists want and need the most,” said Shivakumar Vaithyanathan, IBM Fellow and Director, Watson Content Services.

R is one of the most widely used languages in statistics and big data today. By making Watson more readily available to data scientists using the language, IBM is positioning Watson more strongly with its enterprise audience.

CognizeR can be downloaded on GitHub.

Original Source
standard

OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free

2016-08-07 - By 

THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.

But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.

How many dollars is “borderline crazy”? Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.

OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.

Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”

That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.

This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
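That toolkit was OpenAI Gym, and its core abstraction is an environment an agent acts in, step by step. Here is a minimal interaction loop in Python against the early Gym API; the random action is a placeholder where a learning algorithm would go.

import gym

env = gym.make("CartPole-v0")  # a simple pole-balancing task
for episode in range(3):
    observation = env.reset()  # start a fresh episode
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()  # random placeholder policy
        observation, reward, done, info = env.step(action)
        total_reward += reward
    print("episode", episode, "reward", total_reward)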

But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

AI Everywhere

Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.

But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but to expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.

OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.

Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”

Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle

Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.

Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”

By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”

The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.

The Limits of Openness

All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”

According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”

Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.

But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem

When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.

But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.

He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check for the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”

But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”

“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.

Like Xerox PARC

In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.

Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called “unsupervised learning”—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.
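To ground the reinforcement-learning half of that description, here is a toy tabular Q-learning loop in Python: the agent repeats the task hundreds of times and keeps a running estimate of which action pays off in each state. The five-state corridor and all the constants are invented for illustration; this is the textbook algorithm, not OpenAI's code.

import random

n_states, n_actions = 5, 2             # actions: 0 = left, 1 = right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration rate

for episode in range(500):
    state = 0
    while state != n_states - 1:       # the reward sits at the rightmost state
        if random.random() < epsilon:
            action = random.randrange(n_actions)                       # explore
        else:
            action = max(range(n_actions), key=lambda a: Q[state][a])  # exploit
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge the estimate toward reward plus discounted best future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print([max(range(n_actions), key=lambda a: Q[s][a]) for s in range(n_states)])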

“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around—or reading books,” Brockman says.
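The supervised/unsupervised distinction Brockman draws can also be made concrete. The following is a minimal sketch using scikit-learn, with made-up feature vectors standing in for image data; it is not anything OpenAI has released. The supervised model cannot train without the human-provided labels, while the clustering step must find structure entirely on its own.

```python
# A toy contrast between supervised and unsupervised learning.
# The "features" are random stand-ins for image data, and the labels
# are synthetic; requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 16))       # stand-in for image features
labels = (features[:, 0] > 0).astype(int)   # human-provided "cat"/"not cat" tags

# Supervised: the labels steer the training, as in today's deep learning.
clf = LogisticRegression().fit(features, labels)
print(clf.predict(features[:5]))

# Unsupervised: no labels at all; the algorithm must find structure itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(clusters[:5])
```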

He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.

This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”

Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.

Original article here.


standard

IBM Watson: Machine-Of-All-Trades

2016-07-28 - By 

From fashion to food to healthcare, IBM’s Watson has many guises across different industries. Here’s a look at some of the work IBM’s AI system has been doing since its Jeopardy! heyday.

After defeating Brad Rutter and Ken Jennings in a game of Jeopardy! in 2011, IBM’s Watson couldn’t survive on its $77,147 in winnings. Unlike Microsoft’s Cortana and Apple’s Siri, Watson lacked a parent willing to let it continue living in the basement rent-free, so it got a paying job in healthcare, helping insurer WellPoint and doctors by providing treatment advice.

Since then, and following investments of more than $1 billion, Watson has become a machine-of-all-trades. Through a combination of machine learning, natural language processing, and a variety of other technologies, Watson is helping companies across a broad spectrum of businesses. Beyond healthcare, Watson earns its keep in fashion, hospitality, food, gaming, retail, financial services, and veterinary medicine.

Its latest engagement involves protecting computers from its own kind. On Tuesday, IBM announced Watson for Cyber Security, a role that comes with residence in the cloud, instead of the Power 750 systems it inhabits on corporate premises.

This fall, Watson, with the assistance of researchers at eight universities, will begin learning to recognize cyber-security threats in the hope that its cognitive capabilities will help identify malicious code and formulate mitigation strategies. The core of its training data will come from IBM’s X-Force research library, which includes data on 8 million spam and phishing attacks, as well as more than 100,000 vulnerabilities.

IBM believes that Watson’s ability to understand unstructured data makes it well-suited for malware hunting. The firm said that 80% of all Internet data is unstructured, and that the typical organization makes use of only about 8% of this data. Given AI’s already considerable role in fraud detection, it wouldn’t be surprising to see Watson excel in cyber-security.
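IBM hasn’t detailed how Watson models this data, so the following is only an illustrative sketch of the general technique the article describes: training a classifier on labeled, unstructured security text. It uses scikit-learn, and the tiny corpus below is invented for illustration, not drawn from X-Force.

```python
# A minimal, hypothetical sketch of learning from labeled security text.
# Watson's internals are proprietary; this only shows the general idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Phishing email spoofing a bank login page",              # invented examples,
    "Routine patch notes for the quarterly release",          # not X-Force data
    "Malware beaconing to a known command-and-control host",
    "Minutes from the weekly infrastructure meeting",
]
labels = [1, 0, 1, 0]  # 1 = security threat, 0 = benign

# Turn free text into term-weight vectors, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)

print(model.predict(["Suspicious attachment executing an unsigned binary"]))
```

At realistic scale, the labeled corpus would be millions of documents rather than four, but the pipeline shape is the same: vectorize unstructured text, then learn a decision boundary from human-labeled examples.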

Marc van Zadelhoff, general manager of IBM Security, sees Watson as an answer to the cyber-security talent shortage. “Even if the industry was able to fill the estimated 1.5 million open cyber-security jobs by 2020, we’d still have a skills crisis in security,” he said in a statement.

So robots, of a sort, are taking jobs. But this may be for the best, since the work of processing more than 15,000 security documents a month, according to IBM, would be rather a chore.

At the same time, Watson will benefit from free labor, in the form of job training provided by students at the eight universities involved with the project.

You, too, may see Watson or something similar working in your industry. It won’t be a particularly social colleague, but at least it will get you those reports you need on time.

To take a look at several of the many faces of Watson, go here.

Original article here.

