

The Root Cause of IoT Skepticism

2017-12-28 - By 

It’s healthy to be skeptical of new ideas, but let’s take a look at the philosophy that might be holding back the biggest advancement in technology in a century.

In roughly 13 years (the target set by NASA), or as few as 7 years (the target set by Elon Musk’s SpaceX), humans are expected to set foot on Mars and become a truly space-faring species.

We live in exciting times, riding a wave of Continuous Imagination empowering Continuous Innovation. Every product in every domain is undergoing a sea change: adding new features, releasing them faster than competitors, and adapting to an ever-increasing rate of technological substitution. Yet most of these feature improvements and product launches are not guided by new requirements from customers. In the face of stiff competition that gets stiffer by the day, evolution and adaptation are the only natural process of survival and winning. As Charles Darwin would put it, it is “survival of the fittest.”

But as history suggests, there are and there always will be skeptics among us who will doubt every action that deviates from convention – like those who doubt climate change, the need to explore the unexplored, and the need to change.

A human mind exposed to scientific education exhibits skepticism and pragmatism over dogmatism and remains largely technology-agnostic, validating everything with knowledge and reasoning before accepting new ideas. But human progress has always come from philosophical insights: imaginings that led to the discovery or invention of new things. Technological progress has only turned science fiction (read: philosophy) into scientific fact.

With the above premises in mind, in this article, we intend to explore the realm of IoT, its implications on our lives, and our own limitations in foreseeing the imminent future as companies and customers.

We Understand the Internet, but not IoT

IoT, the Internet of Things (or Objects), denotes the entire network of Internet-connected devices: vehicles, home and office appliances, and industrial machinery embedded with electronics, software, sensors and actuators, together with the wired, Wi-Fi and RFID connectivity that enables these objects to connect and exchange data. The benefits of this new ubiquitous connectivity will be reaped by everyone, and we will, for the first time, be able to hear and feel the heartbeat of the Earth.

For example, as cows, pigs, water pipes, people, and even shoes, trees, and animals become connected to IoT, farmers will have greater control over diseases affecting milk and meat production through the availability of real-time data and analytics. It is estimated that, on average, each connected cow will generate around 200 MB of data every month.

According to Cisco, back in 2003 the penetration of the Internet and of connected devices per person was really low, but it grew at an exponential rate, doubling every 5.32 years, a dynamic similar to Moore’s Law. Between 2008 and 2009, with the advent of smartphones, these figures rocketed, and it was predicted that 50 billion connected devices would be in use by the year 2020. Thus IoT was born, and it is already in its adolescent phase.

 

Today, IoT is well underway. Initiatives such as Cisco’s Planetary Skin, smart grids, intelligent vehicles, HP’s Central Nervous System for the Earth (CeNSE), and smart dust have the potential to add millions — even billions — of sensors to the Internet.

But just as during the social media explosion, the new age of IoT, with its connected devices, connected machines, connected cars, connected patients, connected consumers and connected networks of Things, will need new collaboration tools, new software, new database technologies and new infrastructure to accommodate, store and analyze the huge amounts of data that will be generated. That is where a host of emerging technologies, including graph databases, Big Data platforms, microservices and so on, come in.

But that’s not all.

The Internet of Things will also require IoE, the Integration of Everything, for meaningful interaction between devices and providers.

As Kai Wähner of TIBCO discusses in his presentation “Microservices: Death of the Enterprise Service Bus,” microservices and API-led connectivity are ideally matched to meet integration challenges in the foreseeable future. MuleSoft’s “Anypoint Platform for APIs” backed by Cisco, Bosch’s IoT platform, and the upcoming API management suite from Kovair all point in this direction and will help power the IoT revolution.

The explosion of connected devices, each requiring its own IP address, had already exhausted what was available under IPv4 by 2010 and made immediate IPv6 implementation necessary. In addition to opening up vastly more IP addresses, IPv6 will also suffice for intra-planetary communication for a much longer period. Governments and the World Wide Web Consortium remained laggards, skeptical of IPv6 implementation, and allowed the exhaustion of IP addresses.
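As a rough illustration of the scale involved (my own back-of-the-envelope sketch, not figures from the article; the 50-billion-device figure is the Cisco prediction cited above), the two address spaces can be compared directly:

```python
# Back-of-the-envelope comparison of IPv4 vs. IPv6 address space (illustrative only).
ipv4_addresses = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6_addresses = 2 ** 128   # 128-bit addresses: about 3.4e38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:.3e} addresses")

# 50 billion connected devices already dwarfs IPv4, but is a vanishing
# fraction of what IPv6 can address.
devices_2020 = 50_000_000_000
print(f"IPv4 shortfall: {devices_2020 / ipv4_addresses:.1f}x over capacity")
print(f"IPv6 usage:     {devices_2020 / ipv6_addresses:.2e} of capacity")
```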

But it hasn’t been just governments. Bureaucratic, large, technology-driven organizations like Amazon, Google and Facebook can remain skeptics in disguise and continue to resist movements such as net neutrality, ZeroNet, blockchain technology and IPFS (the InterPlanetary File System, an alternative to the cumbersome HTTP), because they fear their monopolies will be challenged.

Conclusion

We humans, engaged in different capacities as company executives, consumers, government officials, or technology evangelists, are facing the debate of skepticism vs. futurism and will continue to doubt IoT — embracing it only incrementally until we see true, widespread benefits from it.

And we can see how our skepticism has worked against recognition and advancement.

After remaining skeptical for 120 years, the IEEE finally recognized the pioneering work done by the Indian physicist J.C. Bose during colonial rule and conferred on him the title of “Father of Telecommunication”. The millimetre-wave transmission he demonstrated in his 1895 experiment in Kolkata is the foundation of the 5G mobile networks that scientists and technologists across the world are now working to build, networks that will provide the backbone for IoT.

Finally, we leave it to the reader’s imagination about the not-so-distant future, when all the connected devices in IoT begin to pass the Turing Test.

 

Original article here.

 



70 Amazing Free Data Sources You Should Know

2017-12-20 - By 

70 free data sources for 2017 on government, crime, health, financial and economic data, marketing and social media, journalism and media, real estate, company directory and review, and more to start working on your data projects.

Every great data visualization starts with good, clean data. Most people believe that collecting big data is a rough undertaking, but that is simply not true. There are thousands of free data sets available online, ready to be analyzed and visualized by anyone. Here we’ve rounded up 70 free data sources for 2017 covering government, crime, health, financial and economic data, marketing and social media, journalism and media, real estate, company directories and reviews, and more.

We hope you enjoy the list and that it saves you a lot of time and energy otherwise spent searching blindly online.

Free Data Source: Government

  1. Data.gov: The first stop, acting as a portal to all sorts of amazing information on everything from climate to crime, provided freely by the US Government.
  2. Data.gov.uk: There are datasets from all UK central departments and a number of other public sector and local authorities. It acts as a portal to all sorts of information on everything, including business and economy, crime and justice, defence, education, environment, government, health, society and transportation.
  3. U.S. Census Bureau: Government statistics on the lives of US citizens, including population, economy, education, geography, and more.
  4. The CIA World Factbook: Facts on every country in the world; focuses on history, government, population, economy, energy, geography, communications, transportation, military, and transnational issues of 267 countries.
  5. Socrata: Socrata is a mission-driven software company and another interesting place to explore government-related data, with some visualization tools built in. Its data-as-a-service offering has been adopted by more than 1,200 government agencies for open data, performance management and data-driven government.
  6. European Union Open Data Portal: The single point of access to a growing range of data from the institutions and other bodies of the European Union. The data covers economic development within the EU and transparency within the EU institutions, including geographic, geopolitical and financial data, statistics, election results, legal acts, and data on crime, health, the environment, transport and scientific research. It can be reused in different databases and reports, and a variety of digital formats are available from the EU institutions and other EU bodies. The portal provides a standardised catalogue, a list of apps and web tools reusing these data, a SPARQL endpoint query editor, REST API access, and tips on how to make the best use of the site.
  7. Canada Open Data: A pilot project with many government and geospatial datasets. It can help you explore how the Government of Canada creates greater transparency and accountability, increases citizen engagement, and drives innovation and economic opportunity through open data, open information, and open dialogue.
  8. Datacatalogs.org: It offers open government data from US, EU, Canada, CKAN, and more.
  9. U.S. National Center for Education Statistics: The National Center for Education Statistics (NCES) is the primary federal entity for collecting and analyzing data related to education in the U.S. and other nations.
  10. UK Data Service: The UK Data Service collection includes major UK government-sponsored surveys, cross-national surveys, longitudinal studies, UK census data, international aggregate, business data, and qualitative data.

Free Data Source: Crime

  1. Uniform Crime Reporting: The UCR Program has been the starting place for law enforcement executives, students, researchers, members of the media, and the public seeking information on crime in the US.
  2. FBI Crime Statistics: Statistical crime reports and publications detailing specific offenses and outlining trends to understand crime threats at both local and national levels.
  3. Bureau of Justice Statistics: Information on anything related to U.S. justice system, including arrest-related deaths, census of jail inmates, national survey of DNA crime labs, surveys of law enforcement gang units, etc.
  4. National Sex Offender Search: It is an unprecedented public safety resource that provides the public with access to sex offender data nationwide. It presents the most up-to-date information as provided by each Jurisdiction.

Free Data Source: Health

  1. U.S. Food & Drug Administration: Here you will find a compressed data file of the Drugs@FDA database. Drugs@FDA is updated daily; this data file is updated once per week, on Tuesdays.
  2. UNICEF: UNICEF gathers evidence on the situation of children and women around the world. The data sets include accurate, nationally representative data from household surveys and other sources.
  3. World Health Organisation: Statistics concerning nutrition, disease and health in more than 150 countries.
  4. Healthdata.gov: 125 years of US healthcare data including claim-level Medicare data, epidemiology and population statistics.
  5. NHS Health and Social Care Information Centre: Health data sets from the UK National Health Service. The organization produces more than 260 official and national statistical publications. This includes national comparative data for secondary uses, developed from the long-running Hospital Episode Statistics which can help local decision makers to improve the quality and efficiency of frontline care.

Free Data Source: Financial and Economic Data

  1. World Bank Open Data: Education statistics about everything from finances to service delivery indicators around the world.
  2. IMF Economic Data: An incredibly useful source of information that includes global financial stability reports, regional economic reports, international financial statistics, exchange rates, directions of trade, and more.
  3. UN Comtrade Database: Free access to detailed global trade data with visualizations. UN Comtrade is a repository of official international trade statistics and relevant analytical tables. All data is accessible through API.
  4. Global Financial Data: With data on over 60,000 companies covering 300 years, Global Financial Data offers a unique source to analyze the twists and turns of the global economy.
  5. Google Finance: Real-time stock quotes and charts, financial news, currency conversions, or tracked portfolios.
  6. Google Public Data Explorer: Google’s Public Data Explorer provides public data and forecasts from a range of international organizations and academic institutions including the World Bank, OECD, Eurostat and the University of Denver. These can be displayed as line graphs, bar graphs, cross sectional plots or on maps.
  7. U.S. Bureau of Economic Analysis: U.S. official macroeconomic and industry statistics, most notably reports about the gross domestic product (GDP) of the United States and its various units. They also provide information about personal income, corporate profits, and government spending in their National Income and Product Accounts (NIPAs).
  8. Financial Data Finder at OSU: Plentiful links to anything related to finance, no matter how obscure, including World Development Indicators Online, World Bank Open Data, Global Financial Data, International Monetary Fund Statistical Databases, and EMIS Intelligence.
  9. National Bureau of Economic Research: Macro data, industry data, productivity data, trade data, international finance data, and more.
  10. U.S. Securities and Exchange Commission: Quarterly datasets of extracted information from exhibits to corporate financial reports filed with the Commission.
  11. Visualizing Economics: Data visualizations about the economy.
  12. Financial Times: The Financial Times provides a broad range of information, news and services for the global business community.

Free Data Source: Marketing and Social Media

  1. Amazon API: Browse Amazon Web Services’ Public Data Sets by category for a huge wealth of information. Amazon API Gateway allows developers to securely connect mobile and web applications to APIs that run on AWS Lambda, Amazon EC2, or other publicly addressable web services hosted outside of AWS.
  2. American Society of Travel Agents: ASTA is the world’s largest association of travel professionals. It provides members information including travel agents and the companies whose products they sell such as tours, cruises, hotels, car rentals, etc.
  3. Social Mention: Social Mention is a social media search and analysis platform that aggregates user-generated content from across the universe into a single stream of information.
  4. Google Trends: Google Trends shows how often a particular search-term is entered relative to the total search-volume across various regions of the world in various languages.
  5. Facebook API: Learn how to publish to and retrieve data from Facebook using the Graph API.
  6. Twitter API: The Twitter Platform connects your website or application with the worldwide conversation happening on Twitter.
  7. Instagram API: The Instagram API Platform can be used to build non-automated, authentic, high-quality apps and services.
  8. Foursquare API: The Foursquare API gives you access to our world-class places database and the ability to interact with Foursquare users and merchants.
  9. HubSpot: A large repository of marketing data. You could find the latest marketing stats and trends here. It also provides tools for social media marketing, content management, web analytics, landing pages and search engine optimization.
  10. Moz: Insights on SEO that includes keyword research, link building, site audits, and page optimization insights in order to help companies to have a better view of the position they have on search engines and how to improve their ranking.
  11. Content Marketing Institute: The latest news, studies, and research on content marketing.

Free Data Source: Journalism and Media

  1. The New York Times Developer Network: Search Times articles from 1851 to today, retrieving headlines, abstracts and links to associated multimedia. You can also search book reviews, NYC event listings, movie reviews, top stories with images and more.
  2. Associated Press API: The AP Content API allows you to search and download content using your own editorial tools, without having to visit AP portals. It provides access to images from AP-owned, member-owned and third-party sources, and to videos produced by AP and selected third parties.
  3. Google Books Ngram Viewer: It is an online search engine that charts frequencies of any set of comma-delimited search strings using a yearly count of n-grams found in sources printed between 1500 and 2008 in Google’s text corpora.
  4. Wikipedia Database: Wikipedia offers free copies of all available content to interested users.
  5. FiveThirtyEight: It is a website that focuses on opinion poll analysis, politics, economics, and sports blogging. The data and code on Github is behind the stories and interactives at FiveThirtyEight.
  6. Google Scholar: Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. It includes most peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.

Free Data Source: Real Estate

  1. Castles: Castles are a successful, privately owned independent agency. Established in 1981, they offer a comprehensive service incorporating residential sales, letting and management, and surveys and valuations.
  2. Realestate.com: RealEstate.com serves as the ultimate resource for first-time home buyers, offering easy-to-understand tools and expert advice at every stage of the process.
  3. Gumtree: Gumtree is the first site for free classified ads in the UK. Buying and selling items, cars and properties, and finding or offering jobs in your area, are all available on the website.
  4. James Hayward: It provides an innovative database approach to residential sales, lettings & management.
  5. Lifull Homes: Japan’s property website.
  6. Immobiliare.it: Italy’s property website.
  7. Subito: Italy’s property website.
  8. Immoweb: Belgium’s leading property website.

Free Data Source: Business Directory and Review

  1. LinkedIn: LinkedIn is a business- and employment-oriented social networking service that operates via websites and mobile apps. It has 500 million members in 200 countries, and you can find its business directory here.
  2. OpenCorporates: OpenCorporates is the largest open database of companies and company data in the world, with in excess of 100 million companies in a similarly large number of jurisdictions. Our primary goal is to make information on companies more usable and more widely available for the public benefit, particularly to tackle the use of companies for criminal or anti-social purposes, for example corruption, money laundering and organised crime.
  3. Yellowpages: The original source to find and connect with local plumbers, handymen, mechanics, attorneys, dentists, and more.
  4. Craigslist: Craigslist is an American classified advertisements website with sections devoted to jobs, housing, personals, for sale, items wanted, services, community, gigs, résumés, and discussion forums.
  5. GAF Master Elite Contractor: Founded in 1886, GAF has become North America’s largest manufacturer of commercial and residential roofing (Source: Fredonia Group study). Our success in growing the company to nearly $3 billion in sales has been a result of our relentless pursuit of quality, combined with industry-leading expertise and comprehensive roofing solutions. Jim Schnepper is the President of GAF, an operating subsidiary of Standard Industries. When you are looking to protect the things you treasure most, here are just some of the reasons why we believe you should choose GAF.
  6. CertainTeed: You could find contractors, remodelers, installers or builders in the US or Canada on your residential or commercial project here.
  7. Companies in California: All information about companies in California.
  8. Manta: Manta is one of the largest online resources delivering products, services and educational opportunities. The Manta directory boasts millions of unique visitors every month who search its comprehensive database for individual businesses, industry segments and geography-specific listings.
  9. EU-Startups: Directory about startups in EU.
  10. Kansas Bar Association: Directory for lawyers. The Kansas Bar Association (KBA) was founded in 1882 as a voluntary association for dedicated legal professionals and has more than 7,000 members, including lawyers, judges, law students, and paralegals.

Free Data Source: Other Portal Websites

  1. Capterra: Directory about business software and reviews.
  2. Monster: Data source for jobs and career opportunities.
  3. Glassdoor: Directory about jobs and information about inside scoop on companies with employee reviews, personalized salary tools, and more.
  4. The Good Garage Scheme: Directory about car service, MOT or car repair.
  5. OSMOZ: Information about fragrance.
  6. Octoparse: A free data extraction tool to collect all the web data mentioned above online.

Do you know of other great data sources? Contact us to let us know and help us share the data love.

More Related Sources:

Top 30 Big Data Tools for Data Analysis

Top 30 Free Web Scraping Software

 

Original article here.

 



The Internet of Insecure Things

2017-12-14 - By 

From home appliances to health applications and security solutions, everything we use at home, and outside of it, is getting connected to the Internet, becoming the Internet of Things (IoT). Think about how many connected devices you have at home: tablets, laptops, e-readers, fitness devices, smart TVs; and how about your thermostat, light bulbs, refrigerator and security system? Our homes have effectively become connected homes, with an average of 12 things connecting to the home Wi-Fi network, transmitting data and delivering added value. And as connected home appliances continue to multiply, so too will the cybersecurity risks.

Consumers have been fast to adopt IoT devices on the promise that they can improve our lifestyles. These things track and optimize our energy consumption, facilitate our daily tasks, improve our health and wellness, keep us secure and empower us with the freedom and data to do other things better. But from a security point of view, this unregulated, insecure and fragmented market represents a clear and present danger to individuals and society as a whole, from the cyber to the physical realm.

To protect connected homes, a multi-faceted approach is recommended, combining a firewall blocking mechanism with machine learning and artificial intelligence to detect network anomalies. Millions of IoT devices are already compromised, and we recommend that communication service providers (CSPs) begin deploying cybersecurity solutions today, in parallel with their own R&D plans. By providing cybersecurity solutions through partnerships, they can begin to protect their vulnerable clients today and establish a market leadership position.

Cyber-threats

The declining costs to manufacture chips that can store and transmit data through a network connection have enabled thousands of organizations and startups to bring IoT products to market. But the current lack of standards and security certifications, coupled with fierce market competition to deliver affordable IoT products, have made cybersecurity an expense that manufacturers prefer others to deal with.

The lack of experience and incentives in the IoT supply chain to provide secure devices has created a tremendously vulnerable IoT landscape. In fact, according to recent findings by Symantec, IoT devices can become compromised within two minutes of connecting to the Internet [1]. Legislation has been too slow to deal with the current threat, and although there are public initiatives to drive cyber awareness among consumers, we do not expect any tangible changes soon.

There are many attack vectors and vulnerabilities to worry about in the Connected Home. From poor design decisions and hard-coded passwords to coding flaws, everything with an IP address is a potential backdoor for cyber crime. Traditional cybersecurity companies reacted slowly and failed to provide defense solutions for the expanding universe of IoT devices. However, novel approaches based on artificial intelligence and machine learning, such as analyzing and understanding network behaviors to detect anomalies, are now available to defend against these new threats.

With all its challenges and opportunities, consumer IoT is destined to disrupt long-established industries, making it a space one cannot afford to ignore. One such long-established industry is precisely the one powering the revolution: the CSPs providing the broadband. By and large, telecommunication companies have failed to monetize the data running through their home gateways, missing out on big opportunities. We believe that the connected home, and cybersecurity especially, is low-hanging fruit that communication service providers can and should pick before it’s too late.

Home security and safety-related appliances are top revenue drivers in the connected home landscape, and telecom companies are well positioned to enter this market and rebrand themselves as innovative, secure companies interested in the well-being and privacy of their customers. By leveraging their existing assets, such as the home router, telecoms can provide holistic solutions that include cybersecurity, data management and customer support, giving them a unique advantage over their competitors. Consumers would much rather trust their CSPs to continue managing their data than give it away to foreign or unknown companies. It is time for Internet Service Providers to reclaim their value as Service Providers, or they risk missing out on this revolution as broadband continues to become commoditized.

Stories of hacked IoT devices abound; a quick search online will lead you to scary stories, from spying Barbie dolls [2] to TV sets monitoring you [3] and creeps accessing baby cameras [4]. Most ironic and worrying of all are the security threats inherent in best-selling security systems, which can allow hackers to control the whole system due to a lack of encryption and sufficient cybersecurity standards [5].

The cyber and physical risks intensify the more devices we connect: The volume of granular data that all these connected things generate when combined can provide a very detailed profile of the user, which can be used for identity theft and blackmail.

Once an unprotected IoT device gets hacked, a skilled hacker can proceed to infect other devices in the network via “lateral movement”. By jumping from one device to another, a hacker can gain complete control of a connected home. Because this threat comes from within the network, it is important to have a security solution that provides network visibility, creates device profiles and detects anomalies through machine learning and artificial intelligence.
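To make this concrete, here is a minimal sketch of what device-profile anomaly detection can look like in software. It is not from the original article; the traffic features, the numbers and the choice of an off-the-shelf isolation forest are my own assumptions.

```python
# Minimal sketch: flag anomalous behavior for one device from simple traffic features.
# Features per minute: bytes sent, distinct destinations, new ports contacted.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline behavior learned for one device (e.g. a smart thermostat).
baseline = rng.normal(loc=[2_000, 3, 0.1], scale=[300, 1, 0.1], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one normal minute, one that looks like scanning or exfiltration.
new_samples = np.array([
    [2_100, 4, 0.0],       # normal
    [90_000, 250, 40.0],   # sudden fan-out to many hosts and ports
])
print(detector.predict(new_samples))   # 1 = normal, -1 = anomaly
```

A real solution would learn a separate profile per device class and feed it far richer features (protocols, timing, destinations), but the principle is the same: model what normal looks like, then flag what does not fit.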

There have been enough stories in the news for the average consumer to be aware of cyber threats; they know security is important and that they don’t have it, but they lack the resources to protect themselves properly. IoT manufacturers should be held accountable and made to prioritize security, but until that happens, the responsibility, and the opportunity, falls on CSPs to protect consumers.

Structural Risks

What makes the IoT ecosystem a potentially deadly cyber threat is the combined computing and networking power of thousands of devices which, when operated together as a botnet, can execute massive Distributed Denial of Service (DDoS) attacks and shut down large swaths of the Internet through a fire hose of junk traffic. The IoT ecosystem represents a totally different level of complexity and scale in terms of security and privacy.

In October 2016, we got a taste of this structural risk when the infamous Mirai botnet attacked the DNS company Dyn with the biggest DDoS attack ever reported: more than 1 terabit per second (Tbps) flooded the service, temporarily blocking access to Netflix, Twitter, Amazon, PayPal, SoundCloud, the New York Times and others. The Mirai botnet used enslaved IoT devices, nearly 150,000 hacked cameras, routers and smart appliances, to unwittingly do its criminal bidding, and most of the infected devices remain out there, their users oblivious to the fact.

The way the Mirai malware spreads and attacks is well known: it scans the web for open Telnet and SSH ports, looking for vulnerable devices that use factory-default or hard-coded usernames and passwords, then uses an encrypted tunnel to communicate between the devices and the command and control (C&C) servers that send them instructions. Because Mirai encrypts its traffic, security researchers are prevented from monitoring the command and data traffic.
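To see how exposed a typical home network can be, here is a small defensive sketch (not from the article) for checking whether devices you own expose the Telnet and SSH ports Mirai scans for. The addresses are placeholders; only probe devices and networks you are authorized to test.

```python
# Minimal sketch: audit your own devices for exposed Telnet/SSH ports.
import socket

DEVICES = ["192.168.1.1", "192.168.1.20"]        # placeholder addresses on your LAN
PORTS = {23: "telnet", 2323: "telnet-alt", 22: "ssh"}

for host in DEVICES:
    for port, name in PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            is_open = s.connect_ex((host, port)) == 0   # 0 means the connection succeeded
        if is_open:
            print(f"{host}:{port} ({name}) is open -- disable it or change default credentials")
```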

The source code for Mirai was posted soon after on the Hackforums site [6], enabling other criminals to create their own strains of the malware. It is not necessary to have an “army” of thousands of infected devices to cause harm: mini-DDoS botnets with hundreds of compromised nodes are sufficient to cause temporary structural damage while reducing the chances of getting caught. Expect more of these attacks in the future.

Capturing vulnerable devices and turning them into botnets has become a cyber crime gold rush, with an estimated 4,000 vulnerable IoT devices becoming active each day [7] and criminals selling and renting botnets on the dark net at competitive prices. Although simple to understand, this sort of malware is hard to detect because it does not generally affect device performance, so the average user cannot know whether their device is part of a botnet; and even if they did, it is often difficult to interact with IoT devices that have no user interface.

Stakeholders should take proactive steps that can prevent future incidents by addressing the lack of security-by-design in the IoT landscape. The Mirai malware was a warning shot, and organizations must be prepared for larger and potentially more devastating attacks. Because of market failures at play, regulation seems like the only way forward to incentivize device manufacturers to implement security in their design, but doing so could stifle innovation and prove disastrous to the ecosystem. It is because of this delicate balance that we believe service providers are perfectly positioned to seize this problem as an opportunity to become market leaders in the emerging field of IoT cybersecurity.

Looking Forward

The frequency of cyber threats is increasing as the IoT landscape continues to expand. Gartner predicts that by 2020, addressing compromises in IoT security will have increased security costs to 20% of annual security budgets, from less than one percent in 2015 [8]. The threats to consumers and society are numerous, but joint cybersecurity and cyber-hygiene efforts by manufacturers, legislators, service providers and end users will mitigate the inherent risks discussed in this paper.

Until that happens, service providers are uniquely positioned and encouraged to begin offering cybersecurity services to their consumers through their home gateways: the main door of the home network. Communication Service Providers that provide home network security and management solutions today can become the preferred brand for Smart Home solutions and appliances, leading IoT market adoption while preventing the cyber risks associated with it.

Netonomy has developed a solution that is available today for service providers interested in providing a layer of security to their consumers and become a trusted market leader in the emerging IoT landscape. Because it is cloud-based, this solution can be instantly deployed across thousands of routers at a low cost and bring immediate peace of mind to consumers.

Netonomy’s Solution:
Netonomy provides a simple, reliable and secure network for the connected home. Through a minimal-footprint agent installed on the home router, we provide a holistic solution to manage the connected home network and protect it from internal and external security threats. Our unique technology can be deployed on virtually all the existing home gateways quickly and at a minimal cost, providing ISPs and router manufacturers with better visibility into home networks and a premium service that can be sold to customers to make their connected future simple, reliable and secure.
Original article here.


Why 2018 Will be The Year of AI

2017-12-13 - By 

Artificial Intelligence, more commonly known as AI, isn’t a new topic; it has been around for years. However, those who have tracked its progress have noted that 2017 saw it accelerate faster than previous years. This hot topic has made its way into the media, into boardrooms and into government. One reason is that things that haven’t worked for decades have suddenly begun to work; AI is moving beyond embedded functions and mere tools, and expectations are high for 2018.

There are several reasons why this year has seen the most progress in AI. Before going into the four preconditions that have allowed AI to advance over the past five years, it is important to understand what Artificial Intelligence means. Then we can take a closer look at each of the four preconditions and how they will shape what comes next year.

What is Artificial Intelligence?

Basically, Artificial Intelligence is defined as the science of making computers do things that require intelligence when done by humans. However, five decades have gone by since AI was born, and progress in the field has moved very slowly; this has created an appreciation for the profound difficulty of the problem. Fortunately, this year has seen considerable progress and has opened the probability of further advancement in 2018.

Preconditions That Allowed AI to Succeed in 2017 and Beyond

Everything is becoming connected across our devices; you can start a project on your desktop PC and then finish the work on a connected smartphone or tablet. Ray Kurzweil believes that eventually humans will be able to use sensors that connect our brains to the cloud. The Internet originally connected computers and then advanced to connecting our mobile devices; sensors already link buildings, homes and even our clothes to the Internet, and in the near future they could extend to connecting our minds to the cloud.

Another reason AI is becoming more advanced is that computing is becoming nearly free. Previously, new chips would come out over an eighteen-month period at twice the speed but at considerable cost; Marc Andreessen claims that new chips now come out at the same speed but at half the cost. The expectation is that processors will become so inexpensive that one will be in everything, and computing capacity will be able to solve problems that had no solution five years prior.

A third component in the advancement of AI is that data has become the new oil, with vast amounts made available digitally over the past decade. Since data can be collected through our mobile devices and tracked through sensors, new sources of data have appeared in video, social media and digital images. Conditions that could only be modeled at a high level in the past can now be described far more accurately, thanks to the almost unlimited supply of real data, and accuracy will keep improving.

Finally, the fourth component contributing to AI’s advancement is that machine learning is becoming the new combustion engine: mathematical models and algorithms are used to discover patterns implicit in data. Machines use these complex patterns to work out on their own whether new data is similar to what they have seen, whether it fits, and what it suggests about future outcomes. Virtual assistants such as Siri and Cortana use AI to answer questions and predict outcomes every day with great accuracy; virtual assistants will continue to be used in 2018 and beyond, and they will accomplish more as AI continues to grow and evolve.
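As a small illustration of that idea (my own sketch, not from the original article; the numbers are made up and logistic regression is just a convenient stand-in), a model fitted to labeled examples picks up the implicit pattern and then predicts outcomes for data it has never seen:

```python
# Minimal sketch of pattern discovery plus prediction with a standard library.
# Toy data: predict whether a sensor reading is abnormal from two features.
from sklearn.linear_model import LogisticRegression

# Made-up training examples: [temperature, vibration] -> label (0 normal, 1 abnormal)
X_train = [[20, 0.1], [22, 0.2], [21, 0.1], [80, 2.5], [75, 3.0], [90, 2.8]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# The learned pattern generalizes to unseen readings.
print(model.predict([[23, 0.15], [85, 2.7]]))        # expected: [0 1]
print(model.predict_proba([[23, 0.15], [85, 2.7]]))  # class probabilities
```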

Artificial Intelligence has seen far more improvement this year than in previous decades. In 2017, many experts were amazed and excited by how far AI has progressed in the decades since its birth. Now we can expect 2018 to bring advancements at work, in school and possibly in self-driving cars, which could result in up to ninety percent fewer car accidents. Welcome to the future of Artificial Intelligence.

Original article here.



100 cryptocurrencies described in four words or less

2017-11-20 - By 

This list describes cryptocurrencies. Each gets four words. There are many.

Some are landmarks. Some are scams.

Hopefully this provides orientation.

Name            | Sym.  | Description                              
----------------|-------|------------------------------------------
Bitcoin         | BTC   | Digital gold                             
Ethereum        | ETH   | Programmable contracts and money         
Bitcoin Cash    | BCH   | Bitcoin clone                            
Ripple          | XRP   | Enterprise payment settlement network    
Litecoin        | LTC   | Faster Bitcoin                           
Dash            | DASH  | Privacy-focused Bitcoin clone            
NEO             | NEO   | Chinese-market Ethereum                  
NEM             | XEM   | Batteries-included digital assets        
Monero          | XMR   | Private digital cash                     
Ethereum Classic| ETC   | Ethereum clone                           
IOTA            | MIOTA | Internet-of-things payments              
Qtum            | QTUM  | Ethereum contracts on Bitcoin            
OmiseGO         | OMG   | Banking, remittance, and exchange        
Zcash           | ZEC   | Private digital cash                     
BitConnect      | BCC   | Madoff-like investment fund              
Lisk            | LSK   | Decentralized applications in JavaScript 
Cardano         | ADA   | Layered currency and contracts           
Tether          | USDT  | Price = 1 USD                            
Stellar Lumens  | XLM   | Digital IOUs                             
EOS             | EOS   | Decentralized applications on WebAssembly
Hshare          | HSR   | Blockchain switchboard                   
Waves           | WAVES | Decentralized exchange and crowdfunding  
Stratis         | STRAT | Decentralized applications in C#         
Komodo          | KMD   | Decentralized ICOs                       
Ark             | ARK   | Blockchain switchboard                   
Electroneum     | ETN   | Monero clone                             
Bytecoin        | BCN   | Privacy-focused cryptocurrency           
Steem           | STEEM | Reddit with money voting                 
Ardor           | ARDR  | Blockchain for spawning blockchains      
Binance Coin    | BNB   | Pay Binance exchange fees                
Augur           | REP   | Decentralized prediction market          
Populous        | PPT   | Invoice trading futures                  
Decred          | DCR   | Bitcoin with alternative governance      
TenX            | PAY   | Cryptocurrency credit card               
MaidSafeCoin    | MAID  | Rent disk space                          
BitcoinDark     | BTCD  | Zcoin clone                              
BitShares       | BTS   | Decentralized exchange                   
Golem           | GNT   | Rent other people's computers            
PIVX            | PIVX  | Inflationary Dash clone                  
Gas             | GAS   | Pay fees on Neo                          
TRON            | TRX   | In-app-purchases                         
Vertcoin        | VTC   | Bitcoin clone                            
MonaCoin        | MONA  | Japanese Dogecoin                        
Factom          | FCT   | Decentralized record keeping             
Basic Attention | BAT   | Decentralized ad network                 
SALT            | SALT  | Cryptocurrency-backed loans              
Kyber Network   | KNC   | Decentralized exchange                   
Dogecoin        | DOGE  | Serious meme bitcoin clone               
DigixDAO        | DGD   | Organisation manages tokenized gold      
Veritaseum      | VERI  | Vaporware                                
Walton          | WTC   | IoT Blockchain                           
SingularDTV     | SNGLS | Decentralized Netflix                    
Bytom           | BTM   | Physical assets as tokens                
Byteball Bytes  | GBYTE | Decentralized database and currency      
GameCredits     | GAME  | Video game currency                      
Metaverse ETP   | ETP   | Chinese Ethereum plus identity           
GXShares        | GXS   | Decentralized Chinese Equifax            
Syscoin         | SYS   | Decentralized marketplace                
Siacoin         | SC    | Rent disk space                          
Status          | SNT   | Decentralized application browser        
0x              | ZRX   | Decentralized exchange                   
Verge           | XVG   | Privacy Dogecoin                         
Lykke           | LKK   | Digital asset exchange                   
Civic           | CVC   | Identity and Authentication App          
Blocknet        | BLOCK | Decentralized exchange                   
Metal           | MTL   | Payments with rewards program            
Iconomi         | ICN   | Digital asset investment funds           
Aeternity       | AE    | Decentralized apps (prototype)           
DigiByte        | DGB   | Faster Bitcoin                           
Bancor          | BNT   | Token Index Funds                        
Ripio Credit    | RCN   | Co-signed Cryptocurrency Loans           
ATMChain        | ATM   | Advertising network                      
Gnosis          | GNO   | Decentralized prediction market          
VeChain         | VEN   | Supply chain item IDs                    
Pura            | PURA  | Cryptocurrency                           
Particl         | PART  | Privacy marketplace and chat             
KuCoin Shares   | KCS   | Profit-sharing exchange fees             
Bitquence       | BQX   | Mint for cryptocurrency investments      
FunFair         | FUN   | Decentralized casino                     
ChainLink       | LINK  | External data for contracts              
Power Ledger    | POWR  | Airbnb for electricity                   
Nxt             | NXT   | Cryptocurrency and marketplace           
Monaco          | MCO   | Cryptocurrency credit card               
Cryptonex       | CNX   | Zerocoin clone                           
MCAP            | MCAP  | Mining investment fund                   
Storj           | STORJ | Rent disk space                          
ZenCash         | ZEN   | Privacy-focused Bitcoin clone            
Nexus           | NXS   | Bitcoin clone                            
Neblio          | NEBL  | Decentralized application platform       
Zeusshield      | ZSC   | Decentralized insurance                  
Streamr DATAcoin| DATA  | Real-time data marketplace               
ZCoin           | XZC   | Private digital cash                     
NAV Coin        | NAV   | Bitcoin with private transactions        
AdEx            | ADX   | Advertising exchange                     
Open Trading    | OTN   | Decentralized exchange                   
SmartCash       | SMART | Zcoin clone with rewards                 
Bitdeal         | BDL   | Bitcoin clone                            
Loopring        | LRC   | Decentralized exchange                   
Edgeless        | EDG   | Decentralized casino                     
FairCoin        | FAIR  | Bitcoin that rewards savers

Coin ranking from coinmarketcap.com.

Inspired by Greg Wilson.

Original article here.



2017 IoT Intelligence Update

2017-11-16 - By 


  • Manufacturing, Consulting, Business Services and Distribution & Logistics are the top four industries leading IoT adoption.
  • Growing revenue and increasing competitive advantage are the highest priority Business Intelligence (BI) objectives IoT advocates or early adopters are pursuing today.
  • Location intelligence, streaming data analysis, and cognitive BI are the top three most valuable IoT use cases.
  • The higher the BI adoption, the greater the probability of success with IoT initiatives.
  • 53% of all respondents say that IoT is somewhat important, with fewer than 15% saying it is critical or very important today.

These and many other insights are from Dresner Advisory Services’ 2017 Edition IoT Intelligence Wisdom of Crowds Series study. The study defines IoT as the network of physical objects, or “things,” embedded with electronics, software, sensors, and connectivity to enable objects to collect and exchange data. The study examines key related technologies such as location intelligence, end-user data preparation, cloud computing, advanced and predictive analytics, and big data analytics. Please see page 11 of the study for details regarding the methodology. For an excellent overview of Internet of Things (IoT) predictions for 2018, please see Gil Press’ post, 10 Predictions For The Internet Of Things (IoT) In 2018.

“Although still early days for IoT, we see this as a defining topic for the industry. IoT Intelligence, the means to understand and leverage IoT data, will likewise grow in importance and will elevate key technologies such as location intelligence, advanced and predictive analytics, and big data,” said Howard Dresner, founder and chief research officer at Dresner Advisory Services.

See full article here.



AI-Generated Celebrity Faces Look Real (video)

2017-10-31 - By 

Researchers from NVIDIA published work with artificial intelligence algorithms, or more specifically, generative adversarial networks, to produce celebrity faces in high detail. Watch the results below.

Original article here.  Research PDF here.

 



IBM says radical new ‘in-memory’ computing architecture is 200x faster

2017-10-26 - By 

IBM Research announced Tuesday (Oct. 24, 2017) that its scientists have developed the first “in-memory computing” or “computational memory” computer system architecture, which is expected to yield 200x improvements in computer speed and energy efficiency — enabling ultra-dense, low-power, massively parallel computing systems.

Their concept is to use one device (such as phase change memory or PCM*) for both storing and processing information. That design would replace the conventional “von Neumann” computer architecture, used in standard desktop computers, laptops, and cellphones, which splits computation and memory into two different devices. That requires moving data back and forth between memory and the computing unit, making them slower and less energy-efficient.

Especially useful in AI applications

The researchers believe this new prototype technology will enable ultra-dense, low-power, and massively parallel computing systems that are especially useful for AI applications. The researchers tested the new architecture using an unsupervised machine-learning algorithm running on one million phase change memory (PCM) devices, successfully finding temporal correlations in unknown data streams.

“This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” says Evangelos Eleftheriou, PhD, an IBM Fellow and co-author of an open-access paper in the peer-reviewed journal Nature Communications. “As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today’s computers.”

“Memory has so far been viewed as a place where we merely store information,” said Abu Sebastian, PhD, exploratory memory and cognitive technologies scientist at IBM Research and lead author of the paper. “But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive. The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes.” Sebastian also leads a European Research Council funded project on this topic.

* To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering:

  • Simulated Data: one million binary (0 or 1) random processes organized on a 2D grid based on a 1,000 x 1,000 pixel black-and-white profile drawing of the famed British mathematician Alan Turing. The IBM scientists then made the pixels blink on and off at the same rate, but the black pixels turned on and off in a weakly correlated manner: when a black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the PCM devices corresponding to the correlated processes went to a high conductance state. In this way, the conductance map of the PCM devices recreates the drawing of Alan Turing.
  • Real-World Data: actual rainfall data, collected in one-hour intervals over a period of six months from 270 weather stations across the USA. If it rained within the hour, the reading was labelled “1”; if it didn’t, “0”. Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 of the 270 weather stations. In-memory computing classified 12 stations as uncorrelated that had been marked correlated by k-means clustering, and classified 13 stations as correlated that had been marked uncorrelated by k-means clustering. (A simplified software analogue of this kind of correlation detection is sketched after this list.)
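Here is that simplified software analogue of correlation detection. It is my own sketch, not IBM’s algorithm or code; the process counts, probabilities and scoring rule are assumptions chosen only to mirror the spirit of the experiment.

```python
# Minimal sketch: detect which binary "blink" processes are weakly correlated.
# Correlated processes share a hidden driver; an accumulated score separates them,
# loosely analogous to conductance building up in the PCM devices.
import numpy as np

rng = np.random.default_rng(1)
n_processes, n_steps, n_correlated = 200, 5_000, 50

driver = rng.random(n_steps) < 0.1                   # hidden common event stream
blinks = rng.random((n_processes, n_steps)) < 0.1    # independent baseline blinking
# The first n_correlated processes also blink (weakly) when the driver fires.
blinks[:n_correlated] |= (rng.random((n_correlated, n_steps)) < 0.5) & driver

# Score each process by how often it blinks together with the population.
population_activity = blinks.mean(axis=0)            # fraction of processes blinking per step
score = blinks @ population_activity                 # rough analogue of conductance buildup

detected = set(np.argsort(score)[-n_correlated:])    # top-scoring processes
truth = set(range(n_correlated))
print(f"recovered {len(truth & detected)}/{n_correlated} correlated processes")
```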

Abstract of Temporal correlation detection using computational phase-change memory

Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. A fascinating such approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. Its result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively-parallel computing systems.

Original article here.



Western Digital plans 40TB drives, but it’s still not enough

2017-10-24 - By 

Data continues to grow faster than disk capacity.

Hard disk makers are using capacity as their chief bulwark against the rise of solid-state drives (SSDs), since they certainly can’t argue on performance, and Western Digital — the king of the hard drive vendors — has shown off a new technology that could lead to 40TB drives.

Western Digital already has the largest-capacity drive on the market. It recently introduced a 14TB drive, filled with helium to reduce drag on the spinning platters. And thanks to a new technology called microwave-assisted magnetic recording (MAMR), the company hopes to reach 40TB by 2025. The company has promised engineering samples of the drive by mid-2018.

MAMR technology is a new method of cramming more data onto the disk. Western Digital’s chief rival, Seagate, is working on a competitive product called HAMR, or heat-assisted magnetic recording. I’ll leave it to propeller heads like AnandTech to explain the electrical engineering of it all. What matters to the end user is that it should ship sometime in 2019, and that’s after 13 years of research and development.

That’s right, MAMR was first developed by a Carnegie Mellon professor in 2006 and work has gone on ever since.

The physics of hard drives

Just like semiconductors, hard drives are running into a brick wall called the laws of physics. Every year it gets harder and harder to shrink these devices while cramming more in them at the same time.

Western Digital believes MAMR should enable a 15 percent improvement in terabytes per dollar, another argument hard disk has over SSD. A hard disk will always be cheaper per terabyte than an SSD, because cramming more data into the same space is easier, relatively speaking, for hard drives than for flash memory chips. MAMR and HAMR are expected to let drive makers pack as much as 4 terabits per square inch onto a platter, well beyond the 1.1 terabits per square inch in today’s drives.
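As a rough, back-of-the-envelope check (my own arithmetic, not a figure from the article, and it assumes capacity scales linearly with areal density while platter count stays fixed), the quoted densities do put 40TB-class drives within reach:

```python
# Rough scaling sketch: projected capacity from areal density (illustrative assumptions only).
current_capacity_tb = 14    # the helium-filled drive mentioned above
current_density = 1.1       # terabits per square inch, today's drives
future_density = 4.0        # terabits per square inch, the MAMR/HAMR target

projected_tb = current_capacity_tb * (future_density / current_density)
print(f"Projected capacity: ~{projected_tb:.0f} TB")   # roughly 51 TB, comfortably past 40 TB
```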

Data growing faster than hard disk capacity

The thing is, data is growing faster than hard disk capacity. According to research from IDC (sponsored by Seagate, it should be noted), by 2025 the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes). That’s 10 times the 16.1ZB of data generated in 2016. Much of it will come from Big Data and analytics, especially the Internet of Things (IoT), where sensors will be picking up gigabytes of data per second.

And those data sets are so massive that many companies don’t use them all. They dump their accumulated data into what are called data lakes, to be processed later, if ever. I’ve seen estimates that as much as 90 percent of collected data goes unused. But it has to sit somewhere, and that’s on a hard disk.

Mind you, that’s just Big Data. Individuals are generating massive quantities of data as well. Online backup and storage vendor BackBlaze, which has seen its profile rise after it began reporting on hard drive failures, uses hundreds of thousands of drives in its data centers. It just placed an order for 100 petabytes worth of disk storage, and it plans to deploy all of it in the fourth quarter of this year. And it has plans for another massive order for Q1. And that’s just one online storage vendor among dozens.

All of that is great news for Western Digital, Seagate and Toshiba — and the sales reps who work on commission.

Original article here.



Reports Claim Kaspersky Knowingly Played Role in NSA Hack

2017-10-23 - By 

In 2015, a government contractor placed confidential NSA data on his personal computer. This computer was running the Russian-based security solution from Kaspersky Labs. Allegations have surrounded Kaspersky Labs regarding inappropriate ties to the Russian government, as well as collusion with the hackers who conducted the NSA breach in 2015.

Recently, news broke of a modification to Kaspersky Labs’ security products to search not only for malware but for broad keywords as well. These broad keywords can be used to identify specific documents located on a device. Although the keywords used in the NSA hack were not released, they were likely “top secret” or “confidential”. It is believed this alteration within the security software is what led to the successful breach of confidential data from the NSA contractor in 2015.

In a statement to Ars Technica, U.S. officials reported,

“There is no way, based on what the software was doing, that Kaspersky couldn’t have known about this.”

It is quite clear these alterations must have been made by someone, and that someone was most likely a Kaspersky official, although Kaspersky Lab continues to deny any involvement.

Not the First Suspicion…

However, this isn’t the first time U.S. government officials have believed this could be possible.  U.S. intelligence agencies reportedly spent months studying and experimenting with Kaspersky software.  The goal was to see if they could trigger it into behaving as if it had discovered classified materials on a computer being monitored by U.S. spies.  It was because of those studies that officials were persuaded Kaspersky software was being used to detect classified information.

Original article here.


standard

Cloud Computing Market Projected To Reach $411B By 2020

2017-10-22 - By 

Gartner’s latest worldwide public cloud services revenue forecast, published earlier this month, predicts Infrastructure-as-a-Service (IaaS), currently growing at a 23.31% Compound Annual Growth Rate (CAGR), will outpace the overall market growth of 13.38% through 2020. Software-as-a-Service (SaaS) revenue is predicted to grow from $58.6B in 2017 to $99.7B in 2020. Over the entire forecast period of 2016 – 2020, SaaS is on pace to attain a 15.65% compound annual growth rate, also outpacing the total cloud market. The following graphic compares revenue growth by cloud services category for the years 2016 through 2020.
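As a quick sanity check on figures like these, a compound annual growth rate is simply the ratio of ending to starting revenue raised to one over the number of years, minus one. A minimal sketch using the SaaS endpoints quoted above (note that Gartner’s 15.65% figure is computed over the longer 2016–2020 window, so it differs from the 2017–2020 rate below):

  # CAGR = (end / start) ** (1 / years) - 1
  start_2017, end_2020, years = 58.6, 99.7, 3   # SaaS revenue in $B, from the forecast above
  cagr = (end_2020 / start_2017) ** (1 / years) - 1
  print(f"Implied 2017-2020 SaaS CAGR: {cagr:.1%}")   # roughly 19%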

Catalysts driving greater adoption and correspondingly higher CAGRs include a shift Gartner sees in infrastructure, middleware, application and business process services spending. In 2016, Gartner estimates approximately 17% of the total market revenue for these areas had shifted to the cloud. Gartner predicts by 2021, 28% of all IT spending will be for cloud-based infrastructure, middleware, application and business process services. Another factor is the adoption of Platform-as-a-Service (PaaS). Gartner notes that enterprises are confident that PaaS can be a secure, scalable application development platform in the future.  The following graphic compares the compound annual growth rates (CAGRs) of each cloud service area including the total market.

 

Original article here.


standard

KRACK Attacks Defeat Wi-Fi Security (video)

2017-10-16 - By 

Conventional wisdom has long held that locking down your router with the WPA2 encryption protocol would protect your data from snooping. That was true for a long time, but maybe not for much longer. A massive security disclosure details vulnerabilities in WPA2 that could let an attacker intercept all your precious data, and virtually every device with Wi-Fi is affected.

The vulnerability has been dubbed a Key Reinstallation Attack (KRACK) by discoverers Mathy Vanhoef and Frank Piessens of KU Leuven. It’s not tied to any particular piece of hardware or device; it’s a flaw in the WPA2 standard itself. KRACK bears some resemblance to standard “man in the middle” attacks in that it impersonates an existing network.

INTRODUCTION

We discovered serious weaknesses in WPA2, a protocol that secures all modern protected Wi-Fi networks. An attacker within range of a victim can exploit these weaknesses using key reinstallation attacks (KRACKs). Concretely, attackers can use this novel attack technique to read information that was previously assumed to be safely encrypted. This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on. The attack works against all modern protected Wi-Fi networks. Depending on the network configuration, it is also possible to inject and manipulate data. For example, an attacker might be able to inject ransomware or other malware into websites.

The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected. To prevent the attack, users must update affected products as soon as security updates become available. Note that if your device supports Wi-Fi, it is most likely affected. During our initial research, we discovered ourselves that Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys, and others, are all affected by some variant of the attacks. For more information about specific products, consult the database of CERT/CC, or contact your vendor.

The research behind the attack will be presented at the Computer and Communications Security (CCS) conference, and at the Black Hat Europe conference. Our detailed research paper can already be downloaded.

DEMONSTRATION

As a proof-of-concept we executed a key reinstallation attack against an Android smartphone. In this demonstration, the attacker is able to decrypt all data that the victim transmits. For an attacker this is easy to accomplish, because our key reinstallation attack is exceptionally devastating against Linux and Android 6.0 or higher. This is because Android and Linux can be tricked into (re)installing an all-zero encryption key (see below for more info). When attacking other devices, it is harder to decrypt all packets, although a large number of packets can nevertheless be decrypted. In any case, the following demonstration highlights the type of information that an attacker can obtain when performing key reinstallation attacks against protected Wi-Fi networks:

Our attack is not limited to recovering login credentials (i.e. e-mail addresses and passwords). In general, any data or information that the victim transmits can be decrypted. Additionally, depending on the device being used and the network setup, it is also possible to decrypt data sent towards the victim (e.g. the content of a website). Although websites or apps may use HTTPS as an additional layer of protection, we warn that this extra protection can (still) be bypassed in a worrying number of situations. For example, HTTPS was previously bypassed in non-browser software, in Apple’s iOS and OS X, in Android apps, in Android apps again, in banking apps, and even in VPN apps.

DETAILS

Our main attack is against the 4-way handshake of the WPA2 protocol. This handshake is executed when a client wants to join a protected Wi-Fi network, and is used to confirm that both the client and access point possess the correct credentials (e.g. the pre-shared password of the network). At the same time, the 4-way handshake also negotiates a fresh encryption key that will be used to encrypt all subsequent traffic. Currently, all modern protected Wi-Fi networks use the 4-way handshake. This implies all these networks are affected by (some variant of) our attack. For instance, the attack works against personal and enterprise Wi-Fi networks, against the older WPA and the latest WPA2 standard, and even against networks that only use AES. All our attacks against WPA2 use a novel technique called a key reinstallation attack (KRACK):

Key reinstallation attacks: high level description

In a key reinstallation attack, the adversary tricks a victim into reinstalling an already-in-use key. This is achieved by manipulating and replaying cryptographic handshake messages. When the victim reinstalls the key, associated parameters such as the incremental transmit packet number (i.e. nonce) and receive packet number (i.e. replay counter) are reset to their initial value. Essentially, to guarantee security, a key should only be installed and used once. Unfortunately, we found this is not guaranteed by the WPA2 protocol. By manipulating cryptographic handshakes, we can abuse this weakness in practice.

Key reinstallation attacks: concrete example against the 4-way handshake

As described in the introduction of the research paper, the idea behind a key reinstallation attack can be summarized as follows. When a client joins a network, it executes the 4-way handshake to negotiate a fresh encryption key. It will install this key after receiving message 3 of the 4-way handshake. Once the key is installed, it will be used to encrypt normal data frames using an encryption protocol. However, because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. As a result, the client may receive message 3 multiple times. Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged. The same technique can also be used to attack the group key, PeerKey, TDLS, and fast BSS transition handshake.
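To make that retransmission flaw concrete, here is a toy sketch in Python (invented names and structure, not real 802.11 code) of a client that installs the key on message 3 and, crucially, resets its transmit nonce every time message 3 arrives; that reset is what the attack exploits:

  # Toy model of the vulnerable client behaviour described above; purely illustrative.
  class VulnerableClient:
      def __init__(self):
          self.ptk = None          # pairwise transient key
          self.tx_nonce = 0        # incremental transmit packet number
          self.replay_counter = 0  # receive replay counter

      def on_message3(self, negotiated_key):
          # The flaw: (re)installing the key unconditionally resets the nonce
          # and replay counter, even if the key is already in use.
          self.ptk = negotiated_key
          self.tx_nonce = 0
          self.replay_counter = 0

      def send(self, plaintext):
          self.tx_nonce += 1
          # A real stack would return encrypt(plaintext, self.ptk, self.tx_nonce).
          return (self.tx_nonce, plaintext)

  client = VulnerableClient()
  client.on_message3(b"fresh-key")
  print(client.send(b"packet A"))   # nonce 1
  client.on_message3(b"fresh-key")  # attacker replays a captured message 3
  print(client.send(b"packet B"))   # nonce 1 again: keystream reuse

A patched client keeps the existing nonce rather than resetting it when the same key is reinstalled, which is roughly the countermeasure the researchers describe.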

Practical impact

In our opinion, the most widespread and practically impactful attack is the key reinstallation attack against the 4-way handshake. We base this judgement on two observations. First, during our own research we found that most clients were affected by it. Second, adversaries can use this attack to decrypt packets sent by clients, allowing them to intercept sensitive information such as passwords or cookies. Decryption of packets is possible because a key reinstallation attack causes the transmit nonces (sometimes also called packet numbers or initialization vectors) to be reset to zero. As a result, the same encryption key is used with nonce values that have already been used in the past. In turn, this causes all encryption protocols of WPA2 to reuse keystream when encrypting packets. In case a message that reuses keystream has known content, it becomes trivial to derive the used keystream. This keystream can then be used to decrypt messages with the same nonce. When there is no known content, it is harder to decrypt packets, although still possible in several cases (e.g. English text can still be decrypted). In practice, finding packets with known content is not a problem, so it should be assumed that any packet can be decrypted.
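As a minimal illustration of why that keystream reuse is so damaging, the sketch below uses AES in CTR mode as a stand-in stream cipher (WPA2’s CCMP and GCMP differ in detail but share the property described above): with one packet of known plaintext, the keystream falls out and a second packet encrypted under the same key and nonce can be read. It assumes the third-party cryptography package is installed.

  # Requires: pip install cryptography
  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  def ctr_encrypt(key, nonce, plaintext):
      # CTR turns AES into a stream cipher: ciphertext = plaintext XOR keystream(key, nonce)
      enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
      return enc.update(plaintext) + enc.finalize()

  key = os.urandom(16)
  nonce = os.urandom(16)                                      # reused below: that is the whole problem

  known_plain = b"GET /index.html HTTP/1.1".ljust(32)         # content the attacker can guess
  secret_plain = b"password=hunter2&session=abcdef1"          # content the attacker wants

  c1 = ctr_encrypt(key, nonce, known_plain)
  c2 = ctr_encrypt(key, nonce, secret_plain)                  # same key and nonce reused

  # The keystream never depended on the message, so known plaintext reveals it.
  keystream = bytes(a ^ b for a, b in zip(c1, known_plain))
  print(bytes(a ^ b for a, b in zip(c2, keystream)))          # recovers secret_plain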

The ability to decrypt packets can be used to decrypt TCP SYN packets. This allows an adversary to obtain the TCP sequence numbers of a connection, and hijack TCP connections. As a result, even though WPA2 is used, the adversary can now perform one of the most common attacks against open Wi-Fi networks: injecting malicious data into unencrypted HTTP connections. For example, an attacker can abuse this to inject ransomware or malware into websites that the victim is visiting.

If the victim uses either the WPA-TKIP or GCMP encryption protocol, instead of AES-CCMP, the impact is especially catastrophic. Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets. Moreover, because GCMP uses the same authentication key in both communication directions, and this key can be recovered if nonces are reused, it is especially affected. Note that support for GCMP is currently being rolled out under the name Wireless Gigabit (WiGig), and is expected to be adopted at a high rate over the next few years.

The direction in which packets can be decrypted (and possibly forged) depends on the handshake being attacked. Simplified, when attacking the 4-way handshake, we can decrypt (and forge) packets sent by the client. When attacking the Fast BSS Transition (FT) handshake, we can decrypt (and forge) packets sent towards the client. Finally, most of our attacks also allow the replay of unicast, broadcast, and multicast frames. For further details, see Section 6 of our research paper.

Note that our attacks do not recover the password of the Wi-Fi network. They also do not recover (any parts of) the fresh encryption key that is negotiated during the 4-way handshake.

Android and Linux

Our attack is especially catastrophic against version 2.4 and above of wpa_supplicant, a Wi-Fi client commonly used on Linux. Here, the client will install an all-zero encryption key instead of reinstalling the real key. This vulnerability appears to be caused by a remark in the Wi-Fi standard that suggests clearing the encryption key from memory once it has been installed for the first time. When the client now receives a retransmitted message 3 of the 4-way handshake, it will reinstall the now-cleared encryption key, effectively installing an all-zero key. Because Android uses wpa_supplicant, Android 6.0 and above also contains this vulnerability. This makes it trivial to intercept and manipulate traffic sent by these Linux and Android devices. Note that currently 41% of Android devices are vulnerable to this exceptionally devastating variant of our attack.
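Building on the CTR-mode sketch above, the all-zero-key variant is even simpler for the attacker: if the victim installs a key of sixteen zero bytes, anyone who captures the traffic can decrypt it directly, with no known plaintext needed (again with AES-CTR standing in for WPA2’s actual ciphers):

  # Requires: pip install cryptography
  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  ZERO_KEY = bytes(16)        # the key the victim ends up installing
  nonce = os.urandom(16)      # packet numbers travel in the clear with each frame

  def ctr(key, nonce, data):  # in CTR mode, encryption and decryption are the same operation
      c = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
      return c.update(data) + c.finalize()

  ciphertext = ctr(ZERO_KEY, nonce, b"cookie=SECRET-SESSION-TOKEN")   # victim "encrypts"
  print(ctr(ZERO_KEY, nonce, ciphertext))                             # attacker decrypts trivially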

Assigned CVE identifiers

The following Common Vulnerabilities and Exposures (CVE) identifiers were assigned to track which products are affected by specific instantiations of our key reinstallation attack:

  • CVE-2017-13077: Reinstallation of the pairwise encryption key (PTK-TK) in the 4-way handshake.
  • CVE-2017-13078: Reinstallation of the group key (GTK) in the 4-way handshake.
  • CVE-2017-13079: Reinstallation of the integrity group key (IGTK) in the 4-way handshake.
  • CVE-2017-13080: Reinstallation of the group key (GTK) in the group key handshake.
  • CVE-2017-13081: Reinstallation of the integrity group key (IGTK) in the group key handshake.
  • CVE-2017-13082: Accepting a retransmitted Fast BSS Transition (FT) Reassociation Request and reinstalling the pairwise encryption key (PTK-TK) while processing it.
  • CVE-2017-13084: Reinstallation of the STK key in the PeerKey handshake.
  • CVE-2017-13086: Reinstallation of the Tunneled Direct-Link Setup (TDLS) PeerKey (TPK) key in the TDLS handshake.
  • CVE-2017-13087: Reinstallation of the group key (GTK) when processing a Wireless Network Management (WNM) Sleep Mode Response frame.
  • CVE-2017-13088: Reinstallation of the integrity group key (IGTK) when processing a Wireless Network Management (WNM) Sleep Mode Response frame.

Note that each CVE identifier represents a specific instantiation of a key reinstallation attack. This means each CVE ID describes a specific protocol vulnerability, and therefore many vendors are affected by each individual CVE ID. You can also read vulnerability note VU#228519 of CERT/CC for additional details on which products are known to be affected.

Our research paper behind the attack is titled Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2 and will be presented at the Computer and Communications Security (CCS) conference on Wednesday 1 November 2017.

Although this paper is made public now, it was already submitted for review on 19 May 2017. After this, only minor changes were made. As a result, the findings in the paper are already several months old. In the meantime, we have found easier techniques to carry out our key reinstallation attack against the 4-way handshake. With our novel attack technique, it is now trivial to exploit implementations that only accept encrypted retransmissions of message 3 of the 4-way handshake. In particular this means that attacking macOS and OpenBSD is significantly easier than discussed in the paper.

We would like to highlight the following addendums and errata:

Addendum: wpa_supplicant v2.6 and Android 6.0+

Linux’s wpa_supplicant v2.6 is also vulnerable to the installation of an all-zero encryption key in the 4-way handshake. This was discovered by John A. Van Boxtel. As a result, all Android versions higher than 6.0 are also affected by the attack, and hence can be tricked into installing an all-zero encryption key. The new attack works by injecting a forged message 1, with the same ANonce as used in the original message 1, before forwarding the retransmitted message 3 to the victim.

Addendum: other vulnerable handshakes

After our initial research as reported in the paper, we discovered that the TDLS handshake and WNM Sleep Mode Response frame are also vulnerable to key reinstallation attacks.

Selected errata

  • In Figure 9 at stage 3 of the attack, the frame transmitted from the adversary to the authenticator should say a ReassoReq instead of ReassoResp.

TOOLS

We have made scripts to detect whether an implementation of the 4-way handshake, group key handshake, or Fast BSS Transition (FT) handshake is vulnerable to key reinstallation attacks. These scripts will be released once we have had the time to clean up their usage instructions.

We also made a proof-of-concept script that exploits the all-zero key (re)installation present in certain Android and Linux devices. This script is the one that we used in the demonstration video. It will be released once everyone has had a reasonable chance to update their devices (and we have had a chance to prepare the code repository for release). We remark that the reliability of our proof-of-concept script may depend on how close the victim is to the real network. If the victim is very close to the real network, the script may fail because the victim will always directly communicate with the real network, even if the victim is (forced) onto a different Wi-Fi channel than this network.

Original article PLUS Q/A here.


standard

The Dark Secret at the Heart of AI

2017-10-09 - By 

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.
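A compact sketch of those mechanics in plain NumPy: a two-layer network trained by back-propagation on a toy problem (XOR standing in for “dog” vs. “no dog”). Everything here is illustrative; it is not how Nvidia’s system, or any production network, is built.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy data: 2 inputs -> 1 output (XOR), standing in for "pixels -> dog / no dog".
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> hidden layer
  W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output layer
  lr = 1.0

  for step in range(10000):
      # Forward pass: each layer transforms the previous layer's signals.
      h = sigmoid(X @ W1 + b1)
      out = sigmoid(h @ W2 + b2)

      # Backward pass: propagate the error and adjust every connection slightly.
      d_out = (out - y) * out * (1 - out)
      d_h = (d_out @ W2.T) * h * (1 - h)
      W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
      W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

  print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training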

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.
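At a high level, a probe like Yosinski’s can be built by gradient ascent on the input itself: start from a near-blank input and repeatedly nudge it in the direction that raises one chosen neuron’s activation. The NumPy sketch below is schematic, using a single random, untrained layer; real tools work on trained vision networks and add regularizers to keep the resulting images natural-looking.

  import numpy as np

  rng = np.random.default_rng(1)

  # A fixed, randomly initialized "layer" of 32 neurons over a 64-value input.
  W1 = rng.normal(size=(64, 32))

  def hidden_activations(x):
      return np.tanh(x @ W1)

  target_unit = 5                   # the neuron we want to "light up"
  x = rng.normal(size=64) * 0.01    # start from a near-blank "image"

  for _ in range(200):
      h = np.tanh(x @ W1)
      # Gradient of h[target_unit] with respect to x, written out by hand.
      grad = (1 - h[target_unit] ** 2) * W1[:, target_unit]
      x += 0.1 * grad               # gradient ascent on the input, not the weights

  print(hidden_activations(x)[target_unit])   # approaches 1.0 as tanh saturates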

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
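One simple way to produce that kind of keyword-level rationale (a sketch of the general idea, not the Washington team’s actual method) is to pair the prediction with an interpretable bag-of-words model and surface the terms that push a given message’s score the most. The messages and labels below are invented, and the sketch assumes scikit-learn is installed:

  # Requires: pip install scikit-learn
  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.linear_model import LogisticRegression

  texts = [
      "transfer the funds tonight and wipe the server",   # flagged (toy label)
      "meet at the safehouse and bring the package",      # flagged
      "lunch tomorrow at noon with the team",             # benign
      "please review the quarterly sales report",         # benign
  ]
  labels = [1, 1, 0, 0]

  vec = CountVectorizer()
  X = vec.fit_transform(texts)
  clf = LogisticRegression().fit(X, labels)

  # Explain one new message: which words contributed most to its score?
  msg = "wipe the server before the transfer"
  x = vec.transform([msg])
  contributions = x.toarray()[0] * clf.coef_[0]
  terms = vec.get_feature_names_out()
  top = sorted(zip(terms, contributions), key=lambda t: -t[1])[:3]
  print(top)   # highlights words such as 'wipe', 'server', 'transfer'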

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Original article here.

 


standard

State Of Machine Learning And AI, 2017

2017-10-01 - By 

AI is receiving major R&D investment from tech giants including Google, Baidu, Facebook and Microsoft.

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute also published an article summarizing the findings, titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals.  McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B to $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. The current rate of AI investment is 3X the external investment growth since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly and financial services industries. The graphic below illustrates the trends the study team found during their analysis.
  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that only a small percentage (up to 9%) of AI investment came from Venture Capital (VC), Private Equity (PE), and other external funding. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.
  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors favor machine learning startups because software-based, code-driven start-ups can scale up and add new features quickly; they are preferred over their more cost-intensive, hardware-based robotics counterparts, which cannot iterate at the same pace. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.
  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups have also gotten their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and key area of asset, usage, and labor category.
  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products, and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors and industries leading in AI adoption today and intending to grow their investments the most in the next three years.
  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that benefit from senior management support for AI initiatives, have invested in infrastructure to support its scale, and have clear business goals achieve profit margins 3 to 15 percentage points higher. Of the over 3,000 business leaders who were interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.
  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up 90 seconds after searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would have reduced its revenue by $1B annually.

 

Original article here.


standard

Smartphone users on Wi-Fi drive most website traffic

2017-09-26 - By 

Smartphones are responsible for significant web traffic growth, and a surprising amount of it is on Wi-Fi, not mobile networks.

Web visits from desktops and tablets have declined dramatically, says Adobe Digital Insights in Adobe Mobile Trends Refresh — Q2 2017.

The device people are using: their smartphone. And the majority of that device’s traffic is arriving via Wi-Fi connections, not mobile networks, the analytics-oriented research firm says. Adobe has been tracking over 150 billion visits to 400 websites and apps since 2015.

The sites these mobile users are visiting are large-organization national news, media and entertainment, and retail — with over 60 percent of those smartphone visits connecting through Wi-Fi.

Major travel, banking and investment, automotive, and insurance company sites are up there, too, with more than 50 percent of their smartphone-derived traffic coming through Wi-Fi instead of via mobile networks.

Cisco bullish on Wi-Fi

Networking equipment vendor Cisco is also bullish on Wi-Fi. In research published in February, it says that by next year, “Wi-Fi traffic will even surpass fixed/wired traffic.” And by 2021, 63 percent of global mobile data traffic will be offloaded onto Wi-Fi networks and not use mobile.

“By 2021, 29 percent of the global IP traffic will be carried by Wi-Fi networks from dual mode [mobile] devices,” Cisco says.

Wi-Fi is also expected to handle mobile network offloading for many future Internet of Things (IoT) devices, the company says.

It seems reports of Wi-Fi’s death, like Mark Twain’s, are greatly exaggerated.

Reasons for Wi-Fi’s continued dominance over mobile include speed, since Wi-Fi is often faster than mobile networks, and cost, since Wi-Fi costs consumers less than mobile data. Expect a reversal, though, if mobile networks get cut-rate enough.

Will 5G displace Wi-Fi?

What will happen when 5G mobile networks come along in or around 2020? Is the writing on the wall for Wi-Fi then? Maybe. For a possible answer, one may need to look at history.

“New cellular technologies with higher speeds than their predecessors tend to have lower offload rates,” Cisco says. That’s because of more capacity and advantageous data limits for the consumer. It’s designed to kick-start the tech, as was the case with 4G’s launch.

In any case, however one looks at it, mobile internet of one kind or another, rather than fixed connections, is what’s driving web traffic growth. People want smartphones for consuming media.

“Bigger screens are losing share,” Adobe says in an article accompanying its report.

U.S. government websites corroborate Adobe’s mobile trend. In an August report, the General Services Administration (GSA) said mobile had grabbed 43 percent of all traffic to government websites in December 2016, compared to 36 percent a year before. It sees even more growth this year, it says.

“Most industries see more than half of their traffic from mobile devices,” Adobe concludes.

Original article here.

 


standard

Google Launches Public Beta of Cloud Dataprep

2017-09-24 - By 

Google recently announced that Google Cloud Dataprep—the new managed data wrangling service developed in collaboration with Trifacta—is now available in public beta. This service enables analysts and data scientists to visually explore and prepare data for analysis in seconds within the Google Cloud Platform.

Now that the Google Cloud Dataprep beta is open to the public, more companies can experience the benefits of Trifacta’s data preparation platform. From predictive transformation to interactive exploration, Trifacta’s intuitive workflow has accelerated the data preparation process for Google Cloud Dataprep customers who have tried it out within the private beta.

In addition to the same functionality found in Trifacta, Google Cloud Dataprep users also benefit from features that are unique to the collaboration with Google:

True SaaS offering 
With Google Cloud Dataprep, there’s no software to install or manage. Unlike a marketplace offering that deploys into the Google ecosystem, Cloud Dataprep is a fully managed service that does not require configuration or administration.

Single Sign On through Google Cloud Identity & Access Management
All users can easily access Cloud Dataprep using the same login credentials they already use for other Google services. This ensures highly secure and consistent access to Google services and data based on the permissions and roles defined through Google IAM.

Integration to Google Cloud Storage and Google BigQuery (read & write)
Users can browse, preview, and import data from Google Cloud Storage and Google BigQuery, and publish results back to them, directly through Cloud Dataprep. This is a huge boon for teams that rely on Google-generated data; a short code sketch of this workflow follows the list below. For example:

  • Marketing teams leveraging DoubleClick Ads data can make that data available in Google BigQuery, then use Cloud Dataprep to prepare and publish the result back into BigQuery for downstream analysis. Learn more here.
  • Telematics data scientists can connect Cloud Dataprep directly to raw log data (often in JSON format) stored on Google Cloud Storage, and then prepare it for machine learning models executed in TensorFlow.
  • Retail business analysts can upload Excel data from their desktop to Google Cloud Storage, parse and combine it with BigQuery data to augment the results (beyond the limits of Excel), and eventually make the data available to various analytic tools like Google Data Studio, Looker, Tableau, Qlik or Zoomdata.
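For context on what the “publish results to BigQuery for downstream analysis” step can look like in code, here is a minimal sketch using the google-cloud-bigquery client library; the project, dataset, bucket, file, and column names are placeholders, and the CSV is assumed to have already been exported by Cloud Dataprep:

  # Requires: pip install google-cloud-bigquery
  # All project, dataset, bucket, and column names below are placeholders.
  from google.cloud import bigquery

  client = bigquery.Client(project="my-analytics-project")

  # Load a Dataprep-exported CSV from Cloud Storage into a BigQuery table.
  job_config = bigquery.LoadJobConfig(
      source_format=bigquery.SourceFormat.CSV,
      skip_leading_rows=1,
      autodetect=True,
  )
  load_job = client.load_table_from_uri(
      "gs://my-bucket/dataprep-output/campaign_results.csv",
      "my-analytics-project.marketing.campaign_results",
      job_config=job_config,
  )
  load_job.result()   # wait for the load job to finish

  # Query the freshly loaded table for downstream analysis.
  query = """
      SELECT campaign, SUM(clicks) AS clicks
      FROM `my-analytics-project.marketing.campaign_results`
      GROUP BY campaign
      ORDER BY clicks DESC
  """
  for row in client.query(query).result():
      print(row.campaign, row.clicks)

From there, the resulting table can be pointed at Data Studio, Looker, Tableau, or any of the other tools mentioned above.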

Big data scale provided by Cloud Dataflow 
By leveraging a serverless, auto-scaling data processing engine (Google Cloud Dataflow), Cloud Dataprep can handle any size of data, located anywhere in the world. This means that users don’t have to worry about optimizing their logic as their data grows, nor have to choose where their jobs run. At the same time, IT can rely on Cloud Dataflow to efficiently scale resources only as needed. Finally, it allows for enterprise-grade monitoring and logging in Google Stackdriver.

World-class Google support
As a Google service, Cloud Dataprep is subject to the same standards as other Google beta products and services. These benefits include:

  • World class uptime and availability around the world
  • Official support provided by Google Cloud Platform
  • Centralized usage-based billing managed on a per project basis with quotas and detailed reports

Early Google Cloud Dataprep Customer Feedback

Although Cloud Dataprep has only been in private beta for a short amount of time, we’ve had substantial participation from thousands of early private beta users and we’re excited to share some of the great feedback. Here’s a sample of what early users are saying:

Merkle Inc. 
“Cloud Dataprep allows us to quickly view and understand new datasets, and its flexibility supports our data transformation needs. The GUI is nicely designed, so the learning curve is minimal. Our initial data preparation work is now completed in minutes, not hours or days,” says Henry Culver, IT Architect at Merkle. “The ability to rapidly see our data, and to be offered transformation suggestions in data delivery, is a huge help to us as we look to rapidly assimilate new datasets.”

Venture Development Center

“We needed a platform that was versatile, easy to utilize and provided a migration path as our needs for data review, evaluation, hygiene, interlinking and analysis advanced. We immediately knew that Google Cloud Platform, with Cloud Dataprep and BigQuery, were exactly what we were looking for. As we develop our capability and movement into the data cataloging, QA and delivery cycle, Cloud Dataprep allows us to accomplish this quickly and adeptly,” says Matthew W. Staudt, President of Venture Development Center.

For more information on these customers check out Google’s blog on the public beta launch here.

Cloud Dataprep Public Beta: Furthering Wrangling Success

Now that the beta version of Google Cloud Dataprep is open to the public, we’re excited to see more organizations achieve data wrangling success. From multinational banks to consumer retail companies to government agencies, a growing number of customers are using Trifacta’s consistent transformation logic, user experience, workflow, metadata management, and comprehensive data governance to reduce data preparation times and improve data quality.

If you’re interested in Google Cloud Dataprep, you can sign up with your own personal account for free access or log in using your company’s existing Google account. Visit cloud.google.com/dataprep to learn more.

For more information about how Trifacta interoperates with cloud providers like Google Cloud and with on-prem infrastructure, download our brief.

 

Original article here.

 


standard

New Theory Cracks Open the Black Box of Deep Learning

2017-09-22 - By 

A new idea called the “information bottleneck” is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.

Even as machines known as “deep neural networks” have learned to converse, drive cars, beat video games and Go champions, dream, paint pictures and help make scientific discoveries, they have also confounded their human creators, who never expected so-called “deep-learning” algorithms to work so well. No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either).

Like a brain, a deep neural network has layers of neurons — artificial ones that are figments of computer memory. When a neuron fires, it sends signals to connected neurons in the layer above. During deep learning, connections in the network are strengthened or weakened as needed to make the system better at sending signals from input data — the pixels of a photo of a dog, for instance — up through the layers to neurons associated with the right high-level concepts, such as “dog.” After a deep neural network has “learned” from thousands of sample dog photos, it can identify dogs in new photos as accurately as people can. The magic leap from special cases to general concepts during learning gives deep neural networks their power, just as it underlies human reasoning, creativity and the other faculties collectively termed “intelligence.” Experts wonder what it is about deep learning that enables generalization — and to what extent brains apprehend reality in the same way.

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.

Some researchers remain skeptical that the theory fully accounts for the success of deep learning, but Kyle Cranmer, a particle physicist at New York University who uses machine learning to analyze particle collisions at the Large Hadron Collider, said that as a general principle of learning, it “somehow smells right.”

Geoffrey Hinton, a pioneer of deep learning who works at Google and the University of Toronto, emailed Tishby after watching his Berlin talk. “It’s extremely interesting,” Hinton wrote. “I have to listen to it another 10,000 times to really understand it, but it’s very rare nowadays to hear a talk with a really original idea in it that may be the answer to a really major puzzle.”

According to Tishby, who views the information bottleneck as a fundamental principle behind learning, whether you’re an algorithm, a housefly, a conscious being, or a physics calculation of emergent behavior, that long-awaited answer “is that the most important part of learning is actually forgetting.”

The Bottleneck

Tishby began contemplating the information bottleneck around the time that other researchers were first mulling over deep neural networks, though neither concept had been named yet. It was the 1980s, and Tishby was thinking about how good humans are at speech recognition — a major challenge for AI at the time. Tishby realized that the crux of the issue was the question of relevance: What are the most relevant features of a spoken word, and how do we tease these out from the variables that accompany them, such as accents, mumbling and intonation? In general, when we face the sea of data that is reality, which signals do we keep?

“This notion of relevant information was mentioned many times in history but never formulated correctly,” Tishby said in an interview last month. “For many years people thought information theory wasn’t the right way to think about relevance, starting with misconceptions that go all the way to Shannon himself.”

Claude Shannon, the founder of information theory, in a sense liberated the study of information starting in the 1940s by allowing it to be considered in the abstract — as 1s and 0s with purely mathematical meaning. Shannon took the view that, as Tishby put it, “information is not about semantics.” But, Tishby argued, this isn’t true. Using information theory, he realized, “you can define ‘relevant’ in a precise sense.”

Imagine X is a complex data set, like the pixels of a dog photo, and Y is a simpler variable represented by those data, like the word “dog.” You can capture all the “relevant” information in X about Y by compressing X as much as you can without losing the ability to predict Y. In their 1999 paper, Tishby and co-authors Fernando Pereira, now at Google, and William Bialek, now at Princeton University, formulated this as a mathematical optimization problem. It was a fundamental idea with no killer application.
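
The trade-off between compression and prediction can be written down compactly. A common way to state the information bottleneck objective (a sketch in the notation above, with T the compressed representation of X and β setting how strongly prediction is weighted against compression) is:

```latex
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
```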

“I’ve been thinking along these lines in various contexts for 30 years,” Tishby said. “My only luck was that deep neural networks became so important.”

Eyeballs on Faces on People on Scenes

Though the concept behind deep neural networks had been kicked around for decades, their performance in tasks like speech and image recognition only took off in the early 2010s, due to improved training regimens and more powerful computer processors. Tishby recognized their potential connection to the information bottleneck principle in 2014 after reading a surprising paper by the physicists David Schwab and Pankaj Mehta.

The duo discovered that a deep-learning algorithm invented by Hinton called the “deep belief net” works, in a particular case, exactly like renormalization, a technique used in physics to zoom out on a physical system by coarse-graining over its details and calculating its overall state. When Schwab and Mehta applied the deep belief net to a model of a magnet at its “critical point,” where the system is fractal, or self-similar at every scale, they found that the network automatically used the renormalization-like procedure to discover the model’s state. It was a stunning indication that, as the biophysicist Ilya Nemenman said at the time, “extracting relevant features in the context of statistical physics and extracting relevant features in the context of deep learning are not just similar words, they are one and the same.”

The only problem is that, in general, the real world isn’t fractal. “The natural world is not ears on ears on ears on ears; it’s eyeballs on faces on people on scenes,” Cranmer said. “So I wouldn’t say [the renormalization procedure] is why deep learning on natural images is working so well.” But Tishby, who at the time was undergoing chemotherapy for pancreatic cancer, realized that both deep learning and the coarse-graining procedure could be encompassed by a broader idea. “Thinking about science and about the role of my old ideas was an important part of my healing and recovery,” he said.

In 2015, he and his student Noga Zaslavsky hypothesized that deep learning is an information bottleneck procedure that compresses noisy data as much as possible while preserving information about what the data represent. Tishby and Shwartz-Ziv’s new experiments with deep neural networks reveal how the bottleneck procedure actually plays out. In one case, the researchers used small networks that could be trained to label input data with a 1 or 0 (think “dog” or “no dog”) and gave their 282 neural connections random initial strengths. They then tracked what happened as the networks engaged in deep learning with 3,000 sample input data sets.

The basic algorithm used in the majority of deep-learning procedures to tweak neural connections in response to data is called “stochastic gradient descent”: Each time the training data are fed into the network, a cascade of firing activity sweeps upward through the layers of artificial neurons. When the signal reaches the top layer, the final firing pattern can be compared to the correct label for the image — 1 or 0, “dog” or “no dog.” Any differences between this firing pattern and the correct pattern are “back-propagated” down the layers, meaning that, like a teacher correcting an exam, the algorithm strengthens or weakens each connection to make the network layer better at producing the correct output signal. Over the course of training, common patterns in the training data become reflected in the strengths of the connections, and the network becomes expert at correctly labeling the data, such as by recognizing a dog, a word, or a 1.
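
For readers who want to see that loop in code, here is a minimal sketch of stochastic gradient descent with backpropagation on a toy “dog / no dog” classifier. PyTorch is an assumed framework, and the 256-dimensional random inputs and synthetic labels are placeholders for real image data:

```python
import torch
from torch import nn

torch.manual_seed(0)
x = torch.rand(3000, 256)                          # stand-in for 3,000 training images
y = (x.mean(dim=1, keepdim=True) > 0.5).float()    # stand-in for "dog" / "no dog" labels

model = nn.Sequential(                             # a small stack of layers
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)
loss_fn = nn.BCELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(10):
    for i in range(0, len(x), 32):                 # mini-batches of training data
        xb, yb = x[i:i + 32], y[i:i + 32]
        pred = model(xb)                           # forward pass: activity sweeps up the layers
        loss = loss_fn(pred, yb)                   # compare the top-layer pattern to the label
        opt.zero_grad()
        loss.backward()                            # back-propagate the error down the layers
        opt.step()                                 # strengthen or weaken each connection
```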

In their experiments, Tishby and Shwartz-Ziv tracked how much information each layer of a deep neural network retained about the input data and how much information each one retained about the output label. The scientists found that, layer by layer, the networks converged to the information bottleneck theoretical bound: a theoretical limit derived in Tishby, Pereira and Bialek’s original paper that represents the absolute best the system can do at extracting relevant information. At the bound, the network has compressed the input as much as possible without sacrificing the ability to accurately predict its label.
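
The measurements behind that result boil down to estimating two mutual-information quantities per layer. The sketch below assumes a simple binning estimator in the spirit of such experiments (the bin count and the random demo data are arbitrary choices) and shows how I(T;Y) can be computed from discretized activations:

```python
import numpy as np

def discrete_entropy(rows):
    # Empirical entropy (in bits) of a collection of discrete rows treated as symbols.
    _, counts = np.unique(rows, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(t, y, n_bins=30):
    # t: (n_samples, n_units) activations of one layer; y: (n_samples,) integer labels.
    binned = np.digitize(t, np.linspace(t.min(), t.max(), n_bins))
    y = np.asarray(y).reshape(-1, 1)
    # I(T;Y) = H(T) + H(Y) - H(T,Y); for a deterministic network evaluated on distinct
    # inputs, I(X;T) is often estimated simply as H(T) of the binned activations.
    return discrete_entropy(binned) + discrete_entropy(y) - discrete_entropy(np.hstack([binned, y]))

# Example: a 10-unit layer's activations for 1,000 samples with binary labels.
rng = np.random.default_rng(0)
t = np.tanh(rng.normal(size=(1000, 10)))
y = rng.integers(0, 2, size=1000)
print(mutual_information(t, y))   # I(T;Y) in bits
```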

Tishby and Shwartz-Ziv also made the intriguing discovery that deep learning proceeds in two phases: a short “fitting” phase, during which the network learns to label its training data, and a much longer “compression” phase, during which it becomes good at generalization, as measured by its performance at labeling new test data.

As a deep neural network tweaks its connections by stochastic gradient descent, at first the number of bits it stores about the input data stays roughly constant or increases slightly, as connections adjust to encode patterns in the input and the network gets good at fitting labels to it. Some experts have compared this phase to memorization.

Then learning switches to the compression phase. The network starts to shed information about the input data, keeping track of only the strongest features — those correlations that are most relevant to the output label. This happens because, in each iteration of stochastic gradient descent, more or less accidental correlations in the training data tell the network to do different things, dialing the strengths of its neural connections up and down in a random walk. This randomization is effectively the same as compressing the system’s representation of the input data. As an example, some photos of dogs might have houses in the background, while others don’t. As a network cycles through these training photos, it might “forget” the correlation between houses and dogs in some photos as other photos counteract it. It’s this forgetting of specifics, Tishby and Shwartz-Ziv argue, that enables the system to form general concepts. Indeed, their experiments revealed that deep neural networks ramp up their generalization performance during the compression phase, becoming better at labeling test data. (A deep neural network trained to recognize dogs in photos might be tested on new photos that may or may not include dogs, for instance.)

It remains to be seen whether the information bottleneck governs all deep-learning regimes, or whether there are other routes to generalization besides compression. Some AI experts see Tishby’s idea as one of many important theoretical insights about deep learning to have emerged recently. Andrew Saxe, an AI researcher and theoretical neuroscientist at Harvard University, noted that certain very large deep neural networks don’t seem to need a drawn-out compression phase in order to generalize well. Instead, researchers program in something called early stopping, which cuts training short to prevent the network from encoding too many correlations in the first place.
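
Early stopping itself is simple to implement. The sketch below is a generic recipe, not the setup used in the studies above; the tiny model and random data are placeholders. Training halts once validation loss stops improving for a fixed number of epochs:

```python
import torch
from torch import nn

torch.manual_seed(0)
x, y = torch.rand(1000, 20), torch.rand(1000, 1)            # placeholder training set
x_val, y_val = torch.rand(200, 20), torch.rand(200, 1)      # placeholder validation set
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

best, patience, bad = float("inf"), 5, 0
for epoch in range(500):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    with torch.no_grad():
        val = loss_fn(model(x_val), y_val).item()            # held-out performance
    if val < best - 1e-5:
        best, bad = val, 0
    else:
        bad += 1
        if bad >= patience:       # cut training short before spurious correlations are encoded
            break
```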

Tishby argues that the network models analyzed by Saxe and his colleagues differ from standard deep neural network architectures, but that nonetheless, the information bottleneck theoretical bound defines these networks’ generalization performance better than other methods. Questions about whether the bottleneck holds up for larger neural networks are partly addressed by Tishby and Shwartz-Ziv’s most recent experiments, not included in their preliminary paper, in which they train much larger deep neural networks, with 330,000 connections, to recognize handwritten digits in the 60,000-image Modified National Institute of Standards and Technology database, a well-known benchmark for gauging the performance of deep-learning algorithms. The scientists saw the same convergence of the networks to the information bottleneck theoretical bound; they also observed the two distinct phases of deep learning, separated by an even sharper transition than in the smaller networks. “I’m completely convinced now that this is a general phenomenon,” Tishby said.

Humans and Machines

The mystery of how brains sift signals from our senses and elevate them to the level of our conscious awareness drove much of the early interest in deep neural networks among AI pioneers, who hoped to reverse-engineer the brain’s learning rules. AI practitioners have since largely abandoned that path in the mad dash for technological progress, instead slapping on bells and whistles that boost performance with little regard for biological plausibility. Still, as their thinking machines achieve ever greater feats — even stoking fears that AI could someday pose an existential threat — many researchers hope these explorations will uncover general insights about learning and intelligence.

Brenden Lake, an assistant professor of psychology and data science at New York University who studies similarities and differences in how humans and machines learn, said that Tishby’s findings represent “an important step towards opening the black box of neural networks,” but he stressed that the brain represents a much bigger, blacker black box. Our adult brains, which boast several hundred trillion connections between 86 billion neurons, in all likelihood employ a bag of tricks to enhance generalization, going beyond the basic image- and sound-recognition learning procedures that occur during infancy and that may in many ways resemble deep learning.

For instance, Lake said the fitting and compression phases that Tishby identified don’t seem to have analogues in the way children learn handwritten characters, which he studies. Children don’t need to see thousands of examples of a character and compress their mental representation over an extended period of time before they’re able to recognize other instances of that letter and write it themselves. In fact, they can learn from a single example. Lake and his colleagues’ models suggest the brain may deconstruct the new letter into a series of strokes — previously existing mental constructs — allowing the conception of the letter to be tacked onto an edifice of prior knowledge. “Rather than thinking of an image of a letter as a pattern of pixels and learning the concept as mapping those features” as in standard machine-learning algorithms, Lake explained, “instead I aim to build a simple causal model of the letter,” a shorter path to generalization.

Such brainy ideas might hold lessons for the AI community, furthering the back-and-forth between the two fields. Tishby believes his information bottleneck theory will ultimately prove useful in both disciplines, even if it takes a more general form in human learning than in AI. One immediate insight that can be gleaned from the theory is a better understanding of which kinds of problems can be solved by real and artificial neural networks. “It gives a complete characterization of the problems that can be learned,” Tishby said. These are “problems where I can wipe out noise in the input without hurting my ability to classify. This is natural vision problems, speech recognition. These are also precisely the problems our brain can cope with.”

Meanwhile, both real and artificial neural networks stumble on problems in which every detail matters and minute differences can throw off the whole result. Most people can’t quickly multiply two large numbers in their heads, for instance. “We have a long class of problems like this, logical problems that are very sensitive to changes in one variable,” Tishby said. “Classifiability, discrete problems, cryptographic problems. I don’t think deep learning will ever help me break cryptographic codes.”

Generalizing — traversing the information bottleneck, perhaps — means leaving some details behind. This isn’t so good for doing algebra on the fly, but that’s not a brain’s main business. We’re looking for familiar faces in the crowd, order in chaos, salient signals in a noisy world.

Original article here.


standard

AI will be Bigger than the Internet

2017-09-19 - By 

AI will be the next general purpose technology (GPT), according to experts. Beyond the disruption of business and data – things that many of us don’t have a need to care about – AI is going to change the way most people live, as well.

As a GPT, AI is predicted to integrate within our entire society in the next few years, and become entirely mainstream — like electricity and the internet.

The field of AI research has the potential to fundamentally change more technologies than, arguably, anything before it. While electricity brought illumination and the internet revolutionized communication, machine learning is already disrupting finance, chemistry, diagnostics, analytics, and consumer electronics – to name a few. This is going to bring efficiency to a world with more data than we know what to do with.

It’s also going to disrupt your average person’s day — in a good way — just as other GPTs did before it.

Anyone who lived in a world before Google and the internet may recall a time when people would actually have arguments about simple facts. There wasn’t an easy way, while riding in a car, to determine which band sang a song that was on the radio. If the DJ didn’t announce the name of the artist and track before a song played, you could be subject to anywhere from three to seven minutes of heated discussion over whether “The Four Tops” or “The Temptations” sang a particular song, for example.

Today we’re used to looking up things, and for many of us it’s almost second nature. We’re throwing cookbooks out, getting rid of encyclopedias, and libraries are mostly meeting places for fiction enthusiasts these days. This is what a general purpose technology does — it changes everything.

If your doctor told you they didn’t believe in the internet you’d get a new doctor. Imagine a surgeon who chose not to use electricity — would you let them operate on you?

The AI that truly changes the world beyond simply augmenting humans, like assisted steering does, is the one that starts removing other technology from our lives, like the internet did. With the web we’ve shrunken millions of books and videos down to the size of a single iPhone, at least as far as consumers are concerned.

AI is being layered into our everyday lives, as a general purpose technology, like electricity and the internet. And once it reaches its early potential we’ll be getting back seconds of time at first, then minutes, and eventually we’ll have devices smart enough to no longer need us to direct them at every single step, giving us back all the time we lost when we started splitting our reality between people and computers.

Siri and Cortana won’t need to be told what to do all the time, for example, once AI learns to start paying attention to the world outside of the smart phone.

Now, if only I could convince the teenager in my house to do the same …

Original article here.


standard

Bluetooth White Paper Identified 8 Vulnerabilities (video)

2017-09-13 - By 

If you ask two researchers what is the problem with Bluetooth they will have a simple answer.

“Bluetooth is complicated. Too complicated. Too many specific applications are defined in the stack layer, with endless replication of facilities and features.” Case in point: the WiFi specification (802.11) is only 450 pages long, they said, while the Bluetooth specification reaches 2822 pages.

Unfortunately, they added, the complexity has “kept researchers from auditing its implementations at the same level of scrutiny that other highly exposed protocols, and outwards-facing interfaces have been treated with.”

A lack of review can leave vulnerabilities undiscovered.

And that is a fitting segue to this week’s news about devices with Bluetooth capabilities.

At Armis Labs, researchers Ben Seri and Gregory Vishnepolsky discussed the vulnerabilities in modern Bluetooth stacks; according to the Armis site’s overview, devices with Bluetooth capabilities were estimated at over 8.2 billion.

Seri and Vishnepolsky are the authors of a 42-page white paper detailing what is wrong and at stake in their findings. The discovery is being described as an “attack vector endangering major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them.”

They are calling the vector BlueBorne, as it spreads via the air and attacks devices via Bluetooth. Attackers can hack into cellphones and computers simply because they have Bluetooth on. “Just by having Bluetooth on, we can get malicious code on your device,” Nadir Izrael, CTO and cofounder of security firm Armis, told Ars Technica.

Let’s ponder this, as it highlights a troubling aspect of the attack. As Lorenzo Franceschi-Bicchierai reported at Motherboard:

“‘The user is not involved in the process, they don’t need to be in discoverable mode, they don’t have to have a Bluetooth connection active, just have Bluetooth on,’ Nadir Izrael, the co-founder and chief technology officer for Armis, told Motherboard.”

Their white paper identified eight vulnerabilities. (The authors thanked Alon Livne for the development of the Linux RCE exploit.)

Original article here.


standard

The Scale of the Internet 2017 (infograph)

2017-09-04 - By 

Just a month ago, it was revealed that Facebook has more than two billion active monthly users. That means that in any given month, more than 25% of Earth’s population logs in to their Facebook account at least once.

This kind of scale is almost impossible to grasp.

Here’s one attempt to put it in perspective: imagine Yankee Stadium’s seats packed with 50,000 people, and multiply this by a factor of 40,000.

That’s about how many different people log into Facebook every month worldwide.
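
The arithmetic behind that analogy is straightforward:

```latex
50{,}000 \ \text{seats per stadium} \times 40{,}000 \ \text{stadiums} = 2{,}000{,}000{,}000 \ \text{people}
```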

A smaller window

The Yankee Stadium analogy sort of helps, but it’s still very hard to picture.

The scale of the internet is so great, that it doesn’t make sense to look at the information on a monthly basis, or even to use daily figures.

Instead, let’s drill down to what happens in just one internet minute:

Created each year by Lori Lewis and Chadd Callahan of Cumulus Media, the above graphic shows the incredible scale of e-commerce, social media, email, and other content creation that happens on the web.

Content competition

If you’ve ever had a post on Facebook or Instagram fizzle out, it’s safe to say that the above proliferation of content in our social feeds is part of the cause.

In a social media universe where there are no barriers to entry and almost infinite amounts of competition, the content game has tilted to become a “winner take all” scenario. Since people don’t have the time to look at the 452,200 tweets sent every minute, they naturally gravitate to the things that already have social proof.

People look to the people they trust to see what’s already being talked about, which is why influencers are more important than ever to marketers.

Eyes on the prize

For those who are able to get the strategy and timing right, the potential spoils are mouthwatering.

The never-ending challenge, however, is how to stand out from the crowd.

Original article here.


standard

AI detectives are cracking open the black box of deep learning (video)

2017-08-30 - By 

Jason Yosinski sits in a small glass box at Uber’s San Francisco, California, headquarters, pondering the mind of an artificial intelligence. An Uber research scientist, Yosinski is performing a kind of brain surgery on the AI running on his laptop. Like many of the AIs that will soon be powering so much of modern life, including self-driving Uber cars, Yosinski’s program is a deep neural network, with an architecture loosely inspired by the brain. And like the brain, the program is hard to understand from the outside: It’s a black box.

This particular AI has been trained, using a vast sum of labeled images, to recognize objects as random as zebras, fire trucks, and seat belts. Could it recognize Yosinski and the reporter hovering in front of the webcam? Yosinski zooms in on one of the AI’s individual computational nodes—the neurons, so to speak—to see what is prompting its response. Two ghostly white ovals pop up and float on the screen. This neuron, it seems, has learned to detect the outlines of faces. “This responds to your face and my face,” he says. “It responds to different size faces, different color faces.”

No one trained this network to identify faces. Humans weren’t labeled in its training images. Yet learn faces it did, perhaps as a way to recognize the things that tend to accompany them, such as ties and cowboy hats. The network is too complex for humans to comprehend its exact decisions. Yosinski’s probe had illuminated one small part of it, but overall, it remained opaque. “We build amazing models,” he says. “But we don’t quite understand them. And every year, this gap is going to get a bit larger.”
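
Readers who want to poke at a network the same way can do so with a few lines of code. The sketch below uses a PyTorch forward hook (an assumed framework; the toy model, layer index, and random “webcam frame” are placeholders for a trained vision model) to record what an individual unit responds to:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 10))

captured = {}
def record(module, inputs, output):
    captured["activations"] = output.detach()     # firing pattern of the probed layer

hook = model[1].register_forward_hook(record)     # probe the layer after the first ReLU
image = torch.rand(1, 3 * 64 * 64)                # stand-in for one webcam frame
model(image)
hook.remove()

unit = 7                                          # an arbitrary neuron to inspect
print("unit", unit, "fired at", captured["activations"][0, unit].item())
```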

This video provides a high-level overview of the problem:

Each month, it seems, deep neural networks, or deep learning, as the field is also called, spread to another scientific discipline. They can predict the best way to synthesize organic molecules. They can detect genes related to autism risk. They are even changing how science itself is conducted. The AIs often succeed in what they do. But they have left scientists, whose very enterprise is founded on explanation, with a nagging question: Why, model, why?

That interpretability problem, as it’s known, is galvanizing a new generation of researchers in both industry and academia. Just as the microscope revealed the cell, these researchers are crafting tools that will allow insight into how neural networks make decisions. Some tools probe the AI without penetrating it; some are alternative algorithms that can compete with neural nets, but with more transparency; and some use still more deep learning to get inside the black box. Taken together, they add up to a new discipline. Yosinski calls it “AI neuroscience.”

Opening up the black box

Loosely modeled after the brain, deep neural networks are spurring innovation across science. But the mechanics of the models are mysterious: They are black boxes. Scientists are now developing tools to get inside the mind of the machine.

Marco Ribeiro, a graduate student at the University of Washington in Seattle, strives to understand the black box by using a class of AI neuroscience tools called counter-factual probes. The idea is to vary the inputs to the AI—be they text, images, or anything else—in clever ways to see which changes affect the output, and how. Take a neural network that, for example, ingests the words of movie reviews and flags those that are positive. Ribeiro’s program, called Local Interpretable Model-Agnostic Explanations (LIME), would take a review flagged as positive and create subtle variations by deleting or replacing words. Those variants would then be run through the black box to see whether it still considered them to be positive. On the basis of thousands of tests, LIME can identify the words—or parts of an image or molecular structure, or any other kind of data—most important in the AI’s original judgment. The tests might reveal that the word “horrible” was vital to a panning or that “Daniel Day Lewis” led to a positive review. But although LIME can diagnose those singular examples, that result says little about the network’s overall insight.
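
A toy version of that perturbation idea fits in a few lines. The sketch below is not the actual LIME library: it masks out random subsets of words, asks a hypothetical black-box scorer to rate each variant, and fits a linear model whose weights approximate each word’s importance:

```python
import numpy as np

def black_box(words):
    # Hypothetical sentiment classifier: probability that the review is positive.
    return 0.9 if "great" in words else 0.2

review = "the acting was great but the plot dragged".split()
rng = np.random.default_rng(0)

masks, scores = [], []
for _ in range(2000):                                 # thousands of perturbed variants
    mask = rng.integers(0, 2, size=len(review))       # 1 = keep the word, 0 = delete it
    variant = [w for w, keep in zip(review, mask) if keep]
    masks.append(mask)
    scores.append(black_box(variant))

# A least-squares linear fit over the masks approximates each word's contribution.
weights, *_ = np.linalg.lstsq(np.array(masks, dtype=float), np.array(scores), rcond=None)
for word, w in sorted(zip(review, weights), key=lambda p: -p[1]):
    print(f"{word:>8}: {w:+.3f}")
```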

New counterfactual methods like LIME seem to emerge each month. But Mukund Sundararajan, another computer scientist at Google, devised a probe that doesn’t require testing the network a thousand times over: a boon if you’re trying to understand many decisions, not just a few. Instead of varying the input randomly, Sundararajan and his team introduce a blank reference—a black image or a zeroed-out array in place of text—and transition it step-by-step toward the example being tested. Running each step through the network, they watch the jumps it makes in certainty, and from that trajectory they infer features important to a prediction.

Sundararajan compares the process to picking out the key features that identify the glass-walled space he is sitting in—outfitted with the standard medley of mugs, tables, chairs, and computers—as a Google conference room. “I can give a zillion reasons.” But say you slowly dim the lights. “When the lights become very dim, only the biggest reasons stand out.” Those transitions from a blank reference allow Sundararajan to capture more of the network’s decisions than Ribeiro’s variations do. But deeper, unanswered questions are always there, Sundararajan says—a state of mind familiar to him as a parent. “I have a 4-year-old who continually reminds me of the infinite regress of ‘Why?’”
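
The “dimming the lights” procedure can also be sketched in code. The snippet below is a rough, generic version of that baseline-to-input interpolation (in the spirit of Sundararajan’s integrated-gradients work, not his exact implementation; the tiny model and input are placeholders): it steps from a blank reference toward the real input, accumulates gradients along the way, and reads off which features mattered most:

```python
import torch
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
x = torch.rand(8)                  # the example being explained
baseline = torch.zeros(8)          # the blank, "lights out" reference
steps = 50

total_grad = torch.zeros_like(x)
for k in range(1, steps + 1):
    point = baseline + (k / steps) * (x - baseline)    # transition step-by-step toward x
    point.requires_grad_(True)
    model(point).backward()                            # gradient of the prediction at this step
    total_grad += point.grad

attribution = (x - baseline) * total_grad / steps      # average gradient times input change
print(attribution)                                     # larger values suggest more important features
```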

The urgency comes not just from science. According to a directive from the European Union, companies deploying algorithms that substantially influence the public must by next year create “explanations” for their models’ internal logic. The Defense Advanced Research Projects Agency, the U.S. military’s blue-sky research arm, is pouring $70 million into a new program, called Explainable AI, for interpreting the deep learning that powers drones and intelligence-mining operations. The drive to open the black box of AI is also coming from Silicon Valley itself, says Maya Gupta, a machine-learning researcher at Google in Mountain View, California. When she joined Google in 2012 and asked AI engineers about their problems, accuracy wasn’t the only thing on their minds, she says. “I’m not sure what it’s doing,” they told her. “I’m not sure I can trust it.”

Rich Caruana, a computer scientist at Microsoft Research in Redmond, Washington, knows that lack of trust firsthand. As a graduate student in the 1990s at Carnegie Mellon University in Pittsburgh, Pennsylvania, he joined a team trying to see whether machine learning could guide the treatment of pneumonia patients. In general, sending the hale and hearty home is best, so they can avoid picking up other infections in the hospital. But some patients, especially those with complicating factors such as asthma, should be admitted immediately. Caruana applied a neural network to a data set of symptoms and outcomes provided by 78 hospitals. It seemed to work well. But disturbingly, he saw that a simpler, transparent model trained on the same records suggested sending asthmatic patients home, indicating some flaw in the data. And he had no easy way of knowing whether his neural net had picked up the same bad lesson. “Fear of a neural net is completely justified,” he says. “What really terrifies me is what else did the neural net learn that’s equally wrong?”

Today’s neural nets are far more powerful than those Caruana used as a graduate student, but their essence is the same. At one end sits a messy soup of data—say, millions of pictures of dogs. Those data are sucked into a network with a dozen or more computational layers, in which neuron-like connections “fire” in response to features of the input data. Each layer reacts to progressively more abstract features, allowing the final layer to distinguish, say, terrier from dachshund.

At first the system will botch the job. But each result is compared with labeled pictures of dogs. In a process called backpropagation, the outcome is sent backward through the network, enabling it to reweight the triggers for each neuron. The process repeats millions of times until the network learns—somehow—to make fine distinctions among breeds. “Using modern horsepower and chutzpah, you can get these things to really sing,” Caruana says. Yet that mysterious and flexible power is precisely what makes them black boxes.

Complete original article here.

 


standard

What Does Serverless Computing Mean?

2017-08-24 - By 

For developers, worrying about infrastructure is a chore they can do without. Serverless computing relieves that burden.

It’s always unfortunate to start the definition of a phrase by calling it a misnomer, but that’s where you have to begin with serverless computing: Of course there will always be servers. Serverless computing merely adds another layer of abstraction atop cloud infrastructure, so developers no longer need to worry about servers, including virtual ones in the cloud.

To explore this idea, I spoke with one of serverless computing’s most vocal proponents: Chad Arimura, CEO of the startup Iron.io, which develops software for microservices workload management. Arimura says serverless computing is all about the modern developer’s evolving frame of reference:

What we’ve seen is that the atomic unit of scale has been changing from the virtual machine to the container, and if you take this one step further, we’re starting to see something called a function … a single-purpose block of code. It’s something you can conceptualize very easily: Process an image. Transform a piece of data. Encode a piece of video.

To me this sounded a lot like microservices architecture, where instead of building a monolithic application, you assemble an application from single-purpose services. What then is the difference between a microservice and a function?

A service has a common API that people can access. You don’t know what’s going on under the hood. That service may be powered by functions. So functions are the building block code aspect of it; the service itself is like the interface developers can talk to.

Just as developers use microservices to assemble applications and call services from functions, they can grab functions from a library to build the services themselves — without having to consider server infrastructure as they create an application.

AWS Lambda is the best-known example of serverless computing. As an Amazon instructional video explains, “once you upload your code to Lambda, the service handles all the capacity, scaling, patching, and administration of the infrastructure to run your code.” Both AWS Lambda and Iron.io offer function libraries to further accelerate development. Provisioning and autoscaling are on demand.
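
A concrete sense of what such a “function” looks like helps here. The sketch below follows the handler signature AWS documents for Python Lambda functions; the event fields and the “transform a piece of data” task are assumptions chosen for illustration:

```python
import json

def lambda_handler(event, context):
    # `event` carries the input payload; `context` carries runtime metadata.
    values = event.get("values", [])
    result = {"count": len(values), "total": sum(values)}
    return {
        "statusCode": 200,
        "body": json.dumps(result),
    }
```

Once uploaded, the platform worries about provisioning, scaling, and patching; the developer only maintains this single-purpose block of code.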

Keep in mind all of this is above the level of service orchestration — of the type offered by Mesos, Kubernetes, or Docker Swarm. Although Iron.io offers its own orchestration layer, which predated those solutions being generally available, it also plugs into them, “but we really play at the developer/API layer,” says Arimura.

In fact, it’s fair to view the core of Iron.io’s functionality as roughly equivalent to that of AWS Lambda, only deployable on all major public and private cloud platforms. Arimura sees the ability to deploy on premises as a particular Iron.io advantage because the hybrid cloud is becoming more and more essential to the enterprise approach to cloud computing. Think of the consistency and application portability enabled by the same serverless computing environment across public and private clouds.

Arimura even goes as far as to use the controversial term “no-ops,” coined by former Netflix cloud architect Adrian Cockcroft. Of course, just as there will always be servers, there will always be ops to run them. But no-ops, like serverless computing, takes the developer’s point of view: someone else has to worry about that stuff while I create software.

Serverless computing, then, represents yet another leap in developer efficiency, where even virtual infrastructure concerns melt away and libraries of services and functions reduce once again the amount of code developers need to write from scratch.

Enterprise dev shops have been slow to adopt agile, CI/CD, devops, and the like. But as we move up the stack to serverless computing levels of abstraction, the palpable benefits of modern development practices become more and more enticing.

Original article here.


standard

World’s top AI Companies Plead for Ban on Killer Robots

2017-08-21 - By 

A revolution in warfare, in which killer robots, or autonomous weapons systems, become common on the battlefield, is about to begin.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.

The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.

“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.

“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

The letter:

An Open Letter to the United Nations Convention on Certain Conventional Weapons 
As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm. We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations. We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

FULL LIST OF SIGNATORIES (by country):

  • Tiberio Caetano, founder & Chief Scientist at Ambiata, Australia.
  • Mark Chatterton and Leo Gui, founders, MD & of Ingenious AI, Australia.
  • Charles Gretton, founder of Hivery, Australia.
  • Brad Lorge, founder & CEO of Premonition.io, Australia.
  • Brenton O’Brien, founder & CEO of Microbric, Australia.
  • Samir Sinha, founder & CEO of Robonomics AI, Australia.
  • Ivan Storr, founder & CEO, Blue Ocean Robotics, Australia.
  • Peter Turner, founder & MD of Tribotix, Australia.
  • Yoshua Bengio, founder of Element AI & Montreal Institute for Learning Algorithms, Canada.
  • Ryan Gariepy, founder & CTO of Clearpath Robotics, founder & CTO of OTTO Motors, Canada.
  • James Chow, founder & CEO of UBTECH Robotics, China.
  • Robert Li, founder & CEO of Sankobot, China.
  • Marek Rosa, founder & CEO of GoodAI, Czech Republic.
  • Søren Tranberg Hansen, founder & CEO of Brainbotics, Denmark.
  • Markus Järve, founder & CEO of Krakul, Estonia.
  • Harri Valpola, founder & CTO of ZenRobotics, founder & CEO of Curious AI Company, Finland.
  • Esben Østergaard, founder & CTO of Universal Robotics, Denmark.
  • Raul Bravo, founder & CEO of DIBOTICS, France.
  • Raphael Cherrier, founder & CEO of Qucit, France.
  • Jerome Monceaux, founder & CEO of Spoon.ai, founder & CCO of Aldebaran Robotics, France.
  • Charles Ollion, founder & Head of Research at Heuritech, France.
  • Anis Sahbani, founder & CEO of Enova Robotics, France.
  • Alexandre Vallette, founder of SNIPS & Ants Open Innovation Labs, France.
  • Marcus Frei, founder & CEO of NEXT.robotics, Germany
  • Kristinn Thorisson, founder & Director of Icelandic Institute for Intelligent Machines, Iceland.
  • Fahad Azad, founder of Robosoft Systems, India.
  • Debashis Das, Ashish Tupate, Jerwin Prabu, founders (incl. CEO) of Bharati Robotics, India.
  • Pulkit Gaur, founder & CTO of Gridbots Technologies, India.
  • Pranay Kishore, founder & CEO of Phi Robotics Research, India.
  • Shahid Memon, founder & CTO of Vanora Robots, India.
  • Krishnan Nambiar & Shahid Memon, founders, CEO & CTO of Vanora Robotics, India.
  • Achu Wilson, founder & CTO of Sastra Robotics, India.
  • Neill Gernon, founder & MD of Atrovate, founder of Dublin.AI, Ireland.
  • Parsa Ghaffari, founder & CEO of Aylien, Ireland.
  • Alan Holland, founder & CEO of Keelvar Systems, Ireland.
  • Alessandro Prest, founder & CTO of LogoGrab, Ireland.
  • Alessio Bonfietti, founder & CEO of MindIT, Italy.
  • Angelo Sudano, founder & CTO of ICan Robotics, Italy.
  • Shigeo Hirose, Michele Guarnieri, Paulo Debenest, & Nah Kitano, founders, CEO & Directors of HiBot Corporation, Japan.
  • Luis Samahí García González, founder & CEO of QOLbotics, Mexico.
  • Koen Hindriks & Joachim de Greeff, founders, CEO & COO at Interactive Robotics, the Netherlands.
  • Maja Rudinac, founder and CEO of Robot Care Systems, the Netherlands.
  • Jaap van Leeuwen, founder and CEO Blue Ocean Robotics Benelux, the Netherlands.
  • Dyrkoren Erik, Martin Ludvigsen & Christine Spiten, founders, CEO, CTO & Head of Marketing at BlueEye Robotics, Norway.
  • Sergii Kornieiev, founder & CEO of BaltRobotics, Poland.
  • Igor Kuznetsov, founder & CEO of NaviRobot, Russian Federation.
  • Aleksey Yuzhakov & Oleg Kivokurtsev, founders, CEO & COO of Promobot, Russian Federation.
  • Junyang Woon, founder & CEO, Infinium Robotics, former Branch Head & Naval Warfare Operations Officer, Singapore.
  • Jasper Horrell, founder of DeepData, South Africa.
  • Toni Ferrate, founder & CEO of RO-BOTICS, Spain.
  • José Manuel del Río, founder & CEO of Aisoy Robotics, Spain.
  • Victor Martin, founder & CEO of Macco Robotics, Spain.
  • Timothy Llewellynn, founder & CEO of nViso, Switzerland.
  • Francesco Mondada, founder of K-Team, Switzerland.
  • Jurgen Schmidhuber, Faustino Gomez, Jan Koutník, Jonathan Masci & Bas Steunebrink, founders, President & CEO of Nnaisense, Switzerland.
  • Satish Ramachandran, founder of AROBOT, United Arab Emirates.
  • Silas Adekunle, founder & CEO of Reach Robotics, UK.
  • Steve Allpress, founder & CTO of FiveAI, UK.
  • Joel Gibbard and Samantha Payne, founders, CEO & COO of Open Bionics, UK.
  • Richard Greenhill & Rich Walker, founders & MD of Shadow Robot Company, UK.
  • Nic Greenway, founder of React AI Ltd (Aiseedo), UK.
  • Daniel Hulme, founder & CEO of Satalia, UK.
  • Charlie Muirhead & Tabitha Goldstaub, founders & CEO of CognitionX, UK.
  • Geoff Pegman, founder & MD of R U Robots, UK.
  • Mustafa Suleyman, founder & Head of Applied AI, DeepMind, UK.
  • Donald Szeto, Thomas Stone & Kenneth Chan, founders, CTO, COO & Head of Engineering of PredictionIO, UK.
  • Antoine Blondeau, founder & CEO of Sentient Technologies, USA.
  • Brian Gerkey, founder & CEO of Open Source Robotics, USA.
  • Ryan Hickman & Soohyun Bae, founders, CEO & CTO of TickTock.AI, USA.
  • Henry Hu, founder & CEO of Cafe X Technologies, USA.
  • Alfonso Íñiguez, founder & CEO of Swarm Technology, USA.
  • Gary Marcus, founder & CEO of Geometric Intelligence (acquired by Uber), USA.
  • Brian Mingus, founder & CTO of Latently, USA.
  • Mohammad Musa, founder & CEO at Deepen AI, USA.
  • Elon Musk, founder, CEO & CTO of SpaceX, co-founder & CEO of Tesla Motor, USA.
  • Rosanna Myers & Dan Corkum, founders, CEO & CTO of Carbon Robotics, USA.
  • Erik Nieves, founder & CEO of PlusOne Robotics, USA.
  • Steve Omohundro, founder & President of Possibility Research, USA.
  • Jeff Orkin, founder & CEO, Giant Otter Technologies, USA.
  • Dan Reuter, founder & CEO of Electric Movement, USA.
  • Alberto Rizzoli & Simon Edwardsson, founders & CEO of AIPoly, USA.
  • Dan Rubins, founder & CEO of Legal Robot, USA.
  • Stuart Russell, founder & VP of Bayesian Logic Inc., USA.
  • Andrew Schroeder, founder of WeRobotics, USA.
  • Gabe Sibley & Alex Flint, founders, CEO & CPO of Zippy.ai, USA.
  • Martin Spencer, founder & CEO of GeckoSystems, USA.
  • Peter Stone, Mark Ring & Satinder Singh, founders, President/COO, CEO & CTO of Cogitai, USA.
  • Michael Stuart, founder & CEO of Lucid Holdings, USA.
  • Massimiliano Versace, founder, CEO & President, Neurala Inc, USA.

Original article here.


standard

IBM Sets New Tape Capacity Record of 330TB In A Single Cartridge

2017-08-07 - By 

IBM Research announced that, with the help of Sony Storage Media Solutions, it has achieved a capacity breakthrough in tape storage. IBM was able to fit 201 Gb/in^2 (gigabits per square inch) of areal density on a prototype sputtered magnetic tape. This marks the fifth capacity record IBM has hit since 2006.

The current buzz in storage typically goes to faster media, like those that leverage the NVMe interface. StorageReview is guilty of focusing on these new emerging technologies without spending much time on tape, namely because tape is a fairly well known and not terribly exciting storage medium. However, tape remains the most secure, energy-efficient, and cost-effective solution for storing enormous amounts of back-up and archival data. And the deluge of unstructured data that is now being seen everywhere will need to go on something that has the capacity to store it.

This newly announced record for tape capacity would be 20 times the areal density of state of the art commercial tape drives such as the IBM TS1155 enterprise tape drive. The technology allows for 330TB of uncompressed data to be stored on a single tape cartridge. According to IBM this is the equivalent of having the texts of 330 million books in the palm of one’s hand.

Technologies used to hit this new density include:

  • Innovative signal-processing algorithms for the data channel, based on noise-predictive detection principles, which enable reliable operation at a linear density of 818,000 bits per inch with an ultra-narrow 48nm wide tunneling magneto-resistive (TMR) reader.
  • A set of advanced servo control technologies that when combined enable head positioning with an accuracy of better than 7 nanometers. This combined with a 48nm wide (TMR) hard disk drive read head enables a track density of 246,200 tracks per inch, a 13-fold increase over a state of the art TS1155 drive.
  • A novel low friction tape head technology that permits the use of very smooth tape media
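
As a rough consistency check, multiplying the linear density and track density quoted above recovers the headline areal density:

```latex
818{,}000 \ \tfrac{\text{bits}}{\text{inch}} \times 246{,}200 \ \tfrac{\text{tracks}}{\text{inch}}
  \approx 2.01 \times 10^{11} \ \tfrac{\text{bits}}{\text{inch}^2} \approx 201 \ \text{Gb/in}^2
```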

This new technology extends a long line of tape storage innovation at IBM stretching back 60 years; today’s capacity is 165 million times that of the company’s first tape product.

Original article here.

 


standard

Python Tops 2017’s Most Popular Programming Languages

2017-07-26 - By 

Trying to decide which programming languages to study, whether prior to college, during it, or in continuing professional development, can have a significant impact on your employment prospects and opportunities thereafter. Given this, periodic efforts have been made to rank the most important and popular languages over time, to give more insight into where best to focus one’s efforts.

IEEE Spectrum has just put together its fourth interactive list of top programming languages. The group designed the list to allow users to weight their own interests and use-cases independently. You can access the full list and sort it by language type (Web, Mobile, Enterprise, Embedded), fastest growing markets, general trends in usage, and languages popular specifically for open source development. You can also implement your own customized sorting methods.

Programming language rankings and image by IEEE Spectrum

Python has been rising for the past few years, but last year it was as far back as #3, whereas this year it wins overall with a score of 100. Python, C, Java, and C++ round out the top four, all scoring well above 95, while the fifth-place contestant, C# (Microsoft’s own language, developed as part of its .NET framework), sits at a solid 88.6. The drop-off in spots #5-10 is never as large as the gap between C++ and C#, and the tenth language, Apple’s Swift, makes the list for the first time with an overall score of 75.3.

Previously popular languages like Ruby have fallen dramatically, which is part of why Swift has had the opportunity to rise. Apple’s predecessor language to Swift, Objective-C, has fallen to 26th place as Apple transitions itself and developers over to the newer language.

The rankings do change somewhat, depending on your market segment. In Embedded, for example, the top five ranks are occupied by C, C++, Arduino, Assembly, and Haskell. In Mobile, the Top 5 are C, Java, C++, C#, and JavaScript. For web development, the Top 5 are Python, Java, C#, JavaScript, and PHP.

How you adjust the languages and focus your criteria, in other words, leads to a fairly different distribution of languages. But while Python may have been IEEE’s overall top choice, it’s not necessarily the best choice if you’re trying to cover a lot of bases or hit broad targets. At least one variant of C is present in the Top 5 of every single category, and multiple categories have C, C++, and C# present in three of the Top 5 (the Web category is anomalous in this regard, as only C# makes it into the Top 5).

IEEE continues to refine its criteria and measurements and has applied these new weightings to the previous year’s results as well. If you want more information on how the company weights data or to see how languages compare year-on-year, all such information is available here.

Original article here.


standard

Gartner’s Hype Cycle: AI for Marketing

2017-07-24 - By 

Gartner’s 2017 Hype Cycle for Marketing and Advertising is out (subscription required) and, predictably, AI for Marketing has appeared as a new dot making a rapid ascent toward the Peak of Inflated Expectations. I say “rapid” but some may be surprised to see us projecting that it will take more than 10 years for AI in Marketing to reach the Plateau of Productivity. Indeed, the timeframe drew some skepticism and we deliberated on this extensively, as have many organizations and communities.

AI for Marketing on the 2017 Hype Cycle for Marketing and Advertising

First, let’s be clear about one thing: a long journey to the plateau is not a recommendation to ignore a transformational technology. However, it does raise questions of just what to expect in the nearer term.

Skeptics of a longer timeframe rightly point out the velocity with which digital leaders from Google to Amazon to Baidu and Alibaba are embracing these technologies today, and the impact they’re likely to have on marketing and advertising once they’ve cracked the code on predicting buying behavior and customer satisfaction and acting accordingly.

There’s no point in debating the seriousness of the leading digital companies when it comes to AI. The impact that AI will have on marketing is perhaps more debatable – some breakthrough benefits are already being realized, but – to use some AI jargon here – many problems at the heart of marketing exhibit high enough dimensionality to suggest they’re AI-complete. In other words, human behavior is influenced by a large number of variables, which makes it hard to predict unless you’re human. On the other hand, we’ve seen dramatic lifts in conversion rates from AI-enhanced campaigns, and the global scale of markets means that even modest improvements in matching people with products could have major effects. Net-net, we do believe that AI will have a transformational effect on marketing and that some of these effects will be felt in fewer than ten years – in fact, they’re being felt already.

Still, in the words of Paul Saffo, “Never mistake a clear view for a short distance.” The magnitude of a technology’s impact is, if anything, a sign it will take longer than expected to reach some sort of equilibrium. Just look at the Internet. I still vividly recall the collective expectation that many of us held in 1999 that business productivity was just around the corner. The ensuing descent into the Trough of Disillusionment didn’t diminish the Internet’s ultimate impact – it just delayed it. But the delay was significant enough to give a few companies that kept the faith, like Google and Amazon, an insurmountable advantage when the Internet at last plateaued, about 10 years later.

Proponents of faster impact point out that AI has already been through a Trough of Disillusionment maybe ten times as long as the Internet – the “AI Winter” that you can trace to the 1980s. By this reckoning, productivity is long overdue. This may be true for a number of domains – such as natural language processing and image recognition – but it’s hardly the case for the kinds of applications we’re considering in AI for Marketing. Before we could start on those we needed massive data collection on the input side, a cloud-based big data machine learning infrastructure, and real-time operations on the output side to accelerate the learning process to the point where we could start to frame the optimization problem in AI. Some of the algorithms may be quite old, but their real-time marketing context is certainly new.

More importantly, consider the implications of replacing the way marketing works today with true lights-out AI-driven operations. Even when machines do outperform human counterparts in making the kinds of judgments marketers pride themselves on, the organizational and cultural resistance they will face from the enterprise is profound, with important exceptions: disruptive start-ups and the digital giants who are developing these technologies and currently dominate digital media.

And enterprises aren’t the only source of resistance. The data being collected in what’s being billed as “people-based marketing” – the kind that AI will need to predict and influence behavior – is the subject of privacy concerns that stem from the “people’s” notable lack of an AI ally in the data collection business. See more comments here.

Then consider this: In 2016, P&G spent over $4B on media. Despite their acknowledgment of the growing importance of the Internet to their marketing (20 years in), they still spend orders of magnitude more on TV (see Ad Age, subscription required). As we know, Marc Pritchard, P&G’s global head of brands, doesn’t care much for the Internet’s way of doing business and has demanded fundamental changes in what he calls its “corrupt and non-transparent media supply chain.”

Well, if Marc and his colleagues don’t like the Internet’s media supply chain, wait until they get a load of the emerging AI marketing supply chain. Here’s a market where the same small group of gatekeepers own the technology, the data, the media, the infrastructure – even some key physical distribution channels – and their business models are generally based on extracting payment from suppliers, not consumers who enjoy their services for “free.” The business impulses of these companies are clear: just ask Alexa. What they haven’t perfected yet is that shopping concierge that gets you exactly what you want, but they’re working on it. If their AI can solve that, then two of P&G’s most valuable assets – its legacy media-based brand power and its retail distribution network – will be neutralized. Does this mean the end of consumer brands? Not necessarily, but our future AI proxies may help us cultivate different ideas about brand loyalty.

This brings us to the final argument against putting AI for Marketing too far out on the hype cycle: it will encourage complacency in companies that need to act. By the time established brands recognize what’s happened, it will be too late.

Business leaders have told me they use Gartner’s Hype Cycles in two ways. One is to help build urgency behind initiatives that are forecast to have a large, near-term impact, especially ones tarnished by disillusionment. The second is to subdue people who insist on drawing attention to seductive technologies on the distant horizon. Neither use is appropriate for AI for Marketing. In this case, the long horizon is neither a cause for contentment nor a reason to go shopping.

First, brands need a plan. And the plan has to anticipate major disruptions, not just in marketing, but in the entire consumer-driven, AI-mediated supply chain in which brands – or their AI agents – will find themselves negotiating with a lot of very smart algorithms. I feel confident in predicting that this will take a long time. But that doesn’t mean it’s time to ignore AI. On the contrary, it’s time to put learning about and experiencing AI at the top of the strategic priority list, and to consider what role your organization will play when these technologies are woven into our markets and brand experiences.

Original article here.


standard

Kaspersky Lab and Russian Intelligence FSB

2017-07-17 - By 

Yesterday, the Trump Administration released a statement indicating that Kaspersky Lab, one of the largest security companies in the world, would no longer be allowed to sell its products or services to the federal government. At the time, it wasn’t clear why the government had taken this step, and the CEO of Kaspersky Lab, Eugene Kaspersky, has strenuously argued that his company is being treated as a pawn in a game of chess between the US and Russia.

Kaspersky told ABC News that any concerns about his product were based on “ungrounded speculation and all sorts of other made-up things,” before adding that he and his company “have no ties to any government, and we have never helped nor will help any government in the world with their cyberespionage efforts.”

Now that last claim looks particularly dubious. According to emails obtained by Bloomberg Businessweek (and confirmed by Kaspersky Lab as genuine), Kaspersky’s ties to the Russian FSB (the successor to the KGB) are much tighter than previously reported. The company has allegedly worked with the government to develop security software and worked on joint projects that “the CEO knew would be embarrassing if made public.”

It’s common — in fact, it’s practically essential — for security firms to work closely with their own governments, both in terms of providing security solutions and in actively monitoring for threats or suspicious activity. But there’s a difference between working with the federal government of your nation and acting as an agent working on behalf of that government. These leaked emails seem to show the company slipping over that line.

The first part of the described project was a contract to build a better DDoS defense system that could be used by both the Russian government and other Kaspersky clients. Nothing unusual about that. But Kaspersky went further and agreed to some extremely unusual conditions. According to ABC News’ report, Kaspersky wrote that the project contained technology to protect against filter attacks, as well as an implementation of what researchers call “Active Countermeasures.”

But there’s more to the story. Kaspersky also provided the FSB with real-time intelligence on the hackers’ location and sent experts to accompany the FSB on its investigations and raids. ABC’s source described the situation as, “They weren’t just hacking the hackers; they were banging on the doors.”

Certain members of Congress and US government intelligence agencies have both warned against using Kaspersky Lab in any sensitive government or business setting. This could easily explain why. Installing software that can phone home to a company affiliated with the FSB could be a major problem should hackers come calling. Kaspersky also sells a secure operating system, KasperskyOS, designed to run on critical infrastructure, factories, pipelines, and even self-driving cars. The US Defense Intelligence Agency has reportedly circulated internal memos warning of the risks of using Kaspersky’s system, even as the company continues to deny that any connection between itself and Russia actually exists.

One More Thing…

Some will argue that this is mere political theater. After all, didn’t AT&T, Yahoo, Microsoft, Google, and a number of other companies comply with onerous requests made in dubious circumstances from the NSA and FBI? The answer, of course, is yes. But there are meaningful differences here: To the best of our knowledge, no one from Microsoft or AT&T ever did a ride-along on a raid to capture a suspect. It’s also a fact that more than one company fought hard against being forced to provide such evidence, capitulating only when all of the court cases and appeals had failed.

There may not be much practical difference between the end product delivered by a company that takes a job willingly and one that takes it only under duress, but there is a moral difference. Whether it’s Tim Cook going to court to protect user privacy or Google promptly encrypting all of its traffic, including within the data center, more than a few US companies have taken (or tried to take) strong stances against such spying. That doesn’t make them perfect. It may not even make them worthy of praise. But it does highlight a meaningful difference between what happened in Russia and what’s happened in the United States.

Original article here.

 


standard

Big Data Analytics in Healthcare: Fuelled by Wearables and Apps

2017-07-11 - By 

Driven by specialised analytics systems and software, big data analytics has halved the time required to double medical knowledge, thus compressing the healthcare innovation cycle, shows the much discussed Mary Meeker study titled Internet Trends 2017.

The presentation of the study is seen as evidence of the proverbial big data-enabled revolution predicted by experts like McKinsey and Company. “A big data revolution is under way in health care. Over the last decade pharmaceutical companies have been aggregating years of research and development data into medical data bases, while payors and providers have digitised their patient records,” the McKinsey report had said four years ago.

The Mary Meeker study shows that in the 1980s it took seven years to double medical knowledge, a figure that has dropped to just 3.5 years since 2010 on account of the massive use of big data analytics in healthcare. Though most of the samples used in the study were US-based, the global trends it reveals are clearly visible in India too.

“Medicine and underlying biology is now becoming a data-driven science where large amounts of structured and unstructured data relating to biological systems and human health is being generated,” says Dr Rohit Gupta of MedGenome, a genomics driven research and diagnostics company based in Bengaluru.

Dr Gupta told Firstpost that big data analytics has made it possible for MedGenome, which focuses on improving global health by decoding genetic information contained in an individual genome, to dive deeper into genetics research.

“While any individual’s genome information is useful for detecting the known mutations for diseases, uncovering new patterns of complicated diseases and their progression requires genomics data from many individuals across populations, sometimes several thousand to even a few million, amounting to exabytes of information,” he said.

All of this would have been a cumbersome process without the latest analytics tools that big data has brought forth.

The company, which started building India-specific baseline data to develop more accurate gene-based diagnostic testing kits in 2015, now conducts 400 genetic tests across all key disease areas.

What is Big Data

According to Mitali Mukerji, senior principal scientist at the Council of Scientific and Industrial Research, when a large number of people and institutions digitally record health data, either in health apps or in digitised clinics, this information becomes big data about health. The data acquired from these sources can be analysed to search for patterns or trends, enabling a deeper insight into health conditions for early, actionable interventions.

Big data is growing bigger
But big data analytics requires big data. The proliferation of information technology in the health sector has increased the flow of big data exponentially from sources such as dedicated wearable health gadgets like fitness trackers and hospital databases. Big data collection in the health sector has also been made possible by the proliferation of smartphones and health apps.

The Meeker study shows that downloads of health apps increased worldwide to nearly 1,200 million in 2016 from nearly 1,150 million the year before; 36 percent of these apps belong to the fitness category and 24 percent to diseases and treatment.

Health apps help users monitor their health. From watching calorie intake to fitness training — the apps have every assistance required to maintain one’s health. 7 Minute Workout, a health app with three million users, helps one get that flat tummy, lose weight and strengthen the core with 12 different exercises. Fooducate, another app, helps keep track of what one eats. This app not only counts the calories one is consuming, but also shows the user a detailed breakdown of the nutrition present in a packaged food.

For Indian users, there’s Healthifyme, which comes with a comprehensive database of more than 20,000 Indian foods. It also offers an on-demand fitness trainer, yoga instructor and dietician. With this app, one can set goals to lose weight and track their food and activity. There are also companies like GOQii, which provide Indian customers with subscription-based health and fitness services on their smartphones using fitness trackers that come free.

Dr Gupta of MedGenome explains that data accumulated in wearable devices can be sent directly to the healthcare provider for possible intervention, or even used to predict a possible hospitalisation in the next few days.

The Meeker study shows that global shipment of wearable gadgets grew from 26 million in 2014 to 102 million in 2016.

Another area that’s shown growth is electronic health records. In the US, adoption of electronic health records by office-based physicians has soared from 21 percent in 2004 to 87 percent in 2015. In fact, every US hospital with 500 beds generates around 50 petabytes of health data.

Back home, the Ministry of Electronics and Information Technology, Government of India, runs the Aadhaar-based Online Registration System, a platform to help patients book appointments in major government hospitals. The portal has the potential to emerge as a source of big data, offering insights on diseases, age groups, shortcomings in hospitals and areas to improve. The website claims to have already been used to make 8,77,054 appointments to date across 118 hospitals.

On account of the permeation of digital technology in healthcare, health data has recorded 48 percent growth year on year, the Meeker study says. The accumulated mass of data, according to it, has provided deeper insights into health conditions. The study shows a drastic increase in citations, from 5 million in 1977 to 27 million in 2017. Easy access to big data has ensured that scientists can now direct their investigations following patterns analysed from such information, and less time is required to arrive at conclusions.

“If a researcher has huge sets of data at his or her disposal, he/she can also find patterns and simulate them through machine learning tools, which decreases the time required to arrive at a conclusion. Machine learning methods become more robust when they are fed with results analysed from big data,” says Mukerji.

She adds, “These data simulation models rely on primary information generated from a study to build predictive models that can help assess how the human body would respond to a given perturbation.”
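
To make this concrete, the following is a minimal sketch, in Python, of the kind of predictive model Mukerji describes, assuming purely synthetic records in place of real patient data; the features (resting heart rate, daily step count, age), the hospitalisation label and the model choice are illustrative assumptions, not anything drawn from the study.

# Minimal sketch: a predictive model trained on synthetic "health" records.
# All features and the hospitalisation label are hypothetical stand-ins for
# the de-identified data a real study would provide.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic features: resting heart rate (bpm), daily step count, age (years).
heart_rate = rng.normal(72, 10, n)
steps = rng.normal(6000, 2500, n).clip(0)
age = rng.integers(20, 85, n)

# Synthetic outcome: hospitalisation risk rises with heart rate and age and
# falls with activity -- a made-up relationship, used only for illustration.
logit = 0.05 * (heart_rate - 72) + 0.03 * (age - 50) - 0.0002 * (steps - 6000)
hospitalised = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([heart_rate, steps, age])
X_train, X_test, y_train, y_test = train_test_split(X, hospitalised, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC on held-out records:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

The point is only that, once enough well-labelled records exist, fitting and validating such a model takes minutes rather than years, which is exactly the compression of research time the study describes.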

The Meeker study also shows that Archimedes data simulation models can conduct clinical trials on data from 50,000 patients collected over a period of 30 years in just two months. In the absence of this model, it took seven years to conduct clinical trials on data from 2,838 patients collected over a period of seven years.

As per the report, results from 25,400 clinical trials were publicly available in 2016, against 1,900 in 2009.

The study also shows that the data simulation models used by laboratories have drastically decreased the time required for clinical trials. With the emergence of big data, the number of publicly available clinical trials has also risen, it adds.

Big data in scientific research

The developments that have grown up around big data in healthcare have broken silos in scientific research. For example, the field of genomics has taken a giant stride in evolving personalised and genetic medicine with the help of big data.

A good example of how big data analytics can help modern medicine is the Human Genome Project: it, along with the innumerable studies on genetics that paved the way for personalised medicine, would have been difficult without the democratisation of data, another boon of big data analytics. The study shows that in 2008 there were only 5 personalised medicines available, a number that had risen to 132 by 2016.

In India, a Bangalore-based integrated biotech company recently launched ‘Avestagenome’, a project to build a complete genetic, genealogical and medical database of the Parsi community. Avestha Gengraine Technologies (Avesthagen), which launched the project, believes that the results from the Parsi genome project could enable disease prediction and accelerate the development of new therapies and diagnostics, both within the community and outside it.

MedGenome has also been working in the same direction. “We collaborate with leading hospitals and research institutions to collect samples with research consent, generate sequencing data in our labs and analyse it along with clinical data to discover new mutations and disease-causing perturbations in genes or functional pathways. The resultant disease models and their predictions will become more accurate as and when more data becomes available.”

Mukerji says that democratisation of data fuelled by proliferation of technology and big data has also democratised scientific research across geographical boundaries. “Since data has been made easily accessible, any laboratory can now proceed with research,” says Mukerji.

“We only need to ensure that our efforts and resources are put in the right direction,” she adds.

Challenges with big data

But Dr Gupta warns that big data in itself does not guarantee reliability, for collecting quality data is a difficult task.

Moreover, he said, “In medicine and clinical genomics, domain knowledge often helps and is almost essential not only to understand but also to find ways to effectively use the knowledge derived from the data and bring meaningful insights from it.”

Besides, big data gathering is heavily dependent on the adoption of digital health solutions, which further restricts the data to certain age groups. As per the Meeker report, 40 percent of millennial respondents covered in the study owned a wearable, compared with 26 percent of Generation X and 10 percent of baby boomers.

Similarly, 48 percent of millennials, 38 percent of Generation X and 23 percent of baby boomers go online to find a physician. The report also shows that 10 percent of people using telemedicine and wearables proved themselves super adopters of the new healthcare technology in 2016, compared with 2 percent in 2015.
Collection of big data

Every technology brings its own challenges; with big data analytics, the secure storage and collection of data without violating the privacy of research subjects is an added challenge, something even the Meeker study does not address.

“Digital world is really scary,” says Mukerji.

“Though we try to secure our data with passwords on our devices, someone somewhere always has access to it,” she says.

The health apps downloaded on mobile phones often become a source of big data not only for the company that produced them but also for other agencies hunting for data on the internet. “We often click various options while browsing the internet and thus knowingly or unknowingly give a third party access to some data stored in the device or in the health app,” she adds.

Dimiter V Dimitrov, a health expert, makes similar assertions in his report, ‘Medical Internet of Things and Big Data in Healthcare‘. He notes that wearables often interact with a server in a different language, providing it with the required information.

“Although many devices now have sensors to collect data, they often talk with the server in their own language,” he said in his report.
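
To make Dimitrov's point concrete, here is a minimal sketch, in Python, of how a wearable's companion app might report a single reading to a provider's server; the endpoint URL, the JSON field names and the device identifier are hypothetical assumptions for illustration, not any vendor's actual API.

# Minimal sketch: a wearable's companion app posting one reading to a server.
# The endpoint URL and JSON field names are hypothetical placeholders.
import datetime
import requests

reading = {
    "device_id": "wearable-0001",                      # illustrative identifier
    "metric": "heart_rate_bpm",
    "value": 76,
    "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}

response = requests.post(
    "https://provider.example/api/v1/readings",        # placeholder endpoint
    json=reading,                                      # serialised as JSON
    timeout=5,
)
response.raise_for_status()                            # fail loudly if the server rejects it

Each vendor defines its own payload and transport (the "own language" Dimitrov refers to), which is precisely why aggregating such data across devices and agencies is harder than it looks.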

Even though the industry is still at a nascent stage and privacy remains a concern, Mukerji says that agencies possessing health data can certainly share it with laboratories without disclosing patient identity.

Original article here.

 


standard

The Internet is Important to Everyone (Infographic)

2017-07-10 - By 

The Internet is not a luxury.  The Internet is important to everybody – Individuals, Companies, Governments and institutions of all kinds – 100%.  This infographic shows the relationship of each group to the Internet and how some groups are being left behind.

Full Infographic:

Original infographic here.


standard

The Internet of Things Is the Next Digital Evolution — What Will It Mean? (video)

2017-06-28 - By 

As digital technology infuses everyday life, it will change human behavior—raising new challenges about equality and fairness.

In a single generation, this has become the new normal: Nearly all adult Americans use the internet, with three-fourths of them having broadband access in their homes. And the internet travels with them in their pockets—95 percent have a cellphone, 81 percent have a smartphone. This ability to constantly connect has changed how people interact, especially in their social networks—more than two-thirds of adults are on Facebook or Twitter or another social media platform.

Digital innovations have made it easier for people to find more information than ever before, and made it easier to create and share material with others. From smartphone-delivered directions to voice-driven queries to on-demand news, people’s lives have been transformed by these technologies. Yet today’s inventions and innovations mark only the start, and tomorrow’s digital disruption, which is already underway, will probably dwarf them in impact.

The next digital evolution is the rise of the internet of things—sometimes now called the “internet on things.” This refers to the growing phenomenon of building connectivity into vehicles, wearable devices, appliances and other household items such as thermostats, as well as goods moving through business supply chains. It also covers the rapid spread of data-emitting or tracking sensors in the physical environment that give readouts on everything from crop conditions to pollution levels to where there are open parking spaces to babies’ breathing rates in their cribs.

The Pew Research Center and Elon University in North Carolina invited hundreds of technology experts in 2014 to predict the future of the internet by the year 2025, and the overriding theme of their answers addressed this reality. They predicted that the growth of the internet of things will soon make the internet like electricity—less visible, yet more deeply embedded in people’s lives, for good and for ill.

The internet of things will have literally life-changing impact on innovation and the application of knowledge in the coming years. Here are four major developments to anticipate.

The emergence of the ‘datacosm’

The spread of the internet of things will accelerate the digitization of data, spawning creation of record amounts of information. Data and connectivity will be ubiquitous in an environment sometimes called the “datacosm”— a term used to describe the importance of data, analytics, and algorithms in technology’s evolution. As previous information revolutions have taught us, once people—and things—get more connected, their very nature changes.

“When we are connected, power shifts. It changes who we are, what we might expect, how we might be manipulated, attacked, or enriched,” writes Joshua Cooper Ramo in his new book, The Seventh Sense. Networks of constant connection “destroy the nature of even the most solid-looking objects.” Connected things and connected people become more useful, more powerful, but also more hair-trigger and more destructive because their power is multiplied by a networking effect. The more connections they have, the more capacity they have for good and harmful purposes.

On the human level, the datacosm arising from the internet of things could function like a “fifth limb,” an extra brain lobe, and another layer of “skin” because it will be enveloping and omnipresent. People will have unparalleled self-awareness via their “lifestreams”: their genome, their current physical condition, their memories, and other trackable aspects of their well-being. Data ubiquity will allow reality to be augmented in helpful—and creepy—ways.

For instance, people will be able to look at others and, thanks to facial recognition and digital profiling, simultaneously browse their digital dossiers through an app that could display the data on “smart” contact lenses or a nearby wall surface. They will gaze at artifacts such as paintings or movies and be able to download material about how the art was created and the life story of the creator. They will take in landscapes and cityscapes and be able to learn quickly what transpired in these places long ago or what kinds of environmental problems threaten them. They will size up buildings and have an overlay of insight about what takes place inside them.

Part of the reason that data will be infused into so much is that the interfaces of connectivity and the ability to summon data will be radically enhanced. Human voices, haptic interfaces that can be manipulated by finger movements (think of the movie “Minority Report”), real-time language translators, data dashboards that give readouts on a user’s personally designed webpage, even, eventually, brain-initiated commands will make it possible for people to bring data into whatever surroundings they find themselves. Not only will this allow people to apply knowledge of all kinds to their immediate circumstances, but it will also advance analysts’ understanding of entire populations as their “data exhaust” is captured by their GPS-enabled devices and web clickstream activity.

Many experts in the Pew Research Center’s canvassings expect major benefits to emerge from this growth and spread of data, starting with the fact that knowledge will be ever-easier to apply to real-time decisions such as which custom-designed medicine a person should receive, or which commuting route to take to work. Beyond that, this data overlay and growing analytic power will allow swifter interventions when public health problems arise, weather emergencies threaten, environmental stressors mount, educational programs are introduced, and products are brought to the market.

This new reality will also cause major hardships. When information is superabundant, what is the best way to find the best knowledge and apply it to decisions? When so much personal data is captured, how can people retain even a sliver of privacy? What mechanisms can be created to overcome polarizing propaganda that can weaken societies? What are the right ways to avoid “fake news,” disinformation, and distracting sideshows in a world of info-glut?

Struggles over people’s “right relationship” to information will be one of the persistent realities of the 21st century.

Growing reliance on algorithms

The explosion of data has given prominence to algorithms as tools for finding meaning in data and using it to shape decisions, predict humans’ behavior, and anticipate their needs. Analysts such as Aneesh Aneesh of the University of Wisconsin, Milwaukee, foresee algorithms taking over public and private activities in a new era of “algocratic governance” that supplants the way current “bureaucratic hierarchies” make government decisions. Others, like Harvard University’s Shoshana Zuboff, describe the emergence of “surveillance capitalism” that gains profits from monetizing data captured through surveillance and organizes economic behavior in an “information civilization.”

The experts’ views compiled by the Pew Research Center and Elon University offer several broad predictions about the algorithmic age. They predicted that algorithms will continue to spread everywhere and agreed that the benefits of computer codes can lead to greater human insights into the world, less waste, and major safety advantages. A share of respondents said data-driven approaches to problem-solving will often improve on human approaches to addressing issues because the computer codes will be refined at much greater speeds. Many predicted that algorithms will be effective tools to make up for human shortcomings.

But respondents also expressed concerns about algorithms.

They worried that humanity and human judgment are lost when data and predictive modeling become paramount. These experts argued that algorithms are primarily created in pursuit of profits and efficiencies and that this can be a threat; that algorithms can manipulate people and outcomes; that a somewhat flawed yet inescapable “logic-driven society” could emerge; that code will supplant humans in decision-making and that, in the process, humans will lose skills and specialized, local intelligence in a world where decisions are based on more homogenized algorithms; and that respect for individuals could diminish.

Just as grave a concern is that biases exist in algorithmically organized systems that could worsen social divisions. Many in the expert sampling said that algorithms reflect the biases of programmers and that the data sets they use are often limited, deficient, or incorrect. This can deepen societal divides. Those who are disadvantaged could be even more so in an algorithm-organized future, especially if algorithms are shaped by corporate data collectors. That could limit people’s exposure to a wider range of ideas and eliminate serendipitous encounters with information.

A new relationship with machines and complementary intelligence

As data and algorithms permeate daily life, people will have to renegotiate the way they use and think about machines, which now are in a state of accelerating learning. Many experts see a new equilibrium emerging as people take advantage of artificial intelligence that can be consulted in an instant, context-aware gadgets that “read” a situation and assemble relevant information, robotic devices that serve their needs, smart assistants or bots (possibly in the form of holograms) that help people navigate the world or help represent them to others, and device-based enhancements to their bodies and brains. “Basically, it is the Metaverse from Snow Crash,” predicts futurist Stowe Boyd, referring to Neal Stephenson’s sci-fi vision of a world where people and their avatars seamlessly interact with other people, their avatars, and independent artificial intelligence agents developed by third parties, including corporations.

Even if it does not fully reach that state, there will be a great re-sorting of the roles people play in the world and the functions machines assume. Now that IBM’s Deep Blue has beaten the world’s best chess players, its Watson supercomputer has won at “Jeopardy,” and Google’s AI system has vanquished the world’s Go champion, there is strong incentive to bring these masterful machines into hospital operating rooms and have them help assess radiology readouts; to outsource them to stock trading and insurance risk analysts; to use them in self-driving cars and drones; to let them aid people’s capacity to move around smart homes and smart cities.

The creation and application of all this knowledge has vast implications for basic human activity—starting with cognition. The very act of thinking is already undergoing significant change as people learn how to tap into all this information and cope with processing it. That impact will expand in the future. The quality of “being” will change as people are able to be “with” each other via lifelike telepresence. People’s capacities are likely to expand as digital devices, prostheses, and brain-enhancing chips become available. Human behavior itself could change as an overlay of data gives people enhanced situational and self-awareness. The way people allocate their time and attention will be restructured as options proliferate. For instance, the manner in which they spend their leisure time is likely to be radically recast as people are able to amuse themselves in compelling new virtual worlds and enrich themselves with vivid new learning experiences.

Greater innovation in social norms, collective action, credentials, and laws

With so much upheaval ahead, people, groups, and organizations will be forced to adjust. At the level of social norms, it is easy to envision social environments in which people must constantly negotiate what information can be shared, what kinds of interruptions are tolerable, what balance of fact-checking and gossip is acceptable, and what personal multitasking is harmful. In other words, much of what constitutes civil behavior will be up for grabs.

At a more formal level, some primary aspects of collective action and power are already altered as social networks become a societal force, both as pathways of knowledge sharing and as mechanisms for mobilizing others to do something. There are new ways for people to collaborate and solve problems. Moreover, there are a growing number of group structures that address problems ranging from microniche matters (my neighbors and I respond to a local issue) to macroglobal wicked problems (multinational alliances tackle climate change and pandemics).

Shifts in labor markets in the knowledge economy, which are constantly pressing workers to acquire new skills, will probably refashion some of the features of higher education and prompt change in work-related training efforts. Fully 87 percent of current U.S. workers believe it will be important or essential for them to pursue new skills during their work lives. Not many believe the existing certification and licensing systems are up to that job. A notable number of experts in another Pew Research Center-Elon University canvassing are convinced that the training system will begin breaking into several parts: one that specializes in basic work preparation education to coach students in lifelong learning strategies; another that upgrades the capacity of workers inside their existing fields; and yet another that is more designed to handle the elaborate work of schooling those whose skills are obsolete.

At the most structured level, new laws and court battles are inevitable. They are likely to address questions such as: Who owns what information and can use it and profit from it? When something goes wrong with an information-processing system (say, a self-driving car propels itself off a bridge), who is responsible? Where is the right place to draw the line between data capture—that is, surveillance—and privacy? Can a certain level of privacy be maintained as an equal right for all, or is it no longer possible? What kinds of personal information are legitimate to consider in assessing someone’s employment, creditworthiness, or insurance status? Where should libel laws apply in an age when everyone can be a “publisher” or “broadcaster” via social media and when people’s reputations can rise and fall depending on the tone of a tweet? Can information transparency regimes be applied to those who amass data and create profiles from it? Who’s overseeing the algorithms that will be making so many decisions about what happens in society? (Several experts in the Pew Research Center canvassing called for new governmental regulations relating to the development and deployment of algorithms.) Which entities should define what is appropriate out-of-bounds speech for a community, a culture, a nation?

The information revolution in the digital age is magnitudes faster than those of previous ages. Much greater movement is occurring in technology innovation than in social innovation—and this potentially dangerous gap seems to be expanding. As we grapple with this, it would be useful to keep in mind the Enlightenment sensibility of Thomas Jefferson. He wrote in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

We are likely to have to depend on our machines to help us figure out how to avoid being crushed by this avalanche.

 

Original article here.

