Google Archives - AppFerret


Nigerian ISP’s configuration error disrupted Google services

2018-11-13 - By 

A Nigerian internet service provider said Tuesday that a configuration error it made during a network upgrade caused a disruption of key Google services, routing traffic to China and Russia.

Prior to MainOne’s explanation Tuesday, there was speculation that Monday’s 74-minute data hijacking might have been intentional. Google’s search, cloud hosting and collaborative business tools were among services disrupted.

“Everyone is pretty confident that nothing untoward took place,” MainOne spokesman Tayo Ashiru said.

This type of traffic misdirection can knock essential services offline and facilitate espionage and financial theft. It can also be used to block access to information by sending data into internet black holes. Experts say China, in particular, has systematically hijacked and diverted U.S. internet traffic.

But the problem can also result from human error. That’s what Ashiru said happened to MainOne, a major west African ISP. He said engineers mistakenly forwarded to China Telecom addresses for Google services that were supposed to be local. The Chinese company, in turn, sent along the bad data to Russia’s TransTelecom, a major internet presence. Ashiru said MainOne did not yet understand why China Telecom did that, as the state-run company normally doesn’t allow Google traffic on its network.

The traffic diversion into China created a detour with a dead end, preventing users from accessing the affected Google services, said Alex Henthorn-Iwane, an executive at the network-intelligence company ThousandEyes.

He said Monday’s incident offered yet another lesson in the internet’s susceptibility to “unpredictable and destabilizing events. If this could happen to a company with the scale and resources available that Google has, realize it could happen to anyone.”

The diversion, known as Border Gateway Protocol (BGP) hijacking, exploits routing machinery built into the internet, which was designed for collaboration by trusted parties—not competition by hostile nation-states. Experts say it is fixable, but that would require investments in encrypted routers that the industry has resisted.
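
To make the mechanics concrete, here is a minimal Python sketch (not anything MainOne, China Telecom, or Google actually runs) of how longest-prefix-match routing reacts to an unauthenticated announcement: routers install whatever their peers announce, and the most specific matching prefix wins, so a leaked or bogus route can silently pull traffic onto a new path. The prefixes and origin labels below are purely illustrative.

import ipaddress

# Toy routing table: announcements are accepted without any validation of
# who is allowed to originate a prefix (the core weakness behind BGP leaks).
routing_table = {
    ipaddress.ip_network("172.217.0.0/16"): "AS15169 (legitimate origin)",
}

def announce(prefix, origin):
    # Routers simply install whatever their peers announce.
    routing_table[ipaddress.ip_network(prefix)] = origin

def lookup(address):
    # Longest-prefix match: the most specific covering route wins.
    ip = ipaddress.ip_address(address)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("172.217.5.110"))   # -> AS15169 (legitimate origin)
announce("172.217.5.0/24", "leaked, more specific route")
print(lookup("172.217.5.110"))   # traffic now follows the leaked route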

ThousandEyes said the diversion at minimum made Google’s search and business collaboration tools difficult or impossible to reach and “put valuable Google traffic in the hands of ISPs in countries with a long history of Internet surveillance.”

However, most network traffic to Google services—94 percent as of Oct. 27—is encrypted, which shields it from prying eyes even if diverted. Google said in a statement that “access to some Google services was impacted” but did not further quantify the disruption.

Google said it had no reason to believe the traffic hijacking was malicious.

Indeed, the phenomenon has occurred before. Google was briefly afflicted in 2015 when an Indian provider stumbled. In perhaps the best-known case, Pakistan Telecom inadvertently hijacked YouTube’s global traffic in 2008 for a few hours when it was trying to enforce a domestic ban. It sent all YouTube traffic into a virtual ditch in Pakistan.

In two recent cases, such rerouting has affected financial sites. In April 2017, one incident affected MasterCard and Visa, among other sites. This past April, another hijacking enabled cryptocurrency theft.

Original article here.

 

 



Google’s AutoML is a Machine Learning Game-Changer

2018-05-24 - By 

Google’s AutoML is a new, up-and-coming (alpha-stage) cloud software suite of machine learning tools. It’s based on Google’s state-of-the-art research in image recognition, called Neural Architecture Search (NAS). NAS is an algorithm that, given your specific dataset, searches for an optimal neural network to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that lets you easily train high-performance deep networks without requiring any knowledge of deep learning or AI; all you need is labelled data! Google then uses NAS to find the best network for your specific dataset and task, and has already shown that its methods can achieve performance far better than that of hand-designed networks.
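
For some intuition about what “searching for a network” means, here is a toy Python sketch of architecture search by random sampling. It is emphatically not Google’s actual NAS method (which trains a controller network rather than sampling blindly); the search space and the scoring function are made-up placeholders that only illustrate the propose-train-evaluate loop.

import random

# Hypothetical search space: each candidate architecture is a few choices.
SEARCH_SPACE = {
    "num_layers": [2, 4, 8],
    "width": [32, 64, 128],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    return {key: random.choice(values) for key, values in SEARCH_SPACE.items()}

def build_train_and_score(arch):
    # Placeholder: a real system would build the network, train it on your
    # labelled data, and return validation accuracy. Here we fake a score.
    return random.random()

best_arch, best_score = None, -1.0
for _ in range(20):                      # NAS evaluates many candidates
    arch = sample_architecture()
    score = build_train_and_score(arch)
    if score > best_score:
        best_arch, best_score = arch, score

print("best architecture found:", best_arch, "score:", round(best_score, 3))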

AutoML totally changes the whole machine learning game because for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire 5 machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and in general the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value comes from their final, put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state of the art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental research part of the product pipeline and just use transfer learning for their task. Because there is less experimental research, more business resources can be spent on product design, development, and the all-important data.
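
As a rough sketch of what that transfer-learning shortcut looks like in practice, the pretrained NASNet weights that ship with Keras can be reused as a frozen feature extractor and topped with a small classifier for your own labelled data. This assumes TensorFlow 2.x, and the class count and training call are placeholders you would swap for your own dataset.

import tensorflow as tf

NUM_CLASSES = 5  # hypothetical: number of categories in your own dataset

# Pretrained NASNet backbone, with the ImageNet classification head removed.
base = tf.keras.applications.NASNetMobile(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse the learned features as-is

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # your labelled data here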

Speaking of which…

It becomes more about product

Following on from the first point: since more time is being spent on product design and development, companies will iterate on their products faster. A company’s main value will become less about how great and cutting-edge its research is and more about how well its product and technology are engineered. Is it well designed? Easy to use? Is the data pipeline set up in such a way that models can be improved quickly and easily? These will be the new key questions for optimising products and iterating faster than the competition. Cutting-edge research will also become less of a main driver of improvements in the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and, as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean, diverse, yet task-targeted data you have (i.e. both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this is part of a move away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t stop making them better just because they became easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time-consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but only fair, and sometimes even poor (depending on task complexity), on things like localisation. We don’t yet fully understand deep networks internally.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools rather than create new ones, since they have no need to. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep pace with a fast-moving field.

Such a shift has happened many times before. Programming computers started with assembly-level coding! We later moved on to things like C. Many people today consider C too complicated, so they use C++. Much of the time, we don’t even need something as complex as C++, so we just use the super-high-level languages Python or R! We use the tool that is most appropriate for the task at hand. If you don’t need something super low-level, then you don’t have to use it (e.g. C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g. Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies are critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.



Google’s Duplex AI Demo Just Passed the Turing Test (video)

2018-05-11 - By 

Yesterday, at I/O 2018, Google showed off a new digital assistant capability that’s meant to improve your life by making simple boring phone calls on your behalf. The new Google Duplex feature is designed to pretend to be human, with enough human-like functionality to schedule appointments or make similarly inane phone calls. According to Google CEO Sundar Pichai, the phone calls the company played were entirely real. You can make an argument, based on these audio clips, that Google actually passed the Turing Test.

If you haven’t heard the audio of the two calls, you should give the clip a listen. We’ve embedded the relevant part of Pichai’s presentation below.

I suspect the calls were edited to remove the place of business, but apart from that, they sound like real phone calls. If you listen to both segments, the male voice booking the restaurant sounds a bit more like a person than the female does, but the gap isn’t large and the female voice is still noticeably better than a typical AI. The female speaker has a rather robotic “At 12PM” at one point that pulls the overall presentation down, but past that, Google has vastly improved AI speech. I suspect the same technologies at work in Google Duplex are the ones we covered about six weeks ago.

So what’s the Turing Test and why is passing it a milestone? The British computer scientist, mathematician, and philosopher Alan Turing devised the Turing test as a means of measuring whether a computer was capable of demonstrating intelligent behavior equivalent to or indistinguishable from that of a human. This broad formulation allows for the contemplation of many such tests, though the general test case presented in discussion is a conversation between a researcher and a computer in which the computer responds to questions. A third person, the evaluator, is tasked with determining which individual in the conversation is human and which is a machine. If the evaluator cannot tell, the machine has passed the Turing test.

The Turing test is not intended to be the final word on whether an AI is intelligent and, given that Turing conceived it in 1950, obviously doesn’t take into consideration later advances or breakthroughs in the field. There have been robust debates for decades over whether passing the Turing test would represent a meaningful breakthrough. But what sets Google Duplex apart is its excellent mimicry of human speech. The original Turing test supposed that any discussion between computer and researcher would take place in text. Managing to create a voice facsimile close enough to ordinary human speech to avoid suspicion or rejection from the business on the other end of the call is a significant feat.

As of right now, Duplex is intended to handle rote responses, like asking to speak to a representative, or simple, formulaic social interactions. Even so, the program’s demonstrated capability to deal with confusion (as on the second call) is still a significant step forward for these kinds of voice interactions. As artificial intelligence continues to improve, voice quality will improve and the AI will become better at answering more and more types of questions. We’re obviously still a long way from creating a conscious AI, but we’re getting better at the tasks our systems can handle — and faster than many would’ve thought possible.

 

Original article here.

 



Google Publishes a JavaScript Style Guide. Here are Key Lessons.

2018-03-30 - By 

For anyone who isn’t already familiar with it, Google puts out a style guide for writing JavaScript that lays out (what Google believes to be) the best stylistic practices for writing clean, understandable code.

These are not hard and fast rules for writing valid JavaScript, only prescriptions for maintaining consistent and appealing style choices throughout your source files. This is particularly interesting for JavaScript, which is a flexible and forgiving language that allows for a wide variety of stylistic choices.

Google and Airbnb have two of the most popular style guides out there. I’d definitely recommend you check out both of them if you spend much time writing JS.

The following are thirteen of what I think are the most interesting and relevant rules from Google’s JS Style Guide.

They deal with everything from hotly contested issues (tabs versus spaces, and the controversial issue of how semicolons should be used), to a few more obscure specifications which surprised me. They will definitely change the way I write my JS going forward.

For each rule, I’ll give a summary of the specification, followed by a supporting quote from the style guide that describes the rule in detail. Where applicable, I’ll also provide an example of the style in practice, and contrast it with code that does not follow the rule.

Use spaces, not tabs

Aside from the line terminator sequence, the ASCII horizontal space character (0x20) is the only whitespace character that appears anywhere in a source file. This implies that… Tab characters are not used for indentation.

The guide later specifies you should use two spaces (not four) for indentation.

// bad
function foo() {
∙∙∙∙let name;
}

// bad
function bar() {
∙let name;
}

// good
function baz() {
∙∙let name;
}

Semicolons ARE required

Every statement must be terminated with a semicolon. Relying on automatic semicolon insertion is forbidden.

Although I can’t imagine why anyone is opposed to this idea, the consistent use of semicolons in JS is becoming the new ‘spaces versus tabs’ debate. Google’s coming out firmly here in the defence of the semicolon.

// bad
let luke = {}
let leia = {}
[luke, leia].forEach(jedi => jedi.father = 'vader')
// good
let luke = {};
let leia = {};
[luke, leia].forEach((jedi) => {
  jedi.father = 'vader';
});

Don’t use ES6 modules (yet)

Do not use ES6 modules yet (i.e. the export and import keywords), as their semantics are not yet finalized. Note that this policy will be revisited once the semantics are fully standardized.

// Don't do this kind of thing yet:
//------ lib.js ------
export function square(x) {
 return x * x;
}
export function diag(x, y) {
 return Math.sqrt(square(x) + square(y));
}

//------ main.js ------
import { square, diag } from 'lib';

Horizontal alignment is discouraged (but not forbidden)

This practice is permitted, but it is generally discouraged by Google Style. It is not even required to maintain horizontal alignment in places where it was already used.

Horizontal alignment is the practice of adding a variable number of additional spaces in your code, to make certain tokens appear directly below certain other tokens on previous lines.

// bad
{
  tiny:   42,  
  longer: 435, 
};
// good
{
  tiny: 42, 
  longer: 435,
};

Don’t use var anymore

Declare all local variables with either const or let. Use const by default, unless a variable needs to be reassigned. The var keyword must not be used.

I still see people using var in code samples on StackOverflow and elsewhere. I can’t tell if there are people out there who will make a case for it, or if it’s just a case of old habits dying hard.

// bad
var example = 42;
// good
let example = 42;

Arrow functions are preferred

Arrow functions provide a concise syntax and fix a number of difficulties with this. Prefer arrow functions over the function keyword, particularly for nested functions.

I’ll be honest, I just thought that arrow functions were great because they were more concise and nicer to look at. Turns out they also serve a pretty important purpose.

// bad
[1, 2, 3].map(function (x) {
  const y = x + 1;
  return x * y;
});

// good
[1, 2, 3].map((x) => {
  const y = x + 1;
  return x * y;
});

Use template strings instead of concatenation

Use template strings (delimited with `) over complex string concatenation, particularly if multiple string literals are involved. Template strings may span multiple lines.

// bad
function sayHi(name) {
  return 'How are you, ' + name + '?';
}

// bad
function sayHi(name) {
  return ['How are you, ', name, '?'].join();
}

// bad
function sayHi(name) {
  return `How are you, ${ name }?`;
}

// good
function sayHi(name) {
  return `How are you, ${name}?`;
}

Don’t use line continuations for long strings

Do not use line continuations (that is, ending a line inside a string literal with a backslash) in either ordinary or template string literals. Even though ES5 allows this, it can lead to tricky errors if any trailing whitespace comes after the slash, and is less obvious to readers.

Interestingly enough, this is a rule that Google and Airbnb disagree on (here’s Airbnb’s spec).

While Google recommends concatenating longer strings (as shown below) Airbnb’s style guide recommends essentially doing nothing, and allowing long strings to go on as long as they need to.

// bad (sorry, this doesn't show up well on mobile)
const longString = 'This is a very long string that \
    far exceeds the 80 column limit. It unfortunately \
    contains long stretches of spaces due to how the \
    continued lines are indented.';
// good
const longString = 'This is a very long string that ' + 
    'far exceeds the 80 column limit. It does not contain ' + 
    'long stretches of spaces since the concatenated ' +
    'strings are cleaner.';

“for… of” is the preferred type of ‘for loop’

With ES6, the language now has three different kinds of for loops. All may be used, though for-of loops should be preferred when possible.

This is a strange one if you ask me, but I thought I’d include it because it is pretty interesting that Google declares a preferred type of for loop.

I was always under the impression that for... in loops were better for objects, while for... of were better suited to arrays. A ‘right tool for the right job’ type situation.

While Google’s specification here doesn’t necessarily contradict that idea, it is still interesting to know they have a preference for this loop in particular.

Don’t use eval()

Do not use eval or the Function(...string) constructor (except for code loaders). These features are potentially dangerous and simply do not work in CSP environments.

The MDN page for eval() even has a section called “Don’t use eval!”

// bad
let obj = { a: 20, b: 30 };
let propName = getPropName();  // returns "a" or "b"
eval( 'var result = obj.' + propName );
// good
let obj = { a: 20, b: 30 };
let propName = getPropName();  // returns "a" or "b"
let result = obj[ propName ];  //  obj[ "a" ] is the same as obj.a

Constants should be named in ALL_UPPERCASE separated by underscores

Constant names use CONSTANT_CASE: all uppercase letters, with words separated by underscores.

If you’re absolutely sure that a variable shouldn’t change, you can indicate this by capitalizing the name of the constant. This makes the constant’s immutability obvious as it gets used throughout your code.

A notable exception to this rule is if the constant is function-scoped. In this case it should be written in camelCase.

// bad
const number = 5;
// good
const NUMBER = 5;

One variable per declaration

Every local variable declaration declares only one variable: declarations such as let a = 1, b = 2; are not used.

// bad
let a = 1, b = 2, c = 3;
// good
let a = 1;
let b = 2;
let c = 3;

Use single quotes, not double quotes

Ordinary string literals are delimited with single quotes ('), rather than double quotes (").

Tip: if a string contains a single quote character, consider using a template string to avoid having to escape the quote.

// bad
let directive = "No identification of self or mission."
// bad
let saying = 'Say it ain\u0027t so.';
// good
let directive = 'No identification of self or mission.';
// good
let saying = `Say it ain't so`;

A final note

As I said in the beginning, these are not mandates. Google is just one of many tech giants, and these are just recommendations.

That said, it is interesting to look at the style recommendations that are put out by a company like Google, which employs a lot of brilliant people who spend a lot of time writing excellent code.

You can follow these rules if you want to follow the guidelines for ‘Google compliant source code’ — but, of course, plenty of people disagree, and you’re free to brush any or all of this off.

I personally think there are plenty of cases where Airbnb’s spec is more appealing than Google’s. No matter the stance you take on these particular rules, it is still important to keep stylistic consistency in mind when writing any sort of code.

Original article here.



Google’s Accelerated Mobile Pages (AMP)

2018-01-18 - By 

Starting AMP from scratch is great, but what if you already have an existing site? Learn how you can convert your site to AMP using AMP HTML.

“What’s Allowed in AMP and What Isn’t”: https://goo.gl/ugMhHc

Tutorial on how to convert HTML to AMP: https://goo.gl/JwUVyG

Reach out with your AMP related questions: https://goo.gl/UxCWfz

Watch all Amplify episodes: https://goo.gl/B9CCl4

Subscribe to The AMP Channel and never miss an Amplify episode: https://goo.gl/g2Y8h7

 

 

Original video here.



Spectre, Meltdown: Critical CPU Security Flaws Explained

2018-01-04 - By 

Over the past few days we’ve covered major new security risks that struck at a number of modern microprocessors from Intel and, to a much lesser extent, ARM and AMD. Information on the attacks and their workarounds initially leaked out slowly, but Google has pushed up its timeline for disclosing the problems and some vendors, like AMD, have issued their own statements. The two flaws in question are known as Spectre and Meltdown, and they both relate to one of the core capabilities of modern CPUs, known as speculative execution.

Speculative execution is a performance-enhancing technique virtually all modern CPUs include to one degree or another. One way to increase CPU performance is to allow the core to perform calculations it may need in the future. The difference between speculative execution and ordinary execution is that the CPU performs these calculations before it knows whether it’ll actually be able to use the results.

Here’s how Google’s Project Zero summarizes the problem: “We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.”

Meltdown is Variant 3 in ARM, AMD, and Google parlance. Spectre accounts for Variant 1 and Variant 2.

Meltdown

“On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer.”

Intel is badly hit by Meltdown because its speculative execution methods are fairly aggressive. Specifically, Intel CPUs are allowed to access kernel memory when performing speculative execution, even when the application in question is running in user memory space. The CPU does check to see if an invalid memory access occurs, but it performs the check after speculative execution, not before. Architecturally, these invalid branches never execute — they’re blocked — but it’s possible to read data from affected cache blocks even so.
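
To give a feel for why those cached side effects matter, here is a toy Python simulation of the cache-probing inference step. It is not an exploit and performs no real speculation or timing: the “cache” is modeled as a set of resident line addresses, the secret byte is a made-up stand-in, and the attacker recovers it by checking which probe line is “fast” (resident).

# Toy model: a probe array with one cache line per possible secret byte value.
CACHE_LINE = 64
cached_lines = set()          # stands in for which lines are resident in cache

def flush_all():
    cached_lines.clear()      # attacker flushes the probe array from the cache

def speculative_victim_access(secret_byte):
    # In a real Meltdown attack this access happens speculatively and is later
    # squashed, but the cache line it touched stays resident.
    cached_lines.add(secret_byte * CACHE_LINE)

def probe():
    # Attacker "times" each line: a fast (cached) access reveals the secret.
    for value in range(256):
        if value * CACHE_LINE in cached_lines:
            return value
    return None

flush_all()
speculative_victim_access(secret_byte=0x2A)    # hypothetical secret value
print("recovered secret byte:", hex(probe()))  # -> 0x2a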

The various OS-level fixes going into macOS, Windows, and Linux all concern Meltdown. The formal PDF on Meltdown notes that the software patches Google, Apple, and Microsoft are working on are a good start, but that the problem can’t be completely fixed in software. AMD and ARM appear largely immune to Meltdown, though ARM’s upcoming Cortex-A75 is apparently impacted.

Spectre

Meltdown is bad, but Meltdown can at least be ameliorated in software (with updates), even if there’s an associated performance penalty. Spectre is the name given to a set of attacks that “involve inducing a victim to speculatively perform operations that would not occur during correct program execution, and which leak the victim’s confidential information via a side channel to the adversary.”

Unlike Meltdown, which impacts mostly Intel CPUs, Spectre’s proof of concept works against everyone, including ARM and AMD. Its attacks are pulled off differently — one variant targets branch prediction — and it’s not clear there are hardware solutions to this class of problems, for anyone.

What Happens Next

Intel, AMD, and ARM aren’t going to stop using speculative execution in their processors; it’s been key to some of the largest performance improvements we’ve seen in semiconductor history. But as Google’s extensive documentation makes clear, these proof-of-concept attacks are serious. Neither Spectre nor Meltdown relies on any kind of software bug to work. Meltdown can be solved through hardware design and software rearchitecting; Spectre may not.

When reached for comment on the matter, Linux creator Linus Torvalds responded with the tact that’s made him legendary. “I think somebody inside of Intel needs to really take a long hard look at their CPU’s, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed,” Torvalds writes. “And that really means that all these mitigation patches should be written with ‘not all CPU’s are crap’ in mind. Or is Intel basically saying ‘We are committed to selling you shit forever and ever, and never fixing anything? Because if that’s the case, maybe we should start looking towards the ARM64 people more.”

It does appear, as of this writing, that Intel is disproportionately exposed on these security flaws. While Spectre-style attacks can affect all CPUs, Meltdown is pretty Intel-specific. Thus far, user applications and games don’t seem much impacted, but web servers and potentially other workloads that access kernel memory frequently could run markedly slower once patched.

 

Original article here.

 



The Dark Secret at the Heart of AI

2017-10-09 - By 

No one really knows how the most advanced algorithms do what they do. That could be a problem.

Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands required to operate the steering wheel, the brakes, and other systems. The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will. That’s one reason Nvidia’s car is still experimental.

Already, mathematical models are being used to help determine who makes parole, who’s approved for a loan, and who gets hired for a job. If you could get access to these mathematical models, it would be possible to understand their reasoning. But banks, the military, employers, and others are now turning their attention to more complex machine-learning approaches that could make automated decision-making altogether inscrutable. Deep learning, the most common of these approaches, represents a fundamentally different way to program computers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This raises mind-boggling questions. As the technology advances, we might soon cross some threshold beyond which using AI requires a leap of faith. Sure, we humans can’t always truly explain our thought processes either—but we find ways to intuitively trust and gauge people. Will that also be possible with machines that think and make decisions differently from the way a human would? We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable? These questions took me on a journey to the bleeding edge of research on AI algorithms, from Google to Apple and many places in between, including a meeting with one of the great philosophers of our time.

In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”

At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”

Artificial intelligence hasn’t always been this way. From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself.

At first this approach was of limited practical use, and in the 1960s and ’70s it remained largely confined to the fringes of the field. Then the computerization of many industries and the emergence of large data sets renewed interest. That inspired the development of more powerful machine-learning techniques, especially new versions of one known as the artificial neural network. By the 1990s, neural networks could automatically digitize handwritten characters.

But it was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond.

The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system. This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal. These outputs are fed, in a complex web, to the neurons in the next layer, and so on, until an overall output is produced. Plus, there is a process known as back-propagation that tweaks the calculations of individual neurons in a way that lets the network learn to produce a desired output.

The many layers in a deep network enable it to recognize things at different levels of abstraction. In a system designed to recognize dogs, for instance, the lower layers recognize simple things like outlines or color; higher layers recognize more complex stuff like fur or eyes; and the topmost layer identifies it all as a dog. The same approach can be applied, roughly speaking, to other inputs that lead a machine to teach itself: the sounds that make up words in speech, the letters and words that create sentences in text, or the steering-wheel movements required for driving.
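
To make the layer-by-layer arithmetic concrete, here is a minimal NumPy sketch of a forward pass through a toy two-layer network. The sizes and random weights are arbitrary placeholders; real deep networks have thousands of units per layer, dozens or hundreds of layers, and weights learned via back-propagation rather than sampled at random.

import numpy as np

rng = np.random.default_rng(0)

# Toy "image": 16 pixel intensities as the input signal.
pixels = rng.random(16)

# Two layers of simulated neurons: each layer is a weight matrix plus a
# simple nonlinearity, exactly the calculation described above.
w1 = rng.normal(size=(16, 8))   # lower layer: 8 neurons
w2 = rng.normal(size=(8, 3))    # top layer: 3 output categories

def relu(x):
    return np.maximum(0.0, x)

hidden = relu(pixels @ w1)      # lower layer detects simple patterns
scores = hidden @ w2            # top layer combines them into a verdict
probs = np.exp(scores) / np.exp(scores).sum()   # softmax over 3 categories

print("category probabilities:", np.round(probs, 3))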

Ingenious strategies have been used to try to capture and thus explain in more detail what’s happening in such systems. In 2015, researchers at Google modified a deep-learning-based image recognition algorithm so that instead of spotting objects in photos, it would generate or modify them. By effectively running the algorithm in reverse, they could discover the features the program uses to recognize, say, a bird or building. The resulting images, produced by a project known as Deep Dream, showed grotesque, alien-like animals emerging from clouds and plants, and hallucinatory pagodas blooming across forests and mountain ranges. The images proved that deep learning need not be entirely inscrutable; they revealed that the algorithms home in on familiar visual features like a bird’s beak or feathers. But the images also hinted at how different deep learning is from human perception, in that it might make something out of an artifact that we would know to ignore. Google researchers noted that when its algorithm generated images of a dumbbell, it also generated a human arm holding it. The machine had concluded that an arm was part of the thing.

Further progress has been made using ideas borrowed from neuroscience and cognitive science. A team led by Jeff Clune, an assistant professor at the University of Wyoming, has employed the AI equivalent of optical illusions to test deep neural networks. In 2015, Clune’s group showed how certain images could fool such a network into perceiving things that aren’t there, because the images exploit the low-level patterns the system searches for. One of Clune’s collaborators, Jason Yosinski, also built a tool that acts like a probe stuck into a brain. His tool targets any neuron in the middle of the network and searches for the image that activates it the most. The images that turn up are abstract (imagine an impressionistic take on a flamingo or a school bus), highlighting the mysterious nature of the machine’s perceptual abilities.

We need more than a glimpse of AI’s thinking, however, and there is no easy solution. It is the interplay of calculations inside a deep neural network that is crucial to higher-level pattern recognition and complex decision-making, but those calculations are a quagmire of mathematical functions and variables. “If you had a very small neural network, you might be able to understand it,” Jaakkola says. “But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.”

In the office next to Jaakkola is Regina Barzilay, an MIT professor who is determined to apply machine learning to medicine. She was diagnosed with breast cancer a couple of years ago, at age 43. The diagnosis was shocking in itself, but Barzilay was also dismayed that cutting-edge statistical and machine-learning methods were not being used to help with oncological research or to guide patient treatment. She says AI has huge potential to revolutionize medicine, but realizing that potential will mean going beyond just medical records. She envisions using more of the raw data that she says is currently underutilized: “imaging data, pathology data, all this information.”

After she finished cancer treatment last year, Barzilay and her students began working with doctors at Massachusetts General Hospital to develop a system capable of mining pathology reports to identify patients with specific clinical characteristics that researchers might want to study. However, Barzilay understood that the system would need to explain its reasoning. So, together with Jaakkola and a student, she added a step: the system extracts and highlights snippets of text that are representative of a pattern it has discovered. Barzilay and her students are also developing a deep-learning algorithm capable of finding early signs of breast cancer in mammogram images, and they aim to give this system some ability to explain its reasoning, too. “You really need to have a loop where the machine and the human collaborate,” Barzilay says.

The U.S. military is pouring billions into projects that will use machine learning to pilot vehicles and aircraft, identify targets, and help analysts sift through huge piles of intelligence data. Here more than anywhere else, even more than in medicine, there is little room for algorithmic mystery, and the Department of Defense has identified explainability as a key stumbling block.

David Gunning, a program manager at the Defense Advanced Research Projects Agency, is overseeing the aptly named Explainable Artificial Intelligence program. A silver-haired veteran of the agency who previously oversaw the DARPA project that eventually led to the creation of Siri, Gunning says automation is creeping into countless areas of the military. Intelligence analysts are testing machine learning as a way of identifying patterns in vast amounts of surveillance data. Many autonomous ground vehicles and aircraft are being developed and tested. But soldiers probably won’t feel comfortable in a robotic tank that doesn’t explain itself to them, and analysts will be reluctant to act on information without some reasoning. “It’s often the nature of these machine-learning systems that they produce a lot of false alarms, so an intel analyst really needs extra help to understand why a recommendation was made,” Gunning says.

This March, DARPA chose 13 projects from academia and industry for funding under Gunning’s program. Some of them could build on work led by Carlos Guestrin, a professor at the University of Washington. He and his colleagues have developed a way for machine-learning systems to provide a rationale for their outputs. Essentially, under this method a computer automatically finds a few examples from a data set and serves them up in a short explanation. A system designed to classify an e-mail message as coming from a terrorist, for example, might use many millions of messages in its training and decision-making. But using the Washington team’s approach, it could highlight certain keywords found in a message. Guestrin’s group has also devised ways for image recognition systems to hint at their reasoning by highlighting the parts of an image that were most significant.
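
As a toy illustration of that keyword-highlighting idea (this is not Guestrin’s actual method, just a sketch of the leave-one-out intuition): score a message with a made-up keyword-weight “classifier”, then rank words by how much the score drops when each one is removed. The weights and the example message below are hypothetical.

# Hypothetical keyword weights a trained text classifier might have learned.
WEIGHTS = {"transfer": 2.0, "wire": 1.5, "urgent": 1.2, "meeting": -0.5}

def score(words):
    return sum(WEIGHTS.get(w, 0.0) for w in words)

def explain(message, top_k=2):
    words = message.lower().split()
    base = score(words)
    # Influence of each word = how much the score changes when it is removed.
    influence = {w: base - score([x for x in words if x != w]) for w in set(words)}
    return sorted(influence, key=influence.get, reverse=True)[:top_k]

msg = "Urgent wire transfer needed before the meeting"
print("flag score:", score(msg.lower().split()))
print("most influential words:", explain(msg))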

One drawback to this approach and others like it, such as Barzilay’s, is that the explanations provided will always be simplified, meaning some vital information may be lost along the way. “We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Guestrin. “We’re a long way from having truly interpretable AI.”

It doesn’t have to be a high-stakes situation like cancer diagnosis or military maneuvers for this to become an issue. Knowing AI’s reasoning is also going to be crucial if the technology is to become a common and useful part of our daily lives. Tom Gruber, who leads the Siri team at Apple, says explainability is a key consideration for his team as it tries to make Siri a smarter and more capable virtual assistant. Gruber wouldn’t discuss specific plans for Siri’s future, but it’s easy to imagine that if you receive a restaurant recommendation from Siri, you’ll want to know what the reasoning was. Ruslan Salakhutdinov, director of AI research at Apple and an associate professor at Carnegie Mellon University, sees explainability as the core of the evolving relationship between humans and intelligent machines. “It’s going to introduce trust,” he says.

Just as many aspects of human behavior are impossible to explain in detail, perhaps it won’t be possible for AI to explain everything it does. “Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Clune, of the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”

If that’s so, then at some stage we may have to simply trust AI’s judgment or do without using it. Likewise, that judgment will have to incorporate social intelligence. Just as society is built upon a contract of expected behavior, we will need to design AI systems to respect and fit with our social norms. If we are to create robot tanks and other killing machines, it is important that their decision-making be consistent with our ethical judgments.

To probe these metaphysical concepts, I went to Tufts University to meet with Daniel Dennett, a renowned philosopher and cognitive scientist who studies consciousness and the mind. A chapter of Dennett’s latest book, From Bacteria to Bach and Back, an encyclopedic treatise on consciousness, suggests that a natural part of the evolution of intelligence itself is the creation of systems capable of performing tasks their creators do not know how to do. “The question is, what accommodations do we have to make to do this wisely—what standards do we demand of them, and of ourselves?” he tells me in his cluttered office on the university’s idyllic campus.

He also has a word of warning about the quest for explainability. “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible,” he says. But since there may be no perfect answer, we should be as cautious of AI explanations as we are of each other’s—no matter how clever a machine seems. “If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”

Original article here.

 



State Of Machine Learning And AI, 2017

2017-10-01 - By 

AI is receiving major R&D investment from tech giants including Google, Baidu, Facebook and Microsoft.

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute published an article summarizing the findings, titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. The current rate of AI investment is 3X the external investment growth since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly and financial services industries. The graphic below illustrates the trends the study team found during their analysis.
  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that only a small percentage (up to 9%) of AI investment comes from Venture Capital (VC), Private Equity (PE), and other external funding. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.
  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors most favor machine learning startups because of the speed with which code-based start-ups can scale up and add new features. Software-based machine learning startups are preferred over their more cost-intensive, machine-based robotics counterparts, which often cannot scale as quickly as their software counterparts do. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.
  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups have also gotten their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of the methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and key area of asset, usage, and labor category.
  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors or industries leading in AI adoption today and that intend to grow their investments the most in the next three years.
  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that have senior management support for AI initiatives, have invested in infrastructure to support AI at scale, and have clear business goals achieve profit margins 3 to 15 percentage points higher. Of the more than 3,000 business leaders interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.
  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up after about 90 seconds of searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would have reduced its revenue by $1B annually.

 

Original article here.



Google Launches Public Beta of Cloud Dataprep

2017-09-24 - By 

Google recently announced that Google Cloud Dataprep—the new managed data  wrangling service developed in collaboration with Trifacta—is now available in public beta. This service enables analysts and data scientists to visually explore and prepare data for analysis in seconds within the Google Cloud Platform.

Now that the Google Cloud Dataprep beta is open to the public, more companies can experience the benefits of Trifacta’s data preparation platform. From predictive transformation to interactive exploration, Trifacta’s  intuitive workflow has accelerated the data preparation process for Google Cloud Dataprep customers who have tried it out within private beta.

In addition to the same functionality found in Trifacta, Google Cloud Dataprep users also benefit from features that are unique to the collaboration with Google:

True SaaS offering 
With Google Cloud Dataprep, there’s no software to install or manage. Unlike a marketplace offering that deploys into the Google ecosystem, Cloud Dataprep is a fully managed service that does not require configuration or administration.

Single Sign On through Google Cloud Identity & Access Management
All users can easily access Cloud Dataprep using the same login and credentials they already use for other Google services. This ensures highly secure and consistent access to Google services and data, based on the permissions and roles defined through Google IAM.

Integration to Google Cloud Storage and Google BigQuery (read & write)
Users can browse, preview, and import data from Google Cloud Storage and Google BigQuery, and publish results back to them, directly through Cloud Dataprep. This is a huge boon for teams that rely upon Google-generated data. For example:

  • Marketing teams leveraging DoubleClick Ads data can make that data available in Google BigQuery, then use Cloud Dataprep to prepare and publish the result back into BigQuery for downstream analysis. Learn more here.
  • Telematics data scientists can connect Cloud Dataprep directly to raw log data (often in JSON format) stored on Google Cloud Storage, and then prepare it for machine learning models executed in TensorFlow.
  • Retail business analysts can upload Excel data from their desktop to Google Cloud Storage, parse and combine it with BigQuery data to augment the results (beyond the limits of Excel), and eventually make the data available to various analytic tools like Google Data Studio, Looker, Tableau, Qlik or Zoomdata (a minimal sketch of this kind of load appears below).
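
To make that last workflow concrete, here is a short, simplified Python sketch using the google-cloud-bigquery client library to load a CSV that has already been uploaded to Cloud Storage into a BigQuery table, where Cloud Dataprep or a BI tool can pick it up. The project, bucket, dataset, and table names are hypothetical, and this assumes the library is installed and credentials are configured.

from google.cloud import bigquery

client = bigquery.Client()  # assumes application default credentials are set

# Hypothetical locations: a CSV already uploaded to Cloud Storage, and a
# destination table for downstream analysis.
source_uri = "gs://example-bucket/retail/sales_export.csv"
table_id = "example-project.retail_analytics.sales_raw"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # skip the header row
    autodetect=True,       # let BigQuery infer the schema
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # wait for the load to finish

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")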

Big data scale provided by Cloud Dataflow 
By leveraging a serverless, auto-scaling data processing engine (Google Cloud Dataflow), Cloud Dataprep can handle any size of data, located anywhere in the world. This means that users don’t have to worry about optimizing their logic as their data grows, nor have to choose where their jobs run. At the same time, IT can rely on Cloud Dataflow to efficiently scale resources only as needed. Finally, it allows for enterprise-grade monitoring and logging in Google Stackdriver.

World-class Google support
As a Google service, Cloud Dataprep is subject to the same standards as other Google beta products & services. These benefits include:

  • World class uptime and availability around the world
  • Official support provided by Google Cloud Platform
  • Centralized usage-based billing managed on a per project basis with quotas and detailed reports

Early Google Cloud Dataprep Customer Feedback

Although Cloud Dataprep has only been in private beta for a short amount of time, we’ve had substantial participation from thousands of early private beta users and we’re excited to share some of the great feedback. Here’s a sample of what early users are saying:

Merkle Inc. 
“Cloud Dataprep allows us to quickly view and understand new datasets, and its flexibility supports our data transformation needs. The GUI is nicely designed, so the learning curve is minimal. Our initial data preparation work is now completed in minutes, not hours or days,” says Henry Culver, IT Architect at Merkle. “The ability to rapidly see our data, and to be offered transformation suggestions in data delivery, is a huge help to us as we look to rapidly assimilate new datasets.”

Venture Development Center

“We needed a platform that was versatile, easy to utilize and provided a migration path as our needs for data review, evaluation, hygiene, interlinking and analysis advanced. We immediately knew that Google Cloud Platform, with Cloud Dataprep and BigQuery, were exactly what we were looking for. As we develop our capability and movement into the data cataloging, QA and delivery cycle, Cloud Dataprep allows us to accomplish this quickly and adeptly,” says Matthew W. Staudt, President of Venture Development Center.

For more information on these customers check out Google’s blog on the public beta launch here.

Cloud Dataprep Public Beta: Furthering Wrangling Success

Now that Google Cloud Dataprep is open to the public, we’re excited to see more organizations achieve data wrangling success. From multinational banks to consumer retail companies to government agencies, a growing number of customers are using Trifacta’s consistent transformation logic, user experience, workflow, metadata management, and comprehensive data governance to reduce data preparation times and improve data quality.

If you’re interested in Google Cloud Dataprep, you can sign up with your own personal account for free access, or log in using your company’s existing Google account. Visit cloud.google.com/dataprep to learn more.

For more information about how Trifacta interoperates with cloud providers like Google Cloud and with on-prem infrastructure, download our brief.

 

Original article here.

 


standard

AWS dominates cloud computing, bigger than IBM/Google/Microsoft combined

2017-02-12 - By 

Amazon’s cloud provider is the biggest player in the rapidly growing cloud infrastructure market, according to new data.

Amazon Web Services (AWS) accounts for one third of the cloud infrastructure market, more than the value generated by its next three biggest rivals combined.

AWS dominates, with a 33.8 percent global market share, while its three nearest competitors — Microsoft, Google, and IBM — together accounted for 30.8 percent of the market, according to calculations by analyst Canalys.

The four leading service providers were followed by Alibaba and Oracle, which made up 2.4 percent and 1.7 percent of the total respectively, with the rest of the market made up of a number of smaller players.

According to the researchers, total spending on cloud infrastructure services, which stood at $10.3bn in the fourth quarter of last year (up 49 percent year-on-year), will hit $55.8bn in 2017 — up 46 percent on 2016’s total of $38.1bn.

Continuing demand is leading the cloud companies to accelerate their data centre expansion. Canalys said AWS launched 11 new availability zones globally in 2016, four of which were established in Canada and the UK in the past quarter. IBM also opened its new data centre in the UK, bringing its total cloud data centres to 50 worldwide, while Microsoft also added new facilities in the UK and Germany.

Google and Oracle set up their first infrastructure in Japan and China respectively, aiming at expanding their footprint in the Asia Pacific region, while Alibaba also unveiled the availability of its four new data centres in Australia, Japan, Germany, and the United Arab Emirates.

Strict data sovereignty laws — under which personal data has to be stored in servers that are physically located within the country — mean cloud service providers have to build data centres in key markets, such as Germany, Canada, Japan, the UK, China, and the Middle East, said Canalys research analyst Daniel Liu.

Original article here.


standard

Cloud market valued at $148bn for 2016 & growing 25% annually

2017-01-05 - By 

Operator and vendor revenues across the main cloud services and infrastructure market segments hit $148 billion (£120.5bn) in 2016, growing at 25% annually, according to the latest note from analyst firm Synergy Research.

Infrastructure as a service (IaaS) and platform as a service (PaaS) experienced the highest growth rates at 53%, followed by hosted private cloud infrastructure services, at 35%, and enterprise SaaS, at 34%. Amazon Web Services (AWS) and Microsoft lead the way in IaaS and PaaS, with IBM and Rackspace on top for hosted private cloud.

In the four quarters ending September (Q3) 2016, total spend on hardware and software to build cloud infrastructure exceeded $65bn, according to the researchers. Spend on private cloud accounts for more than half of the overall total, but public cloud spend is growing much more rapidly. The note also argues unified comms as a service (UCaaS) is growing ‘steadily’.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Jeremy Duke, Synergy Research Group founder and chief analyst in a statement. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side.

“Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead,” Duke added.

The most recent examination of the cloud infrastructure market by Synergy back in August argued AWS, Microsoft, IBM and Google continue to grow more quickly than their smaller competitors and, between them, own more than half of the global cloud infrastructure service market. 

Original article here.


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

FEI-FEI LI IS a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others access to vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.
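
To illustrate that contrast in the loosest possible way (this example is mine, not from the article), the tiny sketch below fits a scikit-learn model to a handful of labeled examples instead of hand-coding a rule; the data and labels are invented.

  # Toy contrast between hand-written rules and learning from labeled data.
  # The miniature dataset below is invented purely for illustration.
  from sklearn.linear_model import LogisticRegression

  # Features: [message length, number of links]; labels: 1 = spam, 0 = not spam.
  X = [[120, 4], [30, 0], [200, 7], [45, 1], [15, 0], [180, 5]]
  y = [1, 0, 1, 0, 0, 1]

  # Instead of writing "if links > 3 then spam", the engineer fits a model
  # and lets the parameters be coaxed out of the data.
  model = LogisticRegression().fit(X, y)
  print(model.predict([[90, 3]]))  # the fitted model, not a hand-coded rule, decides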

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.

 


standard

Identity Theft Password Pitfalls

2016-12-10 - By 

We’re constantly reminded of the risks that come with bad passwords, yet many people persist in using obvious and easy-to-crack names, words, and patterns. Want to know if you’re at risk?

Identity theft is a serious problem: Millions of Americans are falling prey to cybercrime every year, and with more and more of our lives online the risk only increases. The key to protecting your online identity starts with the most commonly used part of accessing internet services: The password.

Using secure passwords can be difficult—I know I’m guilty of using the same password over and over again, something that has recently come back to bite me as I get email after email telling me someone has tried logging in to my accounts.

My current problem could have been far worse if I had been guilty of using some of the most common passwords that were uncovered recently by online IT training firm CBT Nuggets. The firm just published that list, along with some other startling password facts that every internet user needs to know about.

Which words are widespread?

One of the fundamental rules of good password creation is to use words that other people don’t. The study found that of 50,000 passwords surveyed, several were far more common than others. Love, star, girl, angel, and rock came in as the top five: if you’re guilty of including one of them, it’s time to make some changes.

Dictionary attacks remain one of the most common ways hackers crack passwords in systems that don’t lock accounts out after a few tries. They simply compile lists of the most commonly used passwords and brute force accounts until they come up with a match.
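
As a rough sketch of the idea (my illustration, not part of the CBT Nuggets study), a dictionary attack can be as simple as hashing each candidate from a wordlist and comparing it with a leaked hash; the wordlist and target hash below are made up.

  # Illustrative dictionary attack: hash common passwords and compare against
  # a leaked hash. The wordlist and target hash are invented for the example.
  import hashlib

  wordlist = ["love123", "starstar", "girl2016", "angel1", "rockyou"]
  leaked_hash = hashlib.sha256(b"angel1").hexdigest()  # stand-in for a breached hash

  for candidate in wordlist:
      if hashlib.sha256(candidate.encode()).hexdigest() == leaked_hash:
          print("Match found:", candidate)
          break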

It’s not just common words that are causing leaks: 42 percent of the passwords surveyed contained usernames, real names, or other publicly available information. The most common offenders of name usage in passwords? Lisa, Amy, Scott, and Mark.

The demographics of getting hacked

Using your own name, your username, a pet’s name, or any other identifying feature is the perfect way to ensure you’re a target, but there are several other risk factors that can make you an easy mark.

Men are more likely to be hacked, but only by a few points (male = 53 percent; female = 47 percent). Perhaps surprisingly, the most common age group of password-hacking victims is 25- to 34-year-olds. The study says that a possible cause is that this age group grew up along with the internet and in the earliest years weren’t taught the importance of good password use.

Predictably, Yahoo users are the most likely to have their passwords leaked—nearly half of the hacked passwords surveyed came from Yahoo. Many of these probably came from this year’s leak of 500 million Yahoo passwords.

Wondering which website has the least secure users? AOL, Yahoo, and Hotmail are the most likely places to find passwords containing a username or real name.

How to stay safe

The password is a ubiquitous, and entirely unreliable, security method. Cracking methods are constantly becoming more sophisticated, machines used to perform brute force attacks keep getting faster, and there’s no solution for the weakest part of the system: The humans using it.

Truly secure passwords need to be long, random, and changed frequently. The best way to do that is by using an encrypted password management app. These apps store credentials to any number of websites, can create secure randomized passwords, and use a single sign-on to unlock your accounts.
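
For illustration only (not something the article prescribes), here is one way a long, random password of the kind these managers generate could be produced with Python’s standard-library secrets module:

  # Generate a long random password from upper/lowercase letters, digits, and symbols.
  import secrets
  import string

  alphabet = string.ascii_letters + string.digits + string.punctuation

  def random_password(length: int = 20) -> str:
      return "".join(secrets.choice(alphabet) for _ in range(length))

  print(random_password())  # different on every run; store it in a password manager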

You can remove all the Amy, love, Scott, star, and 123s from your passwords that you want, but if you make them out of names and words, you’re still a predictable human. Security means using a machine to trick a machine.

The 3 big takeaways for TechRepublic readers

  1. Nearly half of passwords surveyed contained a username or real name. The most common were Amy, Lisa, Scott, and Mark.
  2. The most commonly hacked age group is the 25-34 year old range, which many may find surprising. Growing up in the early days of the internet, the study argues, has led many people to become complacent.
  3. The most effective way to secure internet accounts is with a randomized password containing upper- and lowercase letters, numbers, and symbols. This is best done using a password manager that can generate and securely store passwords.

Original article here.


standard

Cloud compute pricing bakeoff: Google vs. AWS vs. Microsoft Azure

2016-12-04 - By 

Like everything in enterprise technology, pricing can be a bit complicated. Here’s an analysis from RightScale looking at how discounts alter the cloud pricing equation. Google comes out cheapest in most scenarios.

With Amazon Web Services hosting its annual conference this week, talk about the price for performance and agility equation will be everywhere.

With AWS’ re:Invent kicking off this week, the largest cloud service provider has been busy cutting prices for various instances. Rest assured that Google and Microsoft are likely to toss in their own price cuts as AWS speaks to its base.

But the cloud pricing equation is getting complicated for compute instances. Not so shockingly, these price discussions have to include discounts. Like everything in enterprise technology, there’s the street price and your price. Comparing the cloud providers on pricing is tricky given Microsoft, Google, and AWS all have different approaches to discounts.

Fortunately, RightScale on Monday will outline a study on cloud compute prices. Generally speaking, AWS won’t be your cheapest option for compute. AWS typically lands in the middle between Microsoft Azure and Google Cloud.

The bottom line is that AWS uses reserved instances in one-year and three-year terms to offer discounts. Microsoft requires an enterprise agreement for its Azure discounts. Google has sustained usage discounts that are relatively easy to follow.
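
To make the sustained-usage idea concrete, here is a rough, unofficial sketch of how such a discount could be computed; the tier breakpoints and rates are assumptions for illustration (loosely modeled on Google’s published scheme at the time) rather than figures from RightScale’s analysis.

  # Rough sketch of a sustained-use discount (illustrative only).
  # Assumed tiers: each successive quarter of the month is billed at a lower
  # fraction of the base hourly rate (100%, 80%, 60%, 40%).
  TIER_RATES = [1.00, 0.80, 0.60, 0.40]

  def monthly_cost(base_hourly, hours_used, hours_in_month=730):
      quarter = hours_in_month / 4
      cost, remaining = 0.0, hours_used
      for rate in TIER_RATES:
          hours_in_tier = min(remaining, quarter)
          cost += hours_in_tier * base_hourly * rate
          remaining -= hours_in_tier
          if remaining <= 0:
              break
      return cost

  # A $0.10/hour VM running the full month costs about $51.10 here, versus
  # $73.00 with no discount, i.e. an effective discount of roughly 30%.
  print(round(monthly_cost(0.10, 730), 2))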

Overall, RightScale found that Google will be cheapest in most scenarios because sustained usage discounts are automatically applied. Among the key takeaways:

  • If you need solid state drive performance instead of attached storage, Google will charge you a premium.
  • Azure matches or beats AWS for on-demand pricing consistently.
  • AWS won’t be the cheapest alternative in many scenarios. Then again — AWS has a bigger menu, more advanced cloud services, and the customer base where it doesn’t have to go crazy on pricing. AWS just has to be fair.
  • Your results will vary based on the level of your Microsoft enterprise agreement and what reserved instances were purchased on AWS.

Here are three slides to ponder from RightScale.

 

 

Add it up and you’d be advised to make your own comparisons; check out RightScale’s SlideShare, and then crunch some numbers. In the end, enterprises may have to have all three cloud providers in their company — if only to play them off each other.

Original article here.


standard

The Race For AI: Google, Twitter, Intel, Apple In A Rush To Grab Artificial Intelligence Startups

2016-10-10 - By 

Nearly 140 private companies working to advance artificial intelligence technologies have been acquired since 2011, with over 40 acquisitions taking place in 2016 alone (as of 10/7/2016). Corporate giants like Google, IBM, Yahoo, Intel, Apple, and Salesforce are competing in the race to acquire private AI companies, with Samsung emerging as a new entrant this month with its acquisition of startup Viv Labs, which is developing a Siri-like AI assistant.

Google has been the most prominent global player, with 11 acquisitions in the category under its belt (follow all of Google’s M&A activity here through our real-time Google acquisitions tracker).

In 2013, the corporate giant picked up deep learning and neural network startup DNNresearch from the computer science department at the University of Toronto. This acquisition reportedly helped Google make major upgrades to its image search feature. In 2014, Google acquired British company DeepMind Technologies for some $600M (Google DeepMind’s program recently beat a human world champion in the board game “Go”). This year, it acquired visual search startup Moodstocks and bot platform Api.ai.

Intel and Apple are tied for second place. The former acquired 3 startups this year alone: Itseez, Nervana Systems, and Movidius, while Apple acquired Turi and Tuplejump recently.

Twitter ranks third, with 4 major acquisitions, the most recent being image-processing startup Magic Pony.

Salesforce, which joined the race last year with the acquisition of Tempo AI, has already made two major acquisitions this year: Khosla Ventures-backed MetaMind and open-source machine-learning server PredictionIO.

We updated this timeline on 10/7/2016 to include acquirers who have made at least 2 acquisitions since 2011.

Major Acquirers In Artificial Intelligence Since 2011
Company | Date of Acquisition | Acquirer
Hunch | 11/21/2011 | eBay
Cleversense | 12/14/2011 | Google
Face.com | 5/29/2012 | Facebook
DNNresearch | 3/13/2013 | Google
Netbreeze | 3/20/2013 | Microsoft
Causata | 8/7/2013 | NICE
Indisys | 8/25/2013 | Yahoo
IQ Engines | 9/13/2013 | Intel
LookFlow | 10/23/2013 | Yahoo
SkyPhrase | 12/2/2013 | Yahoo
Gravity | 1/23/2014 | AOL
DeepMind | 1/27/2014 | Google
Convertro | 5/6/2014 | AOL
Cogenea | 5/20/2014 | IBM
Desti | 5/30/2014 | Nokia
Medio Systems | 6/12/2014 | Nokia
Madbits | 7/30/2014 | Twitter
Emu | 8/6/2014 | Google
Jetpac | 8/16/2014 | Google
Dark Blue Labs | 10/23/2014 | DeepMind
Vision Factory | 10/23/2014 | DeepMind
Wit.ai | 1/5/2015 | Facebook
Equivio | 1/20/2015 | Microsoft
Granata Decision Systems | 1/23/2015 | Google
AlchemyAPI | 3/4/2015 | IBM
Explorys | 4/13/2015 | IBM
TellApart | 4/28/2015 | Twitter
Timeful | 5/4/2015 | Google
Tempo AI | 5/29/2015 | Salesforce
Sociocast | 6/9/2015 | AOL
Whetlab | 6/17/2015 | Twitter
Orbeus | 10/1/2015 | Amazon
Vocal IQ | 10/2/2015 | Apple
Perceptio | 10/6/2015 | Apple
Saffron | 10/26/2015 | Intel
Emotient | 1/7/2016 | Apple
Nexidia | 1/11/2016 | NICE
PredictionIO | 2/19/2016 | Salesforce
MetaMind | 4/4/2016 | Salesforce
Crosswise | 4/14/2016 | Oracle
Expertmaker | 5/5/2016 | eBay
Itseez | 5/27/2016 | Intel
Magic Pony | 6/20/2016 | Twitter
Moodstocks | 7/6/2016 | Google
SalesPredict | 7/11/2016 | eBay
Turi | 8/5/2016 | Apple
Nervana Systems | 8/9/2016 | Intel
Genee | 8/22/2016 | Microsoft
Movidius | 9/6/2016 | Intel
Palerra | 9/19/2016 | Oracle
Api.ai | 9/19/2016 | Google
Angel.ai | 9/20/2016 | Amazon
tuplejump | 9/22/2016 | Apple

Original article here.


standard

IBM, Google, Facebook, Microsoft, Amazon form enormous AI partnership

2016-09-29 - By 

On Wednesday, the world learned of a new industry association called the Partnership on Artificial Intelligence, and it includes some of the biggest tech companies in the world. IBM, Google, Facebook, Microsoft, and Amazon have all signed on as marquee members, though the group hopes to expand even further over time. The goal is to create a body that can provide a platform for discussions among stakeholders and work out best practices for the artificial intelligence industry. Not directly mentioned, but easily seen on the horizon, is its place as the primary force lobbying for smarter legislation on AI and related future-tech issues.

Best practices can be boring or important, depending on the context, and in this case they are very, very important. Best practices could provide a framework for accurate safety testing, which will be important as researchers ask people to put more and more of their lives in the hands of AI and AI-driven robots. This sort of effort might also someday work toward a list of inherently dangerous and illegitimate actions or AI “thought” processes. One of its core goals is to produce thought leadership on the ethics of AI development.

So, this could end up being the bureaucracy that produces our earliest laws of robotics, if not the one that enforces them. The word “law” is usually used metaphorically in robotics. But with access to the lobbying power of companies like Google and Microsoft, we should expect the Partnership on AI to wade into discussions of real laws soon enough. For instance, the specifics of regulations governing self-driving car technology could still determine which would-be software standard will hit the market first. With the founding of this group, Google has put itself in a position to perhaps direct that regulation for its own benefit.

But, boy, is that ever not how they want you to see it. The group is putting in a really ostentatious level of effort to assure the world it’s not just a bunch of technology super-corps determining the future of mankind, like some sort of cyber-Bilderberg Group. The group’s website makes it clear that it will have “equal representation for corporate and non-corporate members on the board,” and that it “will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts.”

Well, it’s one thing to say that, and quite another to live it. It remains to be seen whether the group will actually comport itself as it will need to if it wants real support from the best minds in open source development. The Elon Musk-associated non-profit research company OpenAI responded to the announcement with a rather passive-aggressive word of encouragement.

The effort to include non-profits and other non-corporate bodies makes perfect sense. There aren’t many areas in software engineering where you can claim to be the definitive authority if you don’t have the public on-board. Microsoft, in particular, is painfully aware of how hard it is to push a proprietary standard without the support of the open-source community. Not only will its own research be stronger and more diverse for incorporating the “crowd,” any recommendations it makes will carry more weight with government and far more weight with the public.

That’s why it’s so notable that some major players are absent from this early roll call — most notably Apple and Intel. Apple has long been known to be secretive about its AI research, even to the point of hurting its own competitiveness, while Intel has a history of treating AI as an unwelcome distraction. Neither approach is going to win the day, though there is an argument to be made that by remaining outside the group, Apple can still selfishly consume any insights it releases to the public.

Leaving such questions of business ethics aside, robot ethics remains a pressing problem. Self-driving cars illustrate exactly why, and the classic thought experiment involves a crowded freeway tunnel, with no room to swerve or time to brake. Seeing a crash ahead, your car must decide whether to swerve left and crash you into a pillar, or swerve right and save itself while forcing the car beside you right off the road itself. What is moral, in this situation? Would your answer change if the other car was carrying a family of five?

Right now these questions are purely academic. The formation of groups like this shows they might not remain so for long.

Original article here.


standard

The 100 Best Free Google Chrome Extensions

2016-08-25 - By 

It’s been an up and down couple of years for Google’s Chrome Web browser.

When we first did a version of this story in January 2015, Chrome had about 22.65 percent of the browser market worldwide, according to Net Applications. As of July 2016, Chrome has 50.95 percent—it crossed paths with Microsoft Internet Explorer in March, when both hit 39 percent. IE continues to dwindle, as do Firefox and Safari. Only Chrome and Microsoft’s new Edge browser have gained, but Edge is only at 5.09 percent of the market.

Then it lost some kudos—from us. After several years as PCMag’s favorite browser, a resurgent Firefox took our Editors’ Choice award. The reason: Chrome lags in graphics hardware acceleration, and it isn’t exactly known for respecting user privacy (just like its parent company).

That said, Chrome remains a four-star tour de force for Web surfing, with full HTML5 support and speedy JavaScript performance. Obviously, there is no denying its popularity. And, like Firefox before it, it’s got support for extensions that make it even better. Its library of extras, found at the Chrome Web Store, is larger than what rival Firefox has offered for years. Also, the store has add-ons that provide quick access to just about every Web app imaginable.

Rather than having you stumble blindly through the store to find the best add-ons, we’ve compiled a list of 100 you should consider. Several are unique to Google and its services (such as Gmail), which isn’t surprising considering who made Chrome. Most extensions work across operating systems, so you can try them on any desktop platform; there may be some versions that work on the mobile Chrome, too.

All of these extensions are free; there’s no harm in giving them all a try—you can easily disable or remove them by typing chrome://extensions/ into the Chrome address bar, or by right-clicking an extension’s icon in the toolbar and choosing to remove it. As of Chrome version 49, every extension must have a toolbar icon; you can hide an icon without uninstalling the extension by right-clicking it and selecting “Hide in Chrome Menu.” You can’t get rid of the icons for good.

Read on for our favorites here, and let us know if we missed a great one!


standard

Google has made a big shift in its plan to give everybody faster internet: from wired to wireless

2016-08-16 - By 

People loved the idea of Google Fiber when it was first announced in 2010. Superfast internet that’s 100 times faster than the norm — and it’s cheap? It sounded too good to be true.

But maybe that initial plan was a little too ambitious.

Over the last several years, Google has worked with dozens of cities and communities to build fiber optic infrastructure that can deliver gigabit speeds to homes and neighborhoods — this would let you stream videos instantly or download entire movies in seconds.

But right now, introducing Google Fiber in any town is a lengthy, expensive process. Google first needs to work with city leaders to lay the groundwork for construction, and then it needs to lay cables underground, along telephone lines, and in houses and buildings.

This all takes time and money: Google has spent hundreds of millions of dollars on these projects, according to The Wall Street Journal, and the service is available in just six metro areas, an average of one per year.

Given these barriers, Google Fiber is reportedly working on a way to make installation quicker, cheaper, and more feasible. According to a new filing with the Federal Communications Commission earlier this month, Google has been testing a new wireless-transmission technology that “relies on newly available spectrum” to roll out Fiber much more quickly.

“The project is in early stages today, but we hope this technology can one day help deliver more abundant internet access to consumers,” a Google spokesperson told Business Insider.

And, according to The Journal, Google is looking to use this wireless technology in “about a dozen new metro areas, including Los Angeles, Chicago and Dallas.”

Right now, Google Fiber customers can pay $70 a month for 1-gigabit-per-second speeds and an extra $60 a month for the company’s TV service. It’s unclear if this wireless technology would change the pricing, but at the very least, it ought to help accelerate Fiber’s expansion and cut down on installation costs.

One of the company’s recent acquisitions could help this transition. In June, Google Fiber bought Webpass, a company that knows how to wirelessly transmit internet service from fiber-connected antennas to antennas mounted on buildings. It’s a concept that’s pretty similar to Starry, another ambitious company that wowed us earlier this year with its plan for a superfast, inexpensive internet service.

Original article here.


standard

Magic Quadrant for Cloud Infrastructure as a Service, Worldwide

2016-08-10 - By 

Summary

The market for cloud IaaS has consolidated significantly around two leading service providers. The future of other service providers is increasingly uncertain and customers must carefully manage provider-related risks.

Market Definition/Description

Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, SaaS providers, cloud service brokerages (CSBs) or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.

In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near real time, and metered by use. Self-service interfaces are exposed directly to the customer, including a web-based UI and an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center. Thus, this Magic Quadrant covers both public and private cloud IaaS offerings.

Cloud IaaS includes not just the resources themselves, but also the automated management of those resources, management tools delivered as services, and cloud software infrastructure services. The last category includes middleware and databases as a service, up to and including PaaS capabilities. However, it does not include full stand-alone PaaS capabilities, such as application PaaS (aPaaS) and integration PaaS (iPaaS).

We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as an enabling technology; we call the latter “cloud-enabled system infrastructure” (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services.

Gartner’s clients are mainly enterprises, midmarket businesses and technology companies of all sizes, and the evaluation focuses on typical client requirements. This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and virtual data centers (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both cloud-native application architectures and enterprise application architectures.

Customers typically exhibit a bimodal IT sourcing pattern for cloud IaaS (see “Bimodal IT: How to Be Digitally Agile Without Making a Mess” and “Best Practices for Planning a Cloud Infrastructure-as-a-Service Strategy — Bimodal IT, Not Hybrid Infrastructure” ). Most cloud IaaS is bought for Mode 2 agile IT, emphasizing developer productivity and business agility, but an increasing amount of cloud IaaS is being bought for Mode 1 traditional IT, with an emphasis on cost reduction, safety and security. Infrastructure and operations (I&O) leaders typically lead the sourcing for Mode 1 cloud needs. By contrast, sourcing for Mode 2 offerings is typically driven by enterprise architects, application development leaders and digital business leaders. This Magic Quadrant considers both sourcing patterns and their associated customer behaviors and requirements.

This Magic Quadrant strongly emphasizes self-service and automation in a standardized environment. It focuses on the needs of customers whose primary need is self-service cloud IaaS, although this may be supplemented by a small amount of colocation or dedicated servers. In self-service cloud IaaS, the customer retains most of the responsibility for IT operations (even if the customer subsequently chooses to outsource that responsibility via third-party managed services).

Organizations that need significant customization or managed services for a single application, or that are seeking cloud IaaS as a supplement to a traditional hosting solution (“hybrid hosting”), should consult the Magic Quadrants for managed hosting instead ( “Magic Quadrant for Cloud-Enabled Managed Hosting, North America,” “Magic Quadrant for Managed Hybrid Cloud Hosting, Europe” and “Magic Quadrant for Cloud-Enabled Managed Hosting, Asia/Pacific” ). Organizations that want a fully custom-built solution, or managed services with an underlying CESI, should consult the Magic Quadrants for data center outsourcing and infrastructure utility services ( “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, North America,” “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Europe” and “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Asia/Pacific” ).

This Magic Quadrant evaluates all industrialized cloud IaaS solutions, whether public cloud (multitenant or mixed-tenancy), community cloud (multitenant but limited to a particular customer community), or private cloud (fully single-tenant, hosted by the provider or on-premises). It is not merely a Magic Quadrant for public cloud IaaS. To be considered industrialized, a service must be standardized across the customer base. Although most of the providers in this Magic Quadrant do offer custom private cloud IaaS, we have not considered these nonindustrialized offerings in our evaluations. Organizations that are looking for custom-built, custom-managed private clouds should use our Magic Quadrants for data center outsourcing and infrastructure utility services instead (see above).

Understanding the Vendor Profiles, Strengths and Cautions

Cloud IaaS providers that target enterprise and midmarket customers generally offer a high-quality service, with excellent availability, good performance, high security and good customer support. Exceptions will be noted in this Magic Quadrant’s evaluations of individual providers. Note that when we say “all providers,” we specifically mean “all the evaluated providers included in this Magic Quadrant,” not all cloud IaaS providers in general. Keep the following in mind when reading the vendor profiles:

  • All the providers have a public cloud IaaS offering. Many also have an industrialized private cloud offering, where every customer is on standardized infrastructure and cloud management tools, although this may or may not resemble the provider’s public cloud service in either architecture or quality. A single architecture and feature set and cross-cloud management, for both public and private cloud IaaS, make it easier for customers to combine and migrate across service models as their needs dictate, and enable the provider to use its engineering investments more effectively. Most of the providers also offer custom private clouds.

  • Most of the providers have offerings that can serve the needs of midmarket businesses and enterprises, as well as other companies that use technology at scale. A few of the providers primarily target individual developers, small businesses and startups, and lack the features needed by larger organizations, although that does not mean that their customer base is exclusively small businesses.

  • Most of the providers are oriented toward the needs of Mode 1 traditional IT, especially IT operations organizations, with an emphasis on control, governance and security; many such providers have a “rented virtualization” orientation, and are capable of running both new and legacy applications, but are unlikely to provide transformational benefits. A much smaller number of providers are oriented toward the needs of Mode 2 agile IT; these providers typically emphasize capabilities for new applications and a DevOps orientation, but are also capable of running legacy applications and being managed in a traditional fashion.

  • All the providers offer basic cloud IaaS — compute, storage and networking resources as a service. A few of the providers offer additional value-added capabilities as well, notably cloud software infrastructure services — typically middleware and databases as a service — up to and including PaaS capabilities. These services, along with IT operations management (ITOM) capabilities as a service (especially DevOps-related services) are a vital differentiator in the market, especially for Mode 2 agile IT buyers.

  • We consider an offering to be public cloud IaaS if the storage and network elements are shared; the compute can be multitenant, single-tenant or both. Private cloud IaaS uses single-tenant compute and storage, but unless the solution is on the customer’s premises, the network is usually still shared.

  • In general, monthly compute availability SLAs of 99.95% and higher are the norm, and they are typically higher than availability SLAs for managed hosting. Service credits for outages in a given month are typically capped at 100% of the monthly bill. This availability percentage is typically non-negotiable, as it is based on an engineering estimate of the underlying infrastructure reliability. Maintenance windows are normally excluded from the SLA. (A brief availability calculation illustrating these terms appears after this list.)

  • Some providers have a compute availability SLA that requires the customer to use compute capabilities in at least two fault domains (sometimes known as “availability zones” or “availability sets”); an SLA violation requires both fault domains to fail. Providers with an SLA of this type are explicitly noted as having a multi-fault-domain SLA.

  • Very few of the providers have an SLA for compute or storage performance. However, most of the providers do not oversubscribe compute or RAM resources; providers that do not guarantee resource allocations are noted explicitly.

  • Many providers have additional SLAs covering network availability and performance, customer service responsiveness and other service aspects.

  • Infrastructure resources are not normally automatically replicated into multiple data centers, unless otherwise noted; customers are responsible for their own business continuity. Some providers offer optional disaster recovery solutions.

  • All providers offer, at minimum, per-hour metering of virtual machines (VMs), and some can offer shorter metering increments, which can be more cost-effective for short-term batch jobs. Providers charge on a per-VM basis, unless otherwise noted. Some providers offer either a shared resource pool (SRP) pricing model or are flexible about how they price the service. In the SRP model, customers contract for a certain amount of capacity (in terms of CPU and RAM), but can allocate that capacity to VMs in an arbitrary way, including being able to oversubscribe that capacity voluntarily; additional capacity can usually be purchased on demand by the hour.

  • Some of the providers are able to offer bare-metal physical servers on a dynamic basis. Due to the longer provisioning times involved for physical equipment (two hours is common), the minimum billing increment for such servers is usually daily, rather than hourly. Providers with a bare-metal option are noted as such.

  • All the providers offer an option for colocation, unless otherwise noted. Many customers have needs that require a small amount of supplemental colocation in conjunction with their cloud — most frequently for a large-scale database, but sometimes for specialized network equipment, software that cannot be licensed on virtualized servers, or legacy equipment. Colocation is specifically mentioned only when a service provider actively sells colocation as a stand-alone service; a significant number of midmarket customers plan to move into colocation and then gradually migrate into that provider’s IaaS offering. If a provider does not offer colocation itself but can meet such needs via a partner exchange, this is explicitly noted.

  • All the providers claim to have high security standards. The extent of the security controls provided to customers varies significantly, though. All the providers evaluated can offer solutions that will meet common regulatory compliance needs, unless otherwise noted. All the providers have SSAE 16 audits for their data centers (see Note 1). Some may have security-specific third-party assessments such as ISO 27001 or SOC 2 for their cloud IaaS offerings (see Note 2), both of which provide a relatively high level of assurance that the providers are adhering to generally accepted practices for the security of their systems, but do not address the extent of controls offered to customers. Security is a shared responsibility; customers need to correctly configure controls and may need to supply additional controls beyond what their provider offers.

  • Some providers offer a software marketplace where software vendors specially license and package their software to run on that provider’s cloud IaaS offering. Marketplace software can be automatically installed with a click, and can be billed through the provider. Some marketplaces also contain other third-party solutions and services.

  • All providers offer enterprise-class support with 24/7 customer service, via phone, email and chat, along with an account manager. Most providers include this with their offering. Some offer a lower level of support by default, but allow customers to pay extra for enterprise-class support.

  • All the providers will sign contracts with customers, can invoice, and can consolidate bills from multiple accounts. While some may also offer online sign-up and credit card billing, they recognize that enterprise buyers prefer contracts and invoices. Some will sign “zero dollar” contracts that do not commit a customer to a certain volume.

  • Many of the providers have white-label or reseller programs, and some may be willing to license their software. We mention software licensing only when it is a significant portion of the provider’s business; other service providers, not enterprises, are usually the licensees. We do not mention channel programs; potential partners should simply assume that all these companies are open to discussing a relationship.

  • Most of the providers offer optional managed services on IaaS. However, not all offer the same type of managed services on IaaS as they do in their broader managed hosting or data center outsourcing services. Some may have managed service provider (MSP) or system integrator (SI) partners that provide managed and professional services.

  • All the evaluated providers offer a portal, documentation, technical support, customer support and contracts in English. Some can provide one or more of these in languages other than English. Most providers can conduct business in local languages, even if all aspects of the service itself are English-only. Customers who need fully multilingual support will find it very challenging to source such an offering.

  • All the providers are part of very large corporations or otherwise have a well-established business. However, many of the providers are undergoing significant re-evaluation of their cloud IaaS businesses. Existing and prospective customers should be aware that such providers may make significant changes to the strategy and direction of their cloud IaaS business, including replacing their current offering with a new platform, or exiting this business entirely in favor of partnering with a more successful provider.
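The per-VM and shared resource pool pricing models described in the metering item above can be made concrete with a small worked example. The sketch below is purely illustrative: the hourly rates, VM sizes and pool dimensions are invented for the comparison and do not correspond to any provider's actual price list.

```python
# Hypothetical comparison of per-VM hourly billing vs. a shared resource pool (SRP).
# All rates and sizes are illustrative, not any specific provider's pricing.

PER_VM_HOURLY = {"small": 0.05, "large": 0.40}   # $/hour per VM, by size
SRP_HOURLY_PER_VCPU = 0.03                        # $/hour per contracted vCPU
SRP_HOURLY_PER_GB_RAM = 0.01                      # $/hour per contracted GB of RAM

def per_vm_cost(vm_sizes, hours):
    """Per-VM model: each VM is metered individually by the hour."""
    return sum(PER_VM_HOURLY[size] for size in vm_sizes) * hours

def srp_cost(vcpus, ram_gb, hours):
    """SRP model: the contracted pool is billed regardless of how many VMs are carved from it."""
    return (vcpus * SRP_HOURLY_PER_VCPU + ram_gb * SRP_HOURLY_PER_GB_RAM) * hours

# Example: ten small VMs for a 12-hour batch run vs. a contracted pool of 20 vCPUs / 40 GB RAM.
print(per_vm_cost(["small"] * 10, hours=12))    # 0.05 * 10 * 12 = 6.0
print(srp_cost(vcpus=20, ram_gb=40, hours=12))  # (20*0.03 + 40*0.01) * 12 = 12.0
```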

In previous years, this Magic Quadrant has provided significant technical detail on the offerings. These detailed evaluations are now published in “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide” instead.

The service provider descriptions are accurate as of the time of publication. Our technical evaluation of service features took place between January 2016 and April 2016.

Format of the Vendor Descriptions

When describing each provider, we first summarize the nature of the company and then provide information about its industrialized cloud IaaS offerings in the following format:

Offerings: A list of the industrialized cloud IaaS offerings (both public and private) that are directly offered by the provider. Also included is commentary on the ways in which these offerings deviate from the standard capabilities detailed in the Understanding the Vendor Profiles, Strengths and Cautions section above. We also list related capabilities of interest, such as object storage, content delivery network (CDN) and managed services, but this is not a comprehensive listing of the provider’s offerings.

Locations: Cloud IaaS data center locations by country, languages that the company does business in, and languages that technical support can be conducted in.

Recommended mode: We note whether the vendor’s offerings are likely to appeal to Mode 1 safety-and-efficiency-oriented IT, Mode 2 agility-oriented IT, or both. We also note whether the offerings are likely to be useful for organizations seeking IT transformation. This recommendation reflects the way that a provider goes to market, provides service and support, and designs its offerings. All such statements are specific to the provider’s cloud IaaS offering, not the provider as a whole.

Recommended uses: These are the circumstances under which we recommend the provider. They are not the only circumstances in which the provider may be useful, but they are the use cases for which it is best suited. For a more detailed explanation of the use cases, see the Recommended Uses section below.

In the list of offerings, we state the basis of each provider’s virtualization technology and, if relevant, its cloud management platform (CMP). We also state what APIs it supports — the Amazon Web Services (AWS), OpenStack and vCloud APIs are the three that have broad adoption, but many providers also have their own unique API. Note that supporting one of the three common APIs does not provide assurance that a provider’s service is compatible with a specific tool that purports to support that API; the completeness and accuracy of API implementations vary considerably. Furthermore, the use of the same underlying CMP or API compatibility does not indicate that two services are interoperable. Specifically, OpenStack-based clouds differ significantly from one another, limiting portability; the marketing hype of “no vendor lock-in” is, practically speaking, untrue.
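As an illustration of how API compatibility is typically consumed, and why it does not guarantee tool compatibility, the sketch below points the standard boto3 EC2 client at a hypothetical provider endpoint that claims AWS API compatibility. The endpoint URL, credentials and image identifier are placeholders; whether any given call succeeds depends on how completely and accurately that provider implements the API.

```python
# A minimal sketch: using the AWS-style EC2 API against a provider that claims
# AWS API compatibility. Endpoint, credentials and image ID are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://compute.example-provider.com",  # hypothetical compatible endpoint
    region_name="region-1",
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Even if this basic call works, a tool that depends on newer or more obscure
# EC2 API actions may still fail against the same endpoint.
response = ec2.run_instances(
    ImageId="ami-00000000",   # placeholder image identifier
    InstanceType="t2.small",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```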

For many customers, the underlying hypervisor will matter, particularly for those that intend to run commercial software on IaaS. Many independent software vendors (ISVs) support only VMware virtualization, and those vendors that support Xen may support only Citrix XenServer, not open-source Xen (which is often customized by IaaS providers and is likely to be different from the current open-source version). Similarly, some ISVs may support the Kernel-based Virtual Machine (KVM) hypervisor in the form of Red Hat Enterprise Virtualization, whereas many IaaS providers use open-source KVM.
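Because ISV support often hinges on the exact hypervisor, it can be worth verifying what a provider actually runs before committing licensed software to it. The sketch below is one rough way to do this from inside a Linux guest, using systemd's virtualization detector; it assumes a systemd-based distribution and reports only the broad hypervisor family (kvm, vmware, xen and so on), not, for example, whether a Xen host is Citrix XenServer or a provider-customized open-source build.

```python
# Rough check of the underlying hypervisor from inside a Linux guest.
# Assumes a systemd-based distribution; reports only the broad family
# (e.g. "kvm", "vmware", "xen"), not the exact vendor build.
import subprocess

def detect_hypervisor() -> str:
    result = subprocess.run(
        ["systemd-detect-virt", "--vm"],  # --vm: report VM virtualization only, not containers
        capture_output=True,
        text=True,
    )
    virt = result.stdout.strip()
    return virt if virt and virt != "none" else "no hypervisor detected (bare metal?)"

if __name__ == "__main__":
    print(detect_hypervisor())
```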

For a detailed technical description of public cloud IaaS offerings, along with a use-case-focused technical evaluation, see “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide.”

We also provide a detailed list of evaluation criteria in “Evaluation Criteria for Cloud Infrastructure as a Service.” We have used those criteria to perform in-depth assessments of several providers: see “In-Depth Assessment of Amazon Web Services,” “In-Depth Assessment of Google Cloud Platform,” “In-Depth Assessment of SoftLayer, an IBM Company” and “In-Depth Assessment of Microsoft Azure IaaS.”

Recommended Uses

For each vendor, we provide recommendations for use. The most typical recommended uses are:

  • Cloud-native applications. These are applications specifically architected to run in a cloud IaaS environment, using cloud-native principles and design patterns.

  • E-business hosting. These are e-marketing sites, e-commerce sites, SaaS applications, and similar modern websites and web-based applications. They are usually internet-facing. They are designed to scale out and are resilient to infrastructure failure, but they might not use cloud transaction processing principles.

  • General business applications. These are the kinds of general-purpose workloads typically found in the internal data centers of most traditional businesses; the application users are usually located within the business. Many such workloads are small, and they are often not designed to scale out. They are usually architected with the assumption that the underlying infrastructure is reliable, but they are not necessarily mission-critical. Examples include intranet sites, collaboration applications such as Microsoft SharePoint and many business process applications.

  • Enterprise applications. These are general-purpose workloads that are mission-critical, and they may be complex, performance-sensitive or contain highly sensitive data; they are typical of a modest percentage of the workloads found in the internal data centers of most traditional businesses. They are usually not designed to scale out, and the workloads may demand large VM sizes. They are architected with the assumption that the underlying infrastructure is reliable and capable of high performance.

  • Development environments. These workloads are related to the development and testing of applications. They are assumed not to require high availability or high performance. However, they are likely to require governance for teams of users.

  • Batch computing. These workloads include high-performance computing (HPC), big data analytics and other workloads that require large amounts of capacity on demand. They do not require high availability, but may require high performance.

  • Internet of Things (IoT) applications. IoT applications typically combine the traits of cloud-native applications with the traits of big data applications. They typically require high availability, flexible and scalable capacity, interaction with distributed and mobile client devices, and strong security; many such applications also have significant regulatory compliance requirements.

For all the vendors, the recommended uses are specific to self-managed cloud IaaS. However, many of the providers also have managed services, as well as other cloud and noncloud services that may be used in conjunction with cloud IaaS. These include hybrid hosting (customers sometimes blend solutions, such as an entirely self-managed front-end web tier on public cloud IaaS, with managed hosting for the application servers and database), as well as hybrid IaaS/PaaS solutions. Even though we do not evaluate managed services, PaaS and the like in this Magic Quadrant, they are part of a vendor’s overall value proposition and we mention them in the context of providing more comprehensive solution recommendations.

Magic Quadrant

 
Figure 1. Magic Quadrant for Cloud Infrastructure as a Service, Worldwide

Source: Gartner (August 2016)
See original article here.
 
 
