Industry News Archives - AppFerret


Nigerian ISP’s configuration error disrupted Google services

2018-11-13

A Nigerian internet service provider said Tuesday that a configuration error it made during a network upgrade caused a disruption of key Google services, routing traffic to China and Russia.

Prior to MainOne’s explanation Tuesday, there was speculation that Monday’s 74-minute data hijacking might have been intentional. Google’s search, cloud hosting and collaborative business tools were among services disrupted.

“Everyone is pretty confident that nothing untoward took place,” MainOne spokesman Tayo Ashiru said.

The type of traffic misdirection involved can knock essential services offline and facilitate espionage and financial theft. It can also be used to block access to information by sending data into internet black holes. Experts say China, in particular, has systematically hijacked and diverted U.S. internet traffic.

But the problem can also result from human error. That’s what Ashiru said happened to MainOne, a major West African ISP. He said engineers mistakenly forwarded to China Telecom addresses for Google services that were supposed to be local. The Chinese company, in turn, sent along the bad data to Russia’s TransTelecom, a major internet presence. Ashiru said MainOne did not yet understand why China Telecom did that, as the state-run company normally doesn’t allow Google traffic on its network.

The traffic diversion into China created a detour with a dead end, preventing users from accessing the affected Google services, said Alex Henthorn-Iwane, an executive at the network-intelligence company ThousandEyes.

He said Monday’s incident offered yet another lesson in the internet’s susceptibility to “unpredictable and destabilizing events. If this could happen to a company with the scale and resources available that Google has, realize it could happen to anyone.”

The diversion, known as Border Gateway Protocol (BGP) hijacking, exploits trust that is built into the internet, which was designed for collaboration by trusted parties—not competition by hostile nation-states. Experts say it is fixable, but that would require investments in encrypted routers that the industry has resisted.
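
To see why a bad announcement pulls traffic so effectively, it helps to remember that BGP routers generally prefer the most specific prefix they hear, no matter who announces it. The sketch below is purely illustrative (Python; the prefixes and labels are made up, and real BGP selection also weighs attributes such as AS path and local preference), but it shows that longest-prefix-match behavior:

    import ipaddress

    # Routes a router has learned, as (prefix, claimed origin). Illustrative only.
    routes = [
        (ipaddress.ip_network("8.8.8.0/24"), "legitimate origin"),
        (ipaddress.ip_network("8.8.8.0/25"), "leaked more-specific route"),
    ]

    def best_route(destination: str):
        dest = ipaddress.ip_address(destination)
        matches = [(net, origin) for net, origin in routes if dest in net]
        # Longest-prefix match: the most specific matching prefix wins.
        return max(matches, key=lambda route: route[0].prefixlen)

    print(best_route("8.8.8.8"))  # the leaked /25 wins, diverting the traffic

That is consistent with MainOne’s account: routes meant for a local peering arrangement were passed to China Telecom, and networks that accepted them simply followed the more specific path.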

ThousandEyes said the diversion at minimum made Google’s search and business collaboration tools difficult or impossible to reach and “put valuable Google traffic in the hands of ISPs in countries with a long history of Internet surveillance.”

However, most network traffic to Google services—94 percent as of Oct. 27—is encrypted, which shields it from prying eyes even if diverted. Google said in a statement that “access to some Google services was impacted” but did not further quantify the disruption.

Google said it had no reason to believe the traffic hijacking was malicious.

Indeed, the phenomenon has occurred before. Google was briefly afflicted in 2015 when an Indian provider stumbled. In perhaps the best-known case, Pakistan Telecom inadvertently hijacked YouTube’s global traffic in 2008 for a few hours when it was trying to enforce a domestic ban. It sent all YouTube traffic into a virtual ditch in Pakistan.

In two recent cases, such rerouting has affected financial sites. In April 2017, one affected MasterCard and Visa among other sites. This past April, another hijacking enabled cryptocurrency theft.

Original article here.

 

 



Happy 40th Anniversary to the Original Intel 8086 and the x86 Architecture

2018-06-08

Forty years ago today, Intel launched the original 8086 microprocessor — the grandfather of every x86 CPU ever built, including the ones we use now. This, it must be noted, is more or less the opposite outcome of what everyone expected at the time, including Intel.

According to Stephen P. Morse, who led the 8086 development effort, the new CPU “was intended to be short-lived and not have any successors.” Intel’s original goal with the 8086 was to improve overall performance relative to previous products while retaining source compatibility with earlier products (meaning assembly language for the 8008, 8080, or 8085 could be run on the 8086 after being translated and reassembled). It offered faster overall performance than the 8080 or 8085 and could address up to 1MB of RAM (the 8085 topped out at 64KB). It contained eight 16-bit registers and was originally offered at a clock speed of 5MHz (later versions were clocked as high as 10MHz); the “x86” name itself comes from the part numbers of the chip and its successors (8086, 80186, 80286, 80386, and so on).

Morse had experience in software as well as hardware and, as this historical retrospective makes clear, made decisions intended to make it easy to maintain backwards compatibility with earlier Intel products. He even notes that had he known he was inventing an architecture that would power computing for the next 40 years, he would’ve done some things differently, including using a symmetric register structure and avoiding segmented addressing. Initially, the 8086 was intended to be a stopgap product while Intel worked feverishly to finish its real next-generation microprocessor — the iAPX 432, Intel’s first 32-bit microprocessor. When sales of the 8086 began to slip in 1979, Intel made the decision to launch a massive marketing operation around the chip, dubbed Operation Crush. The goal? Drive adoption of the 8086 over and above competing products made by Motorola and Zilog (the latter founded by former Intel employees, including Federico Faggin, lead architect on the first microprocessor, Intel’s 4004). Operation Crush was quite successful and is credited with spurring IBM to adopt the 8088 (a cut-down 8086 with an 8-bit bus) for the first IBM PC.
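
For readers unfamiliar with the segmented addressing Morse mentions: the 8086 forms a 20-bit physical address (hence the 1MB limit) by shifting a 16-bit segment value left four bits and adding a 16-bit offset. A minimal sketch of the calculation (Python, with illustrative values):

    def physical_address(segment: int, offset: int) -> int:
        """8086 real-mode address: (segment << 4) + offset, wrapped to 20 bits."""
        assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
        return ((segment << 4) + offset) & 0xFFFFF  # the 8086 had no address line above A19

    # The reset vector lives at F000:FFF0, near the top of the 1MB address space.
    print(hex(physical_address(0xF000, 0xFFF0)))  # 0xffff0

    # Many segment:offset pairs alias the same physical byte, one reason the
    # scheme complicated life for later programmers and operating systems.
    print(physical_address(0x1000, 0x0000) == physical_address(0x0FFF, 0x0010))  # True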

One might expect, given the x86 architecture’s historic domination of the computing industry, that the chip that launched the revolution would have been a towering achievement or quantum leap above the competition. The truth is more prosaic. The 8086 was a solid CPU core built by intelligent architects backed up by a strong marketing campaign. The computer revolution it helped to launch, on the other hand, transformed the world.

All that said, there’s one other point I want to touch on.

It’s Been 40 Years. Why Are We Still Using x86 CPUs?

This is a simple question with a rather complex answer. First, in a number of very real senses, we aren’t really using x86 CPUs anymore. The original 8086 was a chip with 29,000 transistors. Modern chips have transistor counts in the billions. The modern CPU manufacturing process bears little resemblance to the nMOS manufacturing process used to implement the original design in 1978. The materials used to construct the CPU are themselves very different and the advent of EUV (Extreme Ultraviolet Lithography) will transform this process even more.

Modern x86 chips translate x86 instructions into internal micro-ops for more efficient execution. They implement features like out-of-order execution and speculative execution to improve performance, and they limit the impact of slow memory buses (relative to CPU clocks) with multiple layers of cache and capabilities like branch prediction. People often ask “Why are we still using x86 CPUs?” as if this were analogous to “Why are we still using the 8086?” The honest answer is: We aren’t. An 8086 from 1978 and a Core i7-8700K are both CPUs, just as a Model T and a 2018 Toyota are both cars — but they don’t exactly share much beyond that most basic classification.

Furthermore, Intel tried to replace or supplant the x86 architecture multiple times. The iAPX 432, Intel i960, Intel i860, and Intel Itanium were all intended to supplant x86. Far from refusing to consider alternatives, Intel literally spent billions of dollars over multiple decades to bring those alternative visions to life. The x86 architecture won these fights — but it didn’t just win them because it offered backwards compatibility. We spoke to Intel Fellow Ronak Singhal for this article, who pointed out a facet of the issue I honestly hadn’t considered before. In each case, x86 continued to win out against the architectures Intel intended to replace it because the engineers working on those x86 processors found ways to extend and improve the performance of Intel’s existing microarchitectures, often beyond what even Intel engineers had thought possible years earlier.

Is there a penalty for continuing to support the original x86 ISA? There is — but today, it’s a tiny one. The original Pentium may have devoted up to 30 percent of its transistors to backwards compatibility, and the Pentium Pro’s bet on out-of-order execution and internal micro-ops chewed up a huge amount of die space and power, but these bets paid off. Today, the capabilities that consumed huge resources on older chips are a single-digit percent or less of the power or die area budget of a modern microprocessor. Comparisons between a variety of ISAs have demonstrated that architectural design decisions have a much larger impact on performance efficiency and power consumption than ISA does, at least above the microcontroller level.

Will we still be using x86 chips 40 years from now? I have no idea. I doubt any of the Intel CPU designers that built the 8086 back in 1978 thought their core would go on to power most of the personal computing revolution of the 1980s and 1990s. But Intel’s recent moves into fields like AI, machine learning, and cloud data centers are proof that the x86 family of CPUs isn’t done evolving. No matter what happens in the future, 40 years of success are a tremendous legacy for one small chip — especially one which, as Stephen Morse says, “was intended to be short-lived and not have any successors.”

Now read: A Brief History of Intel CPUs, Part 1: The 4004 to the Pentium Pro

Original article here.

 



Microsoft has acquired GitHub for $7.5B in stock

2018-06-04

After a week of rumors, Microsoft today confirmed that it has acquired GitHub, the popular Git-based code sharing and collaboration service. The price of the acquisition was $7.5 billion in Microsoft stock. GitHub had raised $350 million, and we know that the company was valued at about $2 billion in 2015.

Former Xamarin CEO Nat Friedman (and now Microsoft corporate vice president) will become GitHub’s CEO. GitHub founder and former CEO Chris Wanstrath will become a Microsoft technical fellow and work on strategic software initiatives. Wanstrath had retaken his CEO role after his co-founder Tom Preston-Werner resigned following a harassment investigation in 2014.

The fact that Microsoft is installing a new CEO for GitHub is a clear sign that the company’s approach to integrating GitHub will be similar to how it is working with LinkedIn. “GitHub will retain its developer-first ethos and will operate independently to provide an open platform for all developers in all industries,” a Microsoft spokesperson told us.

GitHub says that as of March 2018, there were 28 million developers in its community, and 85 million code repositories, making it the largest host of source code globally and a cornerstone of how many in the tech world build software.

But despite its popularity with enterprise users, individual developers and open source projects, GitHub has never turned a profit and chances are that the company decided that an acquisition was preferable over trying to IPO.

GitHub’s main revenue source today is paid accounts, which allow for private repositories and a number of other features that enterprises need, with pricing ranging from $7 per user per month to $21 per user per month. Those building public and open source projects can use it for free.

While numerous large enterprises use GitHub as their code sharing service of choice, it also faces quite a bit of competition in this space thanks to products like GitLab and Atlassian’s Bitbucket, as well as a wide range of other enterprise-centric code hosting tools.

Microsoft is acquiring GitHub because it’s a perfect fit for its own ambitions to be the go-to platform for every developer, and every developer need, no matter the platform.

Microsoft has long embraced the Git protocol and is using it in its current Visual Studio Team Services product, which itself used to compete with GitHub’s enterprise service. Knowing GitHub’s position with developers, Microsoft has also leaned on the service quite a bit itself, and some in the company already claim it is the biggest contributor to GitHub today.

Yet while Microsoft’s stance toward open source has changed over the last few years, many open source developers will keep a very close eye on what the company does with GitHub after the acquisition. That’s because there is a lot of distrust of Microsoft in this cohort, which is understandable given Microsoft’s history.

In fact, TechCrunch received a tip on Friday, which noted not only that the deal had already closed, but that open source software maintainers were already eyeing up alternatives and looking potentially to abandon GitHub in the wake of the deal. Some developers (not just those working in open source) were not wasting time, and didn’t even wait for confirmation of the deal before migrating.

While GitHub is home to more than just open source software, if such a migration came to pass, it would be a very bad look both for GitHub and Microsoft. And it would be a particularly ironic turn, given the very origins of Git: the version control system was created by Linus Torvalds in 2005 when he was working on development of the Linux kernel, in part as a response to a previous system, BitKeeper, changing its terms away from being free to use.

The new Microsoft under CEO Satya Nadella strikes us as a very different company from the Microsoft of ten years ago — especially given that the new Microsoft has embraced open source — but it’s hard to forget its earlier history of trying to suppress Linux.

“Microsoft is a developer-first company, and by joining forces with GitHub we strengthen our commitment to developer freedom, openness and innovation,” said Nadella in today’s announcement. “We recognize the community responsibility we take on with this agreement and will do our best work to empower every developer to build, innovate and solve the world’s most pressing challenges.”

Yet at the same time, it’s worth remembering that Microsoft is now a member of the Linux Foundation and regularly backs a number of open source projects. And Windows now has the Linux subsystem, while VS Code, the company’s free code editing tool, is open source and available on GitHub, as are .NET Core and numerous other Microsoft-led projects.

And many in the company were defending Microsoft’s commitment to GitHub and its principles, even before the deal was announced.

Still, you can’t help but wonder how Microsoft might leverage GitHub within its wider business strategy, which could see the company build stronger bridges between GitHub and Azure, its cloud hosting service, and its wide array of software and collaboration products. Microsoft is no stranger to ingesting huge companies. One of them, LinkedIn, might be another area where Microsoft might explore synergies, specifically around areas like recruitment and online tutorials and education.

 

Original article here.

 



Serverless is eating the stack and people are freaking out — as they should be

2018-04-20

AWS Lambda has stamped a big DEPRECATED on containers – Welcome to “Serverless Superheroes”! 
In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.

In this edition, I chatted with Steven Faulkner, a senior software engineer at LinkedIn and the former director of engineering at Bustle. The following interview has been edited and condensed for clarity.

Forrest Brazeal: At Bustle, your previous company, I heard you cut your hosting costs by about forty percent when you switched to serverless. Can you speak to where all that money was going before, and how you were able to make that type of cost improvement?

Steven Faulkner: I believe 40% is where it landed. The initial results were even better than that. We had one service that was costing about $2500 a month and it went down to about $500 a month on Lambda.

Bustle is a media company — it’s got a lot of content, it’s got a lot of viral, spiky traffic — and so keeping up with that was not always the easiest thing. We took advantage of EC2 auto-scaling, and that worked … except when it didn’t. But when we moved to Lambda — not only did we save a lot of money, just because Bustle’s traffic is basically half at nighttime what it is during the day — we saw that serverless solved all these scaling headaches automatically.

On the flip side, did you find any unexpected cost increases with serverless?

There are definitely things that cost more on serverless, or that could be done more cheaply off it. When I was at Bustle they were looking at some stuff around data pipelines and settled on not using serverless for that at all, because it would be way too expensive to go through Lambda.

Ultimately, although hosting cost was an interesting thing out of the gate for us, it quickly became a relative non-factor in our move to serverless. It was saving us money, and that was cool, but the draw of serverless really became more about the velocity with which our team could develop and deploy these applications.

At Bustle, we have only ever had one part-time “ops” person. With serverless, those responsibilities get diffused across our team, and that allowed us all to focus more on the application and less on how to get it deployed.

Any of us who’ve been doing serverless for a while know that the promise of “NoOps” may sound great, but the reality is that all systems need care and feeding, even ones you have little control over. How did your team keep your serverless applications running smoothly in production?

I am also not a fan of the term “NoOps”; it’s a misnomer and misleading for people. Definitely out of the gate with serverless, we spent time answering the question: “How do we know what’s going on inside this system?”

IOPipe was just getting off the ground at that time, and so we were one of their very first customers. We were using IOPipe to get some observability, then CloudWatch sort of got better, and X-Ray came into the picture which made things a little bit better still. Since then Bustle also built a bunch of tooling that takes all of the Lambda logs and data and does some transformations — scrubs it a little bit — and sends it to places like DataDog or to Scalyr for analysis, searching, metrics and reporting.

But I’m not gonna lie, I still don’t think it’s super great. It got to the point where it was workable and we could operate and not feel like we were always missing out on what was actually going on, but there’s a lot of room for improvement.

Another common serverless pain point is local development and debugging. How did you handle that?

I wrote a framework called Shep that Bustle still uses to deploy all of our production applications, and it handles the local development piece. It allows you to develop a NodeJS application locally and then deploy it to Lambda. It could do environment variables before Lambda had environment variables, and it brought some sanity around versioning and using webpack to bundle: all the stuff that you don’t really want the everyday developer to have to worry about.

I built Shep in my first couple of months at Bustle, and since then, the Serverless Framework has gotten better. SAM has gotten better. The whole entire ecosystem has leveled up. If I was doing it today I probably wouldn’t need to write Shep. But at the time, that’s definitely what we had to do.

You’re putting your finger on an interesting reality with the serverless space, which is: it’s evolving so fast that it’s easy to create a lot of tooling and glue code that becomes obsolete very quickly. Did you find this to be true?

That’s extremely fair to say. I had a little Twitter thread around this a couple months ago, having a bit of a realization myself that Shep is not the way I would do deployments anymore. When AWS releases their own tooling, it always seems to start out pretty bad, so the temptation is to fill in those gaps with your own tool.

But AWS services change and get better at a very rapid rate. So I think the lesson I learned is lean on AWS as much as possible, or build on top of their foundation and make it pluggable in a way that you can just revert to the AWS tooling when it gets better.

Honestly, I don’t envy a lot of the people who sliced their piece of the serverless pie based on some tool they’ve built. I don’t think that’s necessarily a long term sustainable thing.

As I talk to developers and sysadmins, I feel like I encounter a lot of rage about serverless as a concept. People always want to tell me the three reasons why it would never work for them. Why do you think this concept inspires so much animosity and how do you try to change hearts and minds on this?

A big part of it is that we are deprecating so many things at one time. It does feel like a very big step to me compared to something like containers. Kelsey Hightower said something like this at one point: containers enable you to take the existing paradigm and move it forward, whereas serverless is an entirely new paradigm.

And so all these things that people have invented and invested time and money and resources in are just going away, and that’s traumatic, that’s painful. It won’t happen overnight, but anytime you make something that makes people feel like what they’ve maybe spent the last 10 years doing is obsolete, it’s hard. I don’t really know if I have a good way to fix that.

My goal with serverless was building things faster. I’m a product developer; that’s my background, that’s what I like to do. I want to make cool things happen in the world, and serverless allows me to do that better and faster than I can otherwise. So when somebody comes to me and says “I’m upset that this old way of doing things is going away”, it’s hard for me to sympathize.

It sounds like you’re making the point that serverless as a movement is more about business value than it is about technology.

Exactly! But the world is a big tent and there’s room for all kinds of stuff. I see this movement around OpenFaaS and the various Functions as a Service on Kubernetes and I don’t have a particular use for those things, but I can see businesses where they do, and if it helps get people transitioned over to serverless, that’s great.

So what is your definition of serverless, then?

I always joke that “cloud native” would have been a much better term for serverless, but unfortunately that was already taken. I think serverless is really about the managed services. Like, who is responsible for owning whether this thing that my application depends on stays up or not? And functions as a service is just a small piece of that.

The way I describe it is: functions as a service are cloud glue. So if I’m building a model airplane, well, the glue is a necessary part of that process, but it’s not the important part. Nobody looks at your model airplane and says: “Wow, that’s amazing glue you have there.” It’s all about how you craft something that works with all these parts together, and FaaS enables that.

And, as Joe Emison has pointed out, you’re not just limited to one cloud provider’s services, either. I’m a big user of Algolia with AWS. I love using Algolia with Firebase, or Netlify. Serverless is about taking these pieces and gluing them together. Then it’s up to the service provider to really just do their job well. And over time hopefully the providers are doing more and more.

We’re seeing that serverless mindset eat all of these different parts of the stack. Functions as a service was really a critical bit in order to accelerate the process. The next big piece is the database. We’re gonna see a lot of innovation there in the next year. FaunaDB is doing some cool stuff in that area, as is CosmosDB. I believe there is also a missing piece of the market for a Redis-style serverless offering, something that maybe even speaks Redis commands but under the hood is automatically distributed and scalable.

What is a legitimate barrier to companies that are looking to adopt serverless at this point?

Probably the biggest is: how do you deal with the migration of legacy things? At Bustle we ended up mostly re-architecting our entire platform around serverless, and so that’s one option, but certainly not available to everybody. But even then, the first time we launched a serverless service, we brought down all of our Redis instances — because Lambda spun up all these containers and we hit connection limits that you would never expect to hit in a normal app.

So if you’ve got something sitting on a mainframe somewhere that is used to only having 20 connections, and then you move some upstream service over to Lambda and suddenly it has 10,000 connections instead of 20, you’ve got a problem. If you’ve bought into service-oriented architecture as a whole over the last four or five years, then you might have a better time, because you can say “Well, all these things do is talk to each other via an API, so we can replace a single service with serverless functions.”
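
A common way to soften this today (a general pattern, not something Faulkner prescribes here) is to create downstream connections outside the Lambda handler so each warm container reuses a single connection, then cap the function’s concurrency so the total stays under the downstream limit. A minimal Python sketch, with a hypothetical Redis endpoint supplied via environment variables:

    import os
    import redis  # assumes the redis-py client is bundled with the function

    # Created once per container (cold start) and reused by every warm invocation,
    # so connections scale with concurrent containers rather than with requests.
    REDIS = redis.Redis(
        host=os.environ.get("REDIS_HOST", "localhost"),   # hypothetical endpoint
        port=int(os.environ.get("REDIS_PORT", "6379")),
        socket_connect_timeout=2,
    )

    def handler(event, context):
        # Total open connections is roughly the number of concurrent executions,
        # so a reserved-concurrency limit on the function doubles as a connection cap.
        hits = REDIS.incr("page:hits")
        return {"statusCode": 200, "body": str(hits)}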

Any other emerging serverless trends that interest you?

We’ve solved a lot of the easy, low-hanging fruit problems with serverless at this point, like how you do environment variables, or how you’re gonna structure a repository and enable developers to quickly write these functions. We’re starting to establish some really good best practices.

What’ll happen next is we’ll get more iteration around architecture. How do I glue these four services together, and how do the Lambda functions look that connect them? We don’t yet have the Rails of serverless — something that doesn’t necessarily expose that it’s actually a Lambda function under the hood. Maybe it allows you to write a bunch of functions in one file that all talk to each other, and then use something like webpack that splits those functions and deploys them in a way that makes sense for your application.

We could even respond to that at runtime. You could have an application that’s actually looking at what’s happening in the code and saying: “Wow this one part of your code is taking a long time to run; we should make that its own Lambda function and we should automatically deploy that and set up this SNS trigger for you.” That’s all very pie in the sky, but I think we’re not that far off from having these tools.

Because really, at the end of the day, as a developer I don’t care about Lambda, right? I mean, I have to care right now because it’s the layer in which I work, but if I can move one layer up where I’m just writing business logic and the code gets split up appropriately, that’s real magic.


Forrest Brazeal is a cloud architect and serverless community advocate at Trek10. He writes the Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please don’t hesitate to let him know.

Original article here.

 



Rethinking Gartner’s Hype Cycle

2018-01-12

The Gartner hype cycle is one of the more brilliant insights ever uncovered in the history of technology. I rank it right up there with Moore’s Law and Christensen’s model of disruptive innovation from below.

Gartner’s hype cycle describes a 5-stage pattern that almost all new technologies follow:

  1. A technology trigger introduces new possibilities — things like AI, chatbots, AR/VR, blockchain, etc. — which capture the imagination and create a rapid rise in expectations. (“Big data is a breakthrough!”)
  2. The fervor quickly reaches a peak of inflated expectations — the “hype” is deafening and dramatically overshoots the reality of what’s possible. (“Big data will change everything!”)
  3. Reality soon sets in though, as people realize that the promises of that hype aren’t coming to fruition. Expectations drop like a rock, and the market slips into a trough of disillusionment. (“Big data isn’t that magical after all.”)
  4. But there is underlying value to the technology, and as it steadily improves, people begin to figure out realistic applications. This is the slope of enlightenment: expectations rise again, but less sharply, in alignment with what’s achievable. (“Big data is actually useful in these cases…”)
  5. Finally the expectations of the technology are absorbed into everyday life, with well-established best practices, leveling off in the plateau of productivity. (“Big data is an ordinary fact of life. Here’s how we use it.”)

It might not be a law of nature, but as a law of technology markets, it’s pretty consistent.

We hear a lot about the hype cycle in the martech world, because we have been inundated with new technologies in marketing. I’m covering a number of them in my 2018 update to the 5 disruptions to marketing: artificial intelligence (AI), conversational interfaces, augmented reality (AR), Internet of Things (IoT), customer data platforms (CDP), etc.

In marketing, it’s not just technologies that follow this hype cycle, but also concepts and tactics, such as content marketing, account-based marketing, revenue operations, and so on. By the way, that’s not a knock against any of those. There is real value in all of them. But the hype exceeds the reality in the first 1/3 or so of their lifecycle.

Indeed, it’s the reality underneath the hype cycle that people lose sight of. Expectations are perception. The actual advancement of the technology (or concept or tactic) is reality.

At the peak of inflated expectations, reality is far below what’s being discussed ad nauseam in blog posts and board rooms. In the trough of disillusionment, the actual, present-day potential is sadly underestimated — discussions shift to the inflated expectations of the next new thing.

However, this desync between expectations and reality is a good thing — if you know what you’re doing. The gap between expectations and reality creates opportunities for a savvy company to manage to the reality while competitors chase the hype cycle.

It’s a variation of the age-old investment advice: buy low, sell high.

At the peak of inflated expectations, you want to avoid overspending on technology and overpromising results. You don’t want to ignore the movement entirely, since there is fire smoldering below the smoke. But you want to evaluate claims carefully, run things with an experimental mindset, and focus on real learning.

In the trough of disillusionment, that’s when you want to pour gas on the fire. Leverage what you learned from your experimental phase to scale up the things you know work, because you’ve proven them in your business.

Don’t be distracted by the backlash of negative chatter at this stage of the hype cycle. Reinvest your experimental efforts in pushing the possibilities ahead of the slope of enlightenment. This is your chance to race ahead of competitors who are pulling back from their missed results against earlier, unrealistic expectations.

As close as possible, you want to track the actual advancement of the technology. If you can achieve that, you’ll get two big wins, as the hype is on the way up and on the way down. You’ll harness the pendulum of the hype cycle into useful energy.

P.S. When I program the MarTech conference agenda, my goal is to give attendees as accurate a picture of the actual advancement of marketing technologies as possible.

I won’t try to sell you a ticket on overinflated expectations. But I will try to sell you a ticket on getting you the ground truth of marketing technology and innovation, so you can capture the two opportunities that are yours to take from the hype cycle.

Our next event is coming up, April 23-25 in San Jose. Our early bird rates expire on January 27, which saves you $500 on all-access passes. Take advantage of that pricing while you can.

 

Original article here.



Spectre, Meltdown: Critical CPU Security Flaws Explained

2018-01-04

Over the past few days we’ve covered major new security risks that struck at a number of modern microprocessors from Intel and to a much lesser extent, ARM and AMD. Information on the attacks and their workarounds initially leaked out slowly, but Google has pushed up its timeline for disclosing the problems and some vendors, like AMD, have issued their own statements. The two flaws in question are known as Spectre and Meltdown, and they both relate to one of the core capabilities of modern CPUs, known as speculative execution.

Speculative execution is a performance-enhancing technique virtually all modern CPUs include to one degree or another. One way to increase CPU performance is to allow the core to perform calculations it may need in the future. The difference between speculative execution and ordinary execution is that the CPU performs these calculations before it knows whether it’ll actually be able to use the results.

Here’s how Google’s Project Zero summarizes the problem: “We have discovered that CPU data cache timing can be abused to efficiently leak information out of mis-speculated execution, leading to (at worst) arbitrary virtual memory read vulnerabilities across local security boundaries in various contexts.”

Meltdown is Variant 3 in ARM, AMD, and Google parlance. Spectre accounts for Variant 1 and Variant 2.

Meltdown

“On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer.”

Intel is badly hit by Meltdown because its speculative execution methods are fairly aggressive. Specifically, Intel CPUs are allowed to access kernel memory when performing speculative execution, even when the application in question is running in user memory space. The CPU does check to see if an invalid memory access occurs, but it performs the check after speculative execution, not before. Architecturally, these invalid branches never execute — they’re blocked — but it’s possible to read data from affected cache blocks even so.
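
A toy model of that sequence may help (Python, purely illustrative: the real attack depends on hardware cache timing, which nothing in Python can reproduce). The architectural result of the speculative read is thrown away, but the cache line it touched stays warm, and probing which line is warm recovers the value:

    CACHE_LINE = 64       # bytes per cache line on typical x86 parts
    warm_lines = set()    # stands in for "which cache lines are currently cached"

    def speculative_kernel_read(secret_byte: int) -> None:
        # The CPU speculatively reads a protected byte and uses it to index a
        # user-controlled probe array; the privilege check only fires afterwards.
        warm_lines.add(secret_byte * CACHE_LINE)          # cache side effect survives
        raise PermissionError("fault raised after the speculative access")

    def recover_secret() -> int:
        # The attacker times a load from every candidate line; the fast (cached)
        # one reveals the secret. Here "fast" is modeled as set membership.
        return next(b for b in range(256) if b * CACHE_LINE in warm_lines)

    try:
        speculative_kernel_read(secret_byte=42)  # architecturally, this read never happened
    except PermissionError:
        pass

    print(recover_secret())  # 42, leaked through the cache side channel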

The various OS-level fixes going into macOS, Windows, and Linux all concern Meltdown. The formal PDF on Meltdown notes that the software patches Google, Apple, and Microsoft are working on are a good start, but that the problem can’t be completely fixed in software. AMD and ARM appear largely immune to Meltdown, though ARM’s upcoming Cortex-A75 is apparently impacted.

Spectre

Meltdown is bad, but Meltdown can at least be ameliorated in software (with updates), even if there’s an associated performance penalty. Spectre is the name given to a set of attacks that “involve inducing a victim to speculatively perform operations that would not occur during correct program execution, and which leak the victim’s confidential information via a side channel to the adversary.”

Unlike Meltdown, which impacts mostly Intel CPUs, Spectre’s proof of concept works against everyone, including ARM and AMD. Its attacks are pulled off differently — one variant targets branch prediction — and it’s not clear there are hardware solutions to this class of problems, for anyone.

What Happens Next

Intel, AMD, and ARM aren’t going to stop using speculative execution in their processors; it’s been key to some of the largest performance improvements we’ve seen in semiconductor history. But as Google’s extensive documentation makes clear, these proof-of-concept attacks are serious. Neither Spectre nor Meltdown relies on any kind of software bug to work. Meltdown can be solved through hardware design and software rearchitecting; Spectre may not.

When reached for comment on the matter, Linux creator Linus Torvalds responded with the tact that’s made him legendary. “I think somebody inside of Intel needs to really take a long hard look at their CPU’s, and actually admit that they have issues instead of writing PR blurbs that say that everything works as designed,” Torvalds writes. “And that really means that all these mitigation patches should be written with ‘not all CPU’s are crap’ in mind. Or is Intel basically saying ‘We are committed to selling you shit forever and ever, and never fixing anything’? Because if that’s the case, maybe we should start looking towards the ARM64 people more.”

It does appear, as of this writing, that Intel is disproportionately exposed on these security flaws. While Spectre-style attacks can affect all CPUs, Meltdown is pretty Intel-specific. Thus far, user applications and games don’t seem much impacted, but web servers and potentially other workloads that access kernel memory frequently could run markedly slower once patched.

 

Original article here.

 



Western Digital plans 40TB drives, but it’s still not enough

2017-10-24

Data continues to grow faster than disk capacity.

Hard disk makers are using capacity as their chief bulwark against the rise of solid-state drives (SSDs), since they certainly can’t argue on performance, and Western Digital — the king of the hard drive vendors — has shown off a new technology that could lead to 40TB drives.

Western Digital already has the largest-capacity drive on the market. It recently introduced a 14TB drive, filled with helium to reduce drag on the spinning platters. But thanks to a new technology called microwave-assisted magnetic recording (MAMR), the company hopes to reach 40TB by 2025. The company promised engineering samples of the drive by mid-2018.

MAMR technology is a new method of cramming more data onto the disk. Western Digital’s chief rival, Seagate, is working on a competitive product called HAMR, or heat-assisted magnetic recording. I’ll leave it to propeller heads like AnandTech to explain the electrical engineering of it all. What matters to the end user is that it should ship sometime in 2019, and that’s after 13 years of research and development.

That’s right, MAMR was first developed by a Carnegie Mellon professor in 2006 and work has gone on ever since.

The physics of hard drives

Just like semiconductors, hard drives are running into a brick wall called the laws of physics. Every year it gets harder and harder to shrink these devices while cramming more in them at the same time.

Western Digital believes MAMR should enable a roughly 15 percent decline in cost per terabyte, another advantage hard disks hold over SSDs. A hard disk will always be cheaper per terabyte than an SSD because cramming more data into the same space is easier, relatively speaking, for hard drives than for flash memory chips. MAMR and HAMR are expected to enable drive makers to pack as much as 4 terabits per square inch on a platter, well beyond the 1.1 terabits per square inch in today’s drives.
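
As a rough sanity check on those numbers, capacity scales with areal density times recording area. The usable platter area and platter count below are assumptions chosen for illustration, not figures from the article:

    # Back-of-the-envelope capacity from areal density (assumed platter geometry).
    TBITS_PER_SQ_IN = 4.0          # projected MAMR/HAMR areal density (from the article)
    USABLE_SQ_IN_PER_SURFACE = 6.5 # assumed usable recording area per 3.5" platter surface
    SURFACES_PER_PLATTER = 2
    PLATTERS = 8                   # assumed platter count for a helium-filled drive

    bits = TBITS_PER_SQ_IN * 1e12 * USABLE_SQ_IN_PER_SURFACE * SURFACES_PER_PLATTER * PLATTERS
    print(f"~{bits / 8 / 1e12:.0f} TB raw")  # ~52 TB before formatting and overhead

Trim that for spare area, error correction, and formatting overhead and a figure in the neighborhood of the 40TB Western Digital is targeting looks plausible.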

Data growing faster than hard disk capacity

The thing is, data is growing faster than hard disk capacity. According to research from IDC (sponsored by Seagate, it should be noted), by 2025 the global datasphere will grow to 163 zettabytes (a zettabyte is a trillion gigabytes). That’s 10 times the 16.1ZB of data generated in 2016. Much of it will come from Big Data and analytics, especially the Internet of Things (IoT), where sensors will be generating gigabytes of data per second.

And those data sets are so massive, many companies don’t use them all. They dump their accumulated data into what are called data lakes to be processed later, if ever. I’ve seen estimates that as much as 90 percent of collected data goes unused. But it has to sit somewhere, and that’s on a hard disk.

Mind you, that’s just Big Data. Individuals are generating massive quantities of data as well. Online backup and storage vendor BackBlaze, which has seen its profile rise after it began reporting on hard drive failures, uses hundreds of thousands of drives in its data centers. It just placed an order for 100 petabytes worth of disk storage, and it plans to deploy all of it in the fourth quarter of this year. And it has plans for another massive order for Q1. And that’s just one online storage vendor among dozens.

All of that is great news for Western Digital, Seagate and Toshiba — and the sales reps who work on commission.

Original article here.



State Of Machine Learning And AI, 2017

2017-10-01

AI is receiving major R&D investment from tech giants including Google, Baidu, Facebook and Microsoft.

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute published an article summarizing the findings titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. The current rate of AI investment represents threefold growth in external investment since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly and financial services industries. The graphic below illustrates the trends the study team found during their analysis.
  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that only a small percentage (up to 9%) of AI investment came from Venture Capital (VC), Private Equity (PE), and other external funding. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.
  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors favor machine learning startups because code-based start-ups can scale up and add new features quickly. Software-based machine learning startups are preferred over their more cost-intensive machine-based robotics counterparts, which often can’t scale the way their software counterparts do. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.
  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups also got their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and key area of asset, usage, and labor category.
  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors and industries leading in AI adoption today that intend to grow their investments the most in the next three years.
  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that combine strong senior management support for AI initiatives with investments in infrastructure to support its scale and clear business goals achieve profit margins 3 to 15 percentage points higher. Of the over 3,000 business leaders who were interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.
  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up after about 90 seconds of searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would reduce its revenue by $1B annually.

 

Original article here.



Smartphone users on Wi-Fi drive most website traffic

2017-09-26

Smartphones are responsible for significant web traffic growth, and a surprising amount of it is on Wi-Fi, not mobile networks.

Web visits from desktops and tablets have declined dramatically, says Adobe Digital Insights in Adobe Mobile Trends Refresh — Q2 2017.

The device people are using: their smartphone. And the majority of that device’s traffic is arriving via Wi-Fi connections, not mobile networks, the analytics-oriented research firm says. Adobe has been tracking over 150 billion visits to 400 websites and apps since 2015.

The sites these mobile users are visiting are large-organization national news, media and entertainment, and retail — with over 60 percent of those smartphone visits connecting through Wi-Fi.

Major travel, banking and investment, automotive, and insurance company sites are up there, too, with more than 50 percent of their smartphone traffic coming through Wi-Fi instead of via mobile networks.

Cisco bullish on Wi-Fi

Networking equipment vendor Cisco is also bullish on Wi-Fi. In research published in February, it says that by next year, “Wi-Fi traffic will even surpass fixed/wired traffic.” And by 2021, 63 percent of global mobile data traffic will be offloaded onto Wi-Fi networks and not use mobile.

“By 2021, 29 percent of the global IP traffic will be carried by Wi-Fi networks from dual mode [mobile] devices,” Cisco says.

Wi-Fi is also expected to handle mobile network offloading for many future Internet of Things (IoT) devices, the company says.

It seems reports of Wi-Fi’s death, like Mark Twain’s, are greatly exaggerated.

Reasons for Wi-Fi’s hold and domination over mobile include speed, which is often faster than mobile networks, and cost. Wi-Fi costs less for consumers than mobile networks. Expect a reversal, though, if mobile networks get cut-rate enough.

Will 5G displace Wi-Fi?

What will happen when 5G mobile networks come along in or around 2020? Is Wi-Fi’s writing on the wall then? Maybe. For a possible answer, one may need to look at history.

“New cellular technologies with higher speeds than their predecessors tend to have lower offload rates,” Cisco says. That’s because of more capacity and advantageous data limits for the consumer. It’s designed to kick-start the tech, as was the case with 4G’s launch.

In any case, whatever way one looks at it, mobile internet of one kind or another that isn’t fixed is where it’s at. It’s responsible for web traffic growth. People want smartphones for consuming media.

“Bigger screens are losing share,” Adobe says in an article accompanying its report.

U.S. government websites corroborate Adobe’s mobile trend. In an August report, the General Services Administration (GSA) said mobile had grabbed 43 percent of all traffic to government websites in December 2016, compared to 36 percent a year before. It sees even more growth this year, it says.

“Most industries see more than half of their traffic from mobile devices,” Adobe concludes.

Original article here.

 



The Scale of the Internet 2017 (infograph)

2017-09-04

Just a month ago, it was revealed that Facebook has more than two billion active monthly users. That means that in any given month, more than 25% of Earth’s population logs in to their Facebook account at least once.

This kind of scale is almost impossible to grasp.

Here’s one attempt to put it in perspective: imagine Yankee Stadium’s seats packed with 50,000 people, and multiply this by a factor of 40,000.

That’s about how many different people log into Facebook every month worldwide.

A smaller window

The Yankee Stadium analogy sort of helps, but it’s still very hard to picture.

The scale of the internet is so great, that it doesn’t make sense to look at the information on a monthly basis, or even to use daily figures.

Instead, let’s drill down to what happens in just one internet minute:

Created each year by Lori Lewis and Chadd Callahan of Cumulus Media, the above graphic shows the incredible scale of e-commerce, social media, email, and other content creation that happens on the web.

Content competition

If you’ve ever had a post on Facebook or Instagram fizzle out, it’s safe to say that the above proliferation of content in our social feeds is part of the cause.

In a social media universe where there are no barriers to entry and almost infinite amounts of competition, the content game has tilted to become a “winner take all” scenario. Since people don’t have the time to look at the 452,200 tweets sent every minute, they naturally gravitate to the things that already have social proof.

People look to the people they trust to see what’s already being talked about, which is why influencers are more important than ever to marketers.

Eyes on the prize

For those that are able to get the strategy and timing right, the potential spoils are mouthwatering:

The never-ending challenge, however, is how to stand out from the crowd.

Original article here.



Python Tops 2017’s Most Popular Programming Languages

2017-07-26

Trying to decide which programming languages to study, whether prior to college, during it, or in continuing professional development, can have a significant impact on your employment prospects and opportunities thereafter. Given this, periodic efforts have been made to rank the most important and popular languages over time, to give more insight into the best places to focus one’s efforts.

IEEE Spectrum has just put together its fourth interactive list of top programming languages. The group designed the list to allow users to weight their own interests and use-cases independently. You can access the full list and sort it by language type (Web, Mobile, Enterprise, Embedded), fastest growing markets, general trends in usage, and languages popular specifically for open source development. You can also implement your own customized sorting methods.

Programming language rankings and image by IEEE Spectrum

Python has been rising for the past few years, but last year it was as far back as #3, whereas this year it wins overall with a score of 100. Python, C, Java, and C++ round out the top four, all well above 95, while the fifth-place contestant, C# (Microsoft’s own language, developed as part of its .NET framework), sits at a solid 88.6. The drop-off between spots #5 and #10 is never as large as the gap between C++ and C#, and the tenth language, Apple’s Swift, makes the list for the first time with an overall score of 75.3.

Previously popular languages like Ruby have fallen dramatically, which is part of why Swift has had the opportunity to rise. Apple’s predecessor language to Swift, Objective-C, has fallen to 26th place as Apple transitions itself and developers over to the newer language.

The rankings do change somewhat, depending on your market segment. In Embedded, for example, the top five ranks are occupied by C, C++, Arduino, Assembly, and Haskell. In Mobile, the Top 5 are C, Java, C++, C#, and JavaScript. For web development, the Top 5 are Python, Java, C#, JavaScript, and PHP.

How you adjust the languages and focus your criteria, in other words, leads to a fairly different distribution of languages. But while Python may have been IEEE’s overall top choice, it’s not necessarily the best choice if you’re trying to cover a lot of bases or hit broad targets. At least one variant of C is present in the Top 5 of every single category, and multiple categories have C, C++, and C# present in three of the Top 5 (the Web category is anomalous in this regard, as only C# makes it into the Top 5).

IEEE continues to refine its criteria and measurements and has applied these new weightings to the previous year’s results as well. If you want more information on how the company weights data or to see how languages compare year-on-year, all such information is available here.

Original article here.



The tech industry is dominated by 5 big companies — here’s how each makes its money

2017-05-26

More and more, everything crucial about the present and future of consumer tech runs through at least one of five companies: Alphabet, Apple, Facebook, Amazon, and Microsoft.

Smartphones, laptops, app distribution, voice assistants and AI, streaming music and video, cloud computing, online shopping, advertising — whatever it is, chances are it runs through the oligopoly in some way. The list of startups that have been bought by the big five, meanwhile, is almost too long to count.

Each of the five makes great products, to be clear, but it’s hard to deny that they control how tech money flows.

How each of those companies makes its revenue, though, varies wildly. As this recent chart from Visual Capitalist shows, each of the big five holds its empire on the back of a different industry. Google’s parent company Alphabet, for all the dabbling it does, is an online advertising company first and foremost. Facebook is, too. Apple is a hardware company through and through, while everything about Amazon flows from its e-commerce business.

Though it’s still the dominant player in PCs, Microsoft stands out as the only tech giant with diversified sources of revenue. It has Windows, of course, but with the PC market in decline, it’s also getting significant gains from Office, the Azure cloud business, Xbox, Ads, and various other businesses.

Original article here.



New Leader, Trends, and Surprises in Analytics, Data Science, Machine Learning Software Poll

2017-05-24

Python caught up with R and (barely) overtook it; Deep Learning usage surges to 32%; RapidMiner remains top general Data Science platform; Five languages of Data Science.

The 18th annual KDnuggets Software Poll again got huge participation from the analytics and data science community and vendors, attracting about 2,900 voters, almost exactly the same as last year. Here is the initial analysis, with more detailed results to be posted later.

Python, whose share has been growing faster than R for the last several years, has finally caught up with R, and (barely) overtook it, with 52.6% share vs 52.1% for R.

The biggest surprise is probably the phenomenal share of Deep Learning tools, now used by 32% of all respondents, while only 18% used DL in 2016 and 9% in 2015. Google Tensorflow rapidly became the leading Deep Learning platform with 20.2% share, up from only 6.8% in 2016 poll, and entered the top 10 tools.

While in 2014 I wrote about Four main languages for Analytics, Data Mining, Data Science being R, Python, SQL, and SAS, the 5 main languages of Data Science in 2017 appear to be Python, R, SQL, Spark, and Tensorflow.

RapidMiner remains the most popular general platform for data mining/data science, with about 33% share, almost exactly the same as in 2016.

We note that many vendors have encouraged their users to vote, but all vendors had equal chances, so this does not violate KDnuggets guidelines. We have not seen any bot voting or direct links to vote for only one tool this year.

Spark grew to about 23% and kept its place in top 10 ahead of Hadoop.

Besides TensorFlow, another new tool in the top tier is Anaconda, with 22% share.

Top Analytics/Data Science Tools

Fig 1: KDnuggets Analytics/Data Science 2017 Software Poll: top tools in 2017, and their share in the 2015–16 polls

See original full article here.


standard

IoT 2017 Report: How the IoT is improving lives to transform the world

2017-04-05 - By 

The Internet of Things (IoT) is disrupting businesses, governments, and consumers and transforming how they interact with the world. Companies are going to spend almost $5 trillion on the IoT in the next five years — and the proliferation of connected devices and massive increase in data has started an analytical revolution.

To gain insight into this emerging trend, BI Intelligence conducted an exclusive Global IoT Executive Survey on the impact of the IoT on companies around the world. The study included over 500 respondents from a wide array of industries, including manufacturing, technology, and finance, with significant numbers of C-suite and director-level respondents. 

Through this exclusive study and in-depth research into the field, BI Intelligence details the components that make up the IoT ecosystem. We size the IoT market in terms of device installations and investment through 2021. And we examine the importance of IoT providers, the challenges they face, and what they do with the data they collect. Finally, we take a look at the opportunities, challenges, and barriers related to mass adoption of IoT devices among consumers, governments, and enterprises.

Here are some key takeaways from the report:

  • We project that there will be a total of 22.5 billion IoT devices in 2021, up from 6.6 billion in 2016.
  • We forecast there will be $4.8 trillion in aggregate IoT investment between 2016 and 2021.
  • The report highlights the opinions and experiences of IoT decision-makers on topics that include: drivers for adoption; major challenges and pain points; stages of adoption, deployment, and maturity of IoT implementations; investment in and utilization of devices, platforms, and services; the decision-making process; and forward-looking plans.

In full, the report:

  • Provides a primer on the basics of the IoT ecosystem
  • Offers forecasts for the IoT moving forward and highlights areas of interest in the coming years
  • Looks at who is and is not adopting the IoT, and why
  • Highlights drivers and challenges facing companies implementing IoT solutions

To get your copy of this invaluable guide to the IoT, choose one of these options:

  1. Subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND over 100 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> START A MEMBERSHIP
  2. Purchase the report and download it immediately from our research store. >> BUY THE REPORT

The choice is yours. But however you decide to acquire this report, you’ve given yourself a powerful advantage in your understanding of the IoT.

Original article here.


standard

The Smartphone Platform War Is Over

2017-02-22 - By 

While the global smartphone market is as competitive as ever in terms of manufacturers fighting for the consumers’ love (and money), the long-raging platform war appears to be over. According to a recent report by Gartner, Android and iOS now account for more than 99 percent of global smartphone sales, rendering every other platform irrelevant.

As the chart below illustrates, that hasn’t always been the case. Back in 2010, Android and iOS devices accounted for less than 40 percent of global smartphone sales. Back then, devices running Nokia’s Symbian and BlackBerry accounted for a significant portion of smartphone sales, and Microsoft’s market share stood at 4.2 percent.

While Symbian is long extinct and BlackBerry has started transitioning to Android devices, Microsoft has not yet given up on Windows 10 Mobile as a platform aimed at professional users. Whether Windows, or any other platform for that matter, stands a chance against the dominance of Android and iOS at this point seems highly doubtful though.

Original article here.


standard

10 new AWS cloud services you never expected

2017-01-27 - By 

From data scooping to facial recognition, Amazon’s latest additions give devs new, wide-ranging powers in the cloud

In the beginning, life in the cloud was simple. Type in your credit card number and—voilà—you had root on a machine you didn’t have to unpack, plug in, or bolt into a rack.

That has changed drastically. The cloud has grown so complex and multifunctional that it’s hard to jam all the activity into one word, even a word as protean and unstructured as “cloud.” There are still root logins on machines to rent, but there are also services for slicing, dicing, and storing your data. Programmers don’t need to write and install as much as subscribe and configure.

Here, Amazon has led the way. That’s not to say there isn’t competition. Microsoft, Google, IBM, Rackspace, and Joyent are all churning out brilliant solutions and clever software packages for the cloud, but no company has done more to create feature-rich bundles of services for the cloud than Amazon. Now Amazon Web Services is zooming ahead with a collection of new products that blow apart the idea of the cloud as a blank slate. With the latest round of tools for AWS, the cloud is that much closer to becoming a concierge waiting for you to wave your hand and give it simple instructions.

Here are 10 new services that show how Amazon is redefining what computing in the cloud can be.

Glue

Anyone who has done much data science knows it’s often more challenging to collect data than it is to perform analysis. Gathering data and putting it into a standard data format is often more than 90 percent of the job.

Glue is a new collection of Python scripts that automatically crawl your data sources, collect data, apply any necessary transforms, and stick the results in Amazon’s cloud. It reaches into your data sources, snagging data using all the standard acronyms, like JSON, CSV, and JDBC. Once it grabs the data, it can analyze the schema and make suggestions.

The Python layer is interesting because you can use it without writing or understanding Python—although it certainly helps if you want to customize what’s going on. Glue will run these jobs as needed to keep all the data flowing. It won’t think for you, but it will juggle many of the details, leaving you to think about the big picture.
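For a concrete sense of the workflow, here is a minimal sketch using the boto3 SDK, assuming a crawler and ETL job have already been defined in Glue; the crawler, job, and database names are placeholders, and Glue was still in preview when this article ran, so details reflect the service as it later shipped.

```python
import boto3

# Hypothetical names: the crawler, job, and database must already exist in Glue.
glue = boto3.client("glue", region_name="us-east-1")

# Crawl an S3 data source so Glue can infer the schema and register tables.
glue.start_crawler(Name="sales-data-crawler")

# (In practice you would wait for the crawler to finish before reading the catalog.)
# List the tables the crawler discovered in the data catalog.
for table in glue.get_tables(DatabaseName="sales")["TableList"]:
    print(table["Name"], table["StorageDescriptor"]["Location"])

# Kick off the ETL job that applies the generated transforms.
run = glue.start_job_run(JobName="sales-to-parquet")
print("Started job run:", run["JobRunId"])
```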

FPGA

Field Programmable Gate Arrays have long been a secret weapon of hardware designers. Anyone who needs a special chip can build one out of software. There’s no need to build custom masks or fret over fitting all the transistors into the smallest amount of silicon. An FPGA takes your software description of how the transistors should work and rewires itself to act like a real chip.

Amazon’s new AWS EC2 F1 brings the power of FPGAs to the cloud. If you have highly structured and repetitive computing to do, an EC2 F1 instance is for you. With EC2 F1, you can create a software description of a hypothetical chip and compile it down to a configuration of gates that will compute the answer in the shortest amount of time. The only thing faster is etching the transistors in real silicon.

Who might need this? Bitcoin miners compute the same cryptographically secure hash function a bazillion times each day, which is why many bitcoin miners use FPGAs to speed up the search. If you have a similarly compact, repetitive algorithm that can be written into silicon, the FPGA instance lets you rent machines to run it now. The biggest winners are those who need to run calculations that don’t map easily onto standard instruction sets—for example, when you’re dealing with bit-level functions and other nonstandard, nonarithmetic calculations. If you’re simply adding a column of numbers, the standard instances are better for you. But for some, EC2 with FPGAs might be a big win.
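Renting such a machine is an ordinary EC2 call; the sketch below, with a placeholder AMI ID and key pair, shows the general idea of spinning up an F1 instance on demand.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The AMI ID and key pair are placeholders; in practice you would use the
# FPGA Developer AMI, which bundles the tools for building FPGA images.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="f1.2xlarge",   # instance type with one FPGA attached
    KeyName="my-keypair",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", response["Instances"][0]["InstanceId"])
```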

Blox

As Docker eats its way into the stack, Amazon is trying to make it easier for anyone to run Docker instances anywhere, anytime. Blox is designed to juggle the clusters of instances so that the optimum number is running—no more, no less.

Blox is event driven, so it’s a bit simpler to write the logic. You don’t need to constantly poll the machines to see what they’re running. They all report back, so the right number can run. Blox is also open source, which makes it easier to reuse Blox outside of the Amazon cloud, if you should need to do so.

X-Ray

Monitoring the efficiency and load of your instances used to be simply another job. If you wanted your cluster to work smoothly, you had to write the code to track everything. Many people brought in third parties with impressive suites of tools. Now Amazon’s X-Ray is offering to do much of the work for you. It’s competing with many third-party tools for watching your stack.

When a website gets a request for data, X-Ray traces the request as it flows across your network of machines and services. Then X-Ray will aggregate the data from multiple instances, regions, and zones so that you can stop in one place to flag a recalcitrant server or a wedged database. You can watch your vast empire with only one page.
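As a rough illustration, the Python X-Ray SDK lets you wrap units of work in segments and subsegments so they appear in the service map; this sketch assumes the X-Ray daemon is running locally, and the service and subsegment names are placeholders.

```python
# pip install aws-xray-sdk ; traces are shipped via the local X-Ray daemon.
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, ...) so their calls are traced automatically.
patch_all()
xray_recorder.configure(service="order-service")

# Wrap a unit of work in a segment and subsegment so it shows up in the trace map.
segment = xray_recorder.begin_segment("handle_order")
try:
    xray_recorder.begin_subsegment("lookup_inventory")
    # ... call downstream services here; their latency is recorded ...
    xray_recorder.end_subsegment()
finally:
    xray_recorder.end_segment()
```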

Rekognition

Rekognition is a new AWS tool aimed at image work. If you want your app to do more than store images, Rekognition will chew through images searching for objects and faces using some of the best-known and tested machine vision and neural-network algorithms. There’s no need to spend years learning the science; you simply point the algorithm at an image stored in Amazon’s cloud, and voilà, you get a list of objects and a confidence score that ranks how likely the answer is correct. You pay per image.

The algorithms are heavily tuned for facial recognition. They will flag faces, then compare them to each other and to reference images to help you identify them. Your application can store the metadata about the faces for later processing. Once you put a name to the metadata, your app will find people wherever they appear. Identification is only the beginning. Is someone smiling? Are their eyes closed? The service will deliver the answer, so you don’t need to get your fingers dirty with pixels. If you want to use impressive machine vision, Amazon will charge you not by the click but by the glance at each image.
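The calls themselves are short; this hedged boto3 sketch points at a placeholder image in S3 and shows label detection with confidence scores, plus the smile and eyes-open attributes mentioned above.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
# Bucket and key are placeholders for an image already stored in S3.
image = {"S3Object": {"Bucket": "my-photo-bucket", "Name": "team-photo.jpg"}}

# Object and scene detection: each label comes back with a confidence score.
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=80)
for label in labels["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')

# Face detection: attributes include smile and eyes-open estimates.
faces = rekognition.detect_faces(Image=image, Attributes=["ALL"])
for face in faces["FaceDetails"]:
    print("Smiling:", face["Smile"]["Value"], "Eyes open:", face["EyesOpen"]["Value"])
```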

Athena

Working with Amazon’s S3 has always been simple. If you want a data structure, you request it and S3 looks for the part you want. Amazon’s Athena now makes it much simpler. It will run the queries on S3, so you don’t need to write the looping code yourself. Yes, we’ve become too lazy to write loops.

Athena uses SQL syntax, which should make database admins happy. Amazon will charge you for every byte that Athena churns through while looking for your answer. But don’t get too worried about the meter running out of control, because the price is only $5 per terabyte scanned. That’s roughly half a billionth of a cent per byte. It makes the penny candy stores look expensive.
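A rough boto3 sketch of the workflow (start a query, poll for completion, read the results), assuming a table has already been registered over data in S3; the database, table, and bucket names are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder database and table over data sitting in S3.
execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM weblogs.requests GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Athena is asynchronous: poll until the query finishes (pricing is per byte scanned).
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

# Print each result row (the first row returned is the column headers).
for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]:
    print([col.get("VarCharValue") for col in row["Data"]])
```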

Lambda@Edge

The original idea of a content delivery network was to speed up the delivery of simple files like JPG images and CSS files by pushing out copies to a vast array of content servers parked near the edges of the Internet. Amazon is taking this a step further by letting us push Node.js code out to these edges, where it will run and respond. Your code won’t sit on one central server waiting for requests to poke along the backbone from people around the world. It will clone itself, so it can respond in microseconds without being impeded by all that network latency.

Amazon will bill your code only when it’s running. You won’t need to set up separate instances or rent out full machines to keep the service up. It is currently in a closed test, and you must apply to get your code in their stack.
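Lambda@Edge launched with Node.js as its only runtime; Python handlers came later. Purely as an illustration of the idea, here is a minimal viewer-request handler (the mobile-routing logic is hypothetical) that runs at the edge and modifies the request before CloudFront fetches content.

```python
def handler(event, context):
    # CloudFront hands the edge function the request it received from the viewer.
    request = event["Records"][0]["cf"]["request"]

    # Illustrative logic: route mobile clients to a lighter page variant,
    # decided at the edge without a round trip to the origin.
    headers = request.get("headers", {})
    user_agent = headers.get("user-agent", [{}])[0].get("value", "")
    if "Mobile" in user_agent:
        request["uri"] = "/mobile" + request["uri"]

    # Returning the (possibly modified) request lets CloudFront carry on.
    return request
```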

Snowball Edge

If you want some kind of physical control of your data, the cloud isn’t for you. The power and reassurance that comes from touching the hard drive, DVD-ROM, or thumb drive holding your data isn’t available to you in the cloud. Where is my data exactly? How can I get it? How can I make a backup copy? The cloud makes anyone who cares about these things break out in cold sweats.

The Snowball Edge is a box filled with data that can be delivered anywhere you want. It even has a shipping label that’s really an E-Ink display exactly like Amazon puts on a Kindle. When you want a copy of massive amounts of data that you’ve stored in Amazon’s cloud, Amazon will copy it to the box and ship the box to wherever you are. (The documentation doesn’t say whether Prime members get free shipping.)

Snowball Edge serves a practical purpose. Many developers have collected large blocks of data through cloud applications, and downloading these blocks across the open internet is far too slow. If Amazon wants to attract large data-processing jobs, it needs to make it easier to get large volumes of data out of the system.

If you’ve accumulated an exabyte of data that you need somewhere else for processing, Amazon has a bigger version called Snowmobile that’s built into an 18-wheel truck complete with GPS tracking.

Oh, it’s worth noting that the boxes aren’t dumb storage boxes. They can run arbitrary Node.js code too so you can search, filter, or analyze … just in case.

Pinpoint

Once you’ve amassed a list of customers, members, or subscribers, there will be times when you want to push a message out to them. Perhaps you’ve updated your app or want to convey a special offer. You could blast an email to everyone on your list, but that’s a step above spam. A better solution is to target your message, and Amazon’s new Pinpoint tool offers the infrastructure to make that simpler.

You’ll need to integrate some code with your app. Once you’ve done that, Pinpoint helps you send out the messages when your users seem ready to receive them. Once you’re done with a so-called targeted campaign, Pinpoint will collect and report data about the level of engagement with your campaign, so you can tune your targeting efforts in the future.
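Under the hood, the sending side is a straightforward API; this hedged boto3 sketch sends a single transactional SMS, with the project ID and phone number as placeholders (real campaigns are defined against segments rather than sent one address at a time).

```python
import boto3

pinpoint = boto3.client("pinpoint", region_name="us-east-1")

# The Pinpoint project (application) ID and phone number are placeholders.
response = pinpoint.send_messages(
    ApplicationId="<pinpoint-project-id>",
    MessageRequest={
        "Addresses": {"+15555550123": {"ChannelType": "SMS"}},
        "MessageConfiguration": {
            "SMSMessage": {
                "Body": "Your order has shipped!",
                "MessageType": "TRANSACTIONAL",
            }
        },
    },
)
print(response["MessageResponse"]["Result"])
```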

Polly

Who gets the last word? Your app can, if you use Polly, the latest generation of speech synthesis. In goes text and out comes sound—sound waves that form words that our ears can hear, all the better to make audio interfaces for the internet of things.
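The round trip really is that short; a minimal boto3 sketch (voice and filename chosen arbitrarily) sends text in and writes the returned audio stream to a file.

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Text in, audio out; VoiceId selects one of Polly's built-in voices.
response = polly.synthesize_speech(
    Text="Your build finished successfully.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio arrives as a stream; write it wherever your device expects it.
with open("notification.mp3", "wb") as audio_file:
    audio_file.write(response["AudioStream"].read())
```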

Original article here.


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others to access vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.

 


standard

5 trends that will change the way you work in 2017 (videos)

2017-01-01 - By 

Robots are going to take a seat at the conference room table in 2017.

Humans are going to be more stressed than ever.

And to stay competitive with their new robot colleagues, workers are going to start taking smart drugs.

That’s according to futurist Faith Popcorn, the founder and CEO of the consultancy Faith Popcorn’s BrainReserve. Since launching in 1974, she has helped Fortune 500 companies including MasterCard, Coca-Cola, P&G and IBM.

Here are five trends you can expect to see in the workplace in 2017, according to Popcorn.

1.Coffee alone won’t keep you competitive.

Employees are going to start taking a burgeoning class of cognitive enhancers called nootropics, or “smart drugs.” These nutritional supplements don’t all have the same ingredients but they reportedly increase physical and mental stamina.

Silicon Valley has been an early adopter of the bio-hacking trend. That’s perhaps unsurprising, as techies were also the first to try the likes of food substitute Soylent. There’s an active sub-reddit page dedicated to the topic.

Nootropics will go mainstream in 2017 because “the robots are edging us out,” says Popcorn. “When you come to work you have to be enhanced, you have to be on the edge, you have to be able to work longer and harder. You have to be able to become more important to your company.”

 

2.Robots will rise.

Unskilled blue-collar workers will be the first to lose their jobs to automation, but robots will eventually replace white-collar workers, too, says Popcorn, pointing to an Oxford University study that found 47 percent of U.S. jobs are at risk of being replaced.

“Who would you rather have do your research? A cognitive computer or a human?” says Popcorn. “Human error is a disaster. … Robots don’t make mistakes.”

 

3.Everyone will start doing the hustle.

Already, more than a third of the U.S. workforce are freelancers and will generate an estimated $1 trillion in revenue, according to a survey released earlier this fall by the Freelancers Union and the freelancing platform Upwork. The percentage of freelancers will increase in 2017 and beyond, she believes. “It’s accelerating every year,” says Popcorn.

She also points to some large companies that are building offices with fewer seats than employees. Citibank built an office space in Long Island City, Queens, with 150 seats for 200 employees and no assigned desks to encourage a fluid-feeling environment.

And Popcorn points to the rise of the side hustle: People “need more money than they are being paid,” she says. And they don’t trust their employers. “People are saying, ‘I want to have two or three hooks in the water. I don’t want to devote myself to one company.'”

Younger employees in particular are not interested in working for large, legacy companies like those their parents worked for, according to research Popcorn has done. “We are really turned off on ‘big.'”

 

4.There will be tears.

While people have always been emotional beings, historically emotions haven’t belonged inside the office. That’s basically because workplaces have largely been run by men. But that’s changing.

“The female entry into the workplace has brought emotional intelligence into the workplace and that comes with emotion,” says Popcorn. “There is a lot of anxiety about the future, there is a lot of stress-related burnout and we are seeing more emotion being displayed in the workplace.”

That doesn’t mean you should start crying on your boss’s shoulder, though. Especially if your boss is male. While women tend to be more comfortable with their feelings, men are still uncomfortable with elevated levels of emotion, says Popcorn, admitting that these gender-based observations are generalizations.

“WE ARE SEEING MORE EMOTION BEING DISPLAYED IN THE WORKPLACE.”

-Faith Popcorn, futurist

Going forward, the futurist expects to see more stress rooms in office buildings and “more of a recognition that people are living under a crushing amount of anxiety.” A stress room would be a welcoming space for employees to go to take a break and perhaps drink kava, a relaxing, root-based tea.

Open floor plans don’t give employees any place to breathe, Popcorn points out: “It’s like being watched 24/7.” Employees put in earbuds to approximate privacy, but sitting in open spaces is not conducive to employee mental health. “It is very stressful to work in the open floors,” she says. “It’s good for real estate, you can do it with fewer square feet, but it’s not particularly good for people.”

5.The boundary between work and play will crumble.

“People are going to be working 24 hours a day,” says Popcorn. Technology has enabled global, constant communication. The WeLive spaces that WeWork launched are indicative of this trend towards work and life integration, she says. “There is no line between work and play.”

 

 

Original article here.


standard

Tech trends for 2017: more AI, machine intelligence, connected devices and collaboration

2016-12-30 - By 

The end of year or beginning of year is always a time when we see many predictions and forecasts for the year ahead. We often publish a selection of these to show how tech-based innovation and economic development will be impacted by the major trends.

A number of trend reports and articles have been published – ranging from investment houses, to research firms, and even innovation agencies. In this article we present headlines and highlights of some of these trends – from Gartner, GP Bullhound, Nesta and Ovum.

Artificial intelligence will have the greatest impact

GP Bullhound released its 52-page research report, Technology Predictions 2017, which says artificial intelligence (AI) is poised to have the greatest impact on the global technology sector. AI will see widespread consumer adoption, particularly as virtual personal assistants such as Apple Siri and Amazon Alexa grow in popularity, as well as increasing enterprise use through the automation of repetitive data-driven tasks.

Online streaming and e-sports are also significant market opportunities in 2017 and there will be a marked growth in the development of content for VR/AR platforms. Meanwhile, automated vehicles and fintech will pose longer-term growth prospects for investors.

The report also examines the growth of Europe’s unicorn companies. It highlights the potential for several firms to reach a $10 billion valuation and become ‘decacorns’, including BlaBlaCar, Farfetch, and HelloFresh.

Alec Dafferner, partner, GP Bullhound, commented, “The technology sector has faced up to significant challenges in 2016, from political instability through to greater scrutiny of unicorns. This resilience and the continued growth of the industry demonstrate that there remain vast opportunities for investors and entrepreneurs.”

Big data and machine learning will be disruptors

Advisory firm Ovum says big data continues to be the fastest-growing segment of the information management software market. It estimates the big data market will grow from $1.7bn in 2016 to $9.4bn by 2020, comprising 10 percent of the overall market for information management tooling. Its 2017 Trends to Watch: Big Data report highlights that while the breakout use case for big data in 2017 will be streaming, machine learning will be the factor that disrupts the landscape the most.

Key 2017 trends:

  • Machine learning will be the biggest disruptor for big data analytics in 2017.
  • Making data science a team sport will become a top priority.
  • IoT use cases will push real-time streaming analytics to the front burner.
  • The cloud will sharpen Hadoop-Spark ‘co-opetition’.
  • Security and data preparation will drive data lake governance.

Intelligence, digital and mesh

In October, Gartner issued its top 10 strategic technology trends for 2017, and recently outlined the key themes – intelligent, digital, and mesh – in a webinar.  It said that autonomous cars and drone transport will have growing importance in the year ahead, alongside VR and AR.

“It’s not about just the IoT, wearables, mobile devices, or PCs. It’s about all of that together,” said David Cearley, vice president and Gartner Fellow, according to hiddenwires magazine, explaining how ‘intelligence everywhere’ will put the consumer in charge. “We need to put the person at the center. Ask yourself what devices and service capabilities they have available to them.”

“We need to then look at how you can deliver capabilities across multiple devices to deliver value. We want systems that shift from people adapting to technology to having technology and applications adapt to people.  Instead of using forms or screens, I tell the chatbot what I want to do. It’s up to the intelligence built into that system to figure out how to execute that.”

Gartner’s view is that the following will be the key trends for 2017:

  • Artificial intelligence (AI) and machine learning: systems that learn, predict, adapt and potentially operate autonomously.
  • Intelligent apps: using AI, there will be three areas of focus — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces.
  • Intelligent things, as they evolve, will shift from stand-alone IoT devices to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks.
  • Virtual and augmented reality: VR can be used for training scenarios and remote experiences. AR will enable businesses to overlay graphics onto real-world objects, such as hidden wires on the image of a wall.
  • Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
  • Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming industry operating models in industries such as music distribution, identity verification and title registry.
  • Conversational systems will shift from a model where people adapt to computers to one where the computer ‘hears’ and adapts to a person’s desired outcome.
  • Mesh and app service architecture is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs (application programming interfaces) and events to deliver modular, flexible and dynamic solutions.
  • Digital technology platforms: every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the internet of things and business ecosystems.
  • Adaptive security architecture: multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

The real-world vision of these tech trends

UK innovation agency Nesta also offers a vision for the year ahead, a mix of the plausible and the more aspirational, based on real-world examples of areas that will be impacted by these tech trends:

  • Computer says no: the backlash: the next big technological controversy will be about algorithms and machine learning, which increasingly make decisions that affect our daily lives; in the coming year, the backlash against algorithmic decisions will begin in earnest, with technologists being forced to confront the effects of aspects like fake news, or other events caused directly or indirectly by the results of these algorithms.
  • The Splinternet: 2016’s seismic political events and the growth of domestic and geopolitical tensions, means governments will become wary of the internet’s influence, and countries around the world could pull the plug on the open, global internet.
  • A new artistic approach to virtual reality: as artists blur the boundaries between real and virtual, the way we create and consume art will be transformed.
  • Blockchain powers a personal data revolution: there is growing unease at the way many companies like Amazon, Facebook and Google require or encourage users to give up significant control of their personal information; 2017 will be the year when the blockchain-based hardware, software and business models that offer a viable alternative reach maturity, ensuring that it is not just companies but individuals who can get real value from their personal data.
  • Next generation social movements for health: we’ll see more people uniting to fight for better health and care, enabled by digital technology, and potentially leading to stronger engagement with the system; technology will also help new social movements to easily share skills, advice and ideas, building on models like Crohnology where people with Crohn’s disease can connect around the world to develop evidence bases and take charge of their own health.
  • Vegetarian food gets bloodthirsty: the past few years have seen growing demand for plant-based food to mimic meat; the rising cost of meat production (expected to hit $5.2 billion by 2020) will drive kitchens and laboratories around the world to create a new wave of ‘plant butchers’, who develop vegan-friendly meat substitutes that would fool even the most hardened carnivore.
  • Lifelong learners: adult education will move from the bottom to the top of the policy agenda, driven by the winds of automation eliminating many jobs from manufacturing to services and the professions; adult skills will be the keyword.
  • Classroom conundrums, tackled together: there will be a future-focused rethink of mainstream education, with collaborative problem solving skills leading the charge, in order to develop skills beyond just coding – such as creativity, dexterity and social intelligence, and the ability to solve non-routine problems.
  • The rise of the armchair volunteer: volunteering from home will become just like working from home, and we’ll even start ‘donating’ some of our everyday data to citizen science to improve society as well; an example of this trend was when British Red Cross volunteers created maps of the Ebola crisis in remote locations from home.

In summary

It’s clear that there is an expectation that the use of artificial intelligence and machine learning platforms will proliferate in 2017 across multiple business, social and government spheres. This will be supported with advanced tools and capabilities like virtual reality and augmented reality. Together, there will be more networks of connected devices, hardware, and data sets to enable collaborative efforts in areas ranging from health to education and charity. The Nesta report also suggests that there could be a reality check, with a possible backlash against the open internet and the widespread use of personal data.

Original article here.


standard

2016 was a Big year in storage

2016-12-29 - By 

Looking back is more than nostalgia. It helps us see what has changed and what hasn’t, and where we might improve. 2016 has been a momentous year for storage. Here are the top stories.

LEGACY VENDORS AND THE CLOUD

The cloud has devastated the revenue, growth and margins of legacy vendors. Any CFO can look online to see what similar capacity and performance, with higher availability, cost in the cloud, compared to the huge capital costs of traditional RAID arrays.

2016 saw the world’s largest independent storage company — EMC — bought by Dell, after shopping itself to all the big system vendors. The $60 billion price tag was excessive given the rapid obsolescence of much of EMC’s intellectual property, but a worthy capstone to CEO Joe Tucci’s brilliant leadership of the storage giant.

Tucci saw what many other CEOs denied, which is that the scale-out commodity-based storage systems and the internalization of storage have forever changed the storage industry. EMC needed a system partner to leverage their storage expertise, and Dell needed a robust enterprise sales force.

NetApp acquired SolidFire, a promising flash array vendor, a deal that finally got it into the highest-growth area of legacy storage. Plagued by years of flash misfires and infighting, NetApp has done well in the new market, but has had to lay off thousands of employees.

NetApp is touting their integration with Amazon Web Services — cloud — but that is a rear guard action as cloud vendors gobble up more enterprise dollars. Their next big problem: object storage systems are getting faster, offer much better data protection, are much more scalable, and more cost-effective than NetApp’s flagship NAS boxes. I hope their CEO, George Kurian, recognizes the threat and acts decisively in 2017.

LEGACY VENDORS AND THE UPSTARTS

Legacy vendors are getting squeezed between the cloud and aggressive storage startups. Companies like Nimble Storage, Nutanix, and Pure Storage offer modern architectures that leave the RAID paradigm in the dust. All three had successful IPOs, and now have the money to bring the fight to the legacy vendors.

Other startups have been acquired by legacy vendors to remake their product lines. DSSD, supported by Silicon Valley legend Andreas Bechtolsheim, was bought by EMC a couple of years ago. NetApp acquired SolidFire this year. HGST acquired Amplidata last year and is making a solid play for the active archive market. The storage startup scene continues to boil.

NON-VOLATILE RAM

NVRAM is the Next Big Thing for servers and notebooks, as support from Intel and Microsoft shows. Some versions — there are around 10 — are almost as fast as DRAM, but use much less power and are much denser. Terabyte DIMMs, anyone? Big Data will especially benefit from high capacity NVRAM equipped servers.

2016 was supposed to be the year that Intel introduced its NVRAM-based 3D XPoint Optane drives, but like many ambitious engineering projects, they’ve slipped into 2017, and that may be one reason the recent MacBook Pros were delayed. But Intel isn’t the only player, and certainly isn’t the first to market.

MRAM vendor Everspin IPO’d this year, raising funds needed to further enhance their NVRAM line. Nantero licensed their NVRAM to a couple of major fabs, putting their carbon nanotube technology on the fast track.

THUNDERBOLT 3

I’ve been a happy Thunderbolt 1 user for years. It’s a great technology that is fast, stable, and low-cost.

2016 saw it get even better, now that one Thunderbolt 3 connector supports 40 Gbit/s of bandwidth with half the power consumption of Thunderbolt 2. That’s enough bandwidth to drive dual 4K displays at 60 Hz, PCIe 3.0, HDMI 2.0, DisplayPort 1.2, as well as 10 Gbit/s USB 3.1. Plus up to 100 watts of power to charge systems and up to 15 W for bus-powered devices.

Using newly available and cheap PCIe switches, Thunderbolt 3 can be stretched to build large clusters at low prices. We’ll see more of that in 2017. On notebooks it offers performance and connectivity undreamed of 10 years ago. External drives with gigabyte per second performance are already here, with more on the way.

THE STORAGE BITS TAKE

I’ve been involved with storage for over 35 years, starting when a disk drive cost $40 a megabyte. For the last 15 years the industry has been on an innovation spree that has upended many companies and delivered incredible capabilities.

Storage is the basis of our digital world. Given the crisis of a post-fact world, I take comfort in the fact that a $100 billion plus industry is working hard to store and protect the data that is critical to the challenges humanity faces.

Original article here.


standard

AWS 2016 Hot Startups Review

2016-12-18 - By 

It is the end of 2016! Tina Barr has a great roundup of all the startups we featured this year. Check it out and see if you managed to read about them all, then come back in January when we start all over again.

Also- I wanted to thank Elsa Mayer for her hard work in helping out with the startup posts.

-Ana

What a year it has been for startups! We began the Hot Startups series in March as a way to feature exciting AWS-powered startups and the motivation behind the products and services they offer. Since then we’ve featured 27 startups across a wide range of industries including healthcare, commerce, social, finance, and many more. Each startup offers a unique product to its users – whether it’s an entertaining app or website, an educational platform, or a product to help businesses grow, startups have the ability to reach customers far and wide.

The startups we showcased this year are headquartered all over the world. Check them out on the map below!

In case you missed any of the posts, here is a list of all the hot startups we featured in 2016:

March

  • Intercom – One place for every team in an Internet business to see and talk
    to customers, personally, at scale.
  • Tile – A popular key locator product that works with an app to help people find their stuff.
  • Bugsnag – A tool to capture and analyze runtime errors in production web and mobile applications.
  • DroneDeploy  – Making the sky productive and accessible for everyone.

April

  • Robinhood – Free stock trading to democratize access to financial markets.
  • Dubsmash – Bringing joy to communication through video.
  • Sharethrough – An all-in-one native advertising platform.

June

  • Shaadi.com – Helping South Asians to find a companion for life.
  • Capillary – Boosting customer engagement for e-commerce.
  • Monzo – A mobile-first bank.

July

  • Depop – A social mobile marketplace for artists and friends to buy and sell products.
  • Nextdoor – Building stronger and safer neighborhoods through technology.
  • Branch – Provides free deep linking technology for mobile app developers to gain and retain users.

August

  • Craftsvilla – Offering a platform to purchase ethnic goods.
  • SendBird – Helping developers build 1-on-1 messaging and group chat quickly.
  • Teletext.io – A solution for content management, without the system.
  • Wavefront – A cloud-based analytics platform.

September

  • Funding Circle – The leading online marketplace for business loans.
  • Karhoo – A ride comparison app.
  • nearbuy – Connecting customers and local merchants across India.

October

  • Optimizely – Providing web and mobile A/B testing for the world’s leading brands.
  • Touch Surgery – Building technologies for the global surgical community.
  • WittyFeed – Creating viral content.

November

  • AwareLabs – Helping small businesses build smart websites.
  • Doctor On Demand – Delivering fast, easy, and cost-effective access to top healthcare providers.
  • Starling Bank – Mobile banking for the next generation.
  • VigLink – Powering content-driven commerce.

Thank you for keeping up with us as we shared these startups’ amazing stories throughout the year. Be sure to check back here in January for our first hot startups of 2017!

-Tina Barr

Original article here.


standard

Gartner’s Top 10 Strategic Technology Trends for 2017

2016-12-05 - By 

Artificial intelligence, machine learning, and smart things promise an intelligent future.

Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.

Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.

Intelligent

AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application. Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply executing predefined instructions, is the primary battleground for technology vendors through at least 2020.

Trend No. 1: AI & Advanced Machine Learning

AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs.  The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.

In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value, and consider experimenting with one or two high-impact scenarios.
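As a toy illustration of the kind of model Gartner is describing, not a production fraud system, here is a minimal scikit-learn sketch that scores transactions by their estimated probability of being fraudulent; the features and data are entirely made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: amount, seconds since previous transaction, foreign-merchant flag.
X_train = np.array([
    [25.0, 86400, 0],
    [9000.0, 30, 1],
    [40.0, 3600, 0],
    [7500.0, 45, 1],
])
y_train = np.array([0, 1, 0, 1])  # 1 = known fraudulent

model = LogisticRegression().fit(X_train, y_train)

# Score a new real-time transaction by its probability of being fraudulent.
new_txn = np.array([[8200.0, 20, 1]])
print("Fraud probability:", model.predict_proba(new_txn)[0, 1])
```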

Trend No. 2: Intelligent Apps

Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and its users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category from security tooling to enterprise applications such as marketing or ERP will be infused with AI enabled capabilities.  Using AI, technology providers will focus on three areas — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.

Trend No. 3: Intelligent Things

New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business, but these represent only one facet of intelligent things. Existing things, including IoT devices, will become intelligent things, delivering the power of AI-enabled systems everywhere, including the home, office, factory floor, and medical facility.

As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.

Digital

The lines between the digital and physical world continue to blur creating new opportunities for digital businesses.  Look for the digital world to be an increasingly detailed reflection of the physical world and the digital world to appear as part of the physical world creating fertile ground for new business models and digitally enabled ecosystems.

Trend No. 4: Virtual & Augmented Reality

Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems creating an immersive environment.  For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall.  Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models.  Over time AR and VR expand beyond visual immersion to include all human senses.  Enterprises should look for targeted applications of VR and AR through 2020.

Trend No. 5: Digital Twin

Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals. Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.

Trend No. 6: Blockchain

Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or another token) are sequentially grouped into blocks. Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming industry operating models in industries such as music distribution, identity verification and title registry. They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain. While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases, and significant technology challenges exist.

Mesh

The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems.  As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.

Trend No. 7: Conversational Systems

Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect.  Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.  Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).

Trend No. 8: Mesh App and Service Architecture

The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs and events to deliver modular, flexible and dynamic solutions.  Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long term architectural shift that requires significant changes to development tooling and best practices.

Trend No. 9: Digital Technology Platforms

Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the Internet of Things and business ecosystems. In particular new platforms and services for IoT, AI and conversational systems will be a key focus through 2020.   Companies should identify how industry platforms will evolve and plan ways to evolve their platforms to meet the challenges of digital business.

Trend No. 10: Adaptive Security Architecture

The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions.  Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

Original article here.


standard

2016’s Top Tech Billionaires (Infographic)

2016-11-13 - By 

From Mark Zuckerberg to Larry Ellison, these leaders have proven to the world that anything is possible.

From developments in robotics to an influx of self-driving technology, 2016 has been an exciting year. And with the proliferation of tech in our daily lives, it’s no wonder that some of the world’s top billionaires come from the tech space.

From Mark Zuckerberg to Larry Ellison, these leaders have proven to the world that anything is possible. But with so many innovators in the tech industry — who are the richest?

To find out, check out ERS IT Solutions’ 2016’s Top Tech Billionaires infographic below.

 

Original article here.

 


standard

We need universal basic income because robots will take all the jobs – Musk

2016-11-12 - By 

We may need to pay people just to live in an automated world, says space biz baron.

Elon Musk reckons the robot revolution is inevitable and it’s going to take all the jobs.

For humans to survive in an automated world, he said that governments are going to be forced to bring in a universal basic income—paying each citizen a certain amount of money so they can afford to survive. According to Musk, there aren’t likely to be any other options.

“There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” he told CNBC in an interview. “Yeah, I am not sure what else one would do. I think that is what would happen.”

The idea behind universal basic income is to replace all the different sources of welfare, which are hard to administer and come with policing costs. Instead, the government gives everyone a lump sum each month—the size of which would vary depending on political beliefs—and they can spend it however they want.

Switzerland, a country with high wages and high employment, recently held a referendum on giving its people 2,500 Swiss francs (£2,065) per month, plus 625 francs (£516) per child. It was ultimately rejected by a wide margin by the country’s fairly conservative electorate, who generally thought it would give people too much for free.

President Obama has also floated the idea in a confab with Wired: “Whether a universal income is the right model—is it gonna be accepted by a broad base of people?—that’s a debate that we’ll be having over the next 10 or 20 years.”

Robots have already replaced numerous blue collar manufacturing jobs, and are taking over more and more warehousing and logistics roles. Some—perhaps prematurely—are fretting about future AIs being developed to replace professions such as doctors and lawyers. Already, moves are being made in that direction, with chatbots which can get people off parking tickets, and an AI that can predict cases at the European Court of Human Rights. Doctors should be looking over their shoulders, too.

Musk isn’t necessarily downbeat on the automated future, however. He thinks that in the future “people will have time to do other things, more complex things, more interesting things,” and they’ll “certainly have more leisure time.” And then, he added, “we gotta figure how we integrate with a world and future with a vast AI.”

“Ultimately,” he said, “I think there has to be some improved symbiosis with digital super intelligence.”

Original article here.


standard

Mobile to account for 75% of internet use in 2017

2016-11-01 - By 

Global smartphone penetration is driving up mobile as the primary mode of accessing the internet, according to a new report from mobile ad firm Zenith. The firm projects that at the current rate of growth, mobile devices will account for 75% of all internet use in 2017 globally and 79% in 2018.

As mobile becomes increasingly important to the online experience, brands and businesses will need to ensure their mobile offering is optimized for the user.   

The main drivers behind the projected growth of mobile internet are rapid growth of smartphone ownership, access to faster internet, and the increasing popularity of large-screen devices. 

  • Already, smartphone penetration — the share of the population with smartphones — currently sits at 56% globally, up 33 percentage points in just four years, notes Zenith. Smartphones have become more accessible to consumers in emerging and mature markets alike as the cost of high-performing devices continues to decline. 
  • 4G is becoming more readily available in a greater number of markets globally. 4G supports larger amounts of data than 3G and 2G at faster speeds, giving users the ability to spend more time in richer media. Globally, 4G subscriptions are projected to grow at an annualized rate of 25% between 2015 and 2021, to reach 4.3 billion subscriptions, according to Ericsson (a rough sanity check of that projection follows this list). 
  • Phablets — smartphones with screens 5.1 inches and larger — are quickly growing in popularity. By 2017, phablets are expected to account for more than half of all smartphones shipped globally, according to Flurry. The large-screen smartphones are proving increasingly popular as mobile user behavior shifts toward more visual-heavy activities such as online video and gaming. 
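As a rough sanity check on the Ericsson projection above, and assuming "annualized rate of 25%" means compound annual growth over the six years from 2015 to 2021, the implied 2015 starting point works out to roughly 1.1 billion subscriptions:

```python
# Back-of-the-envelope check on the quoted 4G projection.
# Assumption: 25% is a compound annual growth rate over 2015-2021.
target_2021 = 4.3e9          # projected 4G subscriptions in 2021
growth = 1.25                # 25% per year
years = 2021 - 2015

implied_2015_base = target_2021 / growth ** years
print(f"implied 2015 base: {implied_2015_base / 1e9:.2f} billion subscriptions")
# ~1.1 billion, which then compounds to ~4.3 billion by 2021.
```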

The forecast underscores the importance of brands and businesses ensuring that their mobile strategy is mobile-first. To best reach consumers, brands need to focus on where consumers are spending their time.

This doesn’t necessitate building an app per se, but making sure that the mobile website supports a solid user experience, rather than being a miniature version of the desktop offering. This includes creating advertisements optimized for the mobile experience. Zenith projects that mobile will overtake desktop’s share of internet advertising in 2017, growing further to make up 60% of all internet advertising by 2018. 

Providing further support for the overall shift toward mobile, Google split its search indexes last month between mobile and desktop in order to provide more accurate and mobile-relevant search results. This means that businesses without a mobile experience could miss out on a massive chunk of internet users in the future.  

Jessica Smith, a research analyst at BI Intelligence, has compiled a detailed report on mobile marketing that takes a close look at the different tactics being used today, spanning legacy mobile technologies like SMS to emerging capabilities like beacon-aided location-based marketing. The report also identifies some of the most useful mobile marketing technologies that mobile marketers are putting to good use as parts of larger strategies.

Here are some key takeaways from the report:

  • As consumers spend more time on their mobile devices, marketing campaigns are following suit. Mobile ad spend continues to lag mobile time spent, providing an opportunity for creative marketers.
  • Marketers should leverage different mobile tactics depending on the size and demographics of the audience they want to reach and the type of message they want to send. With all tactics, marketers need to respect the personal nature of the mobile device and pay attention to the potential for communication overload.
  • Mobile messaging — particularly SMS and email — has the broadest reach and highest adoption among mobile users. Messaging apps, relative newcomers but gaining fast in popularity, offer more innovative and engaging outreach options.
  • Emerging technology, such as dynamic creative optimization, is breathing new life into mobile browser-based ad campaigns, but marketers should keep an eye on consumer adoption of mobile ad blockers.
  • In-app advertising can generate high engagement rates, especially with video. Location-based apps and beacons offer additional data that can enhance targeting capabilities.

In full, the report:

  • Identifies the major mobile technologies being used to reach consumers.
  • Sizes up the potential reach and potential of each of these mobile technologies.
  • Presents an example of a company or brand that has successfully leveraged that mobile technology to reach consumers.
  • Assesses the efficacy of each approach.
  • Examines the potential pitfalls and other shortcomings of each mobile technology.

To get your copy of this invaluable guide to the world of mobile marketing, choose one of these options:

  1. Subscribe to an ALL-ACCESS Membership with BI Intelligence and gain immediate access to this report AND over 100 other expertly researched deep-dive reports, subscriptions to all of our daily newsletters, and much more. >> START A MEMBERSHIP
  2. Purchase the report and download it immediately from our research store. >> BUY THE REPORT

The choice is yours. But however you decide to acquire this report, you’ve given yourself a powerful advantage in your understanding of how mobile marketing is rapidly evolving.

Original article here.


standard

Can Your Startup Answer These 23 Pitch Competition Questions?

2016-10-11 - By 

Asked and answered. These real-life pitch questions from Steve Case’s Rise of the Rest tour can help give you an edge on your next pitch.

Pitch competitions are a reality of startup life, as common as coffee mugs that say “Hustle Harder” or thought leaders expounding on the need for “grit.”

Still, even the smartest entrepreneur isn’t always ready for what competition judges might ask. During Steve Case’s Rise of the Rest tour, a seven-city road trip across the U.S. highlighting entrepreneurs outside the major startup hubs, founders in Phoenix participated in their own mock pitch competition, allowing them to practice and polish their answers.  

We’ve collected a curated selection of questions asked during the competition, some more often than others.

To prepare for your next competition, start with these potential questions:

Goals

1. What’s your top priority in the next six months? What metric are you watching the most closely?
2. What’s your exit strategy?

The basics

3. How does your product/service work?
4. Who is your customer?
5. Do you have contracts and if so, how often do they renew?

The team

6. Why is your team the team to bring this to market?
7. You say you’ll have 100 staffers in five years. You have six now. What will those new staffers do?

Advantages

8. Why is your product/service better than what’s already on the market?
9. Who are your competitors?
10. Do you have a patent?


Partnerships

11. If you win the investment, what would that partnership look like?
12. You’ve secured a strategic partnership. Is that partnership exclusive? And if not, is that a liability?

Growth

13. What’s your barrier to capacity?
14. What’s your expansion strategy?

Pricing and revenue

15. How much of your revenue is from upsells? And how do you see that changing over time?
16. Everyone says they can monetize the data they collect. What’s your plan?
17. Can you explain your revenue model?
18. What’s your margin?
19. Are you charging too little?
20. Are you charging too much?

What’s ahead

21. How will you get to 1 million users?
22. Is this trend sustainable?
23. What regulatory approvals do you need and how have you progressed so far?

Original article here.


standard

The Race For AI: Google, Twitter, Intel, Apple In A Rush To Grab Artificial Intelligence Startups

2016-10-10 - By 

Nearly 140 private companies working to advance artificial intelligence technologies have been acquired since 2011, with over 40 acquisitions taking place in 2016 alone (as of 10/7/2016). Corporate giants like Google, IBM, Yahoo, Intel, Apple and Salesforce are competing in the race to acquire private AI companies, with Samsung emerging as a new entrant this month with its acquisition of startup Viv Labs, which is developing a Siri-like AI assistant.

Google has been the most prominent global player, with 11 acquisitions in the category under its belt (follow all of Google’s M&A activity here through our real-time Google acquisitions tracker).

In 2013, the corporate giant picked up deep learning and neural network startup DNNresearch from the computer science department at the University of Toronto. This acquisition reportedly helped Google make major upgrades to its image search feature. In 2014, Google acquired British company DeepMind Technologies for some $600M (Google DeepMind’s program recently beat a human world champion in the board game “Go”). This year, it acquired visual search startup Moodstocks and bot platform Api.ai.

Intel and Apple are tied for second place. The former acquired 3 startups this year alone: Itseez, Nervana Systems, and Movidius, while Apple acquired Turi and Tuplejump recently.

Twitter ranks third, with 4 major acquisitions, the most recent being image-processing startup Magic Pony.

Salesforce, which joined the race last year with the acquisition of Tempo AI, has already made two major acquisitions this year: Khosla Ventures-backed MetaMind and open-source machine-learning server PredictionIO.

We updated this timeline on 10/7/2016 to include acquirers who have made at least 2 acquisitions since 2011.

Major Acquirers In Artificial Intelligence Since 2011
Company | Date of Acquisition | Acquirer
Hunch | 11/21/2011 | eBay
Cleversense | 12/14/2011 | Google
Face.com | 5/29/2012 | Facebook
DNNresearch | 3/13/2013 | Google
Netbreeze | 3/20/2013 | Microsoft
Causata | 8/7/2013 | NICE
Indisys | 8/25/2013 | Yahoo
IQ Engines | 9/13/2013 | Intel
LookFlow | 10/23/2013 | Yahoo
SkyPhrase | 12/2/2013 | Yahoo
Gravity | 1/23/2014 | AOL
DeepMind | 1/27/2014 | Google
Convertro | 5/6/2014 | AOL
Cogenea | 5/20/2014 | IBM
Desti | 5/30/2014 | Nokia
Medio Systems | 6/12/2014 | Nokia
Madbits | 7/30/2014 | Twitter
Emu | 8/6/2014 | Google
Jetpac | 8/16/2014 | Google
Dark Blue Labs | 10/23/2014 | DeepMind
Vision Factory | 10/23/2014 | DeepMind
Wit.ai | 1/5/2015 | Facebook
Equivio | 1/20/2015 | Microsoft
Granata Decision Systems | 1/23/2015 | Google
AlchemyAPI | 3/4/2015 | IBM
Explorys | 4/13/2015 | IBM
TellApart | 4/28/2015 | Twitter
Timeful | 5/4/2015 | Google
Tempo AI | 5/29/2015 | Salesforce
Sociocast | 6/9/2015 | AOL
Whetlab | 6/17/2015 | Twitter
Orbeus | 10/1/2015 | Amazon
Vocal IQ | 10/2/2015 | Apple
Perceptio | 10/6/2015 | Apple
Saffron | 10/26/2015 | Intel
Emotient | 1/7/2016 | Apple
Nexidia | 1/11/2016 | NICE
PredictionIO | 2/19/2016 | Salesforce
MetaMind | 4/4/2016 | Salesforce
Crosswise | 4/14/2016 | Oracle
Expertmaker | 5/5/2016 | eBay
Itseez | 5/27/2016 | Intel
Magic Pony | 6/20/2016 | Twitter
Moodstocks | 7/6/2016 | Google
SalesPredict | 7/11/2016 | eBay
Turi | 8/5/2016 | Apple
Nervana Systems | 8/9/2016 | Intel
Genee | 8/22/2016 | Microsoft
Movidius | 9/6/2016 | Intel
Palerra | 9/19/2016 | Oracle
Api.ai | 9/19/2016 | Google
Angel.ai | 9/20/2016 | Amazon
tuplejump | 9/22/2016 | Apple
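Read as data, the table supports the leaderboard described above. A minimal Python tally (a sketch; the only assumption is that DeepMind's two purchases are rolled up to its parent, Google) reproduces the counts cited in the article:

```python
# Tally the acquirer column of the table above.
from collections import Counter

acquirers = [
    "eBay", "Google", "Facebook", "Google", "Microsoft", "NICE", "Yahoo",
    "Intel", "Yahoo", "Yahoo", "AOL", "Google", "AOL", "IBM", "Nokia",
    "Nokia", "Twitter", "Google", "Google", "DeepMind", "DeepMind",
    "Facebook", "Microsoft", "Google", "IBM", "IBM", "Twitter", "Google",
    "Salesforce", "AOL", "Twitter", "Amazon", "Apple", "Apple", "Intel",
    "Apple", "NICE", "Salesforce", "Salesforce", "Oracle", "eBay", "Intel",
    "Twitter", "Google", "eBay", "Apple", "Intel", "Microsoft", "Intel",
    "Oracle", "Google", "Amazon", "Apple",
]

counts = Counter(acquirers)
counts["Google"] += counts.pop("DeepMind", 0)   # roll subsidiary deals up

for company, n in counts.most_common(5):
    print(f"{company}: {n}")
# Google: 11, Intel: 5, Apple: 5, Twitter: 4, ...
```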

Original article here.


standard

Four points on why digital transformation is a big deal for the future of IT

2016-10-05 - By 

For the IT sector, the concept of digital transformation represents a time for evolution, revolution and opportunity, according to Information Technology Association of Canada (ITAC) president Robert Watson.

The new president for the technology association made the statements at last week’s IDC Directions and CanadianCIO Symposium in Toronto. The tech trends event was co-hosted by ITWC and IDC with support from ITAC.

Notable sessions included the ITWC-moderated Digital Transformation panel — which featured veteran CIOs discussing digital transformation opportunities and challenges — and IDC Canada’s Nigel Wallis outlining why Canadian business models should shift to reap IoT rewards.

Digital transformation refers to the changes associated with the application of digital technology in all aspects of human society; the overarching event theme framed digital transformation as more than a mere buzzword, but as a process that tech leaders and organizations should already be adopting. Considering the IT department is the “substance of every industry,” it follows that information technology can play a key role in setting the pace for innovation and future developments, offered Watson.

Both the public and private sectors are looking to diversify operations and economies — the IT sector will lead and enable the development of emerging technologies, including the Internet of Things (IoT): “It is coming for sure and a fantastic opportunity.”

With that in mind, here are four key takeaways from the event.

“Have you ever seen a more dynamic, exciting, and scary time in our industry?”

IDC’s senior vice president and chief analyst Frank Gens outlined reasons why IT is currently entering an “innovation stage” with the era of the Third Platform, which refers to emerging tech such as cloud technology, mobile, social and IoT.

According to IDC, the Third Platform is anticipated to grow to approximately 20 per cent by the year 2020; eighty per cent of Fortune 100 companies are expected to have digital transformation teams in place by the end of this year.

“It’s about a new foundation for enterprise services. You can connect back-end AI to this growing edge of IoT…you are really talking about collective learning and accelerated learning around the next foundation of enterprise solutions,” said Gens.

Takeaway: In a cloud- and mobile-dominated IT world, enterprises should look to quickly develop platform- and API-based services across their network, noted Gens, while also looking to grow the developer base to use those services.

“Robotics is an extremely vertical driven solution.”

Think of that classic 1927 film Metropolis, and its anthropomorphic robot Maria: While IT has come a long way from Metropolis in terms of developments in robotics, the industry isn’t quite there yet. But we’re close, noted IDC research analyst Vlad Mukherjee, and the industry should look at current advancements in the field.

According to Mukherjee, robotics are driving digital transformation processes by establishing new revenue streams and changing the way we work.

Currently, robotics tech is classified in terms of commercial service, industry and consumer. In total, Canadian firms are spending $1.08 billion on the technology, Mukherjee said.

Early adopters are looking at reducing costs; this includes the automotive and manufacturing sectors, but also fields such as  healthcare, logistics, and resource extraction. In the case of commercial service robotics, the concept works and the business case is there, but not at the point where we can truly take advantage, he said.

The biggest expense for robotics is service, maintenance, and battery life, said Mukherjee.

Takeaway: Industrial robots are evolving to become more flexible, easier to setup, support more human interaction and be more autonomously mobile. Enterprises should keep abreast of robotics developments, particularly the rise of collaborative industrial robots which have a lower barrier for SME adoption. This includes considering pilot programs and use cases that explore how the technology can help improve operations and automated processes.

“China has innovated significantly in terms of business models that the West has yet to emulate.”

Analysts Bryan Ma, vice-president of client devices for IDC Asia-Pacific, and Krista Collins, research manager of mobility and consumer for IDC Canada, outlined mobility trends and why the mobility and augmented or virtual reality markets seen in the east will inevitably make their way to Canada.

China is no longer considered a land of cheap knockoffs, said Ma, pointing to the rise of companies like Xiaomi, often called “The Apple of China.”

Globally, shipments of virtual reality (VR) hardware are expected to skyrocket this year, according to IDC’s forecasts. It expects shipments to hit 9.6 million units worldwide, generating $2.3 billion mostly for the four lead manufacturers: Samsung, Sony, HTC, and Oculus.

With VR in its early days, both Ma and Collins see the most growth potential for the emerging medium coming from the consumer market. Gaming and even adult entertainment options promise to be the first use-cases for mass adoption, with applications in the hospitality, real estate, or travel sectors coming later.

“That will be bigger on the consumer side of the market,” Collins said. “That’s what we’ll see first here in Canada and in other parts of the world.”

Takeaway: Augmented reality (AR) headsets will take longer to ramp up, IDC expects. In 2016, less than half a million units will ship. That will quickly climb to 45.6 million units by 2020, chasing the almost 65 million expected shipments of VR headsets. But unlike VR, the first applications for AR will be in the business world.

“Technology is integrated with everything”

There are currently more than 3.8 billion mobile phones on the planet — just think of the opportunities, offered David Senf, vice president of infrastructure solutions for IDC Canada.

He argued that digital transformation is an even bigger consideration than security, and that responding to business concerns is a top priority for IT in 2016. IT staff spent two more weeks “keeping the lights on” in 2015 than they spent on new, innovative projects. This has to change, said Senf.

IT is living in an era of big data and advanced analytics. As cloud technology matures — from just being software-as-a-service (SaaS) to platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) — CIOs should think about the cloud in a new way. Instead of just the cloud, it’s a vital architecture that should be supporting the business.

“Organizations are starting to define what that architecture looks like,” said Senf, adding the successful ones understand that the cloud is a competitive driver, particularly from an identity management, cost, and data residency perspective.

Takeaway: If the infrastructure isn’t already ready for big data, it might already be behind the curve. Senf notes CIOs should ensure that the IT department is able to scale quickly for change — and is ready to support the growing demands of the business side, including mobility and public cloud access.

Get ready to experiment and become comfortable with data sources and analysis. This includes looking at the nature of probabilistic findings — and using PaaS, he added.

Read more at http://www.itworldcanada.com/article/the-future-of-it-four-points-on-why-digital-transformation-is-a-big-deal/383121 or visit http://www.itworldcanada.com for more Canadian IT news.


standard

UK-based hyper-convergence startup bets on ARM processors

2016-10-03 - By 

Cambridge, U.K.-based startup Kaleao Ltd.  is entering the hyper-converged systems market today with a platform based on the ARM chip architecture that it claims can achieve unparalleled scalability and performance at a fraction of the cost of competing systems.

The KMAX platform features an integrated OpenStack cloud environment and miniature hypervisors that dynamically define physical computing resources and assign them directly to virtual machines and applications. These “microvisors,” as Kaleao calls them, dynamically orchestrate global pools of software-defined and hardware-accelerated resources with much lower overhead than that of typical hypervisors. Users can still run the KVM hypervisor if they want.

The use of the ARM 64-bit processor distinguishes Kaleao from the pack of other hyper-converged vendors such as VMware Inc., Nutanix Inc. and SimpliVity Inc., which use Intel chips. ARM is a reduced instruction set computing-based architecture that is commonly used in mobile devices because of its low power consumption.

“We went with ARM because the ecosystem allows for more differentiation and it’s a more open platform,” said Giovanbattista Mattiussi, principal marketing manager at Kaleao. “It enabled us to rethink the architecture itself.”

One big limitation of ARM is that it’s unable to support the Windows operating system or VMware vSphere virtualization manager. Instead, Kaleao is bundling Ubuntu Linux and OpenStack, figuring those are the preferred choices for cloud service providers and enterprises that are building private clouds. Users can also install any other Linux distribution.

Kaleao said the low overhead of its microvisors, combined with the performance of ARM processors, enables it to deliver 10 times the performance of competing systems at less than one-third of the energy consumption. Users can run four to six times as many microvisors as hypervisors, Mattiussi said. “It’s like the VM is running on the hardware with no software layers in between,” he said. “We can pick up a piece of the CPU here, a piece of storage there. It’s like having a bare-bones server running under the hypervisor.”

The platform provides up to 1,536 CPU cores and 370 TB of all-flash storage, with 960 gigabytes per second of networking, in a 3U rack. Energy usage is less than 15 watts per eight-core server. “Scalability is easy,” Mattiussi said. “You just need to add pieces of hardware.”
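Taken at face value, those figures imply a very dense, low-power chassis. Here is a back-of-the-envelope check; the only assumptions are that the 1,536 cores are made up of the eight-core servers mentioned and that “less than 15 watts” is treated as an upper bound.

```python
# Rough reading of the density and power figures quoted above.
cores_per_chassis = 1536          # cores in a 3U KMAX chassis
cores_per_server = 8
watts_per_server = 15             # "less than 15 watts" -> upper bound

servers = cores_per_chassis // cores_per_server
print(f"servers per 3U chassis: {servers}")                        # 192
print(f"compute power budget: <{servers * watts_per_server} W")    # < 2,880 W

# The performance claim works out similarly: 10x the throughput at under
# one-third the energy implies roughly 30x performance per watt.
print(f"claimed perf/watt advantage: ~{10 / (1/3):.0f}x")
```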

KMAX will be available in January in server and appliance versions. The company hasn’t released pricing but said its cost structure enables prices in the range of $600 to $700 per server, or about $10,000 for a 16-server blade. It plans to sell direct and through distributors. The company has opened a U.S. office in Charlotte, NC and has European outposts in Italy, Greece and France.

Co-founders Giampietro Tecchiolli and John Goodacre have a long track record of work in hardware and chip design, and both are active in the Euroserver green computing project. Goodacre continues to serve as director of technology and systems at ARM Holdings plc, which designs the ARM processor.

Kaleao has raised €3 million and said it’s finalizing a second round of €5 million.

Original article here.


standard

IBM, Google, Facebook, Microsoft, Amazon form enormous AI partnership

2016-09-29 - By 

On Wednesday, the world learned of a new industry association called the Partnership on Artificial Intelligence, and it includes some of the biggest tech companies in the world. IBM, Google, Facebook, Microsoft, and Amazon have all signed on as marquee members, though the group hopes to expand even further over time. The goal is to create a body that can provide a platform for discussions among stakeholders and work out best practices for the artificial intelligence industry. Not directly mentioned, but easily seen on the horizon, is its place as the primary force lobbying for smarter legislation on AI and related future-tech issues.

Best practices can be boring or important, depending on the context, and in this case they are very, very important. Best practices could provide a framework for accurate safety testing, which will be important as researchers ask people to put more and more of their lives in the hands of AI and AI-driven robots. This sort of effort might also someday work toward a list of inherently dangerous and illegitimate actions or AI “thought” processes. One of its core goals is to produce thought leadership on the ethics of AI development.

So, this could end up being the bureaucracy that produces our earliest laws of robotics, if not the one that enforces them. The word “law” is usually used metaphorically in robotics. But with access to the lobbying power of companies like Google and Microsoft, we should expect the Partnership on AI to wade into discussions of real laws soon enough. For instance, the specifics of regulations governing self-driving car technology could still determine which would-be software standard will hit the market first. With the founding of this group, Google has put itself in a position to perhaps direct that regulation for its own benefit.

But, boy, is that ever not how they want you to see it. The group is putting in a really ostentatious level of effort to assure the world it’s not just a bunch of technology super-corps determining the future of mankind, like some sort of cyber-Bilderberg Group. The group’s website makes it clear that it will have “equal representation for corporate and non-corporate members on the board,” and that it “will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts.”

Well, it’s one thing to say that, and quite another to live it. It remains to be seen whether the group will actually comport itself as it will need to if it wants real support from the best minds in open-source development. The Elon Musk-associated non-profit research company OpenAI responded to the announcement with a rather passive-aggressive word of encouragement.

The effort to include non-profits and other non-corporate bodies makes perfect sense. There aren’t many areas in software engineering where you can claim to be the definitive authority if you don’t have the public on-board. Microsoft, in particular, is painfully aware of how hard it is to push a proprietary standard without the support of the open-source community. Not only will its own research be stronger and more diverse for incorporating the “crowd,” any recommendations it makes will carry more weight with government and far more weight with the public.

That’s why it’s so notable that some major players are absent from this early roll call — most prominently Apple and Intel. Apple has long been known to be secretive about its AI research, even to the point of hurting its own competitiveness, while Intel has a history of treating AI as an unwelcome distraction. Neither approach is going to win the day, though there is an argument to be made that by remaining outside the group, Apple can still selfishly consume any insights it releases to the public.

Leaving such questions of business ethics aside, robot ethics remains a pressing problem. Self-driving cars illustrate exactly why, and the classic thought experiment involves a crowded freeway tunnel with no time to brake. Seeing a crash ahead, your car must decide whether to swerve left and crash you into a pillar, or swerve right, saving you but forcing the car beside you off the road. What is moral in this situation? Would your answer change if the other car were carrying a family of five?

Right now these questions are purely academic. The formation of groups like this shows they might not remain so for long.

Original article here.

