Cloud Computing Archives - AppFerret


Google’s AutoML is a Machine Learning Game-Changer

2018-05-24

Google’s AutoML is a new, up-and-coming (alpha stage) cloud software suite of Machine Learning tools. It’s based on Google’s state-of-the-art research in image recognition called Neural Architecture Search (NAS). NAS is essentially an algorithm that, given your specific dataset, searches for an optimal neural network architecture to perform a certain task on that dataset. AutoML is then a suite of machine learning tools that allows you to train high-performance deep networks without requiring any knowledge of deep learning or AI; all you need is labelled data! Google then uses NAS to find the best network for your specific dataset and task. They’ve already shown that their methods can achieve performance far better than that of hand-designed networks.

AutoML changes the whole machine learning game because, for many applications, specialised skills and knowledge won’t be required. Many companies only need deep networks to do simpler tasks, such as image classification. At that point they don’t need to hire five machine learning PhDs; they just need someone who can handle moving around and organising their data.

There’s no doubt that this shift in how “AI” can be used by businesses will create change. But what kind of change are we looking at? Whom will this change benefit? And what will happen to all of the people jumping into the machine learning field? In this post, we’re going to break down what Google’s AutoML, and more generally the shift towards Software 2.0, means for both businesses and developers in the machine learning field.

More development, less research for businesses

A lot of businesses in the AI space, especially start-ups, are doing relatively simple things in the context of deep learning. Most of their value comes from their final, put-together product. For example, most computer vision start-ups are using some kind of image classification network, which will actually be AutoML’s first tool in the suite. In fact, Google’s NASNet, which achieves the current state of the art in image classification, is already publicly available in TensorFlow! Businesses can now skip over this complex experimental-research part of the product pipeline and just use transfer learning for their task. Because less experimental research is needed, more business resources can be spent on product design, development, and the all-important data.
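To make the transfer-learning route concrete, here is a minimal sketch that loads the publicly available NASNet weights from TensorFlow/Keras and adds a new classification head. The two-class task, input size, and training data are assumptions for illustration, not details from the article.

    # A minimal transfer-learning sketch using the pre-trained NASNetMobile
    # weights that ship with TensorFlow/Keras. The two-class head and the
    # 224x224 input size are illustrative assumptions.
    import tensorflow as tf

    # Load NASNetMobile pre-trained on ImageNet, without its classification head.
    base = tf.keras.applications.NASNetMobile(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the searched architecture; train only the new head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # hypothetical 2-class task
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=5)  # supply your own labelled data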

Speaking of which…

It becomes more about product

Following on from the first point, since more time is being spent on product design and development, companies will have faster product iteration. The main value of a company will become less about how great and cutting-edge its research is and more about how well its product and technology are engineered. Is it well designed? Easy to use? Is the data pipeline set up in such a way that models can be quickly and easily improved? These will be the new key questions for optimising products and iterating faster than the competition. Cutting-edge research will also become less of a driver of the technology’s performance.

Now it’s more like…

Data and resources become critical

Now that research is a less significant part of the equation, how can companies stand out? How do you get ahead of the competition? Of course sales, marketing, and, as we just discussed, product design are all very important. But the huge driver of the performance of these deep learning technologies is your data and resources. The more clean, diverse, yet task-targeted data you have (i.e. both quality and quantity), the more you can improve your models using software tools like AutoML. That means lots of resources for the acquisition and handling of data. All of this is part of the shift away from the nitty-gritty of writing tons of code.

It becomes more of…

Software 2.0: Deep learning becomes another tool in the toolbox for most

All you have to do to use Google’s AutoML is upload your labelled data and boom, you’re all set! For people who aren’t super deep (ha ha, pun) into the field, and just want to leverage the power of the technology, this is big. The application of deep learning becomes more accessible. There’s less coding, more using the tool suite. In fact, for most people, deep learning becomes just another tool in their toolbox. Andrej Karpathy wrote a great article on Software 2.0 and how we’re shifting from writing lots of code to more design and using tools, then letting AI do the rest.

But, considering all of this…

There’s still room for creative science and research

Even though we have these easy-to-use tools, the journey doesn’t just end! When cars were invented, we didn’t stop improving them just because they became easy to use. And there are still many improvements that can be made to current AI technologies. AI still isn’t very creative, nor can it reason or handle complex tasks. It has the crutch of needing a ton of labelled data, which is both expensive and time-consuming to acquire. Training still takes a long time to achieve top accuracy. The performance of deep learning models is good for some simple tasks, like classification, but only fair, sometimes even poor (depending on task complexity), on things like localisation. We don’t yet fully understand what goes on inside deep networks.

All of these things present opportunities for science and research, and in particular for advancing the current AI technologies. On the business side of things, some companies, especially the tech giants (like Google, Microsoft, Facebook, Apple, Amazon) will need to innovate past current tools through science and research in order to compete. All of them can get lots of data and resources, design awesome products, do lots of sales and marketing etc. They could really use something more to set them apart, and that can come from cutting edge innovation.

That leaves us with a final question…

Is all of this good or bad?

Overall, I think this shift in how we create our AI technologies is a good thing. Most businesses will leverage existing machine learning tools rather than create new ones, since they have no need to. Near-cutting-edge AI becomes accessible to many people, and that means better technologies for all. AI is also quite an “open” field, with major figures like Andrew Ng creating very popular courses to teach people about this important new technology. Making things more accessible helps people keep up with a fast-paced tech field.

Such a shift has happened many times before. Programming computers started with assembly-level coding! We later moved on to languages like C. Many people today consider C too complicated, so they use C++. Much of the time we don’t even need something as complex as C++, so we just use high-level languages like Python or R! We use the tool most appropriate for the task at hand. If you don’t need something super low-level, then you don’t have to use it (e.g. C code optimisation, R&D of deep networks from scratch), and can simply use something more high-level and built-in (e.g. Python, transfer learning, AI tools).

At the same time, continued efforts in the science and research of AI technologies are critical. We can definitely add tremendous value to the world by engineering new AI-based products. But there comes a point where new science is needed to move forward. Human creativity will always be valuable.

Conclusion

Thanks for reading! I hope you enjoyed this post and learned something new and useful about the current trend in AI technology! This is a partially opinionated piece, so I’d love to hear any responses you may have below!

Original article here.



Serverless is eating the stack and people are freaking out — as they should be

2018-04-20

AWS Lambda has stamped a big DEPRECATED on containers – Welcome to “Serverless Superheroes”! 
In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.

In this edition, I chatted with Steven Faulkner, a senior software engineer at LinkedIn and the former director of engineering at Bustle. The following interview has been edited and condensed for clarity.

Forrest Brazeal: At Bustle, your previous company, I heard you cut your hosting costs by about forty percent when you switched to serverless. Can you speak to where all that money was going before, and how you were able to make that type of cost improvement?

Steven Faulkner: I believe 40% is where it landed. The initial results were even better than that. We had one service that was costing about $2500 a month and it went down to about $500 a month on Lambda.

Bustle is a media company — it’s got a lot of content, it’s got a lot of viral, spiky traffic — and so keeping up with that was not always the easiest thing. We took advantage of EC2 auto-scaling, and that worked … except when it didn’t. But when we moved to Lambda — not only did we save a lot of money, just because Bustle’s traffic is basically half at nighttime what it is during the day — we saw that serverless solved all these scaling headaches automatically.

On the flip side, did you find any unexpected cost increases with serverless?

There are definitely things that cost more on serverless, or that could be done more cheaply elsewhere. When I was at Bustle they were looking at some stuff around data pipelines and settled on not using serverless for that at all, because it would be way too expensive to go through Lambda.

Ultimately, although hosting cost was an interesting thing out of the gate for us, it quickly became a relative non-factor in our move to serverless. It was saving us money, and that was cool, but the draw of serverless really became more about the velocity with which our team could develop and deploy these applications.

At Bustle, we have only ever had one part-time “ops” person. With serverless, those responsibilities get diffused across our team, and that allowed us all to focus more on the application and less on how to get it deployed.

Any of us who’ve been doing serverless for a while know that the promise of “NoOps” may sound great, but the reality is that all systems need care and feeding, even ones you have little control over. How did your team keep your serverless applications running smoothly in production?

I am also not a fan of the term “NoOps”; it’s a misnomer and misleading for people. Definitely out of the gate with serverless, we spent time answering the question: “How do we know what’s going on inside this system?”

IOPipe was just getting off the ground at that time, and so we were one of their very first customers. We were using IOPipe to get some observability, then CloudWatch sort of got better, and X-Ray came into the picture which made things a little bit better still. Since then Bustle also built a bunch of tooling that takes all of the Lambda logs and data and does some transformations — scrubs it a little bit — and sends it to places like DataDog or to Scalyr for analysis, searching, metrics and reporting.

But I’m not gonna lie, I still don’t think it’s super great. It got to the point where it was workable and we could operate and not feel like we were always missing out on what was actually going on, but there’s a lot of room for improvement.

Another common serverless pain point is local development and debugging. How did you handle that?

I wrote a framework called Shep that Bustle still uses to deploy all of our production applications, and it handles the local development piece. It allows you to develop a NodeJS application locally and then deploy it to Lambda. It could do environment variables before Lambda had environment variables, and it brought some sanity around versioning and using webpack to bundle. All the stuff that you don’t really want the everyday developer to have to worry about.

I built Shep in my first couple of months at Bustle, and since then, the Serverless Framework has gotten better. SAM has gotten better. The whole ecosystem has leveled up. If I was doing it today I probably wouldn’t need to write Shep. But at the time, that’s definitely what we had to do.

You’re putting your finger on an interesting reality with the serverless space, which is: it’s evolving so fast that it’s easy to create a lot of tooling and glue code that becomes obsolete very quickly. Did you find this to be true?

That’s extremely fair to say. I had a little Twitter thread around this a couple months ago, having a bit of a realization myself that Shep is not the way I would do deployments anymore. When AWS releases their own tooling, it always seems to start out pretty bad, so the temptation is to fill in those gaps with your own tool.

But AWS services change and get better at a very rapid rate. So I think the lesson I learned is lean on AWS as much as possible, or build on top of their foundation and make it pluggable in a way that you can just revert to the AWS tooling when it gets better.

Honestly, I don’t envy a lot of the people who sliced their piece of the serverless pie based on some tool they’ve built. I don’t think that’s necessarily a long term sustainable thing.

As I talk to developers and sysadmins, I feel like I encounter a lot of rage about serverless as a concept. People always want to tell me the three reasons why it would never work for them. Why do you think this concept inspires so much animosity and how do you try to change hearts and minds on this?

A big part of it is that we are deprecating so many things at one time. It does feel like a very big step to me compared to something like containers. Kelsey Hightower said something like this at one point: containers enable you to take the existing paradigm and move it forward, whereas serverless is an entirely new paradigm.

And so all these things that people have invented and invested time and money and resources in are just going away, and that’s traumatic, that’s painful. It won’t happen overnight, but anytime you make something that makes people feel like what they’ve maybe spent the last 10 years doing is obsolete, it’s hard. I don’t really know if I have a good way to fix that.

My goal with serverless was building things faster. I’m a product developer; that’s my background, that’s what I like to do. I want to make cool things happen in the world, and serverless allows me to do that better and faster than I can otherwise. So when somebody comes to me and says “I’m upset that this old way of doing things is going away”, it’s hard for me to sympathize.

It sounds like you’re making the point that serverless as a movement is more about business value than it is about technology.

Exactly! But the world is a big tent and there’s room for all kinds of stuff. I see this movement around OpenFaaS and the various Functions as a Service on Kubernetes and I don’t have a particular use for those things, but I can see businesses where they do, and if it helps get people transitioned over to serverless, that’s great.

So what is your definition of serverless, then?

I always joke that “cloud native” would have been a much better term for serverless, but unfortunately that was already taken. I think serverless is really about the managed services. Like, who is responsible for owning whether this thing that my application depends on stays up or not? And functions as a service is just a small piece of that.

The way I describe it is: functions as a service are cloud glue. So if I’m building a model airplane, well, the glue is a necessary part of that process, but it’s not the important part. Nobody looks at your model airplane and says: “Wow, that’s amazing glue you have there.” It’s all about how you craft something that works with all these parts together, and FaaS enables that.

And, as Joe Emison has pointed out, you’re not just limited to one cloud provider’s services, either. I’m a big user of Algolia with AWS. I love using Algolia with Firebase, or Netlify. Serverless is about taking these pieces and gluing them together. Then it’s up to the service provider to really just do their job well. And over time hopefully the providers are doing more and more.

We’re seeing that serverless mindset eat all of these different parts of the stack. Functions as a service was really a critical bit in order to accelerate the process. The next big piece is the database. We’re gonna see a lot of innovation there in the next year. FaunaDB is doing some cool stuff in that area, as is CosmosDB. I believe there is also a missing piece of the market for a Redis-style serverless offering, something that maybe even speaks Redis commands but under the hood is automatically distributed and scalable.

What is a legitimate barrier to companies that are looking to adopt serverless at this point?

Probably the biggest is: how do you deal with the migration of legacy things? At Bustle we ended up mostly re-architecting our entire platform around serverless, and so that’s one option, but certainly not available to everybody. But even then, the first time we launched a serverless service, we brought down all of our Redis instances — because Lambda spun up all these containers and we hit connection limits that you would never expect to hit in a normal app.

So if you’ve got something sitting on a mainframe somewhere that is used to having only 20 connections, and then you move some upstream service over to Lambda, suddenly it has 10,000 connections instead of 20. You’ve got a problem. If you’ve bought into service-oriented architecture as a whole over the last four or five years, then you might have a better time, because you can say “Well, all these things do is talk to each other via an API, so we can replace a single service with serverless functions.”
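A common mitigation for that connection-limit problem is to create the client once, outside the Lambda handler, so warm invocations reuse a single connection instead of opening a new one on every call. Here is a minimal sketch of the pattern in Python with the redis client; the host name, environment variable, and key are hypothetical.

    # Connection reuse in AWS Lambda: create the client at module load time so
    # warm containers share one connection instead of opening a new one per call.
    import os
    import redis  # assumes the redis package is bundled with the deployment

    # Created once per container, not once per invocation.
    REDIS = redis.Redis(
        host=os.environ.get("REDIS_HOST", "example.cache.local"),  # hypothetical host
        port=6379,
        socket_connect_timeout=2,
    )

    def handler(event, context):
        # Each warm invocation reuses the module-level connection above.
        hits = REDIS.incr("page:hits")  # hypothetical key
        return {"statusCode": 200, "body": str(hits)}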

Any other emerging serverless trends that interest you?

We’ve solved a lot of the easy, low-hanging-fruit problems with serverless at this point, like how you do environment variables, or how you structure a repository and enable developers to quickly write these functions. We’re starting to establish some really good best practices.

What’ll happen next is we’ll get more iteration around architecture. How do I glue these four services together, and how do the Lambda functions look that connect them? We don’t yet have the Rails of serverless — something that doesn’t necessarily expose that it’s actually a Lambda function under the hood. Maybe it allows you to write a bunch of functions in one file that all talk to each other, and then use something like webpack that splits those functions and deploys them in a way that makes sense for your application.

We could even respond to that at runtime. You could have an application that’s actually looking at what’s happening in the code and saying: “Wow this one part of your code is taking a long time to run; we should make that its own Lambda function and we should automatically deploy that and set up this SNS trigger for you.” That’s all very pie in the sky, but I think we’re not that far off from having these tools.

Because really, at the end of the day, as a developer I don’t care about Lambda, right? I mean, I have to care right now because it’s the layer in which I work, but if I can move one layer up where I’m just writing business logic and the code gets split up appropriately, that’s real magic.


Forrest Brazeal is a cloud architect and serverless community advocate at Trek10. He writes the Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please don’t hesitate to let him know.

Original article here.

 



Why SQL is beating NoSQL, and what this means for the future of data

2018-03-29

After years of being left for dead, SQL today is making a comeback. How come? And what effect will this have on the data community?

Since the dawn of computing, we have been collecting exponentially growing amounts of data, constantly asking more from our data storage, processing, and analysis technology. In the past decade, this caused software developers to cast aside SQL as a relic that couldn’t scale with these growing data volumes, leading to the rise of NoSQL: MapReduce and Bigtable, Cassandra, MongoDB, and more.

Yet today SQL is resurging. All of the major cloud providers now offer popular managed relational database services: e.g., Amazon RDS, Google Cloud SQL, and Azure Database for PostgreSQL (the last of which Azure launched just this year). In Amazon’s own words, its PostgreSQL- and MySQL-compatible Aurora database product has been the “fastest growing service in the history of AWS”. SQL interfaces on top of Hadoop and Spark continue to thrive. And just last month, Kafka launched SQL support. Your humble authors themselves are developers of a new time-series database that fully embraces SQL.

In this post we examine why the pendulum today is swinging back to SQL, and what this means for the future of the data engineering and analysis community.


Part 1: A New Hope

To understand why SQL is making a comeback, let’s start with why it was designed in the first place.

Our story starts at IBM Research in the early 1970s, where the relational database was born. At that time, query languages relied on complex mathematical logic and notation. Two newly minted PhDs, Donald Chamberlin and Raymond Boyce, were impressed by the relational data model but saw that the query language would be a major bottleneck to adoption. They set out to design a new query language that would be (in their own words): “more accessible to users without formal training in mathematics or computer programming.”
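To make that accessibility goal concrete, here is a tiny, self-contained example of the declarative style they ended up with, run through Python’s built-in sqlite3 module; the table and rows are invented purely for illustration.

    # A tiny declarative SQL query, run against an in-memory SQLite database.
    # The table and rows are hypothetical, purely to illustrate the idea.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 120.0), ("bob", 75.5), ("alice", 30.0)])

    # Declarative: describe the result you want, not the loop that builds it.
    for customer, total in conn.execute(
            "SELECT customer, SUM(amount) FROM orders GROUP BY customer"):
        print(customer, total)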

Read the Full article here.

 



Six Top Cloud Security Threats in 2018

2018-02-14

2018 is set to be a very exciting year for cloud computing. In the fourth financial quarter of 2017, Amazon, SAP, Microsoft, IBM, Salesforce, Oracle, and Google combined brought in over $22 billion in revenue from cloud services. Cloud services will only get bigger in 2018. It’s easy to understand why businesses love the cloud: it’s easier and more affordable to use third-party cloud services than for every enterprise to maintain its own datacenters on its own premises.

It’s certainly possible to keep your company’s data on cloud servers secure. But cyber threats are evolving, and cloud servers are a major target. Keep 2018’s top cloud security threats in mind, and you’ll have the right mindset for properly securing your business’ valuable data.

1. Data Breaches

2017 was a huge year for data breaches. Even laypeople outside the cybersecurity world heard about September’s Equifax breach, because it affected at least 143 million ordinary people. Breaches frequently happen to cloud data as well.

In May 2017, a major data breach that hit OneLogin was discovered. OneLogin provides identity management and single sign-on capabilities for the cloud services of over 2,000 companies worldwide.

“Today we detected unauthorized access to OneLogin data in our US data region. We have since blocked this unauthorized access, reported the matter to law enforcement, and are working with an independent security firm to determine how the unauthorized access happened and verify the extent of the impact of this incident. We want our customers to know that the trust they have placed in us is paramount,” said OneLogin CISO Alvaro Hoyos.

Over 1.4 billion records were lost to data breaches in March 2017 alone, many of which involved cloud servers.

2. Data loss

Sometimes data lost from cloud servers is not due to cyber attack. Non-malicious causes of data loss include natural disasters like floods and earthquakes and simple human error, such as when a cloud administrator accidentally deletes files. Threats to your cloud data don’t always look like clever kids wearing hoodies. It’s easy to underestimate the risk of something bad happening to your data due to an innocent mistake.

One of the keys to mitigating the non-malicious data loss threat is to maintain lots of backups at physical sites at different geographic locations.
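As a rough sketch of that idea, the snippet below uses boto3 to copy a backup object from a primary S3 bucket into a second bucket in another region; the bucket names and key are hypothetical, and a production setup would more likely rely on S3 cross-region replication rules.

    # A minimal cross-region backup sketch with boto3. Bucket and key names
    # are hypothetical; real setups would more likely use S3 replication rules.
    import boto3

    SOURCE_BUCKET = "acme-backups-us-east-1"      # hypothetical primary bucket
    DEST_BUCKET = "acme-backups-eu-west-1"        # hypothetical bucket in another region
    KEY = "2018-02-14/db-dump.sql.gz"             # hypothetical backup object

    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.copy_object(
        Bucket=DEST_BUCKET,
        Key=KEY,
        CopySource={"Bucket": SOURCE_BUCKET, "Key": KEY},
    )
    print("copied", KEY, "to", DEST_BUCKET)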

3. Insider threats

Insider threats to cloud security are also underestimated. Most employees are trustworthy, but a rogue cloud service employee has a lot of access that an outside cyber attacker would have to work much harder to acquire.

From a whitepaper by security researchers William R Claycomb and Alex Nicoll:

“Insider threats are a persistent and increasing problem. Cloud computing services provide a resource for organizations to improve business efficiency, but also expose new possibilities for insider attacks. Fortunately, it appears that few, if any, rogue administrator attacks have been successful within cloud service providers, but insiders continue to abuse organizational trust in other ways, such as using cloud services to carry out attacks. Organizations should be aware of vulnerabilities exposed by the use of cloud services and mindful of the availability of cloud services to employees within the organization. The good news is that existing data protection techniques can be effective, if diligently and carefully applied.”

4. Denial of Service attacks

Denial of service (DoS) attacks are pretty simple for cyber attackers to execute, especially if they have control of a botnet. Also, DDoS-as-a-service is growing in popularity on the Dark Web. Now attackers don’t need know-how and their own bots; all they have to do is transfer some of their cryptocurrency in order to buy a Dark Web service.

Denis Makrushin wrote for Kaspersky Lab:

“Ordering a DDoS attack is usually done using a full-fledged web service, eliminating the need for direct contact between the organizer and the customer. The majority of offers that we came across left links to these resources rather than contact details. Customers can use them to make payments, get reports on work done or utilize additional services. In fact, the functionality of these web services looks similar to that offered by legal services.”

An effective DDoS attack on a cloud service gives a cyber attacker the time they need to execute other types of cyber attacks without getting caught.

5. Spectre and Meltdown

This is a new addition to the list of known cloud security threats for 2018. The Meltdown and Spectre speculative execution vulnerabilities also affect CPUs that are used by cloud services. Spectre is especially difficult to patch.

From CSO Online:

“Both Spectre and Meltdown permit side-channel attacks because they break down the isolation between applications. An attacker that is able to access a system through unprivileged log in can read information from the kernel, or attackers can read the host kernel if they are a root user on a guest virtual machine (VM).

This is a huge issue for cloud service providers. While patches are becoming available, they only make it harder to execute an attack. The patches might also degrade performance, so some businesses might choose to leave their systems unpatched. The CERT Advisory is recommending the replacement of all affected processors—tough to do when replacements don’t yet exist.”

6. Insecure APIs

Application Programming Interfaces are important software components for cloud services. In many cloud systems, APIs are the only facets outside of the trusted organizational boundary with a public IP address. Exploiting a cloud API gives cyber attackers considerable access to your cloud applications. This is a huge problem!

Cloud APIs represent a public front door to your applications. Secure them very carefully.
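One common building block for that kind of hardening is requiring callers to sign each request with a shared secret, so the API can reject anything it cannot verify. The sketch below shows the idea with Python’s standard hmac module; the secret and payload are hypothetical, and real cloud APIs (AWS Signature Version 4, for example) layer much more on top.

    # One way to harden a public API: require each request to carry an HMAC
    # signature computed with a shared secret. Names here are illustrative.
    import hashlib
    import hmac

    SECRET = b"rotate-me-regularly"          # hypothetical shared secret

    def sign(payload: bytes) -> str:
        return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

    def verify(payload: bytes, signature: str) -> bool:
        # compare_digest avoids leaking information through timing differences
        return hmac.compare_digest(sign(payload), signature)

    body = b'{"action": "list_instances"}'
    sig = sign(body)                         # sent by the caller, e.g. in a header
    print("request accepted:", verify(body, sig))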

To learn more about maintaining control in your cloud environment, click here.

Original article here.

 



IBM Fueling 2018 Cloud Growth With 1,900 Cloud Patents Plus Blazingly Fast AI-Optimized Chip

2018-01-17

CLOUD WARS — Investing in advanced technology to stay near the top of the savagely competitive enterprise-cloud market, IBM earned more than 1,900 cloud-technology patents in 2017 and has just released an AI-optimized chip said to have 10 times more IO and bandwidth than its nearest rival.

IBM is coming off a year in which it stunned many observers by establishing itself as one of the world’s top three enterprise-cloud providers—along with Microsoft and Amazon—by generating almost $16 billion in cloud revenue for the trailing 12 months ended Oct. 31, 2017.

While that $16-billion cloud figure pretty much matched the cloud-revenue figures for Microsoft and Amazon, many analysts and most media observers continue—for reasons I cannot fathom—to fail to acknowledge IBM’s stature as a broad-based enterprise-cloud powerhouse whose software capabilities position the company superbly for the next wave of cloud growth in hybrid cloud, PaaS, and SaaS.

And IBM, which announces its Q4 and annual earnings results on Thursday, Jan. 18, is displaying its full commitment to remaining among the top ranks of cloud vendors by earning almost 2,000 patents for cloud technologies in 2017, part of a companywide total of 9,043 patents received last year.

Noting that almost half of those 9,043 patents came from “pioneering advancements in AI, cloud computing, cybersecurity, blockchain and quantum computing,” IBM CEO Ginni Rometty said this latest round of advanced-technology innovation is “aimed at helping our clients create smarter businesses.”

In those cloud-related areas, IBM said its new patents include the following:

  • 1,400 AI patents, including one for an AI system that analyzes and can mirror a user’s speech patterns to make it easier for humans and AI to understand one another.
  • 1,200 cybersecurity patents, “including one for technology that enables AI systems to turn the table on hackers by baiting them into email exchanges and websites that expend their resources and frustrate their attacks.”
  • In machine learning, a system for autonomous vehicles that transfers control of the vehicle to humans “as needed, such as in an emergency.”
  • In blockchain, a method for reducing the number of steps needed to settle transactions among multiple business parties, “even those that are not trusted and might otherwise require a third-party clearinghouse to execute.”

For IBM, the pursuit of new cloud technologies is particularly important because a huge portion of its approximately $16 billion in cloud revenue comes from outside the standard cloud-revenue stream of IaaS, PaaS and SaaS and instead is generated by what I call IBM’s “cloud-conversion” business—an approach unique to IBM.

While IBM rather aridly defines that business as “hardware, software and services to enable IBM clients to implement comprehensive cloud solutions,” the concept comes alive when viewed through the perspective of what those offerings mean to big corporate customers. To understand how four big companies are tapping into IBM’s cloud conversion business, please check out my recent article called Inside IBM’s $7-Billion Cloud-Solutions Business: 4 Great Digital-Transformation Stories.

IBM’s most-recent batch of cloud-technology patents—and IBM has now received more patents per year than any other U.S. company for 25 straight years—includes a patent that an IBM blog post describes this way: “a system that monitors data sources including weather reports, social networks, newsfeeds and network statistics to determine the best uses of cloud resources to meet demand. It’s one of the numerous examples of how using unstructured data can help organizations work more efficiently.”

That broad-based approach to researching and developing advanced technology also led to the launch last month of a microchip that IBM says is specifically optimized for artificial-intelligence workloads.

A TechCrunch article about IBM’s new Power9 chip said it will be used not only in the IBM Cloud but also the Google Cloud: “The company intends to sell the chips to third-party manufacturers and to cloud vendors including Google. Meanwhile, it’s releasing a new computer powered by the Power9 chip, the AC922 and it intends to offer the chips in a service on the IBM cloud.”

How does the new IBM chip stack up? The TechCrunch article offered this breathless endorsement of the Power9’s performance from analyst Patrick Moorhead of Moor Insights & Strategy: “Power9 is a chip which has a new systems architecture that is optimized for accelerators used in machine learning. Intel makes Xeon CPUs and Nervana accelerators and NVIDIA makes Tesla accelerators. IBM’s Power9 is literally the Swiss Army knife of ML acceleration as it supports an astronomical amount of IO and bandwidth, 10X of anything that’s out there today.”

It’s shaping up to be a very interesting year from IBM in the cloud, and I’ll be reporting later this week on Thursday’s earnings release.

As businesses jump to the cloud to accelerate innovation and engage more intimately with customers, my Cloud Wars series analyzes the major cloud vendors from the perspective of business customers.

 

Original article here.

 



The Root Cause of IoT Skepticism

2017-12-28

It’s healthy to be skeptical of new ideas, but let’s take a look at the philosophy that might be holding back the biggest advancement in technology in a century.

In about 13 years (the target set by NASA), or just 7 years (as set by Elon Musk’s Space-X), we humans are going to set foot on Mars and become a truly space-faring race.

We live in pretty exciting times, riding on the threshold of Continuous Imagination empowering Continuous Innovation. Every product in every domain is undergoing a sea change: adding new features, releasing them faster than competitors, and adapting to an ever-faster rate of technological substitution. But most of these feature improvements and product launches are not driven by new requirements from customers. In the face of competition that gets stiffer by the day, evolution and adaptation are the only natural path to survival and winning. As Charles Darwin would put it, it’s “survival of the fittest.”

But as history suggests, there are and there always will be skeptics among us who will doubt every action that deviates from convention – like those who doubt climate change, the need to explore the unexplored, and the need to change.

A human mind exposed to scientific education exhibits skepticism and pragmatism over dogmatism, and largely remains technology-agnostic. It validates everything with knowledge and reasoning before accepting new ideas. But human progress has always come from philosophical insights, imaginings that led to the discovery or invention of new things. Technological progress has only turned science fiction (read: philosophy) into scientific fact.

With the above premises in mind, in this article, we intend to explore the realm of IoT, its implications on our lives, and our own limitations in foreseeing the imminent future as companies and customers.

We Understand the Internet, but not IoT

IoT, the Internet of Things (or Objects), denotes the entire network of Internet-connected devices: vehicles, home and office appliances, and machinery embedded with electronics, software, sensors, and actuators, plus the wired/Wi-Fi and RFID network connectivity that enables these objects to connect and exchange data. The benefits of this new ubiquitous connectivity will be reaped by everyone, and we will, for the first time, be able to hear and feel the heartbeat of the Earth.

For example, as cows, pigs, water pipes, people, and even shoes, trees, and animals become connected to IoT, farmers will have greater control over diseases affecting milk and meat production through the availability of real-time data and analytics. It is estimated that, on average, each connected cow will generate around 200 MB of data every month.

According to Cisco, back in 2003 the penetration of the Internet and connected devices per person was really low, but it grew at an exponential rate, doubling roughly every 5.32 years, much like Moore’s Law. Between 2008 and 2009, with the advent of smartphones, these figures rocketed, and it was predicted that 50 billion connected devices would be in use by the year 2020. Thus IoT was born, and it is already in its adolescent phase.

 

Today, IoT is well underway. Initiatives such as Cisco’s Planetary Skin, the smart grid, intelligent vehicles, HP’s central nervous system for the earth (CeNSE), and smart dust have the potential to add millions, even billions, of sensors to the Internet.

But just as during the social media explosion, the new age of IoT (connected devices, connected machines, connected cars, connected patients, connected consumers, and connected networks of Things) will need new collaboration tools, new software, new database technologies, and new infrastructure to accommodate, store, and analyze the huge amounts of data that will be generated, drawing on a host of emerging technologies such as graph databases, Big Data, microservices, and so on.

But that’s not all.

The Internet of Things will also require IoE, the Integration of Everything, for meaningful interaction between devices and providers.

But as Kai Wähner of TIBCO discusses in his presentation “Microservices: Death of the Enterprise Service Bus,” microservices and API-led connectivity are ideally matched to meet integration challenges in the foreseeable future. MuleSoft’s “Anypoint Platform for APIs” backed by Cisco, Bosch’s IoT platform, and the upcoming API management suite from Kovair all point in this direction and will help power the IoT revolution.

The explosion of connected devices — each requiring its own IP address — had already exhausted the IPv4 address space available in 2010 and made the move to IPv6 an immediate necessity. In addition to opening up vastly more IP addresses, IPv6 will also suffice for intra-planetary communication for a much longer period. Governments and the World Wide Web Consortium remained laggards, skeptical of IPv6 implementation, and allowed IP addresses to run out.
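The difference in scale is easy to check: IPv4 offers 2^32 addresses while IPv6 offers 2^128. A quick sketch with Python’s built-in ipaddress module makes the gap concrete.

    # Comparing the IPv4 and IPv6 address spaces with the standard library.
    import ipaddress

    ipv4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
    ipv6 = ipaddress.ip_network("::/0").num_addresses       # 2**128

    print(f"IPv4 addresses: {ipv4:,}")            # 4,294,967,296
    print(f"IPv6 addresses: {float(ipv6):.3e}")   # roughly 3.4e+38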

But it hasn’t been just governments. Large, bureaucratic, technology-driven organizations like Amazon, Google, and Facebook can remain skeptics in disguise and continue to resist movements like Net Neutrality, ZeroNet, blockchain technology, and IPFS (the InterPlanetary File System, an alternative to the overly cumbersome HTTP), fearing that their monopolies will be challenged.

Conclusion

We humans, engaged in different capacities as company executives, consumers, government officials, or technology evangelists, are facing the debate of skepticism vs. futurism and will continue to doubt IoT — embracing it only incrementally until we see true, widespread benefits from it.

And we can see how our skepticism has worked against recognition and advancement.

After remaining skeptical for 120 years, the IEEE finally recognized the pioneering work done by the Indian physicist J.C. Bose during colonial rule and conferred on him the designation of “Father of Telecommunication”. The millimetre-wavelength radio he demonstrated in his 1895 experiment in Kolkata is the foundation of the 5G mobile networks that scientists and technologists across the world are now trying to build, and which will provide the backbone for IoT.

Finally, we leave it to the reader’s imagination about the not-so-distant future, when all the connected devices in IoT begin to pass the Turing Test.

 

Original article here.

 



Cloud Computing Market Projected To Reach $411B By 2020

2017-10-22

Gartner’s latest worldwide public cloud services revenue forecast, published earlier this month, predicts that Infrastructure-as-a-Service (IaaS), currently growing at a 23.31% Compound Annual Growth Rate (CAGR), will outpace the overall market growth of 13.38% through 2020. Software-as-a-Service (SaaS) revenue is predicted to grow from $58.6B in 2017 to $99.7B in 2020. Over the entire 2016 – 2020 forecast period, SaaS is on pace to attain 15.65% compound annual growth, also outpacing the total cloud market. The following graphic compares revenue growth by cloud services category for the years 2016 through 2020.

Catalysts driving greater adoption and correspondingly higher CAGRs include a shift Gartner sees in infrastructure, middleware, application and business process services spending. In 2016, Gartner estimates approximately 17% of the total market revenue for these areas had shifted to the cloud. Gartner predicts by 2021, 28% of all IT spending will be for cloud-based infrastructure, middleware, application and business process services. Another factor is the adoption of Platform-as-a-Service (PaaS). Gartner notes that enterprises are confident that PaaS can be a secure, scalable application development platform in the future.  The following graphic compares the compound annual growth rates (CAGRs) of each cloud service area including the total market.
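As a quick sanity check on these figures, CAGR is simply (end / start) raised to the power 1/years, minus 1. The sketch below recomputes the SaaS growth rate from the revenue figures quoted above; note that the article’s 15.65% figure covers the longer 2016 – 2020 window, which starts from a base year not quoted here.

    # Compound Annual Growth Rate: (end / start) ** (1 / years) - 1.
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    # SaaS revenue figures quoted above, in billions of dollars.
    saas_2017, saas_2020 = 58.6, 99.7
    print(f"SaaS CAGR 2017-2020: {cagr(saas_2017, saas_2020, 3):.2%}")
    # Prints roughly 19%; Gartner's 15.65% is measured over 2016-2020 instead.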

 

Original article here.



What Does Serverless Computing Mean?

2017-08-24

For developers, worrying about infrastructure is a chore they can do without. Serverless computing relieves that burden.

It’s always unfortunate to start the definition of a phrase by calling it a misnomer, but that’s where you have to begin with serverless computing: Of course there will always be servers. Serverless computing merely adds another layer of abstraction atop cloud infrastructure, so developers no longer need to worry about servers, including virtual ones in the cloud.

To explore this idea, I spoke with one of serverless computing’s most vocal proponents: Chad Arimura, CEO of the startup Iron.io, which develops software for microservices workload management. Arimura says serverless computing is all about the modern developer’s evolving frame of reference:

What we’ve seen is that the atomic unit of scale has been changing from the virtual machine to the container, and if you take this one step further, we’re starting to see something called a function … a single-purpose block of code. It’s something you can conceptualize very easily: Process an image. Transform a piece of data. Encode a piece of video.

To me this sounded a lot like microservices architecture, where instead of building a monolithic application, you assemble an application from single-purpose services. What then is the difference between a microservice and a function?

A service has a common API that people can access. You don’t know what’s going on under the hood. That service may be powered by functions. So functions are the building block code aspect of it; the service itself is like the interface developers can talk to.

Just as developers use microservices to assemble applications and call services from functions, they can grab functions from a library to build the services themselves — without having to consider server infrastructure as they create an application.

AWS Lambda is the best-known example of serverless computing. As an Amazon instructional video explains, “once you upload your code to Lambda, the service handles all the capacity, scaling, patching, and administration of the infrastructure to run your code.” Both AWS Lambda and Iron.io offer function libraries to further accelerate development. Provisioning and autoscaling are on demand.
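To ground the idea, here is roughly the smallest possible Lambda function in Python: a single-purpose handler that transforms a piece of data. The (event, context) signature is what AWS Lambda expects for Python handlers; the event payload shown is a hypothetical example.

    # A minimal AWS Lambda function: one single-purpose block of code.
    import json

    def handler(event, context):
        # Hypothetical input, e.g. event = {"text": "hello serverless"}
        text = event.get("text", "")
        return {
            "statusCode": 200,
            "body": json.dumps({"uppercased": text.upper()}),
        }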

Keep in mind all of this is above the level of service orchestration — of the type offered by Mesos, Kubernetes, or Docker Swarm. Although Iron.io offers its own orchestration layer, which predated those solutions being generally available, it also plugs into them, “but we really play at the developer/API layer,” says Arimura.

In fact, it’s fair to view the core of Iron.io’s functionality as roughly equivalent to that of AWS Lambda, only deployable on all major public and private cloud platforms. Arimura sees the ability to deploy on premises as a particular Iron.io advantage because the hybrid cloud is becoming more and more essential to the enterprise approach to cloud computing. Think of the consistency and application portability enabled by the same serverless computing environment across public and private clouds.

Arimura even goes as far as to use the controversial term “no-ops,” coined by former Netflix cloud architect Adrian Cockcroft. Again, just as there will always be servers, there will always be ops to run them. But no-ops and serverless computing take the developer’s point of view: someone else has to worry about that stuff, not me while I create software.

Serverless computing, then, represents yet another leap in developer efficiency, where even virtual infrastructure concerns melt away and libraries of services and functions reduce once again the amount of code developers need to write from scratch.

Enterprise dev shops have been slow to adopt agile, CI/CD, devops, and the like. But as we move up the stack to serverless computing’s levels of abstraction, the palpable benefits of modern development practices become more and more enticing.

Original article here.



Cloud, backup and storage devices—how best to protect your data

2017-03-31

We are producing more data than ever before, with more than 2.5 quintillion bytes produced every day, according to computer giant IBM. That’s a staggering 2,500,000,000,000 gigabytes of data and it’s growing fast.

We have never been so connected through smart phones, smart watches, laptops and all sorts of wearable technologies inundating today’s marketplace. There were an estimated 6.4 billion connected “things” in 2016, up 30% from the previous year.

We are also continuously sending and receiving data over our networks. This unstoppable growth is unsustainable without some kind of smartness in the way we all produce, store, share and backup data now and in the future.

In the cloud

Cloud services play an essential role in achieving sustainable data management by easing the strain on bandwidth, storage and backup solutions.

But is the cloud paving the way to better backup services or is it rendering backup itself obsolete? And what’s the trade-off in terms of data safety, and how can it be mitigated so you can safely store your data in the cloud?

The cloud is often thought of as an online backup solution that works in the background on your devices to keep your photos and documents, whether personal or work related, backed up on remote servers.

In reality, the cloud has a lot more to offer. It connects people together, helping them store and share data online and even work together online to create data collaboratively.

It also makes your data ubiquitous, so that if you lose your phone or your device fails you simply buy a new one, sign in to your cloud account and voila! – all your data are on your new device in a matter of minutes.

Do you really back up your data?

An important advantage of cloud-based backup services is also the automation and ease of use. With traditional backup solutions, such as using a separate drive, people often discover, a little too late, that they did not back up certain files.

Relying on the user to do backups is risky, so automating it is exactly where cloud backup is making a difference.

Cloud solutions have begun to evolve from online backup services to primary storage services. People are increasingly moving from storing their data on their device’s internal storage (hard drives) to storing them directly in cloud-based repositories such as DropBox, Google Drive and Microsoft’s OneDrive.

Devices such as Google’s Chromebook do not use much local storage to store your data. Instead, they are part of a new trend in which everything you produce or consume on the internet, at work or at home, would come from the cloud and be stored there too.

Recently announced cloud technologies such as Google’s Drive File Stream or Dropbox’s Smart Sync are excellent examples of how we are heading in a new direction, with less data on the device and a bigger primary storage role for the cloud.

Here is how it works. Instead of keeping local files on your device, placeholder files (sort of empty files) are used, and the actual data are kept in the cloud and downloaded back onto the device only when needed.

Edits to the files are pushed to the cloud so that no local copy is kept on your device. This drastically reduces the risk of data leaks when a device is lost or stolen.

So if your entire workspace is in the cloud, is backup no longer needed?

No. In fact, backup is more relevant than ever, as disasters can strike cloud providers themselves, with hacking and ransomware affecting cloud storage too.

Backup has always had the purpose of reducing risk through redundancy, by duplicating data across multiple locations. The same applies to the cloud, where data can be duplicated across multiple cloud locations or multiple cloud service providers.

Privacy matters

Yet beyond the disruption of the market, the number-one concern about the use of the cloud for storing user data is privacy.

Data privacy is strategically important, particularly when customer data are involved. Many privacy-related problems can happen when using the cloud.

There are concerns about the processes used by cloud providers for privacy management, which often trade privacy for convenience. There are also concerns about the technologies put in place by cloud providers to overcome privacy related issues, which are often not effective.

When it comes to technology, encryption tools protecting your sensitive data have actually been around for a long time.

Encryption works by scrambling your data with a very large digital number (called a key) that you keep secret so that only you can decrypt the data. Nobody else can decode your data without that key.

Using encryption tools to encrypt your data with your own key before transferring it into the cloud is a sensible thing to do. Some cloud providers are now offering this option and letting you choose your own key.
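As a small illustration of encrypting data with your own key before it leaves your machine, the sketch below uses the Fernet recipe from the Python cryptography package; the file name is hypothetical, and a real setup would also need a safe, separate place to keep the key.

    # Client-side encryption before uploading to the cloud, using the
    # cryptography package's Fernet recipe. The file name is hypothetical.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # keep this secret, stored separately from the data
    fernet = Fernet(key)

    with open("tax-return-2017.pdf", "rb") as f:       # hypothetical local file
        ciphertext = fernet.encrypt(f.read())

    with open("tax-return-2017.pdf.enc", "wb") as f:   # this is what goes to the cloud
        f.write(ciphertext)

    # Only holders of `key` can recover the original via fernet.decrypt(ciphertext).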

Share vs encryption

But if you store data in the cloud for the purpose of sharing it with others – and that’s often the precise reason users choose the cloud – then you might require a process to distribute encryption keys to multiple participants.

This is where the hassle can start. People you share with would need to get the key too, in some way or another. Once you share that key, how would you revoke it later on? How would you prevent it from being re-shared without your consent?

More importantly, how would you keep using the collaboration features offered by cloud providers, such as Google Docs, while working on encrypted files?

These are the key challenges ahead for cloud users and providers. Solutions to those challenges would truly be game-changing.

 

Original article here.



AWS dominates cloud computing, bigger than IBM/Google/Microsoft combined

2017-02-12

Amazon’s cloud provider is the biggest player in the rapidly growing cloud infrastructure market, according to new data.

Amazon Web Services (AWS) accounts for one third of the cloud infrastructure market, more than the value generated by its next three biggest rivals combined.

AWS dominates, with a 33.8 percent global market share, while its three nearest competitors — Microsoft, Google, and IBM — together accounted for 30.8 percent of the market, according to calculations by analyst Canalys.

The four leading service providers were followed by Alibaba and Oracle, which made up 2.4 percent and 1.7 percent of the total respectively, with rest of the market made up of a number of smaller players.

According to the researchers, total spending on cloud infrastructure services, which stood at $10.3bn in the fourth quarter of last year (up 49 percent year-on-year) will hit $55.8bn in 2017 — up 46 percent on 2016’s total of $38.1bn.

Continuing demand is leading the cloud companies to accelerate their data centre expansion. Canalys said AWS launched 11 new availability zones globally in 2016, four of which were established in Canada and the UK in the past quarter. IBM also opened a new data centre in the UK, bringing its total cloud data centres to 50 worldwide, while Microsoft also added new facilities in the UK and Germany.

Google and Oracle set up their first infrastructure in Japan and China respectively, aiming at expanding their footprint in the Asia Pacific region, while Alibaba also unveiled the availability of its four new data centres in Australia, Japan, Germany, and the United Arab Emirates.

Strict data sovereignty laws — under which personal data has to be stored in servers that are physically located within the country — mean cloud service providers have to build data centres in key markets, such as Germany, Canada, Japan, the UK, China, and the Middle East, said Canalys research analyst Daniel Liu.

Original article here.



Amazon AWS Still Leads Cloud Computing in 2017, 1000 Services Added

2017-02-06

Amazon Web Services, or AWS, the current king of Infrastructure as a Service (IaaS), has once again proved that their leadership position remains undisputed. AWS continued its tradition of posting quarterly sequential growth during the fourth quarter of 2016, reaching $3.546 billion in revenues, representing growth of 47% compared to Q4-15 and $305 million more than their Q3-16 numbers.

Just a quick look at the last three quarters will reveal that their sequential sales expansion increased in 2016 compared to 2015. Despite reaching a run rate of more than $14 billion, Amazon Web Services continues to expand at an extremely fast pace, posting nearly 50% growth year over year.

The fact that they are able to add +$300 million in sales sequentially (from one quarter to the next) shows that the growth momentum is still very much intact and that a slowdown in the short term is out of the question.

If Amazon Web Services continues the current trajectory, an annual revenue figure close to or more than $20 billion will be possible in the next four to six quarters. That will be a monumental achievement for a hard core retailer competing with top tech companies of the world.

During the recent quarter, AWS’s competitor Microsoft announced that its annualized commercial cloud run rate has exceeded $14 billion. Amazon possibly makes a little more from cloud than Microsoft, but the latter makes a hefty sum from its SaaS product line-up led by Office 365. The fact that Amazon is still a little ahead of Microsoft without a significant SaaS portfolio is testament to the kind of strength AWS has in the IaaS space.

AWS continues to be the most profitable unit for Amazon with an operating income of $926 million, compared to $816 million from Amazon’s North America retail unit. During the fourth quarter earnings call Amazon CFO Brian Olsavsky told analysts that Amazon had cut prices seven times during fourth quarter, and added more than 1000 services and features in 2016 compared to over 700 in 2015.

The process of cutting prices and adding more services helped AWS operating numbers nicely, with the segment reporting an operating margin of 31.3%.

At these levels Amazon remains the company to beat because, with such fat margins, engaging Amazon in a price war is not going to work for any competitor. The unit has enough bandwidth to withstand any price onslaught.

The only way to compete with Amazon is to have a better offering and add more value to your services. But even here, Amazon is leaving no wiggle room. How does a competitor match up to a company that has added thousands of services, is laser-focused on cloud infrastructure, keeps cutting prices without waiting for someone else to do it first, keeps slowly but steadily expanding its data center footprint, and has $14 billion to show for it?

The relentless Amazon is possibly the best thing that happened to the cloud industry because they will keep everyone on their toes, including themselves.

You can access Amazon’s fourth quarter 2016 earnings report here.

Original article here.



The Cloud is at the Heart of Building the Blockchain of Success

2017-02-01

Blockchain is the new buzzword on the block; and while many business leaders, managers, developers and IT departments are Googling it and are left scratching their heads, others are wising up to it, realising how brilliant it is, and recognising the opportunity it is going to bring and the potential impact it will have.

If we put aside the tech behind it and focus on what it can do, it’s actually capable of disrupting many industries and bringing new innovations not only into finance, but also property, automotive, music, trading and healthcare.

To make it easier to understand what blockchain can bring to businesses, think about how a Google Doc enables people to access and make updates in real time. There’s no need to save new versions and send files to all and sundry: the next time someone opens the doc it will be the most up to date, because the file automatically keeps a record of who made which changes and when, and that digital record is native to the cloud, not the local hard drive. Google Docs is to Microsoft Word what blockchain is to a traditional ledger system.

Startups and large corporations are working together to figure out how this ‘shared ledger’ concept can benefit their businesses. And this concept of data retention is at the heart of cloud-based technology.

Cloud technologies are the forerunners to blockchain, and developers and designers creating new innovations in this space should keep an eye on blockchain opportunities too. Private blockchain networks can run in secure cloud environments, and we have already witnessed test collaborations involving Google’s cloud services, IBM, Microsoft and Amazon; if successful, these cloud services could play a role in blockchain deployments.

Applying blockchain to business

Let’s take a look at some use cases and how blockchain can be implemented in different industry sectors to speed up processes, guarantee security, trust and transparency, and keep accurate records that can be accessed by stakeholders, no matter where they are in the world.

Property: You’re buying a house and want to know when the last repairs and updates were carried out, which companies provided them and when. Blockchain could help homeowners and estate agents keep a record of information relating to a property, centrally located and accessible to anyone in the house buying and selling process – reducing hours of paper pushing and phone calls and creating transparent information on the status and maintenance of the house before an offer is put in.

Automotive: In a similar way to housing, tracking the value of second hand vehicles through blockchain would make purchases a lot easier for buyers and traders. Information on the car’s mileage, services, and driving history would be accurate, and if the car was ever written off the information could be accessed digitally to salvage the new gearbox that was installed only two months ago.

Music: There has already been massive disruption in the music industry but in the age of streaming services, blockchain could show musicians, creators, fans, marketers and labels the data and dialogue involved in listening to their songs and albums. Artists would be much closer to their fans and over time they could influence and reward them. A truly democratic and commercially viable way of promoting music. Thanks to blockchain.

Banking: Most big banks have a headline piece highlighting how they are working with blockchain, especially around security. The technology promotes security and trust and allows all parties to work with one single reference point, which can cut manpower and middlemen costs.

As with any new technology, there are stumbling blocks. Commercial banks may not want all that information to be managed by developers so private blockchains may need to be created. It’s important to take a collaborative approach so banking organisations can pool their resources, identify and share hurdles and resolutions.

Trading stocks and shares: Nasdaq has successfully completed a blockchain test in Estonia to run proxy voting on its exchange and is now assessing whether to implement the new system, as it has streamlined a process that was highly manual and time consuming. Nasdaq is one of the early adopters and a supporter of the technology in the exchange industry and already uses it to power its market for shares of private companies. It is also launching a marketplace powered by blockchain for pre-IPO private securities exchange in the USA.

Healthcare: Within healthcare, blockchain promises to address security and data integrity issues relating to patient information within healthcare providers, hospitals, insurance companies and clinical trials. IBM Watson teamed up with the US FDA to trial a data sharing initiative to keep track of patients involved in a particular trial and they are going on to explore how a blockchain framework could potentially provide benefits to public health.

Blockchain as a service: Blockchain as a service is the most viable way for the technology to scale. Start-ups like Chain.com are making blockchain applications much more accessible to big corporations. It is probably the most recognised ‘blockchain as a service’ platform startup as it lets enterprises use blockchain technology in a variety of network infrastructures.

Where to next

To put into perspective how big it could become, the World Economic Forum predicts that by about 2027, roughly 10% of global GDP will be stored on blockchains, so companies looking to get their piece of the action should start investigating now.

Silicon Valley investor Marc Andreessen cites blockchain as “one of the most fundamental inventions in the history of computer science” and we’d agree. 2017 is going to be the year it is tested, trialled and iterated to suit individual market and business requirements.

All that without even mentioning Bitcoin – we’ll save that for another day.

Original article here.


standard

10 new AWS cloud services you never expected

2017-01-27 - By 

From data scooping to facial recognition, Amazon’s latest additions give devs new, wide-ranging powers in the cloud

In the beginning, life in the cloud was simple. Type in your credit card number and—voilà—you had root on a machine you didn’t have to unpack, plug in, or bolt into a rack.

That has changed drastically. The cloud has grown so complex and multifunctional that it’s hard to jam all the activity into one word, even a word as protean and unstructured as “cloud.” There are still root logins on machines to rent, but there are also services for slicing, dicing, and storing your data. Programmers don’t need to write and install as much as subscribe and configure.

Here, Amazon has led the way. That’s not to say there isn’t competition. Microsoft, Google, IBM, Rackspace, and Joyent are all churning out brilliant solutions and clever software packages for the cloud, but no company has done more to create feature-rich bundles of services for the cloud than Amazon. Now Amazon Web Services is zooming ahead with a collection of new products that blow apart the idea of the cloud as a blank slate. With the latest round of tools for AWS, the cloud is that much closer to becoming a concierge waiting for you to wave your hand and give it simple instructions.

Here are 10 new services that show how Amazon is redefining what computing in the cloud can be.

Glue

Anyone who has done much data science knows it’s often more challenging to collect data than it is to perform analysis. Gathering data and putting it into a standard data format is often more than 90 percent of the job.

Glue is a new collection of Python scripts that automatically crawls your data sources to collect data, apply any necessary transforms, and stick it in Amazon’s cloud. It reaches into your data sources, snagging data using all the standard acronyms, like JSON, CSV, and JDBC. Once it grabs the data, it can analyze the schema and make suggestions.

The Python layer is interesting because you can use it without writing or understanding Python—although it certainly helps if you want to customize what’s going on. Glue will run these jobs as needed to keep all the data flowing. It won’t think for you, but it will juggle many of the details, leaving you to think about the big picture.
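For a flavour of what driving Glue programmatically might look like, here is a minimal boto3 sketch; the crawler, IAM role, database, bucket, and job names are all hypothetical placeholders:

```python
import boto3

# All names below (crawler, role, database, bucket, job) are hypothetical.
glue = boto3.client("glue", region_name="us-east-1")

# Point a crawler at an S3 prefix; Glue infers the schema and registers
# tables in its data catalog.
glue.create_crawler(
    Name="sales-crawler",
    Role="arn:aws:iam::123456789012:role/GlueServiceRole",
    DatabaseName="sales_catalog",
    Targets={"S3Targets": [{"Path": "s3://example-bucket/raw/sales/"}]},
)
glue.start_crawler(Name="sales-crawler")

# Once an ETL job has been defined (its transform logic lives in a Python
# script managed by Glue), kick off a run on demand.
run = glue.start_job_run(JobName="sales-to-parquet")
print(run["JobRunId"])
```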

FPGA

Field Programmable Gate Arrays have long been a secret weapon of hardware designers. Anyone who needs a special chip can build one out of software. There’s no need to build custom masks or fret over fitting all the transistors into the smallest amount of silicon. An FPGA takes your software description of how the transistors should work and rewires itself to act like a real chip.

Amazon’s new AWS EC2 F1 brings the power of FPGAs to the cloud. If you have highly structured and repetitive computing to do, an EC2 F1 instance is for you. With EC2 F1, you can create a software description of a hypothetical chip and compile it down to a tiny number of gates that will compute the answer in the shortest amount of time. The only thing faster is etching the transistors in real silicon.

Who might need this? Bitcoin miners compute the same cryptographically secure hash function a bazillion times each day, which is why many bitcoin miners use FPGAs to speed up the search. If you have a similarly compact, repetitive algorithm that you can write into silicon, the FPGA instance lets you rent machines to do it now. The biggest winners are those who need to run calculations that don’t map easily onto standard instruction sets—for example, when you’re dealing with bit-level functions and other nonstandard, nonarithmetic calculations. If you’re simply adding a column of numbers, the standard instances are better for you. But for some, EC2 with FPGAs might be a big win.
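The chip logic itself is written in a hardware description language, but renting the F1 hardware looks like any other EC2 launch. A minimal boto3 sketch, with the AMI ID and key-pair name as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The AMI ID and key pair below are placeholders; in practice you would use
# an image that carries the FPGA development toolchain.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder
    InstanceType="f1.2xlarge",        # EC2 F1 instance with an attached FPGA
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder
)
print(response["Instances"][0]["InstanceId"])
```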

Blox

As Docker eats its way into the stack, Amazon is trying to make it easier for anyone to run Docker instances anywhere, anytime. Blox is designed to juggle the clusters of instances so that the optimum number are running—no more, no less.

Blox is event driven, so it’s a bit simpler to write the logic. You don’t need to constantly poll the machines to see what they’re running. They all report back, so the right number can run. Blox is also open source, which makes it easier to reuse Blox outside of the Amazon cloud, if you should need to do so.

X-Ray

Monitoring the efficiency and load of your instances used to be simply another job. If you wanted your cluster to work smoothly, you had to write the code to track everything. Many people brought in third parties with impressive suites of tools. Now Amazon’s X-Ray is offering to do much of the work for you. It’s competing with many third-party tools for watching your stack.

When a website gets a request for data, X-Ray traces it as it flows through your network of machines and services. Then X-Ray will aggregate the data from multiple instances, regions, and zones so that you can stop in one place to flag a recalcitrant server or a wedged database. You can watch your vast empire with only one page.
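To give a sense of the developer side, instrumenting a Python service with the X-Ray SDK might look like the sketch below; it assumes the aws-xray-sdk package is installed and an X-Ray daemon is running locally to forward the traces:

```python
from aws_xray_sdk.core import xray_recorder

# Record a function as a subsegment of the current trace.
@xray_recorder.capture("fetch_user")
def fetch_user(user_id):
    # ... call a database or downstream service here ...
    return {"id": user_id}

# Outside a web framework, segments are opened and closed explicitly;
# the Flask/Django middleware does this per request automatically.
xray_recorder.begin_segment("billing-service")
fetch_user("42")
xray_recorder.end_segment()
```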

Rekognition

Rekognition is a new AWS tool aimed at image work. If you want your app to do more than store images, Rekognition will chew through images searching for objects and faces using some of the best-known and tested machine vision and neural-network algorithms. There’s no need to spend years learning the science; you simply point the algorithm at an image stored in Amazon’s cloud, and voilà, you get a list of objects and a confidence score that ranks how likely the answer is correct. You pay per image.

The algorithms are heavily tuned for facial recognition. The algorithms will flag faces, then compare them to each other and to reference images to help you identify them. Your application can store the meta information about the faces for later processing. Once you put a name to the metadata, your app will find people wherever they appear. Identification is only the beginning. Is someone smiling? Are their eyes closed? The service will deliver the answer, so you don’t need to get your fingers dirty with pixels. If you want to use impressive machine vision, Amazon will charge you not by the click but by the glance at each image.
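A minimal boto3 sketch of both calls, with the bucket and object names as hypothetical placeholders:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Bucket and object names are hypothetical; the image must already be in S3.
image = {"S3Object": {"Bucket": "example-photos", "Name": "team/office.jpg"}}

# Object and scene detection: each label comes back with a confidence score.
labels = rekognition.detect_labels(Image=image, MaxLabels=10, MinConfidence=70)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Face analysis: bounding boxes plus attributes such as smile and open eyes.
faces = rekognition.detect_faces(Image=image, Attributes=["ALL"])
for face in faces["FaceDetails"]:
    print("smiling:", face["Smile"]["Value"], "eyes open:", face["EyesOpen"]["Value"])
```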

Athena

Working with Amazon’s S3 has always been simple. If you want a piece of data, you request it and S3 returns it. Amazon’s Athena now makes working with that data much simpler: it will run queries against S3 for you, so you don’t need to write the looping code yourself. Yes, we’ve become too lazy to write loops.

Athena uses SQL syntax, which should make database admins happy. Amazon will charge you for every byte that Athena churns through while looking for your answer. But don’t get too worried about the meter running out of control, because the price is only $5 per terabyte scanned. That’s roughly half a billionth of a cent per byte. It makes the penny candy stores look expensive.
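Here is a minimal boto3 sketch of running a query; the database, table, and result-bucket names are hypothetical:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table, and result-bucket names are hypothetical; results land in
# the S3 location you specify, and billing is per terabyte scanned.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
execution_id = query["QueryExecutionId"]

# Poll until the query finishes, then read the rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=execution_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]
    print(len(rows), "rows returned")
```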

Lambda@Edge

The original idea of a content delivery network was to speed up the delivery of simple files like JPG images and CSS files by pushing out copies to a vast array of content servers parked near the edges of the Internet. Amazon is taking this a step further by letting us push Node.js code out to these edges, where it will run and respond. Your code won’t sit on one central server waiting for requests to poke along the backbone from people around the world. It will clone itself, so it can respond in microseconds without being impeded by all that network latency.

Amazon will bill your code only when it’s running. You won’t need to set up separate instances or rent out full machines to keep the service up. It is currently in a closed test, and you must apply to get your code in their stack.

Snowball Edge

If you want some kind of physical control of your data, the cloud isn’t for you. The power and reassurance that comes from touching the hard drive, DVD-ROM, or thumb drive holding your data isn’t available to you in the cloud. Where is my data exactly? How can I get it? How can I make a backup copy? The cloud makes anyone who cares about these things break out in cold sweats.

The Snowball Edge is a box filled with data that can be delivered anywhere you want. It even has a shipping label that’s really an E-Ink display exactly like Amazon puts on a Kindle. When you want a copy of massive amounts of data that you’ve stored in Amazon’s cloud, Amazon will copy it to the box and ship the box to wherever you are. (The documentation doesn’t say whether Prime members get free shipping.)

Snowball Edge serves a practical purpose. Many developers have collected large blocks of data through cloud applications and downloading these blocks across the open internet is far too slow. If Amazon wants to attract large data-processing jobs, it needs to make it easier to get large volumes of data out of the system.

If you’ve accumulated an exabyte of data that you need somewhere else for processing, Amazon has a bigger version called Snowmobile that’s built into an 18-wheel truck complete with GPS tracking.

Oh, it’s worth noting that the boxes aren’t dumb storage boxes. They can run arbitrary Node.js code too so you can search, filter, or analyze … just in case.

Pinpoint

Once you’ve amassed a list of customers, members, or subscribers, there will be times when you want to push a message out to them. Perhaps you’ve updated your app or want to convey a special offer. You could blast an email to everyone on your list, but that’s a step above spam. A better solution is to target your message, and Amazon’s new Pinpoint tool offers the infrastructure to make that simpler.

You’ll need to integrate some code with your app. Once you’ve done that, Pinpoint helps you send out the messages when your users seem ready to receive them. Once you’re done with a so-called targeted campaign, Pinpoint will collect and report data about the level of engagement with your campaign, so you can tune your targeting efforts in the future.

Polly

Who gets the last word? Your app can, if you use Polly, the latest generation of speech synthesis. In goes text and out comes sound—sound waves that form words that our ears can hear, all the better to make audio interfaces for the internet of things.
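A minimal boto3 sketch; the text is arbitrary and “Joanna” is simply one of Polly’s stock voices:

```python
import boto3

polly = boto3.client("polly", region_name="us-east-1")

# Text in, MP3 out; "Joanna" is one of Polly's built-in voices.
response = polly.synthesize_speech(
    Text="Your package has shipped and should arrive on Thursday.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio arrives as a stream; write it to disk or pipe it to a speaker.
with open("message.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```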

Original article here.


standard

IoT 2016 in review: The 8 Biggest IoT developments of the year

2017-01-16 - By 

As we go into 2017 our IoT Analytics team is again evaluating the main IoT developments of the past year in the global “Internet of Things” arena. This article highlights some general IoT 2016 observations as well as our top 8 news stories, with a preview for the new year of opportunities and challenges for global IoT businesses. (For your reference, here is our 2015 IoT year in review article.)

In 2016 the main theme for IoT was the shift from hype to reality. While in 2015, most people only heard about IoT in the media or consumed some marketing blogs, 2016 was different. Many consumers and enterprises went out and started their own IoT endeavors or bought their own IoT devices. Both consumer IoT and enterprise IoT enjoyed record uptake, but also saw some major setbacks.

A. General IoT 2016 observations

A1. Consumer IoT

Millions of consumers bought their first IoT Device in 2016. For many of them this was Amazon Echo (see below for more details).

Image 1: The Amazon Echo Dot was a consumer IoT 2016 success (left hand side) while other devices didn’t always convince (e.g., Nest thermostat – right hand side)

Unfortunately many consumers also realized that marketing promises and reality are often still disparate. Cases of disappointed users are increasing (for example, a smart thermostat user who discovered that his thermostat was disconnected for a day).

Some companies were dissolved in 2016 (like the Smart Home Hub Revolv in April – causing many angry customers), others went bankrupt (like the smart watch maker Pebble in December) or didn’t even come to life at all (such as the augmented reality helmet startup Skully that enjoyed a lot of publicity, but filed for bankruptcy in August without having sold a single product).

 

A2. Enterprise IoT

On the enterprise/industrial side of things, IoT 2016 will go down as the year many firms got real about their first IoT pilot projects.

A general wake-up call came in September when a massive cybersecurity attack that involved IoT devices (mainly CCTV cameras) shut down DNS provider Dyn and with it its customers’ websites (e.g., AirBnB, Netflix and Twitter). While this kind of attack didn’t directly affect most IoT companies, its implications scared many IT and IoT decision-makers. As a result, many IoT discussions have now shifted towards cybersecurity solutions.

 

B. Top 8 IoT 2016 Stories

For us at IoT Analytics, the IoT Security Attack on Dyn servers qualifies as the #1 story of the year. Here are our top takeaways from IoT 2016:

1.    Biggest overall story: IoT Security attack on Dyn servers

The Dyn DDoS attack was the first large-scale cybersecurity attack that involved IoT devices – Dyn estimates that 100,000 infected IoT devices were involved. As a first of its kind, it sent shockwaves through corporate IT and IoT.

Chinese CCTV system manufacturer, Hangzhou Xiongmai Technology Company, was at the core of the attack.  Its cameras (among others) were infected with the so-called Mirai malware. This allowed the hackers to connect to the infected IoT devices and launch a flood of well-timed massive requests on Dyn servers – which led to the shutdown of their services.

2.    Biggest Consumer IoT Success: Amazon Echo

Launched in June 2015, the Amazon Echo Smart Home Voice Control was undoubtedly the consumer IoT success story of the year. Recent data provided by Amazon reveals that device sales exploded by 9x (year-on-year vs. last Christmas).

Our app-based Smart Home models confirm this trend suggesting that Amazon sold more than 1 million Echo devices in December 2016 and close to 4 million devices throughout the whole of 2016.

With these gains, Amazon has suddenly become the #1 Smart Home Hub and is leading the paradigm shift towards a voice-controlled automated home. Google jumped on the same train in October by releasing Google Home; Microsoft Home Hub is expected to follow in 2017.

3.    Most overcrowded space: IoT Platforms

When we launched our coverage of IoT Platforms in early 2015, little did we know that the topic would soon become the hottest IoT area. Our count of platform providers in May 2016 showed 360 platforms; our internal count is now well over 400. IoT Platforms also features prominently in the 2016 Gartner Hype Cycle.

Companies have realized that the value of IoT lies in the data and that those that manage this data will be the ones capturing a large chunk of this value. Hence, everyone is building an IoT platform.

The frightening part is not necessarily the number but rather the fact that the sales pitches of the platform providers all sound like this: “We are the only true end-2-end platform which is device-agnostic and completely secure”.

4.    Largest M&A Deal: Qualcomm/NXP

While we can see a massive expansion of global IoT software/analytics and platform offerings, we are also witnessing a consolidation among larger IoT hardware providers – notably in the chip sector. In October 2016, US-based chipmaker Qualcomm announced it would buy NXP, the leader in connected-car chips, for $39B, making it the biggest-ever deal in the semiconductor industry.

Other large hardware/semiconductor acquisitions and mergers during IoT 2016 include Softbank/ARM ($32B) and TDK/Invensense ($1.3B).

5.    Most discussed M&A Deal: Cisco/Jasper

In February, Cisco announced that it would buy IoT Platform provider Jasper Technologies for $1.4B. Journalists celebrated the acquisition as a logical next step for Cisco’s “Internet of Everything” story – combining Cisco’s enterprise routers with Jasper’s backend software for network operators and hopefully helping Cisco put an end to declining hardware sales.

6.    Largest startup funding: Sigfox

Sigfox already made it into our 2015 IoT news list with their $100M Series D round. Their momentum and the promise of a global Low Power Wide Area Network led to an even larger funding round in 2016. In November, the French-based company received a record $160M in a Series E that involved Intel Capital and Air Liquide among others.

Another notable startup funding during IoT 2016 involved the IoT Platform C3IoT. The Redwood City based company received $70M in their Series D funding.

7.    Investment story of the year: IoT Stocks

For the first time, IoT stocks outperformed the Nasdaq significantly. The IoT & Industry 4.0 stock fund (traded in Germany under ISIN: DE000LS9GAC8) is up 17.5% year-on-year, beating the Nasdaq, which is up 9.6% in the same time frame. Cloud service providers Amazon and Microsoft are up 14% for the year, while IoT Platform provider PTC is up 35%. Even communication hardware firm Sierra Wireless started rebounding in Q4/2016.

Some of the IoT 2016 outperformance is due to an increasing number of IoT acquisitions (e.g., TDK/Invensense). At the beginning of 2016 we asked if the underperformance of IoT stocks in 2015 was an opportunity in 2016. In hindsight, the answer to that question is “Yes”. Will the trend continue in 2017?

8.    Most important government initiative: EU Data Protection policy

In May, the European Union passed the General Data Protection Regulation (“GDPR”) which will come into effect on 25 May 2018. The new law has a wide range of implications for IoT technology vendors and users. Among other aspects:

  • Security breaches must be reported
  • Each IoT user must provide explicit consent that their data may be processed
  • Each user must be given the right to object to automated decision making
  • Data coming from IoT Devices used by children may not be processed

From a security and privacy policy point of view the law is a major step forward. IoT technology providers working in Europe and around the world now need to revisit their data governance, data privacy and data security practices and policies.

C. What to expect in 2017:

  • War for IoT platform leadership. The large IoT platform providers are gearing up for the war for IoT (platform) leadership. After years of organic development, several larger vendors started buying smaller platform providers in 2016, mainly to close existing technology gaps (e.g., GE-Bitstew, SAP-Plat.one, Microsoft-Solair)
  • War for IoT connectivity leadership. NB-IoT will finally be introduced in 2017. The new low-power standard that is heavily backed by major telco technology providers will go head-to-head with existing LPWAN technology such as Sigfox or LoRa.
  • AR/VR becoming mainstream. IoT Platform providers PTC (Vuforia) and Microsoft (Hololens) have already showcased a vast range of Augmented Reality / Virtual Reality use cases. We should expect the first real-life use cases emerging in 2017.
  • Even more reality and less hype. The attention is shifting from vendor/infrastructure topics, such as what the next generation of platforms or connectivity standards will look like, towards actual implementations and use cases. While there are still major developments, the general IoT audience will start taking some of these technology advancements for granted and focus on where the value lies. We continue to follow that story and will update our list of IoT projects.

Our IoT coverage in 2017: Subscribe to our newsletter for continued coverage and updates. In 2017, we will keep our focus on important IoT topics such as IoT Platforms, Security and Industry 4.0 with plenty of new reports due in Q1/2017. If you are interested in a comprehensive IoT coverage you may contact us for an enterprise subscription to our complete IoT research content.

Much success for 2017 from our IoT Analytics Team to yours!

 

Original article here.


standard

CRM is Shifting to SaaS in the Cloud

2017-01-07 - By 

The CRM market serving the large enterprise is mature. The market has consolidated in the past five years. For example, Oracle has built its customer experience portfolio primarily by acquisition. SAP, like Oracle, aims to support end-to-end customer experiences and has made acquisitions — notably, Hybris in 2013 — to bolster its capabilities. Salesforce made a series of moves to strengthen the Service Cloud. It used this same tactic to broaden its CRM footprint with the acquisition of Demandware for eCommerce in 2016.

These acquisitions broaden and deepen the footprints of large vendors, but these vendors must spend time integrating acquired products, offering common user experiences as well as common business analyst and administrator tooling — priorities that can conflict with core feature development.

What this means is that these CRM vendors increasingly offer broader and deeper capabilities which bloat their footprint and increase their complexity with features that many users can’t leverage. At the same time, new point solution vendors are popping up at an unprecedented rate and are delivering modern interfaces and mobile-first strategies that address specific business problems such as sales performance management, lead to revenue management, and digital customer experience.

The breadth and depth of CRM capabilities available from vendor solutions makes it increasingly challenging to be confident of your CRM choice. In the Forrester Wave: CRM Suites For Enterprise Organizations, Q4 2016, we pinpoint the strengths of leading vendors that offer solutions suitable for enterprise CRM teams. Here are some of our key findings:

  • The shift to software-as-a-service (SaaS) is well underway. Forrester data shows that one-third of enterprises are using SaaS CRM, and another third complement their existing solutions with SaaS. We expect SaaS to become the primary deployment model for CRM and that newer SaaS solutions will replace most on-premises installations in the next five years.
  • Intelligence takes center stage. Large organizations that manage huge volumes of data struggle to pinpoint optimal offers, discount levels, product bundles, and next best steps for customer engagement. They increasingly turn to analytics to uncover insight and prescribe the right action for the business user to take. Today, leading vendors offer a range of packaged capabilities to infuse decisioning in customer-facing interactions.
  • Vendors increasingly invest in vertical editions. Horizontal CRM can only take you so far, as different industries have different requirements for engaging with customers. CRM vendors increasingly offer solutions — templates, common process flows, data model extensions, and UI labels — pertinent to specific industries.
  • Customer success rises to the top. In a mature market, you have to dig deep to find real differences between vendor offerings. CRM success depends on the right choice of consulting partners to implement and integrate your solution. CRM vendors are maturing their consulting services, deeply investing in growing regional and global strategic services partners, and investing in customer success to properly onboard customers and actively manage customer relationships. This preserves a company’s revenue stream by reducing churn, expands revenue by increasing customer lifetime value, and can influence new sales via customer advocacy efforts.

Original article here.


standard

Cloud market valued at $148bn for 2016 & growing 25% annually

2017-01-05 - By 

Operator and vendor revenues across the main cloud services and infrastructure market segments hit $148 billion (£120.5bn) in 2016, growing at 25% annually, according to the latest note from analyst firm Synergy Research.

Infrastructure as a service (IaaS) and platform as a service (PaaS) experienced the highest growth rates at 53%, followed by hosted private cloud infrastructure services, at 35%, and enterprise SaaS, at 34%. Amazon Web Services (AWS) and Microsoft lead the way in IaaS and PaaS, with IBM and Rackspace on top for hosted private cloud.

In the four quarters ending September (Q3) 2016, total spend on hardware and software to build cloud infrastructure exceeded $65bn, according to the researchers. Spend on private cloud accounts for more than half of the overall total, but public cloud spend is growing much more rapidly. The note also argues unified comms as a service (UCaaS) is growing ‘steadily’.

“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Jeremy Duke, Synergy Research Group founder and chief analyst in a statement. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side.

“Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead,” Duke added.

The most recent examination of the cloud infrastructure market by Synergy back in August argued AWS, Microsoft, IBM and Google continue to grow more quickly than their smaller competitors and, between them, own more than half of the global cloud infrastructure service market. 

Original article here.


standard

All the Big Players Are Remaking Themselves Around AI

2017-01-02 - By 

Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others to access vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not quite successfully. Still, it’s this path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.

 


standard

Tech trends for 2017: more AI, machine intelligence, connected devices and collaboration

2016-12-30 - By 

The end of year or beginning of year is always a time when we see many predictions and forecasts for the year ahead. We often publish a selection of these to show how tech-based innovation and economic development will be impacted by the major trends.

A number of trends reports and articles have been published – ranging from investment houses to research firms and even innovation agencies. In this article we present headlines and highlights of some of these trends – from Gartner, GP Bullhound, Nesta and Ovum.

Artificial intelligence will have the greatest impact

GP Bullhound released its 52-page research report, Technology Predictions 2017, which says artificial intelligence (AI) is poised to have the greatest impact on the global technology sector. It will experience widespread consumer adoption, particularly as virtual personal assistants such as Apple Siri and Amazon Alexa grow in popularity as well as automation of repetitive data-driven tasks within enterprises.

Online streaming and e-sports are also significant market opportunities in 2017 and there will be a marked growth in the development of content for VR/AR platforms. Meanwhile, automated vehicles and fintech will pose longer-term growth prospects for investors.

The report also examines the growth of Europe’s unicorn companies. It highlights the potential for several firms to reach a $10 billion valuation and become ‘decacorns’, including BlaBlaCar, Farfetch, and HelloFresh.

Alec Dafferner, partner, GP Bullhound, commented, “The technology sector has faced up to significant challenges in 2016, from political instability through to greater scrutiny of unicorns. This resilience and the continued growth of the industry demonstrate that there remain vast opportunities for investors and entrepreneurs.”

Big data and machine learning will be disruptors

Advisory firm Ovum says big data continues to be the fastest-growing segment of the information management software market. It estimates the big data market will grow from $1.7bn in 2016 to $9.4bn by 2020, comprising 10 percent of the overall market for information management tooling. Its 2017 Trends to Watch: Big Data report highlights that while the breakout use case for big data in 2017 will be streaming, machine learning will be the factor that disrupts the landscape the most.

Key 2017 trends:

  • Machine learning will be the biggest disruptor for big data analytics in 2017.
  • Making data science a team sport will become a top priority.
  • IoT use cases will push real-time streaming analytics to the front burner.
  • The cloud will sharpen Hadoop-Spark ‘co-opetition’.
  • Security and data preparation will drive data lake governance.

Intelligence, digital and mesh

In October, Gartner issued its top 10 strategic technology trends for 2017, and recently outlined the key themes – intelligent, digital, and mesh – in a webinar.  It said that autonomous cars and drone transport will have growing importance in the year ahead, alongside VR and AR.

“It’s not about just the IoT, wearables, mobile devices, or PCs. It’s about all of that together,” said David Cearley, vice president and Gartner Fellow, according to hiddenwires magazine, on how ‘intelligence everywhere’ will put the consumer in charge. “We need to put the person at the centre. Ask yourself what devices and service capabilities they have available to them.”

“We need to then look at how you can deliver capabilities across multiple devices to deliver value. We want systems that shift from people adapting to technology to having technology and applications adapt to people.  Instead of using forms or screens, I tell the chatbot what I want to do. It’s up to the intelligence built into that system to figure out how to execute that.”

Gartner’s view is that the following will be the key trends for 2017:

  • Artificial intelligence (AI) and machine learning: systems that learn, predict, adapt and potentially operate autonomously.
  • Intelligent apps: using AI, there will be three areas of focus — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces.
  • Intelligent things, as they evolve, will shift from stand-alone IoT devices to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks.
  • Virtual and augmented reality: VR can be used for training scenarios and remote experiences. AR will enable businesses to overlay graphics onto real-world objects, such as hidden wires on the image of a wall.
  • Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.
  • Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming industry operating models in industries such as music distribution, identity verification and title registry.
  • Conversational systems will shift from a model where people adapt to computers to one where the computer ‘hears’ and adapts to a person’s desired outcome.
  • Mesh and app service architecture is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs (application programming interfaces) and events to deliver modular, flexible and dynamic solutions.
  • Digital technology platforms: every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the internet of things and business ecosystems.
  • Adaptive security architecture: multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

The real-world vision of these tech trends

UK innovation agency Nesta also offers a vision for the year ahead, a mix of the plausible and the more aspirational, based on real-world examples of areas that will be impacted by these tech trends:

  • Computer says no: the backlash: the next big technological controversy will be about algorithms and machine learning, which increasingly make decisions that affect our daily lives; in the coming year, the backlash against algorithmic decisions will begin in earnest, with technologists being forced to confront the effects of aspects like fake news, or other events caused directly or indirectly by the results of these algorithms.
  • The Splinternet: 2016’s seismic political events and the growth of domestic and geopolitical tensions, means governments will become wary of the internet’s influence, and countries around the world could pull the plug on the open, global internet.
  • A new artistic approach to virtual reality: as artists blur the boundaries between real and virtual, the way we create and consume art will be transformed.
  • Blockchain powers a personal data revolution: there is growing unease at the way many companies like Amazon, Facebook and Google require or encourage users to give up significant control of their personal information; 2017 will be the year when the blockchain-based hardware, software and business models that offer a viable alternative reach maturity, ensuring that it is not just companies but individuals who can get real value from their personal data.
  • Next generation social movements for health: we’ll see more people uniting to fight for better health and care, enabled by digital technology, and potentially leading to stronger engagement with the system; technology will also help new social movements to easily share skills, advice and ideas, building on models like Crohnology where people with Crohn’s disease can connect around the world to develop evidence bases and take charge of their own health.
  • Vegetarian food gets bloodthirsty: the past few years have seen growing demand for plant-based food to mimic meat; the rising cost of meat production (expected to hit $5.2 billion by 2020) will drive kitchens and laboratories around the world to create a new wave of ‘plant butchers’, who develop vegan-friendly meat substitutes that would fool even the most hardened carnivore.
  • Lifelong learners: adult education will move from the bottom to the top of the policy agenda, driven by the winds of automation eliminating many jobs from manufacturing to services and the professions; adult skills will be the keyword.
  • Classroom conundrums, tackled together: there will be a future-focused rethink of mainstream education, with collaborative problem solving skills leading the charge, in order to develop skills beyond just coding – such as creativity, dexterity and social intelligence, and the ability to solve non-routine problems.
  • The rise of the armchair volunteer: volunteering from home will become just like working from home, and we’ll even start ‘donating’ some of our everyday data to citizen science to improve society as well; an example of this trend was when British Red Cross volunteers created maps of the Ebola crisis in remote locations from home.

In summary

It’s clear that there is an expectation that the use of artificial intelligence and machine learning platforms will proliferate in 2017 across multiple business, social and government spheres. This will be supported with advanced tools and capabilities like virtual reality and augmented reality. Together, there will be more networks of connected devices, hardware, and data sets to enable collaborative efforts in areas ranging from health to education and charity. The Nesta report also suggests that there could be a reality check, with a possible backlash against the open internet and the widespread use of personal data.

Original article here.


standard

IoT + Big Data Means 92% Of Everything We Do Will Be In The Cloud

2016-12-24 - By 

You don’t need Sherlock Holmes to tell you that cloud computing is on the rise, and that cloud traffic keeps going up. However, it is enlightening to see the degree by which it is increasing, which is, in essence, about to quadruple in the next few years. By that time, 92 percent of workloads will be processed by cloud data centers, versus only eight percent being processed by traditional data centers.

Cisco, which does a decent job of measuring such things, just released estimates showing that cloud traffic is likely to rise 3.7-fold by 2020, increasing from 3.9 zettabytes (ZB) per year in 2015 (the latest full year for which data is available) to 14.1 ZB per year by 2020.

The big data and associated Internet of Things are a big part of this growth, the study’s authors state. By 2020, database, analytics and IoT workloads will account for 22% of total business workloads, compared to 20% in 2015. The total volume of data generated by IoT will reach 600 ZB per year by 2020, 275 times higher than projected traffic going from data centers to end users/devices (2.2 ZB); 39 times higher than total projected data center traffic (15.3 ZB).

Public cloud is growing faster than private cloud, the survey also finds. By 2020, 68% (298 million) of cloud workloads will be in public cloud data centers, up from 49% (66.3 million) in 2015. During the same time period, 32% (142 million) of cloud workloads will be in private cloud data centers, down from 51% (69.7 million) in 2015.

As the Cisco team explains it, much of the shift to public cloud will likely be part of hybrid cloud strategies. For example, “cloud bursting is an example of hybrid cloud where daily computing requirements are handled by a private cloud, but for sudden spurts of demand the additional traffic demand — bursting — is handled by a public cloud.”

The Cisco estimates also show that while Software as a Service (SaaS, for online applications) will keep soaring, there will be less interest in Infrastructure as a Service (IaaS, for online servers, capacity, storage). By 2020, 74% of total cloud workloads will be software-as-a-service (SaaS) workloads, up from 65% today. Platform as a Service (PaaS, for development tools, databases, middleware) will see its share edge down slightly — eight percent of total cloud workloads will be PaaS workloads, down from nine percent in 2015. IaaS workloads will total 17% of cloud workloads, down from 26%.

The Cisco analysts explain that the lower percentage growth for IaaS may be attributable to the growing shift away from private cloud to public cloud providers. For starters, IaaS was far less disruptive to the business — a rearrangement of data center resources, if you will. As SaaS offerings gain in sophistication, those providers may offer IaaS support behind the scenes. “In the private cloud, initial deployments were predominantly IaaS. Test and development types of cloud services were the first to be used in the enterprise; cloud was a radical change in deploying IT services, and this use was a safe and practical initial use of private cloud for enterprises. It was limited, and it did not pose a risk of disrupting the workings of IT resources in the enterprise. As trust in adoption of SaaS or mission-critical applications builds over time with technology enablement in processing power, storage advancements, memory advancements, and networking advancements, we foresee the adoption of SaaS type applications to accelerate over the forecast period, while shares of IaaS and PaaS workloads decline.”

On the consumer side, video and social networking will lead the increase in consumer workloads. By 2020, consumer cloud storage traffic per user will be 1.7 GB per month, compared to 513 MB per month in 2015. By 2020, video streaming workloads will account for 34% of total consumer workloads, compared to 29% in 2015. Social networking workloads will account for 24% of total consumer workloads, up from 20 percent in 2015.  In the next four years, 59% (2.3 billion users) of the consumer Internet population will use personal cloud storage up from 47% (1.3 billion users) in 2015.

Original article here.


standard

Artificial Intelligence, Hybrid Cloud & IoT Are High on Amazon’s Agenda

2016-12-24 - By 

At the AWS re:Invent event, Amazon announced a host of new services that highlight its commitment to enterprises. Andy Jassy, CEO of AWS, emphasized innovation in the areas of artificial intelligence, analytics, and hybrid cloud.

Amazon has been using deep learning and artificial intelligence in its retail business for enhancing the customer experience. The company claims that it has thousands of engineers working on artificial intelligence to improve search and discovery, fulfillment and logistics, product recommendations, and inventory management. Amazon is now bringing the same expertise to the cloud to expose APIs that developers can consume to build intelligent applications. Dubbed Amazon AI, the new service offers powerful AI capabilities such as image analysis, text-to-speech conversion, and natural language processing.

Amazon Rekognition is the rich image analysis service that can identify various attributes of an image. Amazon Polly is a service that accepts a text string and returns an MP3 audio file containing the speech. With support for 47 different voices in 23 different languages, the service exposes rich cognitive speech capabilities. Amazon Lex is the new service for natural language processing and automatic speech recognition. It is the same service that powers Alexa and Amazon Echo. The service converts text or voice into intents and actions that developers can parse and act upon.
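As a rough sketch of how a developer might call Lex at runtime — the bot name, alias, and user ID below are hypothetical, and the bot would first have to be built and published:

```python
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

# Bot name, alias, and user ID are hypothetical placeholders.
response = lex.post_text(
    botName="PizzaOrderingBot",
    botAlias="prod",
    userId="customer-1234",
    inputText="I'd like a large pepperoni pizza",
)

# Lex returns the recognised intent, any extracted slots, and the dialog state
# (e.g. ElicitSlot while it still needs information, ReadyForFulfillment when done).
print(response["dialogState"], response.get("intentName"), response.get("slots"))
```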

Amazon is also investing in MXNet, a deep learning framework that can run in a variety of environments. Apart from this, Amazon is also optimizing EC2 images to run popular deep learning frameworks including CNTK, TensorFlow, and Theano.

In the last decade, Amazon has added many services and features to its platform. While customers appreciate the pace of innovation, first-time users often complain about the overwhelming number of options and choices. Even to launch a simple virtual machine that runs a blog or a development environment in EC2, users may have to choose from a variety of options. To simplify the experience of launching non-mission-critical workloads in EC2, AWS has announced a new service called Amazon Lightsail. Customers can launch a VM with just a few clicks without worrying about the complex choices that they need to make. When they get familiar with EC2, they can start integrating with other services such as EBS and Elastic IP. Starting at $5 a month, this is the cheapest compute service available in AWS. Amazon calls Lightsail the express mode for EC2, as it dramatically reduces the launch time of a VM.
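A minimal boto3 sketch of launching a Lightsail VM; the blueprint and bundle IDs shown are illustrative, and the live lists come from get_blueprints() and get_bundles():

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# The blueprint and bundle IDs are illustrative; query the current lists
# with get_blueprints() and get_bundles() before launching.
lightsail.create_instances(
    instanceNames=["blog-1"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",  # pre-built application image
    bundleId="nano_1_0",      # assumed to correspond to the entry-level plan
)
```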

Amazon Lightsail competes with the VPS providers such as DigitalOcean and Linode. The sweet spot of these vendors has been developers and non-technical users who need a virtual private server to run a workload in the cloud. With Amazon Lightsail, AWS wants to attract developers, small and medium businesses, and digital agencies that typically use a VPS service for their needs.

On the analytics front, Amazon is adding a new interactive, serverless query service called Amazon Athena that can be used to retrieve data stored in Amazon S3. The service supports complex SQL queries, including joins, to return data from Amazon S3. Customers can use custom metadata to perform complex queries. Amazon Athena’s pricing is based on a per-query model, charged by the amount of data each query scans.

Last month, AWS and VMware partnered to bring hybrid cloud capabilities to customers. With this, customers can run and manage workloads in the cloud, seamlessly from existing VMware tools.

Amazon claims that the customers will be able to use VMware’s virtualization and management software to seamlessly deploy and manage VMware workloads across all of their on-premises and AWS environments. This offering allows customers to leverage their existing investments in VMware skills and tooling to take advantage of the flexibility of the AWS Cloud.

Pat Gelsinger, CEO of VMware was on stage with Andy Jassy talking about the value that this partnership brings to customers.

In a surprising move, Amazon is making its serverless computing framework, AWS Lambda, available outside of its cloud environment. Extending Lambda to connected devices, AWS has announced AWS Greengrass – an embedded Lambda compute environment that can be installed in IoT devices and hubs. It delivers local compute, storage, and messaging infrastructure in environments that demand offline access. Developers can use the simple Lambda programming model to develop applications for both offline and online scenarios. Amazon Greengrass Core is designed to run on hubs and gateways, while the Greengrass runtime will power low-end, resource-constrained devices.
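
Because Greengrass reuses the Lambda programming model, local code looks like an ordinary Lambda handler. The sketch below is a hypothetical temperature-monitoring function: the event shape, field names, and threshold are assumptions for illustration, not part of the Greengrass specification.

```python
def handler(event, context):
    """Runs locally on the Greengrass core, whether the device is online or offline.

    Assumes the device publishes events shaped like {"sensor": "pump-1", "celsius": 81.5}.
    """
    reading = event.get("celsius")
    if reading is not None and reading > 80:
        # React locally instead of waiting for a round trip to the cloud.
        return {"action": "shut_down_pump", "sensor": event.get("sensor"), "reading": reading}
    return {"action": "none", "reading": reading}
```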

Extending the hybrid scenarios to industrial IoT, Amazon has also announced a new appliance called Snowball Edge that runs Greengrass Core. This appliance is expected to be deployed in environments that generate extensive offline data. It exposes an S3-compatible endpoint so that developers can use the same ingestion API as the cloud. Since the device runs Lambda, developers can create functions that respond to events locally. Amazon Snowball Edge ships with 100 TB of capacity, high-speed Ethernet, Wi-Fi, and 3G cellular connectivity. When the ingestion process is complete, customers send the appliance back to AWS, where the data is uploaded.

Pushing the limits of data migration to the cloud, Amazon is also launching a specialized truck called AWS Snowmobile that can move exabytes of data to AWS. The truck carries a 48-foot-long container that can hold up to 100 petabytes of data. Customers call AWS to open the vestibule of the truck, then simply plug in the fiber and power cables to start loading the data. Amazon estimates that the loading and unloading process takes about three months on each side.

Apart from these services, Andy Jassy also announced a slew of enhancements to Amazon EC2 and RDS.

Original article here.


standard

Gartner’s Top 10 Strategic Technology Trends for 2017

2016-12-05 - By 

Artificial intelligence, machine learning, and smart things promise an intelligent future.

Today, a digital stethoscope has the ability to record and store heartbeat and respiratory sounds. Tomorrow, the stethoscope could function as an “intelligent thing” by collecting a massive amount of such data, relating the data to diagnostic and treatment information, and building an artificial intelligence (AI)-powered doctor assistance app to provide the physician with diagnostic support in real-time. AI and machine learning increasingly will be embedded into everyday things such as appliances, speakers and hospital equipment. This phenomenon is closely aligned with the emergence of conversational systems, the expansion of the IoT into a digital mesh and the trend toward digital twins.

Three themes — intelligent, digital, and mesh — form the basis for the Top 10 strategic technology trends for 2017, announced by David Cearley, vice president and Gartner Fellow, at Gartner Symposium/ITxpo 2016 in Orlando, Florida. These technologies are just beginning to break out of an emerging state and stand to have substantial disruptive potential across industries.

Intelligent

AI and machine learning have reached a critical tipping point and will increasingly augment and extend virtually every technology-enabled service, thing or application.  Creating intelligent systems that learn, adapt and potentially act autonomously, rather than simply execute predefined instructions, is the primary battleground for technology vendors through at least 2020.

Trend No. 1: AI & Advanced Machine Learning

AI and machine learning (ML), which include technologies such as deep learning, neural networks and natural-language processing, can also encompass more advanced systems that understand, learn, predict, adapt and potentially operate autonomously. Systems can learn and change future behavior, leading to the creation of more intelligent devices and programs.  The combination of extensive parallel processing power, advanced algorithms and massive data sets to feed the algorithms has unleashed this new era.

In banking, you could use AI and machine-learning techniques to model current real-time transactions, as well as predictive models of transactions based on their likelihood of being fraudulent. Organizations seeking to drive digital innovation with this trend should evaluate a number of business scenarios in which AI and machine learning could drive clear and specific business value and consider experimenting with one or two high-impact scenarios.
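
As a deliberately tiny illustration of that fraud-scoring idea, the sketch below trains a logistic regression on a handful of made-up transactions; the features, values, and labels are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: amount ($), merchant risk score, seconds since last transaction.
X_train = np.array([
    [25.0,   0.1, 3600],
    [900.0,  0.8,   30],
    [40.0,   0.2, 7200],
    [1500.0, 0.9,   12],
    [60.0,   0.3, 5400],
    [1200.0, 0.7,   20],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = known fraudulent

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score an incoming transaction in real time.
incoming = np.array([[700.0, 0.75, 45]])
print("Fraud probability: %.2f" % model.predict_proba(incoming)[0, 1])
```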

Trend No. 2: Intelligent Apps

Intelligent apps, which include technologies like virtual personal assistants (VPAs), have the potential to transform the workplace by making everyday tasks easier (prioritizing emails) and its users more effective (highlighting important content and interactions). However, intelligent apps are not limited to new digital assistants – every existing software category from security tooling to enterprise applications such as marketing or ERP will be infused with AI enabled capabilities.  Using AI, technology providers will focus on three areas — advanced analytics, AI-powered and increasingly autonomous business processes and AI-powered immersive, conversational and continuous interfaces. By 2018, Gartner expects most of the world’s largest 200 companies to exploit intelligent apps and utilize the full toolkit of big data and analytics tools to refine their offers and improve customer experience.

Trend No. 3: Intelligent Things

New intelligent things generally fall into three categories: robots, drones and autonomous vehicles. Each of these areas will evolve to impact a larger segment of the market and support a new phase of digital business but these represent only one facet of intelligent things.  Existing things including IoT devices will become intelligent things delivering the power of AI enabled systems everywhere including the home, office, factory floor, and medical facility.

As intelligent things evolve and become more popular, they will shift from a stand-alone to a collaborative model in which intelligent things communicate with one another and act in concert to accomplish tasks. However, nontechnical issues such as liability and privacy, along with the complexity of creating highly specialized assistants, will slow embedded intelligence in some scenarios.

Digital

The lines between the digital and physical world continue to blur creating new opportunities for digital businesses.  Look for the digital world to be an increasingly detailed reflection of the physical world and the digital world to appear as part of the physical world creating fertile ground for new business models and digitally enabled ecosystems.

Trend No. 4: Virtual & Augmented Reality

Virtual reality (VR) and augmented reality (AR) transform the way individuals interact with each other and with software systems, creating an immersive environment.  For example, VR can be used for training scenarios and remote experiences. AR, which enables a blending of the real and virtual worlds, means businesses can overlay graphics onto real-world objects, such as hidden wires on the image of a wall.  Immersive experiences with AR and VR are reaching tipping points in terms of price and capability but will not replace other interface models.  Over time, AR and VR will expand beyond visual immersion to include all human senses.  Enterprises should look for targeted applications of VR and AR through 2020.

Trend No. 5: Digital Twin

Within three to five years, billions of things will be represented by digital twins, a dynamic software model of a physical thing or system. Using physics data on how the components of a thing operate and respond to the environment, as well as data provided by sensors in the physical world, a digital twin can be used to analyze and simulate real-world conditions, respond to changes, improve operations and add value. Digital twins function as proxies for the combination of skilled individuals (e.g., technicians) and traditional monitoring devices and controls (e.g., pressure gauges). Their proliferation will require a cultural change, as those who understand the maintenance of real-world things collaborate with data scientists and IT professionals.  Digital twins of physical assets combined with digital representations of facilities and environments as well as people, businesses and processes will enable an increasingly detailed digital representation of the real world for simulation, analysis and control.

Trend No. 6: Blockchain

Blockchain is a type of distributed ledger in which value exchange transactions (in bitcoin or other tokens) are sequentially grouped into blocks.  Blockchain and distributed-ledger concepts are gaining traction because they hold the promise of transforming operating models in industries such as music distribution, identity verification and title registry.  They promise a model to add trust to untrusted environments and reduce business friction by providing transparent access to the information in the chain.  While there is a great deal of interest, the majority of blockchain initiatives are in alpha or beta phases and significant technology challenges exist.
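
To make the idea of transactions being "sequentially grouped into blocks" concrete, here is a minimal Python sketch; it omits the consensus, proof-of-work, and networking layers that real distributed ledgers depend on.

```python
import hashlib
import json
import time

def make_block(transactions, previous_hash):
    """Group value-exchange transactions into a block tied to its predecessor."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    # The block's hash depends on its contents and on the previous block,
    # so tampering with history would change every later hash in the chain.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block([], previous_hash="0" * 64)
block_1 = make_block([{"from": "alice", "to": "bob", "amount": 5}], genesis["hash"])
print(block_1["previous_hash"] == genesis["hash"])  # True: the chain links up
```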

Mesh

The mesh refers to the dynamic connection of people, processes, things and services supporting intelligent digital ecosystems.  As the mesh evolves, the user experience fundamentally changes and the supporting technology and security architectures and platforms must change as well.

Trend No. 7: Conversational Systems

Conversational systems can range from simple informal, bidirectional text or voice conversations such as an answer to “What time is it?” to more complex interactions such as collecting oral testimony from crime witnesses to generate a sketch of a suspect.  Conversational systems shift from a model where people adapt to computers to one where the computer “hears” and adapts to a person’s desired outcome.  Conversational systems do not use text/voice as the exclusive interface but enable people and machines to use multiple modalities (e.g., sight, sound, tactile, etc.) to communicate across the digital device mesh (e.g., sensors, appliances, IoT systems).

Trend No. 8: Mesh App and Service Architecture

The intelligent digital mesh will require changes to the architecture, technology and tools used to develop solutions. The mesh app and service architecture (MASA) is a multichannel solution architecture that leverages cloud and serverless computing, containers and microservices as well as APIs and events to deliver modular, flexible and dynamic solutions.  Solutions ultimately support multiple users in multiple roles using multiple devices and communicating over multiple networks. However, MASA is a long term architectural shift that requires significant changes to development tooling and best practices.

Trend No. 9: Digital Technology Platforms

Digital technology platforms are the building blocks for a digital business and are necessary to break into digital. Every organization will have some mix of five digital technology platforms: Information systems, customer experience, analytics and intelligence, the Internet of Things and business ecosystems. In particular new platforms and services for IoT, AI and conversational systems will be a key focus through 2020.   Companies should identify how industry platforms will evolve and plan ways to evolve their platforms to meet the challenges of digital business.

Trend No. 10: Adaptive Security Architecture

The evolution of the intelligent digital mesh and digital technology platforms and application architectures means that security has to become fluid and adaptive. Security in the IoT environment is particularly challenging. Security teams need to work with application, solution and enterprise architects to consider security early in the design of applications or IoT solutions.  Multilayered security and use of user and entity behavior analytics will become a requirement for virtually every enterprise.

Original article here.


standard

Cloud compute pricing bakeoff: Google vs. AWS vs. Microsoft Azure

2016-12-04 - By 

Like everything in enterprise technology, pricing can be a bit complicated. Here’s an analysis from RightScale looking at how discounts alter the cloud pricing equation. Google comes out cheapest in most scenarios.

With Amazon Web Services hosting its annual conference this week, talk about the price for performance and agility equation will be everywhere.

With AWS re:Invent kicking off this week, the largest cloud service provider has been busy cutting prices on various instances. Rest assured that Google and Microsoft are likely to toss in their own price cuts as AWS speaks to its base.

But the cloud pricing equation is getting complicated for compute instances. Not so shockingly, these price discussions have to include discounts. Like everything in enterprise technology, there’s the street price and your price. Comparing the cloud providers on pricing is tricky given Microsoft, Google, and AWS all have different approaches to discounts.


Fortunately, RightScale on Monday will outline a study on cloud compute prices. Generally speaking, AWS won’t be your cheapest option for compute. AWS typically lands in the middle between Microsoft Azure and Google Cloud.

The bottom line is that AWS uses reserved instances in one-year and three-year terms to offer discounts. Microsoft requires an enterprise agreement for its Azure discounts. Google has sustained usage discounts that are relatively easy to follow.
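
One way to see how the three discount models diverge is to compute the effective monthly cost under each scheme. The sketch below does exactly that; every price and discount rate is a placeholder assumption rather than a published rate.

```python
HOURS_PER_MONTH = 730

def reserved_instance_cost(on_demand_hourly, discount=0.40):
    # AWS-style: commit to a one- or three-year term in exchange for a lower rate.
    return on_demand_hourly * (1 - discount) * HOURS_PER_MONTH

def sustained_use_cost(on_demand_hourly, usage_fraction=1.0, max_discount=0.30):
    # Google-style: the discount is applied automatically as monthly usage grows.
    return on_demand_hourly * (1 - max_discount * usage_fraction) * HOURS_PER_MONTH

def enterprise_agreement_cost(on_demand_hourly, negotiated_discount=0.25):
    # Azure-style: the rate depends on the level of the enterprise agreement.
    return on_demand_hourly * (1 - negotiated_discount) * HOURS_PER_MONTH

list_price = 0.10  # hypothetical on-demand $/hour for a comparable instance
print("Reserved instance:    $%.2f/month" % reserved_instance_cost(list_price))
print("Sustained use:        $%.2f/month" % sustained_use_cost(list_price))
print("Enterprise agreement: $%.2f/month" % enterprise_agreement_cost(list_price))
```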

Overall, RightScale found that Google will be cheapest in most scenarios because sustained usage discounts are automatically applied. Among the key takeaways:

  • If you need solid state drive performance instead of attached storage, Google will charge you a premium.
  • Azure matches or beats AWS for on-demand pricing consistently.
  • AWS won’t be the cheapest alternative in many scenarios. Then again — AWS has a bigger menu, more advanced cloud services, and the customer base where it doesn’t have to go crazy on pricing. AWS just has to be fair.
  • Your results will vary based on the level of your Microsoft enterprise agreement and what reserved instances were purchased on AWS.

Here are three slides to ponder from RightScale.

Add it up and you’d be advised to make your own comparisons; check out RightScale’s SlideShare, and then crunch some numbers. In the end, enterprises may end up with all three cloud providers in their portfolio — if only to play them off each other.

Original article here.


standard

Google, Facebook, and Microsoft Are Remaking Themselves Around AI

2016-11-24 - By 

Fei-Fei Li is a big deal in the world of AI. As the director of the Artificial Intelligence and Vision labs at Stanford University, she oversaw the creation of ImageNet, a vast database of images designed to accelerate the development of AI that can “see.” And, well, it worked, helping to drive the creation of deep learning systems that can recognize objects, animals, people, and even entire scenes in photos—technology that has become commonplace on the world’s biggest photo-sharing sites. Now, Fei-Fei will help run a brand new AI group inside Google, a move that reflects just how aggressively the world’s biggest tech companies are remaking themselves around this breed of artificial intelligence.

Alongside a former Stanford researcher—Jia Li, who more recently ran research for the social networking service Snapchat—the China-born Fei-Fei will lead a team inside Google’s cloud computing operation, building online services that any coder or company can use to build their own AI. This new Cloud Machine Learning Group is the latest example of AI not only re-shaping the technology that Google uses, but also changing how the company organizes and operates its business.

Google is not alone in this rapid re-orientation. Amazon is building a similar cloud computing group for AI. Facebook and Twitter have created internal groups akin to Google Brain, the team responsible for infusing the search giant’s own tech with AI. And in recent weeks, Microsoft reorganized much of its operation around its existing machine learning work, creating a new AI and research group under executive vice president Harry Shum, who began his career as a computer vision researcher.

Oren Etzioni, CEO of the not-for-profit Allen Institute for Artificial Intelligence, says that these changes are partly about marketing—efforts to ride the AI hype wave. Google, for example, is focusing public attention on Fei-Fei’s new group because that’s good for the company’s cloud computing business. But Etzioni says this is also part of a very real shift inside these companies, with AI poised to play an increasingly large role in our future. “This isn’t just window dressing,” he says.

The New Cloud

Fei-Fei’s group is an effort to solidify Google’s position on a new front in the AI wars. The company is challenging rivals like Amazon, Microsoft, and IBM in building cloud computing services specifically designed for artificial intelligence work. This includes services not just for image recognition, but speech recognition, machine-driven translation, natural language understanding, and more.
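
For a flavor of what AI-as-a-cloud-service looks like to a developer, here is a rough sketch that sends an image to Google's Cloud Vision REST endpoint for label detection; the API key and file name are placeholders, and error handling is omitted.

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

with open("street_scene.jpg", "rb") as image_file:  # hypothetical local image
    encoded = base64.b64encode(image_file.read()).decode("utf-8")

payload = {
    "requests": [{
        "image": {"content": encoded},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(ENDPOINT, params={"key": API_KEY}, json=payload)
for annotation in response.json()["responses"][0].get("labelAnnotations", []):
    print(annotation["description"], annotation["score"])
```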

Cloud computing doesn’t always get the same attention as consumer apps and phones, but it could come to dominate the balance sheet at these giant companies. Even Amazon and Google, known for their consumer-oriented services, believe that cloud computing could eventually become their primary source of revenue. And in the years to come, AI services will play right into the trend, providing tools that allow a world of businesses to build machine learning services they couldn’t build on their own. Iddo Gino, CEO of RapidAPI, a company that helps businesses use such services, says they have already reached thousands of developers, with image recognition services leading the way.

When it announced Fei-Fei’s appointment last week, Google unveiled new versions of cloud services that offer image and speech recognition as well as machine-driven translation. And the company said it will soon offer a service that allows others to access vast farms of GPU processors, the chips that are essential to running deep neural networks. This came just weeks after Amazon hired a notable Carnegie Mellon researcher to run its own cloud computing group for AI—and just a day after Microsoft formally unveiled new services for building “chatbots” and announced a deal to provide GPU services to OpenAI, the AI lab established by Tesla founder Elon Musk and Y Combinator president Sam Altman.

The New Microsoft

Even as they move to provide AI to others, these big internet players are looking to significantly accelerate the progress of artificial intelligence across their own organizations. In late September, Microsoft announced the formation of a new group under Shum called the Microsoft AI and Research Group. Shum will oversee more than 5,000 computer scientists and engineers focused on efforts to push AI into the company’s products, including the Bing search engine, the Cortana digital assistant, and Microsoft’s forays into robotics.

The company had already reorganized its research group to move new technologies into products more quickly. With AI, Shum says, the company aims to move even quicker. In recent months, Microsoft pushed its chatbot work out of research and into live products—though not entirely successfully. Still, it’s the path from research to product that the company hopes to accelerate in the years to come.

“With AI, we don’t really know what the customer expectation is,” Shum says. By moving research closer to the team that actually builds the products, the company believes it can develop a better understanding of how AI can do things customers truly want.

The New Brains

In similar fashion, Google, Facebook, and Twitter have already formed central AI teams designed to spread artificial intelligence throughout their companies. The Google Brain team began as a project inside the Google X lab under another former Stanford computer science professor, Andrew Ng, now chief scientist at Baidu. The team provides well-known services such as image recognition for Google Photos and speech recognition for Android. But it also works with potentially any group at Google, such as the company’s security teams, which are looking for ways to identify security bugs and malware through machine learning.

Facebook, meanwhile, runs its own AI research lab as well as a Brain-like team known as the Applied Machine Learning Group. Its mission is to push AI across the entire family of Facebook products, and according to chief technology officer Mike Schroepfer, it’s already working: one in five Facebook engineers now makes use of machine learning. Schroepfer calls the tools built by Facebook’s Applied ML group “a big flywheel that has changed everything” inside the company. “When they build a new model or build a new technique, it immediately gets used by thousands of people working on products that serve billions of people,” he says. Twitter has built a similar team, called Cortex, after acquiring several AI startups.

The New Education

The trouble for all of these companies is that finding the talent needed to drive all this AI work can be difficult. Given that deep neural networking has only recently entered the mainstream, only so many Fei-Fei Lis exist to go around. Everyday coders won’t do. Deep neural networking is a very different way of building computer services. Rather than coding software to behave a certain way, engineers coax results from vast amounts of data—more like a coach than a player.

As a result, these big companies are also working to retrain their employees in this new way of doing things. As it revealed last spring, Google is now running internal classes in the art of deep learning, and Facebook offers machine learning instruction to all engineers inside the company alongside a formal program that allows employees to become full-time AI researchers.

Yes, artificial intelligence is all the buzz in the tech industry right now, which can make it feel like a passing fad. But inside Google and Microsoft and Amazon, it’s certainly not. And these companies are intent on pushing it across the rest of the tech world too.

Original article here.


standard

IBM’s SoftLayer shift resonates

2016-10-14 - By 

Three years after its purchase of SoftLayer, IBM has finally coalesced its cloud computing strategy around the cloud platform and is winning new IT fans.

As more enterprise IT shops embrace shifting critical workloads to the cloud, Big Blue may finally be flexing its muscles.

Users’ acceptance of IBM’s cloud initiative is building, based on an increasing number of deals with larger enterprises and industry observations. A big factor in this momentum is that a handful of the company’s cloud-based technologies now work better in concert with each other. These include its data analytics software, the Bluemix cloud application development platform, and even IBM Watson — all of which now operate with the SoftLayer cloud platform.

“We have a lot invested in their legacy hardware and software for quite some time,” said one project manager with a large transportation company in northern Florida who requested anonymity. “I like the story from them [about SoftLayer] because it gives us more options to pursue with cloud, like bare-metal, that can improve the performance of some of the data-intensive apps we have.”

IBM came out of the blocks slowly with its first cloud initiatives several years ago and has mostly trailed market leaders Amazon Web Services, Microsoft and Google. But as its legacy hardware and software business continues to sag, the company has gradually focused more of its existing and new cloud applications and tools around SoftLayer — and some believe IBM can offer corporate users a more compelling cloud narrative.

“They are selling more [products and services] on top of SoftLayer, which has allowed them to make progress with a legitimate cloud strategy,” said Carl Brooks, an analyst with 451 Research. “They might be able to put out some of the engine fires they had with that business.”

IBM gaining new cloud users

Earlier this year, Halliburton, a service provider to the oil and gas industry, implemented IBM’s cloud platform to run its reservoir simulation software designed to help the company better understand how complex oil and gas fields might behave given different development scenarios.

Using the CPU, GPU and storage capabilities of IBM Cloud, which were delivered as a service, the company can now run through hundreds of simulations that help it better forecast the potential of complex oil and gas fields, using both bare-metal and virtual servers.

Using both kinds of servers gives Halliburton more flexibility to scale compute power up and down depending on customers’ requirements, as well as switch from a Capex to an Opex model, company officials said. The company also worked with IBM to set up a compute cluster in the IBM Cloud that was then connected to Halliburton’s global network.

“We knew the more computing power we had, the more efficient a job we could do,” said Steven Knabe, Halliburton’s regional director for consulting for Latin America. “With the cluster, we can build and run much more complex simulation models and increase our chances of winning projects [more] than we had before.”

Another new IBM Cloud customer is JFE Steel Co., one of the largest steel makers in the world, which inked a five-year agreement last month to migrate its core legacy systems to the IBM Cloud while also consolidating its infrastructure. The Japan-based company will deploy SoftLayer as its cloud infrastructure, along with IBM’s Cloud Orchestrator to automate cloud services and Control Desk for IT management services.

Driving the company’s decision was a need to establish a more efficient business model, which meant swapping in a more flexible IT infrastructure to more quickly adjust to market changes brought on by rapidly declining steel prices.

“The company realized it had to modernize its IT infrastructure to take advantage of some new business processes and modernize the business,” said Charles King, president and principal analyst with Pund-IT. “But it didn’t want to pay a lot up front to do that, and IBM Cloud gave them a way forward to get all that underway.”

King also believes IBM has done a better job at winning over both existing and new IT shops to its cloud strategy, not just because of SoftLayer, but its investment in its now 48 data centers around the world. These data centers are used not just for hosting but also for joint development of cloud-based software between IBM engineers and local developers.

“You have to give them credit for the considerable amount of money spent in expanding their cloud data centers,” King said. “That sort of global footprint helps when reaching out to new [cloud] customers like JFE.”

User cloud confidence grows

Many corporate users are now more confident about moving generous chunks of their business anchored on premises to the cloud — and that helps IBM’s cloud initiative as well as those of all its major competitors. Decade-long fears about the lack of reliability and security of mission-critical data and applications in the cloud appear to be melting away.

“The enterprise three years ago was new for us, from a SoftLayer perspective,” said Marc Jones, CTO and IBM Distinguished Engineer for IBM Cloud. Now, though, enterprises are not only more open to actually bringing those workloads to the cloud, but it’s a first-choice destination for their applications and services. “Before, it was a lot of research and, ‘Could I, would I?’ But now, it’s, ‘Let’s go,'” he said.

Some analysts believe the industry is rapidly approaching a tipping point in the widespread adoption of cloud computing to where it becomes the primary way IT shops conduct business.

“Enterprises are now more comfortable with cloud computing with on-demand infrastructure and servers replacing colocations,” 451 Research’s Brooks said. “These [IBM deals] reflect the mainstreaming of cloud computing, more than any particular pizazz on the part of IBM.”

Further indication that cloud adoption is reaching a tipping point is the heightened interest among resellers and business partners of top-tier vendors. In many cases, it’s the channel, not the vendors, directly selling and servicing a range of cloud computing offerings to IT shops.

“Mainstream IT computing is partaking of [Amazon Web Services] and Azure in ways it was not two years ago,” Brooks said, “and they don’t want to do it all themselves. They want the people in the middle [resellers] to do it for them and IBM is picking up on this trend.”

Original article here.


standard

Cloud Computing Embraced As Cost-Cutting Measure

2016-10-07 - By 

When it comes to implementing a cloud infrastructure, whether it’s public, private, or hybrid, most IT departments view the technology as a way to cut costs and save money, according to a recent analysis from CompTIA. The report also shows that SaaS is seen as the most useful cloud service.

When it comes to cloud computing adoption, money is still the main motivator.

In fact, many large IT departments view the cloud, whether it’s public, private, or a hybrid combination, as a way to cut costs and save money, according to a new analysis from CompTIA.

The study found that 47% of large enterprises, 44% of medium businesses, and 41% of small firms surveyed reported that slashing costs outweighed other factors such as speed, modernization, and reducing complexity.

The report, “Trends in Cloud Computing,” is based on an online survey of 500 business and IT executives conducted during July. The study also cites Gartner numbers that forecast the public cloud will generate $204 billion in worldwide revenue in 2016, a 16.5% increase over last year’s figure.

Overall, the Sept. 27 CompTIA report finds a robust, if somewhat maturing, market for all different types of cloud structures and services. The study finds that nine out of ten companies are using cloud computing in at least one way. Additionally, 71% of respondents reported that their business is using cloud either in production or at least for non-critical parts of the company.

This, the report finds, is a sign of maturity:

The familiarity with technical details has grown, and while business opportunities may still flourish around models that are mislabeled as cloud, the market is growing more savvy. End users from both the IT function and business units are growing more aware of the tools they are using and how those tools compare to other options that are available.

While the cloud is popular, its ability to trim costs in the face of mounting pressure on budgets and bottom lines is what’s driving the adoption of more of the technology. In an email exchange with InformationWeek, Seth Robinson, senior director of technology analysis for CompTIA, writes that IT sees cloud as the ultimate way to do more with less.

“Cost savings are important for IT as they rethink what a modern architecture looks like, but that is only one factor for building a new IT approach,” Robinson wrote.

No matter what the motive, Robinson writes that IT needs to view cloud as a way to deliver value, not only to the tech department, but to the whole business as well. This is a major concern as IT is asked to deliver solutions across many different sections of the enterprise.

“Cloud offerings can deliver cost efficiency, but they can also simplify workflow, speed up operations, introduce new features, or lead to new business products/services,” Robinson wrote. “The role of the IT team is not to simply implement a cloud component to perform a discrete function, but to drive business objectives forward by utilizing the right mix of cloud solutions.”

For those surveyed in the CompTIA study, private cloud remains the primary option, although that is expected to change soon. The report found that 46% of respondents are using private cloud, compared to 28% using public cloud, and 26% working with a hybrid option. Those numbers will shift as companies get more familiar with what is a true cloud platform and what is not, according to Robinson.

“Private cloud usage is probably still somewhat exaggerated even as companies are becoming more precise in their terminology,” Robinson writes. “Long-term, we expect that companies will migrate towards a hybrid or multi-cloud model, utilizing public cloud, private cloud, and [on-premises] resources.”

When it comes to the different types of cloud technologies that businesses are using, Software-as-a-Service (SaaS) is the most popular — a finding that supports other recent studies showing IT departments are increasingly adopting SaaS. It also highlights the popularity of companies such as Salesforce, as well as Microsoft, which uses a SaaS model to push out versions of its Office 365 suite and Windows 10.

“SaaS options give IT more flexibility across many areas when trying to build an overall technology environment,” Robinson writes. “There are benefits to be had by replacing individual applications with SaaS, but greater benefits to understanding how multiple SaaS applications work together to enable operations.”


The most popular cloud-based application is email, which 51% of respondents reported using. Other top-ranking applications include:

  • Business productivity suites — Office 365 and Google Apps — at 45%
  • Web presence at 46%
  • Collaboration at 39%
  • CRM at 37%
  • Financial management at 32%
  • VoIP at 31%
  • Virtual desktop at 30%

Another finding of the report suggests that for most applications, there was a dramatic drop in the number of companies that say they are using a cloud solution from 2014 to 2016, along with corresponding jumps in the number of companies reporting use of on-premises systems. However, the fluctuations could be attributed to the fact that what IT and businesses call “cloud” has changed as the technology has matured.

“In the early days of cloud, employees likely assumed that any [off-premises] application was cloud-based (or may have even assumed the use of SaaS applications without considering where software was hosted),” the report stated. “With a greater appreciation for cloud-specific characteristics, employees are honing their assessment.”

Original article here.


standard

UK-based hyper-convergence startup bets on ARM processors

2016-10-03 - By 

Cambridge, U.K.-based startup Kaleao Ltd.  is entering the hyper-converged systems market today with a platform based on the ARM chip architecture that it claims can achieve unparalleled scalability and performance at a fraction of the cost of competing systems.

The KMAX platform features an integrated OpenStack cloud environment and miniature hypervisors that dynamically define physical computing resources and assign them directly to virtual machines and applications. These “microvisors,” as Kaleao calls them, dynamically orchestrate global pools of software-defined and hardware-accelerated resources with much lower overhead than that of typical hypervisors. Users can still run the KVM hypervisor if they want.

The use of the ARM 64-bit processor distinguishes Kaleao from the pack of other hyper-converged vendors such as VMware Inc., Nutanix Inc. and SimpliVity Inc., which use Intel chips. ARM is a reduced instruction set computing-based architecture that is commonly used in mobile devices because of its low power consumption.

“We went with ARM because the ecosystem allows for more differentiation and it’s a more open platform,” said Giovanbattista Mattiussi, principal marketing manager at Kaleao. “It enabled us to rethink the architecture itself.”

One big limitation of ARM is that it’s unable to support the Windows operating system or VMware vSphere virtualization manager. Instead, Kaleao is bundling Ubuntu Linux and OpenStack, figuring those are the preferred choices for cloud service providers and enterprises that are building private clouds. Users can also install any other Linux distribution.

Kaleao said the low overhead of its microvisors, combined with the performance of ARM processors, enables it to deliver 10 times the performance of competing systems at less than one-third of the energy consumption. Users can run four to six times as many microvisors as hypervisors, Mattiussi said. “It’s like the VM is running on the hardware with no software layers in between,” he said. “We can pick up a piece of the CPU here, a piece of storage there. It’s like having a bare-bones server running under the hypervisor.”

The platform provides up to 1,536 CPU cores and 370 TB of all-flash storage, with 960 gigabytes per second of networking, in a 3U rack. Energy usage is less than 15 watts per eight-core server. “Scalability is easy,” Mattiussi said. “You just need to add pieces of hardware.”

KMAX will be available in January in server and appliance versions. The company hasn’t released pricing but said its cost structure enables prices in the range of $600 to $700 per server, or about $10,000 for a 16-server blade. It plans to sell direct and through distributors. The company has opened a U.S. office in Charlotte, NC and has European outposts in Italy, Greece and France.

Co-founders Giampietro Tecchiolli and John Goodacre have a long track record of work in hardware and chip design, and both are active in the Euroserver green computing project. Goodacre continues to serve as director of technology and systems at ARM Holdings plc, which designs the ARM processor.

Kaleao has raised €3 million and said it’s finalizing a second round of €5 million.

Original article here


standard

Big Data and Cloud – Are You Ready to Embrace Both?

2016-09-23 - By 

This week’s Economist magazine has a cover story about Uber, the world’s most valuable startup and a symbol of disruptive innovation. The race to reinvent transportation services worldwide is so fast that it will dramatically change the way we travel in the next 5-10 years. While studying the success story of Uber, I was more interested in the factors that led to the exceptional growth of the company – spreading to 425 global cities in 7 years, with a market cap of $70 billion.

There are surely multiple factors that contributed to its success, but what surprised me was how it capitalized on data analytics. In 2014, Uber launched UberPool, which uses algorithms to match riders based on location and sets the price based on the likelihood of picking up another passenger. It analyzes consumers’ transaction history and spending patterns and provides intelligent recommendations for personalized services.

Uber is just one example; thousands of enterprises have already embraced big data and predictive analytics for HR management, hiring, financial management, and employee relations management. The latest startups are already leveraging analytics to bring data-driven, practical recommendations to the market. However, this does not mean the situation is ideal.

According to MIT Technology Review, roughly 0.5 percent of digital data is analyzed, which means, companies are losing millions of opportunities to make smart decisions, improve efficiency, attract new prospects and achieve business goals. The reason is simple; they are not leveraging the potential offered by data analytics.

Though the percentage of data being analyzed is disappointing, research endorses the growing realization in businesses about the adoption of analytics. By 2020, around 1.7 megabytes of new information will be created every single second, for every human being on the planet.

Another thing deeply associated with the growing data asset is the cloud. As the statistics show, data creation is on the rise, and it will lead to storage and security issues for businesses. Though free cloud services are readily available, the adoption rate is still disappointing.

When we explore why big data analysis is lagging behind and how to fix the problem, it’s vital to assess the storage services too. Though there are organizations that have been using cloud storage for years, overall adoption is slow. It’s usually a good option to host general data in the cloud while keeping sensitive information on premises.

Big Data and Cloud for Business:

As we noted in the previous post, private cloud adoption increased from 63% to 77%, which has driven hybrid cloud adoption up from 58% to 71% year-over-year. There are enough reasons and stats to explain the need for cloud storage and big data analytics for small businesses. Here are three fundamental reasons why companies need some reliable cloud technology to carry out big data analytics exercise.

1. Cost:

Looking at the available options at this point, there are two concerns: some are too costly and time-consuming, while others are unreliable and insecure. Without a clear solution, the default has been to do the bare minimum with the available data. If we can successfully integrate data into the cloud, the combined cost of both storage and analytics services will flatten out and benefit the business.

2. Security:

We have already discussed that companies have a gigantic amount of data, but they have no clue what to do with it. The first thing they need is to keep their data in a secure environment where no breach can occur. Look at the recent revelations about the Dropbox hack, which affected over 65 million accounts associated with the service. Since moving significant amounts of data in and out of the cloud comes with security risks, one has to ensure that the cloud service he/she is using is reliable.

There are concerns and risks, but thanks to big players like IBM, Microsoft, and Google, trust in cloud services is increasing day by day and adoption is on the rise.

3. Integration:

If you look at the different sales, marketing, and social media management tools, they all offer integration with other apps. For example, you can integrate Facebook with MailChimp, or Salesforce with MailChimp, which means your marketing/sales cloud offers a two-in-one service. It not only processes your data and provides analytics but also ensures that findings and data remain in a secure environment.

4. Automation:

Once you remove uncertainty and find a reliable yet cost-effective solution for the business, the next consideration is the feature set. There are cloud services that offer broad automation features, enabling users to save time and spend it on more important work. Data management, campaign management, data downloads, real-time analytics, automatic alerts, and drip management are some of the key automation features that any data analytics architect will be looking for.

While integrating the cloud with data analytics, make sure that it serves your purpose while keeping the cost under control; otherwise, the entire objective of the exercise will be lost. As big data becomes an integral part of any business, data management applications will become more user-friendly and more affordable. It is a challenge, but there are plenty of opportunities for small businesses to take big data into account and achieve significant results.

Original article here.


standard

PUBLIC CLOUD MARKET TO EXCEED $236B BY 2020

2016-09-09 - By 

The biggest disruptive force in the global tech market over the past two decades is about to get a lot bigger.

In a new report, researchers at Forrester predict the public cloud services market will grow to $236 billion by 2020, more than double the $114 billion public cloud spend worldwide this year.

This dramatic uptick—at an annual growth rate of 23 percent—reflects the massive IT modernization effort among private sector companies and, to a lesser extent, government.


According to Forrester, North American and European countries have migrated and are already running approximately 18 percent of their custom-built application software on public cloud platforms, and businesses are increasingly inclined to rent processing and storage from vendors rather than stand up infrastructure themselves.

Forrester also reports many companies are “challenging the notion that public clouds are not suited for core business applications,” opting to move mission-critical workloads to the public cloud for increased agility and efficiency despite perceived myths that public cloud platforms aren’t as secure as internal data centers might be.

Forrester compares the public cloud of today to adolescent children on the fast track to adulthood, suggesting it will be “the dominant technology model in a little over three years.”

“Today’s public cloud services are like teenagers—exuberant, sometimes awkward, and growing rapidly,” the report states. “By 2020, public cloud services will be like adults, with serious [enterprise] responsibilities and slower growth.”

The report has many implications for government. The Obama administration’s fiscal 2017 budget calls for $7.3 billion in spending on provisioned services like cloud computing, and true federal cloud spending is on the rise across civilian, military and intel agencies. An analysis from big data and analytics firm Govini states the federal government spent $3.3 billion on cloud in fiscal 2015 on the backs of infrastructure-as-a-service offerings.

Federal agencies have inched forward in cloud, first with email-as-a-service offerings and later with a growing number of infrastructure-, platform- and software-as-a-service offerings, but they’ve been slowed in part by lagging legacy technologies that tend to make up their enterprise.

The government’s aging systems—some of which date back to the 1970s—are in dire need of modernization, and Congress is currently reviewing legislation that could greatly speed up the effort. One initiative would create a $3.1 billion IT Modernization Fund from which agencies could borrow; another would direct agencies to establish working capital funds for IT.

If either piece of legislation is enacted—or some combination of both—the government’s spend on cloud computing is likely to increase and fall more in line with what industry is doing, using cloud computing as the base for IT enterprises.

Original article here.


standard

What’s The Return On Developing In The Cloud?

2016-09-09 - By 

Cloud integration has come and gone; the cloud now fully envelops modern business. However, gauging a return on investment (ROI) for the cloud is difficult and oftentimes too subjective, leaving many businesses in the dark about whether they spent their time, money, and energies wisely … or if they should even consider using the cloud to run their applications. Luckily, we can shine a light on cloud ROI and provide some guidance based on what we’ve seen with our customers.

Understand the impact of the cloud on your corporation

According to a report by RightScale, 93% of businesses use the cloud in some form. It’s safe to say cloud computing is a mainstay. But after the initial struggle of cloud adoption, many businesses’ tangible ROIs fell short of expectations. From IT teams that lacked knowledge about the right web application programming interfaces (APIs) to using cloud technology for more than critical functions, companies stretched the cloud to its limits – and then found they couldn’t gauge accurate returns.

Simple cloud ROI calculators in hardware or software form will provide inaccurate results. Take the time to measure utilization among your servers, look at network consumption, and adjust your efforts accordingly. Elasticity in the cloud allows you to resize cloud instances to meet your disk usage measurements.

Platform as a Service (PaaS) is seen in the industry as a cloud computing service model that helps organizations create and test apps without changing existing architectural landscapes. Many businesses just beginning to understand the value PaaS delivers are trying to calculate how PaaS helps business growth. But there’s no set formula to calculate cloud ROIs; each corporation must analyze the benefits and determine the value for itself based on its business goals – whether for today or for three years from now. However, this doesn’t mean there aren’t tools out there today to make specific calculations possible.

Measure your cloud ROI accurately

To measure cloud ROI with any degree of accuracy, you must look at the change in your technical infrastructure from different standpoints – both tangible and non-tangible. Consider the financial returns that are clearly tangible: better use of resources with fewer FTEs for manual tasks, greater scalability, etc. Non-tangible return factors would consist of speed, reliability, user friendliness, and risk management. Computing cloud ROI requires a holistic view of your infrastructure and an assessment of how it transforms your enterprise as a whole.

Look at your corporation’s cloud computing goals and benefits differently from other technology adoptions. Factor in the value of the cloud’s competitive advantage, which most would agree is agility. Agility is a benefit that is relative to each business and is a reason why companies opt to use a PaaS solution. There are others. IT managers need to step back from cloud computing systems to analyze and assign value to several individual points. These include:

  • Speed of development and increased productivity: A main draw for integrating the cloud is to streamline and enhance employee productivity, so it’s important to assess boosts in organizational speed and agility. 
  • Streamlined costs and increased profits: Measuring expenditures is the only way to find your ROI. Look at the actual costs of cloud migration in dollar amounts, as well as the opportunity costs, then see where your company can trim the fat. 
  • Improvements in customer service: The cloud gives businesses the ability to respond better and faster to customer problems, making it easy to expand customer reach and build retention. 
  • Additional opportunities for innovation and growth: The cloud provides scalability to meet corporate needs, as well as the increased ability to create and test new ideas and solutions. 

Each of these benefits comes with an ROI measure. Assign an estimated dollar amount to each benefit, then consider other benefits that may result from cloud integration, such as headcount reductions and improvements in market intelligence. Think architecturally when calculating cloud ROI, and don’t forget that cloud benefits can multiply and compound – improving one area of business often enhances other areas.
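
A back-of-the-envelope sketch of that exercise might look like the following; every dollar figure is an invented placeholder, and the point is the shape of the calculation rather than the numbers.

```python
# Estimated annual dollar value assigned to each benefit (all figures hypothetical).
benefits = {
    "faster development cycles": 120_000,
    "reduced infrastructure and FTE costs": 80_000,
    "improved customer retention": 60_000,
    "new products and services": 40_000,
}

# Annual cost of the cloud effort: amortized migration plus subscriptions (hypothetical).
annual_cloud_cost = 150_000

total_benefit = sum(benefits.values())
roi = (total_benefit - annual_cloud_cost) / annual_cloud_cost
print(f"Total estimated benefit: ${total_benefit:,}")
print(f"Estimated annual ROI:    {roi:.0%}")
```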

Original article here.


standard

How Cloud Computing Is Changing the Software Stack

2016-09-01 - By 

Are sites, applications, and IT infrastructures leaving the LAMP stack (Linux, Apache, MySQL, PHP) behind? How have the cloud and service-oriented, modular architectures facilitated the shift to a modern software stack?

As more engineers and startups are asking the question “Is the LAMP stack dead?”—on which the jury is still out—let’s take a look at “site modernization,” the rise of cloud-based services, and the other ever-changing building blocks of back-end technology.

From the LAMP Era to the Cloud

Stackshare.io recently published its findings about the most popular components in many tech companies’ software stacks these days—stacks that are better described as “ecosystems” due to their integrated, interconnected array of modular components, software-as-a-service (SaaS) providers, and open-source tools, many of which are cloud-based.

It’s an interesting shift. Traditional software stacks used to be pretty cut and dried. Acronyms like LAMP, WAMP, and MEAN neatly described a mix of onsite databases, servers, and operating systems built with server-side scripts and frameworks. When these systems grow too complex, though, the productivity they enable can be quickly eclipsed by the effort it takes to maintain them. This is up for debate, though, and anything that’s built well from the ground up should be sturdy and scalable. However, a more modular stack approach still prompted many to make the shift.

A shift in the software stack status quo?

For the last five or so years, the monolithic, LAMP-style approach has increasingly been questioned as to whether it’s the best possible route. Companies are migrating data and servers to the cloud, opting for streamlined API-driven data exchange, and using SaaS and PaaS solutions as super-scalable ways to build applications. In addition, they’re turning to a diverse array of technologies that can be more easily customized and integrated with one another—mainly JavaScript libraries and frameworks—allowing companies to be more nimble, and less reliant on big stack architectures.

But modularity is not without its complexities, and it’s also not for everyone. SaaS, mobile, and cloud-computing companies are more likely to take a distributed approach, while financial, healthcare, big data, and e-commerce organizations are less likely to. With the right team, skills, and expectations, however, it can be a great fit.

New, scalable building blocks like Nginx, New Relic, Amazon EC2, and Redis are stealing the scene as tech teams work toward more modular, software-based ecosystems—and here are a few reasons why.

What are some of the key drivers of this shift?

1. Continuous deployment

What’s the benefit of continuous deployment? Shorter concept-to-market development cycles that allow businesses to give customers new features faster, or adjust to what’s happening with traffic.

It’s possible to continuously deploy with a monolith architecture, but certain organizations are finding this easier to do beyond a LAMP-style architecture. Having autonomous microservices allows companies to deploy in chunks continuously, without dependencies and the risk of one failure causing another related failure. Tools like GitHub, Amazon EC2, and Heroku allow teams to continuously deploy software, for example, in an Agile sprint-style workflow.
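
To illustrate what an autonomous, independently deployable service looks like, here is a minimal sketch using Python and Flask, chosen purely for brevity; the same pattern applies to a Node.js service. The routes, payloads, and version string are hypothetical.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/recommendations/<user_id>")
def recommendations(user_id):
    # A real service would query its own datastore; returning a stub keeps the
    # service self-contained and deployable on its own release schedule.
    return jsonify(user=user_id, items=["sku-123", "sku-456"], version="1.4.2")

@app.route("/health")
def health():
    # A health check lets the deployment pipeline verify each release of this
    # one service without touching any other part of the system.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(port=8080)
```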

2. The cloud is creating a new foundation

Cloud providers have completely shaken up the LAMP paradigm. Providers like Amazon Web Services (AWS) are creating entirely new foundations with cloud-based modules that don’t require constant attention, upgrades, and fixes. Whereas stacks used to comprise a language (Perl, Python, or PHP), a database (MySQL), a server, operating system, application servers, and middleware, now there are cloud modules, APIs, and microservices taking their place.

3. Integration is simplified

Tools need to work together, and thanks to APIs and modular services, they can—and without a lot of hassle. Customer service platforms need to integrate with email and databases, automatically. Many of the new generation of software solutions not only work well together, they build on one another and can become incredibly powerful when paired up, for example, Salesforce’s integrated SaaS.

4. Elasticity and affordable scalability

Cloud-based servers, databases, email, and data processing allow companies to rapidly scale up—something you can learn more about in this Intro to Cloud Bursting article. Rather than provisioning more hardware and spending the time (and space) it takes to set that hardware up, companies can purchase more space in the cloud on demand. This makes it easier to ramp up data processing. AWS really excels here and is a top choice; companies like Upwork, Netflix, Adobe, and Comcast have built their stacks with its cloud-based tools.

For areas like customer service, testing, analytics, and big data processing, modular components and services also rise to the occasion when demand spikes.

5. Flexibility and customization

The beauty of many of these platforms is that they come ready to use out of the box—but with lots of room to tweak things to suit your needs. Because the parts are autonomous, you also have the flexibility to mix and match your choice of technologies—whether those are different programming languages or frameworks and databases that are particularly well-suited to certain apps or projects.

Another thing many organizations love is the ability to swap out one component for another without a lot of back-end reengineering. It is possible to replace parts in a monolith architecture, but for companies that need to get systems up and running fast—and anticipate a spike in growth or a lack of resources—modular components make it easy to swap out one for another. Rather than trying to adapt legacy technology for new purposes, companies are beginning to build, deploy, and run applications in the cloud.

6. Real-time communication and collaboration

Everyone wants to stay connected and communicate—especially companies with distributed engineering teams. Apps that let companies communicate internally and share updates, information, and more are some of the most important parts of modern software stacks. Here’s where a chat app like HipChat comes in, and other software like Atlassian’s JIRA, Confluence, Google Apps, Trello, and Basecamp. Having tools like these helps keep everyone on the same page, no matter what time zone they’re in.

7. Divvying up work between larger teams and distributed teams

When moving architectures to distributed systems, it’s important to remember that the more complicated a system is, the more a team will have to keep up with a new set of challenges that come along with cloud-based systems, such as failures, eventual consistency, and monitoring. Moving away from the LAMP-style stack is as much a technical change as it is a cultural one; be sure you’re engaging MEAN stack engineers and DevOps professionals who are skilled with this new breed of stack.

So what are the main platforms shaking up the stack landscape?

The Stackshare study dubbed this new generation of tech companies leaving LAMP behind as “GECS companies”—named for their predominant use of GitHub, Amazon EC2, and Slack, although there are many same-but-different tools like these three platforms.

Upwork has moved its stack to AWS, a shift that the Upwork engineering team is documenting on the Upwork blog. These new platforms offer startups and other businesses more democratization of options—with platforms, cloud-based servers, programming languages, and frameworks that can be combined to suit their specific needs.

  • JavaScript: JavaScript is the biggest piece of the new, post-LAMP pie. Think of it as the replacement for the “P” (PHP) in LAMP. It’s a front-end scripting language, but it’s so much more—it’s a stack-changer. JavaScript is powerful for both the front-end and back-end, thanks to Node.js, and is even outpacing some mobile technologies. Where stacks were once more varied between client and server, JavaScript is creating a more fluid, homogeneous stack, with a multitude of frameworks like Backbone, Express, Koa, Meteor, React, and Angular.
  • Ruby and Python also dominate the new back-end stack, along with Node.js.
  • Amazon Web Services (AWS): The AWS cloud-based suite of products is the new foundation for many organizations, offering everything from databases and developer tools to analytics, mobile and IoT support, and networking.
  • Computing platforms: Amazon EC2, Heroku, and Microsoft Azure
  • Databases: PostgreSQL, with some MongoDB and MySQL.

The good news? There’s no shortage of Amazon Web Services pros, freelance DevOps engineers, and freelance data scientists who are skilled in all of these newer platforms and technologies and poised to help companies get new, cloud-based stacks up and running.

Read more at http://www.business2community.com/brandviews/upwork/cloud-computing-changing-software-stack-01644544

