AWS Lambda has stamped a big DEPRECATED on containers

Welcome to “Serverless Superheroes”! In this space, I chat with the toolmakers, innovators, and developers who are navigating the brave new world of “serverless” cloud applications.
In this edition, I chatted with Steven Faulkner, a senior software engineer at LinkedIn and the former director of engineering at Bustle. The following interview has been edited and condensed for clarity.
Forrest Brazeal: At Bustle, your previous company, I heard you cut your hosting costs by about forty percent when you switched to serverless. Can you speak to where all that money was going before, and how you were able to make that type of cost improvement?
Steven Faulkner: I believe 40% is where it landed. The initial results were even better than that. We had one service that was costing about $2500 a month and it went down to about $500 a month on Lambda.
Bustle is a media company — it’s got a lot of content, it’s got a lot of viral, spiky traffic — and so keeping up with that was not always the easiest thing. We took advantage of EC2 auto-scaling, and that worked … except when it didn’t. But when we moved to Lambda — not only did we save a lot of money, just because Bustle’s traffic is basically half at nighttime what it is during the day — we saw that serverless solved all these scaling headaches automatically.
On the flip side, did you find any unexpected cost increases with serverless?
There are definitely things that cost more or could be done cheaper not on serverless. When I was at Bustle they were looking at some stuff around data pipelines and settled on not using serverless for that at all, because it would be way too expensive to go through Lambda.
Ultimately, although hosting cost was an interesting thing out of the gate for us, it quickly became a relative non-factor in our move to serverless. It was saving us money, and that was cool, but the draw of serverless really became more about the velocity with which our team could develop and deploy these applications.
At Bustle, we only have ever had one part-time “ops” person. With serverless, those responsibilities get diffused across our team, and that allowed us all to focus more on the application and less on how to get it deployed.
Any of us who’ve been doing serverless for a while know that the promise of “NoOps” may sound great, but the reality is that all systems need care and feeding, even ones you have little control over. How did your team keep your serverless applications running smoothly in production?
I am also not a fan of the term “NoOps”; it’s a misnomer and misleading for people. Definitely out of the gate with serverless, we spent time answering the question: “How do we know what’s going on inside this system?”
IOPipe was just getting off the ground at that time, and so we were one of their very first customers. We were using IOPipe to get some observability, then CloudWatch sort of got better, and X-Ray came into the picture which made things a little bit better still. Since then Bustle also built a bunch of tooling that takes all of the Lambda logs and data and does some transformations — scrubs it a little bit — and sends it to places like DataDog or to Scalyr for analysis, searching, metrics and reporting.
But I’m not gonna lie, I still don’t think it’s super great. It got to the point where it was workable and we could operate and not feel like we were always missing out on what was actually going on, but there’s a lot of room for improvement.
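The log-pipeline scrubbing step Faulkner describes might look something like the following sketch. The field names and redaction rules here are invented for illustration; Bustle’s actual tooling was not published.

```python
import copy

# Fields we treat as sensitive before forwarding logs to a service like
# DataDog or Scalyr. These names are hypothetical examples.
SENSITIVE_KEYS = {"email", "ip_address", "auth_token"}

def scrub(record):
    """Return a copy of a Lambda log record with sensitive fields masked."""
    clean = copy.deepcopy(record)
    for key in SENSITIVE_KEYS:
        if key in clean:
            clean[key] = "[REDACTED]"
    return clean

record = {"request_id": "abc-123", "email": "user@example.com", "duration_ms": 42}
print(scrub(record))  # email is masked; the original record is untouched
```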
Another common serverless pain point is local development and debugging. How did you handle that?
I wrote a framework called Shep that Bustle still uses to deploy all of our production applications, and it handles the local development piece. It allows you to develop a NodeJS application locally and then deploy it to Lambda. It could do environment variables before Lambda had environment variables, and it brought some sanity to versioning and to bundling with webpack: all the stuff that you don’t really want the everyday developer to have to worry about.
I built Shep in my first couple of months at Bustle, and since then, the Serverless Framework has gotten better. SAM has gotten better. The whole entire ecosystem has leveled up. If I was doing it today I probably wouldn’t need to write Shep. But at the time, that’s definitely what we had to do.
You’re putting your finger on an interesting reality with the serverless space, which is: it’s evolving so fast that it’s easy to create a lot of tooling and glue code that becomes obsolete very quickly. Did you find this to be true?
That’s extremely fair to say. I had a little Twitter thread around this a couple months ago, having a bit of a realization myself that Shep is not the way I would do deployments anymore. When AWS releases their own tooling, it always seems to start out pretty bad, so the temptation is to fill in those gaps with your own tool.
But AWS services change and get better at a very rapid rate. So I think the lesson I learned is lean on AWS as much as possible, or build on top of their foundation and make it pluggable in a way that you can just revert to the AWS tooling when it gets better.
Honestly, I don’t envy a lot of the people who sliced their piece of the serverless pie based on some tool they’ve built. I don’t think that’s necessarily a long term sustainable thing.
As I talk to developers and sysadmins, I feel like I encounter a lot of rage about serverless as a concept. People always want to tell me the three reasons why it would never work for them. Why do you think this concept inspires so much animosity and how do you try to change hearts and minds on this?
A big part of it is that we are deprecating so many things at one time. It does feel like a very big step to me compared to something like containers. Kelsey Hightower said something like this at one point: containers enable you to take the existing paradigm and move it forward, whereas serverless is an entirely new paradigm.
And so all these things that people have invented and invested time and money and resources in are just going away, and that’s traumatic, that’s painful. It won’t happen overnight, but anytime you make something that makes people feel like what they’ve maybe spent the last 10 years doing is obsolete, it’s hard. I don’t really know if I have a good way to fix that.
My goal with serverless was building things faster. I’m a product developer; that’s my background, that’s what I like to do. I want to make cool things happen in the world, and serverless allows me to do that better and faster than I can otherwise. So when somebody comes to me and says “I’m upset that this old way of doing things is going away”, it’s hard for me to sympathize.
It sounds like you’re making the point that serverless as a movement is more about business value than it is about technology.
Exactly! But the world is a big tent and there’s room for all kinds of stuff. I see this movement around OpenFaaS and the various Functions as a Service on Kubernetes and I don’t have a particular use for those things, but I can see businesses where they do, and if it helps get people transitioned over to serverless, that’s great.
So what is your definition of serverless, then?
I always joke that “cloud native” would have been a much better term for serverless, but unfortunately that was already taken. I think serverless is really about the managed services. Like, who is responsible for owning whether this thing that my application depends on stays up or not? And functions as a service is just a small piece of that.
The way I describe it is: functions as a service are cloud glue. So if I’m building a model airplane, well, the glue is a necessary part of that process, but it’s not the important part. Nobody looks at your model airplane and says: “Wow, that’s amazing glue you have there.” It’s all about how you craft something that works with all these parts together, and FaaS enables that.
And, as Joe Emison has pointed out, you’re not just limited to one cloud provider’s services, either. I’m a big user of Algolia with AWS. I love using Algolia with Firebase, or Netlify. Serverless is about taking these pieces and gluing them together. Then it’s up to the service provider to really just do their job well. And over time hopefully the providers are doing more and more.
We’re seeing that serverless mindset eat all of these different parts of the stack. Functions as a service was really a critical bit in order to accelerate the process. The next big piece is the database. We’re gonna see a lot of innovation there in the next year. FaunaDB is doing some cool stuff in that area, as is CosmosDB. I believe there is also a missing piece of the market for a Redis-style serverless offering, something that maybe even speaks Redis commands but under the hood is automatically distributed and scalable.
What is a legitimate barrier to companies that are looking to adopt serverless at this point?
Probably the biggest is: how do you deal with the migration of legacy things? At Bustle we ended up mostly re-architecting our entire platform around serverless, and so that’s one option, but certainly not available to everybody. But even then, the first time we launched a serverless service, we brought down all of our Redis instances — because Lambda spun up all these containers and we hit connection limits that you would never expect to hit in a normal app.
So if you’ve got something sitting on a mainframe somewhere that is used to only having 20 connections, and then you move some upstream service to Lambda and suddenly it has 10,000 connections instead of 20, you’ve got a problem. If you’ve bought into service-oriented architecture as a whole over the last four or five years, then you might have a better time, because you can say “Well, all these things do is talk to each other via an API, so we can replace a single service with serverless functions.”
Any other emerging serverless trends that interest you?
We’ve solved a lot of the easy, low-hanging fruit problems with serverless at this point. Like how you do environment variables, or how you’re gonna structure a repository and enable developers to quickly write these functions. We’re starting to establish some really good best practices.
What’ll happen next is we’ll get more iteration around architecture. How do I glue these four services together, and how do the Lambda functions look that connect them? We don’t yet have the Rails of serverless — something that doesn’t necessarily expose that it’s actually a Lambda function under the hood. Maybe it allows you to write a bunch of functions in one file that all talk to each other, and then use something like webpack that splits those functions and deploys them in a way that makes sense for your application.
We could even respond to that at runtime. You could have an application that’s actually looking at what’s happening in the code and saying: “Wow this one part of your code is taking a long time to run; we should make that its own Lambda function and we should automatically deploy that and set up this SNS trigger for you.” That’s all very pie in the sky, but I think we’re not that far off from having these tools.
Because really, at the end of the day, as a developer I don’t care about Lambda, right? I mean, I have to care right now because it’s the layer in which I work, but if I can move one layer up where I’m just writing business logic and the code gets split up appropriately, that’s real magic.
Forrest Brazeal is a cloud architect and serverless community advocate at Trek10. He writes the Serverless Superheroes series and draws the ‘FaaS and Furious’ cartoon series at A Cloud Guru. If you have a serverless story to tell, please don’t hesitate to let him know.
Amazon’s digital assistant Alexa might show up in a lot of new devices soon.
That’s because the online retail giant has decided to open up what amounts to Alexa’s ears, her 7-Mic Voice Processing Technology, to third party hardware makers who want to build the digital brain into their devices. The new development kit also includes access to Amazon’s proprietary software for wake word recognition, beamforming, noise reduction, and echo cancellation as well as reference client software for local device control and communication with the Alexa Voice Service.
The move will make it easier and less expensive for hardware makers to build Alexa into their products.
“Since the introduction of Amazon Echo and Echo Dot, device makers have been asking us to provide the technology and tools to enable a far-field Alexa experience for their products,” Priya Abani, director of Amazon Alexa, said in a statement. “With this new reference solution, developers can design products with the same unique 7-mic circular array, beamforming technology, and voice processing software that have made Amazon Echo so popular with customers. It’s never been easier for device makers to integrate Alexa and offer their customers world-class voice experiences.”
Amazon said the new development kit will be invitation only. Device makers can sign up for an invite and to learn more about the technology.
A similar decision in 2015 to give developers the opportunity to build new capabilities for Alexa through the Alexa Skills Kit helped push Amazon into the early lead in the competitive voice assistant market. Developers who want to add to Alexa’s abilities can write code that works with Alexa in the cloud, letting the smart assistant do the heavy lifting of understanding and deciphering spoken commands.
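The cloud-side code described here is typically a Lambda function that receives the Alexa Skills Kit JSON request and returns a spoken response. The sketch below follows the ASK request/response JSON shapes; the intent name is invented for illustration.

```python
import json

def lambda_handler(event, context):
    """Minimal custom-skill handler. 'HelloIntent' is a hypothetical intent."""
    req = event.get("request", {})
    if (req.get("type") == "IntentRequest"
            and req.get("intent", {}).get("name") == "HelloIntent"):
        text = "Hello from a custom skill."
    else:
        text = "Sorry, I did not understand that."
    # Response envelope in the Alexa Skills Kit format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": True,
        },
    }

sample = {"request": {"type": "IntentRequest", "intent": {"name": "HelloIntent"}}}
print(json.dumps(lambda_handler(sample, None)))
```

Alexa does the heavy lifting of turning speech into that structured request; the skill code only has to map intents to responses.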
Former Amazon executive John Rossman shares his checklist for developing an internet of things strategy for your organization.
The internet of things (IoT) may present the biggest opportunity to enterprises since the dawn of the internet age, perhaps even bigger. Research firm Gartner predicts there will be nearly 20 billion devices on the IoT by 2020, and that IoT product and service suppliers will generate more than $300 billion in revenue.
Successfully leveraging that opportunity — bringing together sensors, connectivity, cloud storage, processing, analytics and machine learning to transform business models and processes — requires a plan.
“In the course of my career, I’ve estimated and planned hundreds of projects,” John Rossman, who spent four years launching and then running Amazon’s Marketplace business (which represents more than 50 percent of all Amazon units sold today), writes in his new book, The Amazon Way on IoT: 10 Principles for Every Leader from the World’s Leading Internet of Things Strategies. “I’ve learned that, even before you start seeking answers, it’s imperative to understand the questions. Guiding a team to a successful outcome on a complex project requires understanding of the steps and deliverables, necessary resources, and roles and every inherent risk and dependency.”
Before you start the hardware and software design, and before you figure out how to engage developers, he says, you need to start with a better set of questions.
Rossman says there are three key phases to building a successful IoT strategy. While he presents the steps sequentially, he notes that many steps are actually taken concurrently in practice and can be approached in many different ways.
Part 1. Develop and articulate your strategy
First and foremost, Rossman says, you must narrow and prioritize your options. IoT presents a broad swathe of opportunities. Success depends upon understanding your market, evaluating the opportunities with deliberation and attacking in the right place.
It all begins with a landscape analysis. You need to thoroughly understand your industry and competitors — strengths, weaknesses, opportunities and threats (SWOT). This will help you see the megatrends and forces at play in your market.
“Creating a landscape analysis and value chain of your industry is a very important thing to do,” Rossman tells CIO.com. “Studying the market: What are they saying about IoT in your industry? Truly understanding what is your worst customer moment: Where do customers get frustrated? What data or what event improves that customer experience? What’s the sensor or IoT opportunity that provides that data?”
Value-chain analysis and profit-pool analysis
The next step, Rossman says, is to create a value-chain analysis and profit-pool analysis of your industry. Take a broad view of the industry; don’t give in to tunnel vision with a narrow view of your current business. In some cases, this may involve launching a business in one part of the value chain as a way to gain perspective on the rest of the value chain and to identify other business opportunities.
Partner, competitor and vendor analysis
Create a map of other solutions providers in your space to develop a clear understanding of what exactly each one does, who their key clients are and what their IoT use cases are. Rossman says you should even pick a few to interview. Use this process to understand the needs of customers, the smart way those needs are already being met and where the gaps are.
The next step, Rossman says, is to document specific unmet customer needs and identify the key friction points your future customers are currently experiencing.
“Following the path from start to your desired outcome can help you identify details and priorities that might otherwise be dealt with at too high a level or skipped over entirely,” he writes.
Rossman warns that crafting strong customer personas and journeys is hard work, and you may need to start over several times to get it right.
“The biggest mistake you can make on these is to build them for show rather than for work,” he writes. “Don’t worry about the beauty of these deliverables until things are getting baked (if at all). Do worry about getting at insights, talking to customers and validating your findings with others who can bring insights and challenges to your work.”
Evaluation framework and scoring
Design ways to assess the success of your work.
“This includes understanding a project’s feasibility and transition points and how it will tie into other corporate strategies at your company,” Rossman writes. “Sometimes, especially if your organization is new to the field of connected devices, the success of your project should be measured in terms of what you can learn from the project rather than whether or not it can be classically considered a success.”
You might undertake some early IoT initiatives purely to gain experience, with no expected ROI, he says.
Once you have all these analyses under your belt, you need to share what you’ve learned with the rest of your team. Rossman says he’s had the most success articulating these learnings by building a flywheel model of business systems and by developing a business model.
Part 2. Build your IoT roadmap
Once you’ve explained your big idea and why your organization should pursue it, you need an IoT roadmap that helps you plan and communicate to others what the journey will be like, what is being built and how it will work.
“In creating your roadmap, embrace one of Amazon’s favorite strategies — think big, but bet small,” Rossman writes.
In other words, you need a big vision, but you don’t want to “bet big.” Make small bets to test your thinking. This can involve creating a prototype, a minimally viable product or jointly developing a project with existing customers and partners.
Rossman suggests four methods that can help you articulate your roadmap:
The future press release. Develop a simple but specific product announcement. This forces you to clarify your vision, Rossman says.
A FAQ for your IoT plan. Forecast some of the questions you’re likely to get about your product and create a frequently asked questions (FAQ) document to answer them.
A user manual. Develop a preliminary user manual for your IoT device. It should address the end user. If the product includes an API, you should also build a user manual for the developer.
A project charter. Write a project charter. This is a written project overview that outlines the key facets of the project. It should help you understand the resources you need to undertake the project, what the key milestones are and the schedule.
Part 3. Identify and map your IoT requirements
The last step is to identify and map your IoT requirements — the technical capabilities you need to make your solution a success.
“Companies use many different types of approaches, such as use cases, user stories, process flows, personas, architecture specifications and so on to document their requirements,” Rossman writes.
Regardless of the requirements methodology you settle on, Rossman says it’s important to answer questions around insights (data and events), analytics and recommendations, performance and environment and operating costs.
For example, under ‘insights,’ it’s important to answer questions like these:
What problem, event or insight is the end user solving for?
What insights would be valuable to the customer?
What recommendation or optimization using the data would be valuable to a customer?
What data needs to be collected?
Analytics and recommendations questions might include the following:
How responsive will “adjustments” or optimizations need to be (specify in time range)?
How complex will the “math” be? Write the math equation or pseudologic code if you can.
Will notifications, logic, “math,” or algorithms be consistent and fixed, or will they need to be configurable, updated and managed?
Performance questions might include these:
Estimate the amount of data transmitted over a period of time (hour, day).
What are the consequences of data not being collected?
What are the consequences of data being collected but not transmitted?
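Rossman’s performance questions lend themselves to quick back-of-the-envelope math. The payload size and reporting interval below are invented for illustration only.

```python
# Sizing the "amount of data transmitted over a period of time" question.
# Both inputs are hypothetical assumptions, not figures from the book.
payload_bytes = 200          # one sensor reading
interval_seconds = 30        # one reading every 30 seconds

readings_per_day = 24 * 60 * 60 // interval_seconds
bytes_per_day = readings_per_day * payload_bytes

print(readings_per_day)      # 2880 readings per device per day
print(bytes_per_day)         # 576000 bytes, i.e. roughly 0.55 MB per device per day
```

Multiplying that per-device figure by the planned fleet size quickly shows whether connectivity costs and backend throughput are realistic.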
Environment and operating requirements questions might include these:
What operating conditions will the device and sensor be in? Temperature, moisture, pressure, access and vibration are example conditions.
What device physical security needs or risks are there?
Will the IoT device or sensors be embedded within another device, or will they be independent and a primary physical device themselves?
Costs questions might include these:
What is the cost per device target range?
What is the cost per device for connectivity target range?
What is the additional operating cost range the business can support for ongoing operating infrastructure?
“As you build your plans, remember that though IoT can provide key pieces to the puzzle, it’s no golden ticket,” Rossman writes. “Simply creating an IoT solution will not bring you success. However, if you focus on providing strong value to your customers through new or updated products and services, improving company operations or creating new or more-efficient business models, you’ll be much more likely to find success.”
Amazon’s cloud provider is the biggest player in the rapidly growing cloud infrastructure market, according to new data.
Amazon Web Services (AWS) accounts for one third of the cloud infrastructure market, more than the value generated by its next three biggest rivals combined.
AWS dominates, with a 33.8 percent global market share, while its three nearest competitors — Microsoft, Google, and IBM — together accounted for 30.8 percent of the market, according to calculations by analyst Canalys.
The four leading service providers were followed by Alibaba and Oracle, which made up 2.4 percent and 1.7 percent of the total respectively, with rest of the market made up of a number of smaller players.
According to the researchers, total spending on cloud infrastructure services, which stood at $10.3bn in the fourth quarter of last year (up 49 percent year-on-year) will hit $55.8bn in 2017 — up 46 percent on 2016’s total of $38.1bn.
Continuing demand is leading the cloud companies to accelerate their data centre expansion. Canalys said AWS launched 11 new availability zones globally in 2016, four of which were established in Canada and the UK in the past quarter. IBM also opened its new data centre in the UK, bringing its total cloud data centres to 50 worldwide, while Microsoft also added new facilities in the UK and Germany.
Google and Oracle set up their first infrastructure in Japan and China respectively, aiming at expanding their footprint in the Asia Pacific region, while Alibaba also unveiled the availability of its four new data centres in Australia, Japan, Germany, and the United Arab Emirates.
Strict data sovereignty laws — under which personal data has to be stored in servers that are physically located within the country — mean cloud service providers have to build data centres in key markets, such as Germany, Canada, Japan, the UK, China, and the Middle East, said Canalys research analyst Daniel Liu.
Amazon Web Services, or AWS, the current king of Infrastructure as a Service (IaaS), has once again proved that their leadership position remains undisputed. AWS continued its tradition of posting quarterly sequential growth during the fourth quarter of 2016, reaching $3.546 billion in revenues, representing growth of 47% compared to Q4-15 and $305 million more than their Q3-16 numbers.
Just a quick look at the last three quarters will reveal that their sequential sales expansion increased in 2016 compared to 2015. Despite reaching a run rate of more than $14 billion, Amazon Web Services continues to expand at an extremely fast pace, posting nearly 50% growth year over year.
The fact that they are able to add +$300 million in sales sequentially (from one quarter to the next) shows that the growth momentum is still very much intact and that a slowdown in the short term is out of the question.
If Amazon Web Services continues the current trajectory, an annual revenue figure close to or more than $20 billion will be possible in the next four to six quarters. That will be a monumental achievement for a hard core retailer competing with top tech companies of the world.
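That “four to six quarters” projection is easy to sanity-check by extrapolating the roughly $305 million sequential gains the article cites; the assumption that those gains simply continue unchanged is, of course, a simplification.

```python
# Extrapolate AWS's annualized run rate (quarterly revenue x 4), assuming
# the ~$305M sequential gain reported for Q4 2016 repeats every quarter.
quarterly = 3.546        # Q4 2016 AWS revenue, in $ billions
sequential_gain = 0.305  # $ billions added per quarter

quarters = 0
while quarterly * 4 < 20.0:
    quarterly += sequential_gain
    quarters += 1

print(quarters)  # 5 quarters -> inside the article's four-to-six-quarter window
```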
During the recent quarter, AWS’s competitor Microsoft announced that its annualized commercial cloud run rate has exceeded $14 billion. Amazon possibly makes a little bit more from cloud than Microsoft, but the latter makes a hefty sum from its SaaS product line-up led by Office 365. The fact that Amazon is still a little ahead of Microsoft without a significant SaaS portfolio is testament to the kind of strength AWS has in the IaaS space.
AWS continues to be the most profitable unit for Amazon with an operating income of $926 million, compared to $816 million from Amazon’s North America retail unit. During the fourth quarter earnings call Amazon CFO Brian Olsavsky told analysts that Amazon had cut prices seven times during fourth quarter, and added more than 1000 services and features in 2016 compared to over 700 in 2015.
The process of cutting prices and adding more services helped AWS operating numbers nicely, with the segment reporting an operating margin of 31.3%.
At these levels Amazon remains the company to beat because, with such fat margins, engaging Amazon in a price war is not going to work for any competitor. The unit has enough bandwidth to withstand any price onslaught.
The only way to compete with Amazon is to have a better offering and add more value to your services. But even here, Amazon is leaving no wiggle room. How does a competitor match up to a company that has added thousands of services, is laser-focused on cloud infrastructure, keeps cutting prices without waiting for someone else to do it first, keeps slowly but steadily expanding its data center footprint and has $14 billion to show for it?
The relentless Amazon is possibly the best thing that happened to the cloud industry because they will keep everyone on their toes, including themselves.
You can access Amazon’s fourth quarter 2016 earnings report here.
From data scooping to facial recognition, Amazon’s latest additions give devs new, wide-ranging powers in the cloud
In the beginning, life in the cloud was simple. Type in your credit card number and—voilà—you had root on a machine you didn’t have to unpack, plug in, or bolt into a rack.
That has changed drastically. The cloud has grown so complex and multifunctional that it’s hard to jam all the activity into one word, even a word as protean and unstructured as “cloud.” There are still root logins on machines to rent, but there are also services for slicing, dicing, and storing your data. Programmers don’t need to write and install as much as subscribe and configure.
Here, Amazon has led the way. That’s not to say there isn’t competition. Microsoft, Google, IBM, Rackspace, and Joyent are all churning out brilliant solutions and clever software packages for the cloud, but no company has done more to create feature-rich bundles of services for the cloud than Amazon. Now Amazon Web Services is zooming ahead with a collection of new products that blow apart the idea of the cloud as a blank slate. With the latest round of tools for AWS, the cloud is that much closer to becoming a concierge waiting for you to wave your hand and give it simple instructions.
Here are 10 new services that show how Amazon is redefining what computing in the cloud can be.
Anyone who has done much data science knows it’s often more challenging to collect data than it is to perform analysis. Gathering data and putting it into a standard data format is often more than 90 percent of the job.
Glue is a new collection of Python scripts that automatically crawls your data sources to collect data, apply any necessary transforms, and stick it in Amazon’s cloud. It reaches into your data sources, snagging data using all the standard acronyms, like JSON, CSV, and JDBC. Once it grabs the data, it can analyze the schema and make suggestions.
The Python layer is interesting because you can use it without writing or understanding Python—although it certainly helps if you want to customize what’s going on. Glue will run these jobs as needed to keep all the data flowing. It won’t think for you, but it will juggle many of the details, leaving you to think about the big picture.
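The crawler behavior described here — inspect the data, guess a schema — can be illustrated in plain Python. This sketch is not the Glue API, just the idea of type inference over tabular data.

```python
import csv
import io

def infer_schema(csv_text):
    """Guess a column type for each field, the way a crawler like Glue's does.
    This illustrates the idea only; it is not Amazon's actual algorithm."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    schema = {}
    for column in rows[0]:
        values = [row[column] for row in rows]
        if all(v.lstrip("-").isdigit() for v in values):
            schema[column] = "int"
        else:
            try:
                [float(v) for v in values]
                schema[column] = "double"
            except ValueError:
                schema[column] = "string"
    return schema

sample = "id,price,title\n1,9.99,widget\n2,12.50,gadget\n"
print(infer_schema(sample))  # {'id': 'int', 'price': 'double', 'title': 'string'}
```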
Field Programmable Gate Arrays have long been a secret weapon of hardware designers. Anyone who needs a special chip can build one out of software. There’s no need to build custom masks or fret over fitting all the transistors into the smallest amount of silicon. An FPGA takes your software description of how the transistors should work and rewires itself to act like a real chip.
Amazon’s new AWS EC2 F1 brings the power of FPGAs to the cloud. If you have highly structured and repetitive computing to do, an EC2 F1 instance is for you. With EC2 F1, you can create a software description of a hypothetical chip and compile it down to a tiny number of gates that will compute the answer in the shortest amount of time. The only thing faster is etching the transistors in real silicon.
Who might need this? Bitcoin miners compute the same cryptographically secure hash function a bazillion times each day, which is why many bitcoin miners use FPGAs to speed up the search. If you have a similarly compact, repetitive algorithm that can be written into silicon, the FPGA instance lets you rent machines to run it now. The biggest winners are those who need to run calculations that don’t map easily onto standard instruction sets — for example, when you’re dealing with bit-level functions and other nonstandard, nonarithmetic calculations. If you’re simply adding a column of numbers, the standard instances are better for you. But for some, EC2 with FPGAs might be a big win.
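A population count (counting the set bits in a word) is a classic example of the bit-level, nonarithmetic kernel that maps naturally onto FPGA gates but awkwardly onto standard ALU instructions. On an F1 instance you would describe such logic in an HDL and compile it to gates; this Python version just shows the computation being accelerated.

```python
def popcount(x):
    """Count set bits using Kernighan's trick: each iteration clears
    the lowest set bit, so the loop runs once per set bit."""
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

print(popcount(0b10110100))  # 4
```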
As Docker eats its way into the stack, Amazon is trying to make it easier for anyone to run Docker instances anywhere, anytime. Blox is designed to juggle the clusters of instances so that the optimum number are running—no more, no less.
Blox is event driven, so it’s a bit simpler to write the logic. You don’t need to constantly poll the machines to see what they’re running. They all report back, so the right number can run. Blox is also open source, which makes it easier to reuse Blox outside of the Amazon cloud, if you should need to do so.
Monitoring the efficiency and load of your instances used to be simply another job. If you wanted your cluster to work smoothly, you had to write the code to track everything. Many people brought in third parties with impressive suites of tools. Now Amazon’s X-Ray is offering to do much of the work for you. It’s competing with many third-party tools for watching your stack.
When a website gets a request for data, X-Ray traces it as it flows through your network of machines and services. X-Ray then aggregates the data from multiple instances, regions, and zones so that you can go to one place to flag a recalcitrant server or a wedged database. You can watch your vast empire from a single page.
Rekognition is a new AWS tool aimed at image work. If you want your app to do more than store images, Rekognition will chew through them searching for objects and faces using some of the best-known and most thoroughly tested machine-vision and neural-network algorithms. There’s no need to spend years learning the science; you simply point the algorithm at an image stored in Amazon’s cloud and, voilà, you get a list of objects and a confidence score that indicates how likely it is that each answer is correct. You pay per image.
The algorithms are heavily tuned for facial recognition. They will flag faces, then compare them to each other and to reference images to help you identify them. Your application can store the metadata about the faces for later processing. Once you put a name to the metadata, your app can find that person wherever they appear. Identification is only the beginning. Is someone smiling? Are their eyes closed? The service will deliver the answer, so you don’t need to get your fingers dirty with pixels. If you want to use impressive machine vision, Amazon will charge you not by the click but by the glance at each image.
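To get a feel for that list of objects and confidence scores, here is a small sketch that filters a response shaped like Rekognition’s DetectLabels output. The labels and numbers are invented; only the overall shape follows the documented API.

```python
# Illustrative shape of a Rekognition DetectLabels response (values invented).
sample_response = {
    "Labels": [
        {"Name": "Dog", "Confidence": 97.3},
        {"Name": "Pet", "Confidence": 95.1},
        {"Name": "Skateboard", "Confidence": 52.8},
    ]
}

def confident_labels(response: dict, threshold: float = 90.0) -> list:
    # Keep only the objects the service is reasonably sure about.
    return [label["Name"] for label in response["Labels"]
            if label["Confidence"] >= threshold]

print(confident_labels(sample_response))  # ['Dog', 'Pet']
```

In practice you would get `sample_response` back from a call to the service; the thresholding is where your application decides how much doubt it can tolerate.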
Working with Amazon’s S3 has always been simple: you ask for an object by key, and S3 returns it. Finding the parts you actually want inside that data, though, meant writing your own code to loop through it. Amazon’s Athena now makes that much simpler. It runs queries against data in S3 directly, so you don’t need to write the looping code yourself. Yes, we’ve become too lazy to write loops.
Athena uses SQL syntax, which should make database admins happy. Amazon will charge you for every byte that Athena churns through while looking for your answer. But don’t get too worried about the meter running out of control, because the price is only $5 per terabyte. That’s about half a billionth of a cent per byte. It makes the penny candy stores look expensive.
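The arithmetic is easy to check. A minimal sketch of the per-query cost, assuming the launch list price of $5 per terabyte scanned:

```python
def athena_query_cost(bytes_scanned: int, price_per_tb: float = 5.00) -> float:
    # Athena's list price at launch: $5 per terabyte of data scanned.
    tb_scanned = bytes_scanned / 1e12
    return tb_scanned * price_per_tb

# Scanning a 10 GB table costs about a nickel.
cost = athena_query_cost(10 * 10**9)
print(f"${cost:.4f}")  # $0.0500
```

Working the same price down to a single byte gives $5 / 10^12 bytes, or roughly 0.5 billionths of a cent per byte, which is where the figure above comes from.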
The original idea of a content delivery network was to speed up the delivery of simple files like JPG images and CSS files by pushing out copies to a vast array of content servers parked near the edges of the Internet. Amazon is taking this a step further by letting us push Node.js code out to these edges, where it will run and respond. Your code won’t sit on one central server waiting for requests to poke along the backbone from people around the world. It will clone itself, so it can respond in microseconds without being impeded by all that network latency.
Amazon will bill you only when your code is running. You won’t need to set up separate instances or rent full machines to keep the service up. It is currently in a closed test, and you must apply to get your code into the stack.
If you want some kind of physical control of your data, the cloud isn’t for you. The power and reassurance that comes from touching the hard drive, DVD-ROM, or thumb drive holding your data isn’t available to you in the cloud. Where is my data exactly? How can I get it? How can I make a backup copy? The cloud makes anyone who cares about these things break out in cold sweats.
The Snowball Edge is a box filled with data that can be delivered anywhere you want. It even has a shipping label that’s really an E-Ink display, exactly like the one Amazon puts on a Kindle. When you want a copy of massive amounts of data that you’ve stored in Amazon’s cloud, Amazon will copy it to the box and ship the box to wherever you are. (The documentation doesn’t say whether Prime members get free shipping.)
Snowball Edge serves a practical purpose. Many developers have collected large blocks of data through cloud applications, and downloading those blocks across the open internet is far too slow. If Amazon wants to attract large data-processing jobs, it needs to make it easier to get large volumes of data out of the system.
If you’ve accumulated an exabyte of data that you need somewhere else for processing, Amazon has a bigger version called Snowmobile that’s built into an 18-wheel truck complete with GPS tracking.
Oh, it’s worth noting that the boxes aren’t dumb storage bricks. They can run arbitrary Node.js code, too, so you can search, filter, or analyze the data in place … just in case.
Once you’ve amassed a list of customers, members, or subscribers, there will be times when you want to push a message out to them. Perhaps you’ve updated your app or want to convey a special offer. You could blast an email to everyone on your list, but that’s a step above spam. A better solution is to target your message, and Amazon’s new Pinpoint tool offers the infrastructure to make that simpler.
You’ll need to integrate some code with your app. Once that’s done, Pinpoint helps you send out messages when your users seem ready to receive them. After a targeted campaign ends, Pinpoint collects and reports data about the level of engagement with it, so you can tune your targeting efforts in the future.
Who gets the last word? Your app can, if you use Polly, the latest generation of speech synthesis. In goes text and out comes sound—sound waves that form words that our ears can hear, all the better to make audio interfaces for the internet of things.
Operator and vendor revenues across the main cloud services and infrastructure market segments hit $148 billion (£120.5bn) in 2016, growing at 25% annually, according to the latest note from analyst firm Synergy Research.
Infrastructure as a service (IaaS) and platform as a service (PaaS) experienced the highest growth rates at 53%, followed by hosted private cloud infrastructure services, at 35%, and enterprise SaaS, at 34%. Amazon Web Services (AWS) and Microsoft lead the way in IaaS and PaaS, with IBM and Rackspace on top for hosted private cloud.
In the four quarters ending September (Q3) 2016, total spend on hardware and software to build cloud infrastructure exceeded $65bn, according to the researchers. Spend on private cloud accounts for more than half of the overall total, but public cloud spend is growing much more rapidly. The note also argues unified comms as a service (UCaaS) is growing ‘steadily’.
“We tagged 2015 as the year when cloud became mainstream and I’d say that 2016 is the year that cloud started to dominate many IT market segments,” said Jeremy Duke, Synergy Research Group founder and chief analyst in a statement. “Major barriers to cloud adoption are now almost a thing of the past, especially on the public cloud side.
“Cloud technologies are now generating massive revenues for technology vendors and cloud service providers and yet there are still many years of strong growth ahead,” Duke added.
The most recent examination of the cloud infrastructure market by Synergy back in August argued AWS, Microsoft, IBM and Google continue to grow more quickly than their smaller competitors and, between them, own more than half of the global cloud infrastructure service market.
At its re:Invent event, Amazon announced a host of new services that highlight its commitment to enterprises. Andy Jassy, CEO of AWS, emphasized innovation in the areas of artificial intelligence, analytics, and hybrid cloud.
Amazon has been using deep learning and artificial intelligence in its retail business to enhance the customer experience. The company claims it has thousands of engineers working on artificial intelligence to improve search and discovery, fulfillment and logistics, product recommendations, and inventory management. Amazon is now bringing the same expertise to the cloud, exposing APIs that developers can consume to build intelligent applications. Dubbed Amazon AI, the new service offers powerful capabilities such as image analysis, text-to-speech conversion, and natural language processing.
Amazon Rekognition is a rich image analysis service that can identify various attributes of an image. Amazon Polly is a service that accepts text and returns an MP3 audio file containing the spoken version. With support for 47 voices in 23 languages, the service exposes rich speech capabilities. Amazon Lex is the new service for natural language processing and automatic speech recognition; it is the same technology that powers Alexa and Amazon Echo. The service converts text or voice into intents that developers can parse to perform a set of actions.
Amazon is also investing in MXNet, a deep learning framework that can run in a variety of environments. Apart from this, Amazon is also optimizing EC2 images to run popular deep learning frameworks including CNTK, TensorFlow, and Theano.
In the last decade, Amazon has added many services and features to its platform. While customers appreciate the pace of innovation, first-time users often complain about the overwhelming number of options and choices. Even to launch a simple virtual machine in EC2 to run a blog or a development environment, users may have to choose from a dizzying variety of options. To simplify the experience of launching non-mission-critical workloads in EC2, AWS has announced a new service called Amazon Lightsail. Customers can launch a VM with just a few clicks, without worrying about the complex choices they would otherwise need to make. As they get familiar with EC2, they can start integrating with other services such as EBS and Elastic IP. Starting at $5 a month, this is the cheapest compute service available in AWS. Amazon calls Lightsail the express mode for EC2, as it dramatically reduces the time to launch a VM.
Amazon Lightsail competes with VPS providers such as DigitalOcean and Linode. The sweet spot for these vendors has been developers and non-technical users who need a virtual private server to run a workload in the cloud. With Lightsail, AWS wants to attract the developers, small and medium businesses, and digital agencies that typically use a VPS service for their needs.
On the analytics front, Amazon is adding a new interactive, serverless query service called Amazon Athena that can be used to retrieve data stored in Amazon S3. The service supports complex SQL queries, including joins, to return data from Amazon S3. Customers can use custom metadata to perform complex queries. Amazon Athena’s pricing is based on a per-query model.
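To get a feel for the SQL involved, the sketch below runs an Athena-style join and aggregation. Since Athena itself requires an AWS account and data in S3, the same flavor of query is demonstrated here against an in-memory SQLite database; the tables and rows are invented.

```python
import sqlite3

# Athena runs standard SQL (including joins) directly over files in S3.
# To show the flavor locally, the same style of query runs here against
# an in-memory SQLite database with invented pageview data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pageviews (page TEXT, user_id INTEGER);
    CREATE TABLE users (user_id INTEGER, country TEXT);
    INSERT INTO pageviews VALUES ('/home', 1), ('/home', 2), ('/about', 1);
    INSERT INTO users VALUES (1, 'US'), (2, 'DE');
""")

query = """
    SELECT u.country, COUNT(*) AS views
    FROM pageviews p
    JOIN users u ON p.user_id = u.user_id
    GROUP BY u.country
    ORDER BY views DESC
"""
for country, views in conn.execute(query):
    print(country, views)
```

With Athena the `pageviews` and `users` tables would be schemas mapped over objects in S3 rather than rows loaded into a database, which is the whole point: no cluster to provision, no data to load.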
Last month, AWS and VMware partnered to bring hybrid cloud capabilities to customers. With this, customers can run and manage workloads in the cloud, seamlessly from existing VMware tools.
Amazon claims that the customers will be able to use VMware’s virtualization and management software to seamlessly deploy and manage VMware workloads across all of their on-premises and AWS environments. This offering allows customers to leverage their existing investments in VMware skills and tooling to take advantage of the flexibility of the AWS Cloud.
Pat Gelsinger, CEO of VMware, was on stage with Andy Jassy talking about the value this partnership brings to customers.
In a surprising move, Amazon is making its serverless computing framework, AWS Lambda, available outside of its cloud environment. Extending Lambda to connected devices, AWS has announced AWS Greengrass, an embedded Lambda compute environment that can be installed on IoT devices and hubs. It delivers local compute, storage, and messaging infrastructure in environments that demand offline access. Developers can use the familiar Lambda programming model to develop applications for both offline and online scenarios. Greengrass Core is designed to run on hubs and gateways, while the Greengrass runtime will power low-end, resource-constrained devices.
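The Lambda programming model that Greengrass reuses is just a handler function that receives an event and a context. A minimal sketch, with an invented sensor event, shows how the same function could react locally on a device or run in the cloud:

```python
# The Lambda programming model: one handler taking an event and a context.
# Greengrass runs the same shape of function on the device itself, so this
# handler (the event fields are invented for illustration) can respond to
# sensor readings locally, with no round trip to the cloud.
def handler(event, context):
    reading = event.get("temperature_c")
    if reading is None:
        return {"status": "ignored"}
    # React on the spot: flag readings above a threshold even when offline.
    return {"status": "alert" if reading > 30.0 else "ok",
            "temperature_c": reading}

print(handler({"temperature_c": 31.5}, None))  # {'status': 'alert', 'temperature_c': 31.5}
```

The same code deployed to the cloud would be triggered by an IoT message arriving over the network; on the device, Greengrass invokes it directly.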
Extending the hybrid scenarios to industrial IoT, Amazon has also announced a new appliance called Snowball Edge that runs Greengrass Core. This appliance is expected to be deployed in environments that generate extensive offline data. It exposes an S3-compatible endpoint, so developers can use the same ingestion API as in the cloud. Since the device runs Lambda, developers can create functions that respond to events locally. Amazon Snowball Edge ships with 100TB of capacity plus high-speed Ethernet, Wi-Fi, and 3G cellular connectivity. When the ingestion process is complete, customers send the appliance back to AWS for the data to be uploaded.
Pushing the limits of data migration to the cloud, Amazon is also launching a specialized truck called AWS Snowmobile that can move exabytes of data to AWS. The truck carries a 48-foot container that can hold up to 100 petabytes of data. Customers must call AWS to open the vestibule of the truck and start ingesting data; they just need to plug in the fiber and power cables to begin loading. Amazon estimates that the loading and unloading process takes about three months on each side.
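The capacity math is simple enough to sketch: at 100 petabytes per truck, a full exabyte is a ten-truck convoy.

```python
import math

PB_PER_SNOWMOBILE = 100  # each 48-foot container holds up to 100 PB

def snowmobiles_needed(data_pb: float) -> int:
    # Round up: even a single petabyte over a truck's capacity needs another truck.
    return math.ceil(data_pb / PB_PER_SNOWMOBILE)

# Moving a full exabyte (1,000 PB) takes ten trucks.
print(snowmobiles_needed(1000))  # 10
```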
Apart from these services, Andy Jassy also announced a slew of enhancements to Amazon EC2 and RDS.
It is the end of 2016! Tina Barr has a great roundup of all the startups we featured this year. Check it out and see if you managed to read about them all, then come back in January when we start all over again.
Also: I want to thank Elsa Mayer for her hard work helping out with the startup posts.
What a year it has been for startups! We began the Hot Startups series in March as a way to feature exciting AWS-powered startups and the motivation behind the products and services they offer. Since then we’ve featured 27 startups across a wide range of industries including healthcare, commerce, social, finance, and many more. Each startup offers a unique product to its users – whether it’s an entertaining app or website, an educational platform, or a product to help businesses grow, startups have the ability to reach customers far and wide.
The startups we showcased this year are headquartered all over the world. Check them out on the map below!
In case you missed any of the posts, here is a list of all the hot startups we featured in 2016:
Like everything in enterprise technology, pricing can be a bit complicated. Here’s an analysis from RightScale looking at how discounts alter the cloud pricing equation. Google comes out cheapest in most scenarios.
With Amazon Web Services hosting its annual conference this week, talk about the price for performance and agility equation will be everywhere.
With AWS re:Invent kicking off this week, the largest cloud service provider has been busy cutting prices on various instances. Rest assured that Google and Microsoft are likely to toss in their own price cuts as AWS speaks to its base.
But the cloud pricing equation is getting complicated for compute instances. Not so shockingly, these price discussions have to include discounts. Like everything in enterprise technology, there’s the street price and your price. Comparing the cloud providers on pricing is tricky given Microsoft, Google, and AWS all have different approaches to discounts.
Fortunately, RightScale on Monday will outline a study on cloud compute prices. Generally speaking, AWS won’t be your cheapest option for compute. AWS typically lands in the middle between Microsoft Azure and Google Cloud.
The bottom line is that AWS uses reserved instances in one-year and three-year terms to offer discounts. Microsoft requires an enterprise agreement for its Azure discounts. Google has sustained usage discounts that are relatively easy to follow.
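Those sustained-usage discounts are mechanical enough to sketch. The rates below reflect Google’s documented scheme around this time (each successive quarter of the month’s usage billed at a lower incremental rate, applied automatically); treat the exact numbers as illustrative.

```python
# Incremental billing rates for each successive quarter of the month's usage,
# per Google's sustained-use discount scheme circa 2016 (illustrative).
RATES = [1.00, 0.80, 0.60, 0.40]

def effective_hourly_rate(base_rate: float, fraction_of_month_used: float) -> float:
    # Bill each quarter-of-a-month slice of usage at its incremental rate,
    # then average back to an effective hourly price.
    billed = 0.0
    for i, rate in enumerate(RATES):
        quarter_start = i * 0.25
        used_in_quarter = min(max(fraction_of_month_used - quarter_start, 0.0), 0.25)
        billed += used_in_quarter * rate * base_rate
    return billed / fraction_of_month_used

# Running an instance the entire month yields a 30% discount, no commitment needed.
print(round(effective_hourly_rate(0.10, 1.0), 4))  # 0.07
```

This is the key contrast with AWS reserved instances: the discount requires no upfront purchase or capacity planning, which is why RightScale found Google cheapest in most scenarios.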
Overall, RightScale found that Google will be cheapest in most scenarios because sustained usage discounts are automatically applied. Among the key takeaways:
If you need solid-state drive performance instead of standard attached storage, Google will charge you a premium.
Azure matches or beats AWS for on-demand pricing consistently.
AWS won’t be the cheapest alternative in many scenarios. Then again — AWS has a bigger menu, more advanced cloud services, and the customer base where it doesn’t have to go crazy on pricing. AWS just has to be fair.
Your results will vary based on the level of your Microsoft enterprise agreement and which reserved instances you purchased on AWS.
Here are three slides to ponder from RightScale.
Add it up and you’d be well advised to make your own comparisons; check out RightScale’s SlideShare, and then crunch some numbers. In the end, enterprises may have to keep all three cloud providers in their company, if only to play them off each other.
The market for cloud IaaS has consolidated significantly around two leading service providers. The future of other service providers is increasingly uncertain and customers must carefully manage provider-related risks.
Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, SaaS providers, cloud service brokerages (CSBs) or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.
In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near real time, and metered by use. Self-service interfaces are exposed directly to the customer, including a web-based UI and an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center. Thus, this Magic Quadrant covers both public and private cloud IaaS offerings.
Cloud IaaS includes not just the resources themselves, but also the automated management of those resources, management tools delivered as services, and cloud software infrastructure services. The last category includes middleware and databases as a service, up to and including PaaS capabilities. However, it does not include full stand-alone PaaS capabilities, such as application PaaS (aPaaS) and integration PaaS (iPaaS).
We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as an enabling technology; we call the latter “cloud-enabled system infrastructure” (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services.
Gartner’s clients are mainly enterprises, midmarket businesses and technology companies of all sizes, and the evaluation focuses on typical client requirements. This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and virtual data centers (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both cloud-native application architectures and enterprise application architectures.
Customers typically exhibit a bimodal IT sourcing pattern for cloud IaaS (see “Bimodal IT: How to Be Digitally Agile Without Making a Mess” and “Best Practices for Planning a Cloud Infrastructure-as-a-Service Strategy — Bimodal IT, Not Hybrid Infrastructure” ). Most cloud IaaS is bought for Mode 2 agile IT, emphasizing developer productivity and business agility, but an increasing amount of cloud IaaS is being bought for Mode 1 traditional IT, with an emphasis on cost reduction, safety and security. Infrastructure and operations (I&O) leaders typically lead the sourcing for Mode 1 cloud needs. By contrast, sourcing for Mode 2 offerings is typically driven by enterprise architects, application development leaders and digital business leaders. This Magic Quadrant considers both sourcing patterns and their associated customer behaviors and requirements.
This Magic Quadrant strongly emphasizes self-service and automation in a standardized environment. It focuses on the needs of customers whose primary need is self-service cloud IaaS, although this may be supplemented by a small amount of colocation or dedicated servers. In self-service cloud IaaS, the customer retains most of the responsibility for IT operations (even if the customer subsequently chooses to outsource that responsibility via third-party managed services).
Organizations that need significant customization or managed services for a single application, or that are seeking cloud IaaS as a supplement to a traditional hosting solution (“hybrid hosting”), should consult the Magic Quadrants for managed hosting instead ( “Magic Quadrant for Cloud-Enabled Managed Hosting, North America,” “Magic Quadrant for Managed Hybrid Cloud Hosting, Europe” and “Magic Quadrant for Cloud-Enabled Managed Hosting, Asia/Pacific” ). Organizations that want a fully custom-built solution, or managed services with an underlying CESI, should consult the Magic Quadrants for data center outsourcing and infrastructure utility services ( “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, North America,” “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Europe” and “Magic Quadrant for Data Center Outsourcing and Infrastructure Utility Services, Asia/Pacific” ).
This Magic Quadrant evaluates all industrialized cloud IaaS solutions, whether public cloud (multitenant or mixed-tenancy), community cloud (multitenant but limited to a particular customer community), or private cloud (fully single-tenant, hosted by the provider or on-premises). It is not merely a Magic Quadrant for public cloud IaaS. To be considered industrialized, a service must be standardized across the customer base. Although most of the providers in this Magic Quadrant do offer custom private cloud IaaS, we have not considered these nonindustrialized offerings in our evaluations. Organizations that are looking for custom-built, custom-managed private clouds should use our Magic Quadrants for data center outsourcing and infrastructure utility services instead (see above).
Understanding the Vendor Profiles, Strengths and Cautions
Cloud IaaS providers that target enterprise and midmarket customers generally offer a high-quality service, with excellent availability, good performance, high security and good customer support. Exceptions will be noted in this Magic Quadrant’s evaluations of individual providers. Note that when we say “all providers,” we specifically mean “all the evaluated providers included in this Magic Quadrant,” not all cloud IaaS providers in general. Keep the following in mind when reading the vendor profiles:
All the providers have a public cloud IaaS offering. Many also have an industrialized private cloud offering, where every customer is on standardized infrastructure and cloud management tools, although this may or may not resemble the provider’s public cloud service in either architecture or quality. A single architecture and feature set and cross-cloud management, for both public and private cloud IaaS, make it easier for customers to combine and migrate across service models as their needs dictate, and enable the provider to use its engineering investments more effectively. Most of the providers also offer custom private clouds.
Most of the providers have offerings that can serve the needs of midmarket businesses and enterprises, as well as other companies that use technology at scale. A few of the providers primarily target individual developers, small businesses and startups, and lack the features needed by larger organizations, although that does not mean that their customer base is exclusively small businesses.
Most of the providers are oriented toward the needs of Mode 1 traditional IT, especially IT operations organizations, with an emphasis on control, governance and security; many such providers have a “rented virtualization” orientation, and are capable of running both new and legacy applications, but are unlikely to provide transformational benefits. A much smaller number of providers are oriented toward the needs of Mode 2 agile IT; these providers typically emphasize capabilities for new applications and a DevOps orientation, but are also capable of running legacy applications and being managed in a traditional fashion.
All the providers offer basic cloud IaaS — compute, storage and networking resources as a service. A few of the providers offer additional value-added capabilities as well, notably cloud software infrastructure services — typically middleware and databases as a service — up to and including PaaS capabilities. These services, along with IT operations management (ITOM) capabilities as a service (especially DevOps-related services) are a vital differentiator in the market, especially for Mode 2 agile IT buyers.
We consider an offering to be public cloud IaaS if the storage and network elements are shared; the compute can be multitenant, single-tenant or both. Private cloud IaaS uses single-tenant compute and storage, but unless the solution is on the customer’s premises, the network is usually still shared.
In general, monthly compute availability SLAs of 99.95% and higher are the norm, and they are typically higher than availability SLAs for managed hosting. Service credits for outages in a given month are typically capped at 100% of the monthly bill. This availability percentage is typically non-negotiable, as it is based on an engineering estimate of the underlying infrastructure reliability. Maintenance windows are normally excluded from the SLA.
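A 99.95% monthly SLA translates into a concrete downtime budget, which is easy to compute:

```python
def monthly_downtime_budget_minutes(sla: float, days_in_month: int = 30) -> float:
    # Minutes of downtime a provider can incur in a month before breaching the SLA.
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - sla)

# A 99.95% monthly SLA allows roughly 21.6 minutes of downtime in a 30-day month.
print(round(monthly_downtime_budget_minutes(0.9995), 1))  # 21.6
```

Since service credits are typically capped at 100% of the monthly bill, the SLA is better read as an engineering statement about expected reliability than as meaningful financial compensation for an outage.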
Some providers have a compute availability SLA that requires the customer to use compute capabilities in at least two fault domains (sometimes known as “availability zones” or “availability sets”); an SLA violation requires both fault domains to fail. Providers with an SLA of this type are explicitly noted as having a multi-fault-domain SLA.
Very few of the providers have an SLA for compute or storage performance. However, most of the providers do not oversubscribe compute or RAM resources; providers that do not guarantee resource allocations are noted explicitly.
Many providers have additional SLAs covering network availability and performance, customer service responsiveness and other service aspects.
Infrastructure resources are not normally automatically replicated into multiple data centers, unless otherwise noted; customers are responsible for their own business continuity. Some providers offer optional disaster recovery solutions.
All providers offer, at minimum, per-hour metering of virtual machines (VMs), and some can offer shorter metering increments, which can be more cost-effective for short-term batch jobs. Providers charge on a per-VM basis, unless otherwise noted. Some providers offer either a shared resource pool (SRP) pricing model or are flexible about how they price the service. In the SRP model, customers contract for a certain amount of capacity (in terms of CPU and RAM), but can allocate that capacity to VMs in an arbitrary way, including being able to oversubscribe that capacity voluntarily; additional capacity can usually be purchased on demand by the hour.
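The difference a metering increment makes is easy to quantify. A sketch with hypothetical prices, assuming usage is rounded up to the billing increment:

```python
import math

def billed_cost(runtime_minutes: float, hourly_rate: float, increment_minutes: int) -> float:
    # Usage is rounded up to the billing increment before pricing
    # (a common, though not universal, metering convention).
    increments = math.ceil(runtime_minutes / increment_minutes)
    return increments * increment_minutes * (hourly_rate / 60)

# A 5-minute batch job at a hypothetical $0.60/hour:
# per-hour billing charges the full hour; per-minute billing charges only what ran.
print(billed_cost(5, 0.60, 60))  # 0.6
print(billed_cost(5, 0.60, 1))   # 0.05
```

For short-lived batch jobs the finer increment is a 12x difference here, which is why metering granularity matters to that workload and barely matters to a server that runs all month.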
Some of the providers are able to offer bare-metal physical servers on a dynamic basis. Due to the longer provisioning times involved for physical equipment (two hours is common), the minimum billing increment for such servers is usually daily, rather than hourly. Providers with a bare-metal option are noted as such.
All the providers offer an option for colocation, unless otherwise noted. Many customers have needs that require a small amount of supplemental colocation in conjunction with their cloud — most frequently for a large-scale database, but sometimes for specialized network equipment, software that cannot be licensed on virtualized servers, or legacy equipment. Colocation is specifically mentioned only when a service provider actively sells colocation as a stand-alone service; a significant number of midmarket customers plan to move into colocation and then gradually migrate into that provider’s IaaS offering. If a provider does not offer colocation itself but can meet such needs via a partner exchange, this is explicitly noted.
All the providers claim to have high security standards. The extent of the security controls provided to customers varies significantly, though. All the providers evaluated can offer solutions that will meet common regulatory compliance needs, unless otherwise noted. All the providers have SSAE 16 audits for their data centers (see Note 1). Some may have security-specific third-party assessments such as ISO 27001 or SOC 2 for their cloud IaaS offerings (see Note 2), both of which provide a relatively high level of assurance that the providers are adhering to generally accepted practices for the security of their systems, but do not address the extent of controls offered to customers. Security is a shared responsibility; customers need to correctly configure controls and may need to supply additional controls beyond what their provider offers.
Some providers offer a software marketplace where software vendors specially license and package their software to run on that provider’s cloud IaaS offering. Marketplace software can be automatically installed with a click, and can be billed through the provider. Some marketplaces also contain other third-party solutions and services.
All providers offer enterprise-class support with 24/7 customer service, via phone, email and chat, along with an account manager. Most providers include this with their offering. Some offer a lower level of support by default, but allow customers to pay extra for enterprise-class support.
All the providers will sign contracts with customers, can invoice, and can consolidate bills from multiple accounts. While some may also offer online sign-up and credit card billing, they recognize that enterprise buyers prefer contracts and invoices. Some will sign “zero dollar” contracts that do not commit a customer to a certain volume.
Many of the providers have white-label or reseller programs, and some may be willing to license their software. We mention software licensing only when it is a significant portion of the provider’s business; other service providers, not enterprises, are usually the licensees. We do not mention channel programs; potential partners should simply assume that all these companies are open to discussing a relationship.
Most of the providers offer optional managed services on IaaS. However, not all offer the same type of managed services on IaaS as they do in their broader managed hosting or data center outsourcing services. Some may have managed service provider (MSP) or system integrator (SI) partners that provide managed and professional services.
All the evaluated providers offer a portal, documentation, technical support, customer support and contracts in English. Some can provide one or more of these in languages other than English. Most providers can conduct business in local languages, even if all aspects of service are English-only. Customers who need every aspect of service delivered in a language other than English will find it very challenging to source an offering.
All the providers are part of very large corporations or otherwise have a well-established business. However, many of the providers are undergoing significant re-evaluation of their cloud IaaS businesses. Existing and prospective customers should be aware that such providers may make significant changes to the strategy and direction of their cloud IaaS business, including replacing their current offering with a new platform, or exiting this business entirely in favor of partnering with a more successful provider.
In previous years, this Magic Quadrant has provided significant technical detail on the offerings. These detailed evaluations are now published in “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide” instead.
The service provider descriptions are accurate as of the time of publication. Our technical evaluation of service features took place between January 2016 and April 2016.
Format of the Vendor Descriptions
When describing each provider, we first summarize the nature of the company and then provide information about its industrialized cloud IaaS offerings in the following format:
Offerings: A list of the industrialized cloud IaaS offerings (both public and private) that are directly offered by the provider. Also included is commentary on the ways in which these offerings deviate from the standard capabilities detailed in the Understanding the Vendor Profiles, Strengths and Cautions section above. We also list related capabilities of interest, such as object storage, content delivery network (CDN) and managed services, but this is not a comprehensive listing of the provider’s offerings.
Locations: Cloud IaaS data center locations by country, languages that the company does business in, and languages that technical support can be conducted in.
Recommended mode: We note whether the vendor’s offerings are likely to appeal to Mode 1 safety-and-efficiency-oriented IT, Mode 2 agility-oriented IT, or both. We also note whether the offerings are likely to be useful for organizations seeking IT transformation. This recommendation reflects the way that a provider goes to market, provides service and support, and designs its offerings. All such statements are specific to the provider’s cloud IaaS offering, not the provider as a whole.
Recommended uses: These are the circumstances under which we recommend the provider. They are not the only circumstances in which it may be a useful provider, but they are the use cases for which it is best suited. For a more detailed explanation of the use cases, see the Recommended Uses section below.
In the list of offerings, we state the basis of each provider’s virtualization technology and, if relevant, its cloud management platform (CMP). We also state what APIs it supports — the Amazon Web Services (AWS), OpenStack and vCloud APIs are the three that have broad adoption, but many providers also have their own unique API. Note that supporting one of the three common APIs does not provide assurance that a provider’s service is compatible with a specific tool that purports to support that API; the completeness and accuracy of API implementations vary considerably. Furthermore, the use of the same underlying CMP or API compatibility does not indicate that two services are interoperable. Specifically, OpenStack-based clouds differ significantly from one another, limiting portability; the marketing hype of “no vendor lock-in” is, practically speaking, untrue.
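The caveat above — that claiming support for a common API does not guarantee compatibility with tools built against that API — can be illustrated with a toy sketch. The provider classes and operation names below are invented for illustration; the point is that a pre-flight capability check is safer than assuming compatibility from the API name alone:

```python
# Hypothetical sketch: two providers both claim to expose the "same" cloud API,
# but implement different subsets of its operations. The operation names and
# provider classes here are invented, not taken from any real provider.
REQUIRED_OPS = {"create_server", "attach_volume", "snapshot_volume"}

class ProviderA:
    """Hypothetical provider with a complete implementation of the API."""
    def create_server(self): ...
    def attach_volume(self): ...
    def snapshot_volume(self): ...

class ProviderB:
    """Hypothetical provider exposing only a partial clone of the same API."""
    def create_server(self): ...
    def attach_volume(self): ...

def missing_ops(provider, required=REQUIRED_OPS):
    """Return the required operations this provider does not implement."""
    return sorted(op for op in required if not callable(getattr(provider, op, None)))

print(missing_ops(ProviderA()))  # []
print(missing_ops(ProviderB()))  # ['snapshot_volume']
```

A tool written against the full API would work with ProviderA but fail at runtime against ProviderB, despite both "supporting" the same nominal interface — which is why per-tool verification matters more than the API label.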
For many customers, the underlying hypervisor will matter, particularly for those that intend to run commercial software on IaaS. Many independent software vendors (ISVs) support only VMware virtualization, and those vendors that support Xen may support only Citrix XenServer, not open-source Xen (which is often customized by IaaS providers and is likely to be different from the current open-source version). Similarly, some ISVs may support the Kernel-based Virtual Machine (KVM) hypervisor in the form of Red Hat Enterprise Virtualization, whereas many IaaS providers use open-source KVM.
For a detailed technical description of public cloud IaaS offerings, along with a use-case-focused technical evaluation, see “Critical Capabilities for Public Cloud Infrastructure as a Service, Worldwide.”
We also provide a detailed list of evaluation criteria in “Evaluation Criteria for Cloud Infrastructure as a Service.” We have used those criteria to perform in-depth assessments of several providers: see “In-Depth Assessment of Amazon Web Services,” “In-Depth Assessment of Google Cloud Platform,” “In-Depth Assessment of SoftLayer, an IBM Company” and “In-Depth Assessment of Microsoft Azure IaaS.”
For each vendor, we provide recommendations for use. The most typical recommended uses are:
Cloud-native applications. These are applications specifically architected to run in a cloud IaaS environment, using cloud-native principles and design patterns.
E-business hosting. These are e-marketing sites, e-commerce sites, SaaS applications, and similar modern websites and web-based applications. They are usually internet-facing. They are designed to scale out and are resilient to infrastructure failure, but they might not follow cloud-native design principles.
General business applications. These are the kinds of general-purpose workloads typically found in the internal data centers of most traditional businesses; the application users are usually located within the business. Many such workloads are small, and they are often not designed to scale out. They are usually architected with the assumption that the underlying infrastructure is reliable, but they are not necessarily mission-critical. Examples include intranet sites, collaboration applications such as Microsoft SharePoint and many business process applications.
Enterprise applications. These are general-purpose workloads that are mission-critical, and they may be complex, performance-sensitive or contain highly sensitive data; they are typical of a modest percentage of the workloads found in the internal data centers of most traditional businesses. They are usually not designed to scale out, and the workloads may demand large VM sizes. They are architected with the assumption that the underlying infrastructure is reliable and capable of high performance.
Development environments. These workloads are related to the development and testing of applications. They are assumed not to require high availability or high performance. However, they are likely to require governance for teams of users.
Batch computing. These workloads include high-performance computing (HPC), big data analytics and other workloads that require large amounts of capacity on demand. They do not require high availability, but may require high performance.
Internet of Things (IoT) applications. IoT applications typically combine the traits of cloud-native applications with the traits of big data applications. They typically require high availability, flexible and scalable capacity, interaction with distributed and mobile client devices, and strong security; many such applications also have significant regulatory compliance requirements.
For all the vendors, the recommended uses are specific to self-managed cloud IaaS. However, many of the providers also have managed services, as well as other cloud and noncloud services that may be used in conjunction with cloud IaaS. These include hybrid hosting (customers sometimes blend solutions, such as an entirely self-managed front-end web tier on public cloud IaaS, with managed hosting for the application servers and database), as well as hybrid IaaS/PaaS solutions. Even though we do not evaluate managed services, PaaS and the like in this Magic Quadrant, they are part of a vendor’s overall value proposition and we mention them in the context of providing more comprehensive solution recommendations.
Figure 1. Magic Quadrant for Cloud Infrastructure as a Service, Worldwide