Top 17 Public Cloud Platforms
Last updated: March 05, 2020
Public cloud platforms provide on-demand storage and compute resources for enterprise data and applications, helping organizations save money and enhance data security.
Access a reliable, on-demand infrastructure to power your applications, from hosted internal applications to SaaS offerings. Scale to meet your application demands, whether one server or a large cluster. Leverage scalable database solutions. Utilize cost-effective solutions for storing and retrieving any amount of data, any time, anywhere.
Microsoft Azure is an open and flexible cloud platform that enables you to quickly build, deploy and manage applications across a global network of Microsoft-managed datacenters. You can build applications using any language, tool or framework. And you can integrate your public cloud applications with your existing IT environment.
Google Cloud Platform is a set of modular cloud-based services that provide the building blocks to quickly develop anything from simple websites to complex applications. Explore how you can make Cloud Platform work for you.
Heroku is the leading platform as a service in the world and supports Ruby, Java, Python, Scala, Clojure, and Node.js. Deploying an app is simple and easy: no special tools needed, just a plain git push. Deployment is instant, whether your app is big or small.
Rackspace Cloud offers four hosting products: Cloud Servers for on-demand computing power; Cloud Sites for robust web hosting; Cloud Load Balancers for easy, on-demand load balancing and high availability; and Cloud Files for elastic online file storage and CDN. Rackspace Cloud hosting customers never need to worry about buying new hardware to meet increasing traffic demands or huge traffic spikes.
The developer cloud helping millions of developers easily build, test, manage, and scale applications of any size – faster than ever before.
Dell's Virtustream Enterprise Class Cloud provides secure, highly available Infrastructure as a Service (IaaS) to enterprises and government customers.
Salesforce Lightning Platform is a proven cloud platform to automate and extend your business and deliver the social enterprise. It is an extremely powerful, scalable and secure cloud platform, delivering a complete technology stack covering the ground from database and security to workflow and user interface. Build the social, mobile apps you need to power your social enterprise.
Oracle Public Cloud provides customers and partners with a high-performance, reliable, elastic, and secure infrastructure for their critical business applications. It offers a complete range of business applications and technology solutions, avoiding the data and business process fragmentation that arises when customers use multiple siloed public clouds.
Get the best of both worlds – the power of real time + the simplicity of the cloud – with our cloud-based deployment option for SAP Business Suite powered by SAP HANA, SAP NetWeaver BW powered by SAP HANA, and the SAP HANA platform.
IBM Cloud offers open cloud infrastructure services for IT operations. The IBM Cloud gives you the flexibility to have public, private or hybrid clouds, depending on your business needs. With the IBM Cloud you can unlock more value in your business and in the technology you already have. It’s the cloud that can integrate enterprise-grade services and help speed up the way you innovate.
Skytap provides Environments as a Service to the enterprise, removing the biggest constraints slowing development teams down: the inefficiencies and bottlenecks that companies face within the software development lifecycle.
Alibaba Cloud offers an integrated suite of reliable and secure cloud products and services to help you build cloud infrastructure; its data centers in multiple regions empower your global business.
Joyent is a high-performance cloud computing infrastructure and big data analytics platform, offering organizations of any size the best public and hybrid cloud infrastructure for today's demanding real-time web and mobile applications.
CloudShare provides a secure, self-service public cloud that extends internal IT capabilities. CloudShare enables you to build complete production-like environments in minutes. Deliver fully-functional demos, proof-of-concepts, and evaluation environments on demand and online. Provide effective hands-on technical training to employees, partners, and customers. Create virtual environments while gaining access to IT resources at the speed of agile development.
AppHarbor is a fully hosted .NET Platform as a Service. AppHarbor can deploy and scale any standard .NET application to the cloud.
SuiteCloud is a comprehensive offering of cloud development tools, applications and infrastructure that enables customers and software developers to maximize the benefits of cloud computing. SuiteCloud comprises a multi-tenant cloud platform that consists of Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The SuiteCloud Developer Tools are uniquely built on NetSuite's leading cloud business management suite.
Latest news about Public Cloud Platforms
2020. CloudShare extends virtual IT Labs solutions to Google Cloud Platform
CloudShare, a provider of specialized cloud environments, announced that it is an official Google Cloud Partner and that its scalable, hands-on virtual training solution is now available on Google Cloud Platform (GCP). Software companies can now run comprehensive training on top of their preferred cloud. CloudShare is also developing similar training solutions for AWS and Azure, which are scheduled for release in the coming months.
2019. Google Cloud gets a new family of cheaper general-purpose compute instances
Google Cloud announced the launch of its new E2 family of compute instances. These new instances, which are meant for general-purpose workloads, offer a significant cost benefit, with savings of around 31% compared to the current N1 general-purpose instances. The new system is also smarter about where it places VMs, with the added flexibility to move them to other hosts as necessary. To achieve all of this, Google built a custom CPU scheduler. Google says that “unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing.” It’ll be interesting to see some benchmarks that pit the E2 family against similar offerings from AWS and Azure.
2019. Google Cloud adds a managed service for Microsoft’s Active Directory
Microsoft’s Active Directory remains one of the most-used identity services in the enterprise. Google Cloud Platform has long allowed you to manually set up an Active Directory deployment, but today, Google is taking this a step further by announcing the beta of a managed service. As the name implies, Google will manage this service and automate everything from server maintenance to security configurations. Given Google’s recent focus on hybrid-cloud deployments, you also can use this service to extend your existing on-premises Active Directory domains to the cloud.
2019. Google launched its coldest storage service yet
Google launched a new archival cold storage service. This new service, which doesn’t seem to have a fancy name, will complement the company’s existing Nearline and Coldline services for storing vast amounts of infrequently used data at an affordable low cost. The new archive class takes this one step further, though. It’s cheap, with prices starting at $0.0012 per gigabyte per month. That’s $1.23 per terabyte per month. What makes Google's cold storage different from the likes of AWS S3 Glacier, for example, is that the data is immediately available, with millisecond latency. Glacier and similar services typically make you wait a significant amount of time before the data can be used. The new service will become available later this year.
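The quoted prices are easy to sanity-check. A minimal sketch (the helper name and the binary 1024-GB-per-TB conversion are illustrative assumptions, not part of Google's pricing docs):

```python
PRICE_PER_GB_MONTH = 0.0012  # archive-class rate quoted in the article, USD

def monthly_archive_cost(gigabytes: float) -> float:
    """Estimated monthly storage cost at the article's quoted rate."""
    return gigabytes * PRICE_PER_GB_MONTH

# One binary terabyte (1024 GB) stored for a month:
print(round(monthly_archive_cost(1024), 2))  # 1.23, matching the per-TB figure
```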
2019. AWS launches fully-managed backup service for business
Amazon’s AWS cloud platform has added a new service, Backup, which allows companies to back up their data from various AWS services and their on-premises apps. To back up on-premises data, businesses can use the AWS Storage Gateway. The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets. Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that the pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge).
2018. Microsoft Azure gets new high-performance storage options
Microsoft Azure is getting a number of new storage options that mostly focus on use cases where disk performance matters. The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters. Standard SSD Managed Disks are now generally available after only three months in preview. To top things off, all of Azure’s storage tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB storage capacity. Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new, though. It offers users access to cloud storage using the standard SMB protocol. This new premium offering promises higher throughput and lower latency for these kinds of SMB operations.
2018. Rackspace acquired Salesforce specialist RelationEdge
Rackspace has acquired RelationEdge, a Salesforce implementation partner. Rackspace is still best known for its hosting and managed cloud and infrastructure services, so the company clearly wants to expand its portfolio and add managed services for SaaS applications to its lineup. It made the first step in this direction with the acquisition of TriCore last year, another company in the enterprise application management space. Today’s acquisition builds upon this theme.
2018. Google Compute Engine adds simple machine learning service
Google launched AutoML, a new service on Google Compute Engine that helps developers (including those with no machine learning expertise) build custom image recognition models. It’s no secret that it’s virtually impossible for businesses to hire machine learning experts and data scientists these days. There is simply too much demand and not enough supply. The new service allows virtually anybody to bring their images, upload them (and import their tags or create them in the app) and then have Google’s systems automatically create a custom machine learning model for them. The whole process, from importing data to tagging it and training the model, is done through a drag and drop interface. We’re not talking about something akin to Microsoft’s Azure ML studio here, though, where you can use a Yahoo Pipes-like interface to build, train and evaluate models.
2017. AWS launched browser-based IDE for cloud developers
2017. Kubernetes comes to Amazon Web Services
Amazon Web Services added long-awaited support for the Kubernetes container orchestration system on top of its Elastic Container Service (ECS). Kubernetes has become something of a de facto standard for container orchestration. It already had the backing of Google (which incubated it), as well as Microsoft and virtually every other major cloud player. So AWS is relatively late to the party here, but it already has over 100,000 active container clusters on its service, and these users spin up millions of containers already. AWS’s users are clearly interested in running containers and indeed, many of them already ran Kubernetes on top of AWS, but without the direct support of AWS. With this new service, AWS will manage the container orchestration system for its users. ECS for Kubernetes will support the latest versions of Kubernetes, and AWS will handle upgrades and all of the management of the service and its clusters.
2017. Google Cloud Platform cuts the price of GPUs by up to 36 percent
Google is cutting the price of using Nvidia’s Tesla GPUs through its Compute Engine by up to 36 percent. In U.S. regions, using the somewhat older K80 GPUs will now cost $0.45 per hour while using the newer and more powerful P100 machines will cost $1.46 per hour (all with per-second billing). Thus Google is aiming this feature at developers who want to run their own machine learning workloads on its cloud, though there also are a number of other applications — including physical simulations and molecular modeling — that greatly benefit from the hundreds of cores that are now available on these GPUs.
2017. Cloud Foundry adds native Kubernetes support
Cloud Foundry, the open-source platform as a service (PaaS) offering for the enterprise, made an early bet on Docker containers, but with Kubo, which Pivotal and Google donated to the project last year, the project gained a new tool for allowing its users to quickly deploy and manage a Kubernetes cluster (Kubernetes being the Google-backed open-source container orchestration tool that itself is becoming the de facto standard for managing containers). The project is now taking Kubo, renaming it to “Cloud Foundry Container Runtime” (because who needs cute names, after all), and making it a core part of the Cloud Foundry platform. Unsurprisingly, Google and Pivotal worked with Cloud Foundry on building this integration.
2017. Following AWS, Google Compute Engine also moves to per-second billing
A week ago Amazon Web Services added per-second billing for users of its EC2 service. And Google today announced a very similar move. Google Compute Engine, Container Engine, Cloud Dataproc, and App Engine’s flexible environment virtual machines (VMs) will now feature per-second billing. This new pricing scheme extends to preemptible machines and VMs that run premium operating systems, including Windows Server, Red Hat Enterprise Linux and SUSE Enterprise Linux Server. With that, it one-ups AWS, which only offers per-second billing for basic Linux instances and not for Windows Server and other Linux distributions on its platform that currently feature a separate hourly charge. Like AWS, Google will charge for a minimum of one minute.
2017. AWS introduced per-second billing for EC2 instances
Over the last few years, some competing cloud platforms moved to more flexible billing models (mostly per-minute billing), and now AWS is one-upping many of them by moving to per-second billing for its Linux-based EC2 instances. This new per-second billing model will apply to on-demand, reserved and spot instances, as well as provisioned storage for EBS volumes. Amazon EMR and AWS Batch are also moving to this per-second model. It’s worth noting, though, that there is a one-minute minimum charge per instance and that this doesn’t apply to machines that run Windows or some of the Linux distributions that have their own separate hourly charges.
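The effect of the one-minute minimum is easy to see in a small calculation. A minimal sketch of the billing rule described above (the function and the $0.10/hour rate are hypothetical illustrations, not actual EC2 prices):

```python
def ec2_cost(hourly_rate: float, runtime_seconds: float) -> float:
    """Per-second billing with a one-minute minimum charge per instance."""
    billable_seconds = max(runtime_seconds, 60)
    return hourly_rate * billable_seconds / 3600

# A 30-second run is billed as a full minute; a 10-minute run is billed exactly.
print(ec2_cost(0.10, 30) == ec2_cost(0.10, 60))  # the minimum kicks in
print(round(ec2_cost(0.10, 600), 6))
```

Under hourly billing, both runs above would have cost the full $0.10; per-second billing with the minimum charge only rounds up the very shortest runs.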
2017. AWS offers a virtual machine with over 4TB of memory
Amazon’s AWS launched its largest EC2 machine (in terms of memory size) yet: the x1e.32xlarge instance with a whopping 4.19TB of RAM. Previously, EC2’s largest instance featured just over 2TB of memory. These machines feature quad-socket Intel Xeon processors running at 2.3 GHz, up to 25 Gbps of network bandwidth and two 1,920GB SSDs. There are obviously only a few applications that need this kind of memory. It’s no surprise, then, that these instances are certified to run SAP’s HANA in-memory database and its various tools and that SAP will offer direct support for running these applications on these instances. It’s worth noting that Microsoft Azure’s largest memory-optimized machine currently tops out at just over 2TB and that Google already calls it quits at 416GB of RAM.
2017. Rackspace acquires multi-platform hybrid IT management solution Datapipe
Rackspace is acquiring Datapipe, one of its largest competitors in the managed public and private cloud services business. While Datapipe has been extremely successful in the enterprise and with government customers, Rackspace has traditionally focused more on the mid-market segment. The two companies didn’t typically compete on the same deals, and their product portfolios are quite different, too. While Rackspace could have gained similar technical capabilities by making a number of smaller acquisitions, that process would have taken much longer and wouldn’t necessarily have given Rackspace access to the kind of customers that Datapipe currently works with. Those customers include a large number of large public-sector companies, but also the U.S. departments of defense, energy and justice, in addition to the U.K.’s cabinet office, ministry of justice and department of transportation.
2017. VMware Cloud is now live on Amazon Web Services
Last fall VMware announced a partnership with AWS, and now the two companies unveiled a combined solution for the enterprise: VMware Cloud on AWS. VMware Cloud on AWS gives customers a seamlessly integrated hybrid cloud that delivers the same architecture, capabilities and operational experience across both their vSphere-based on-premises environment and AWS. While AWS runs its own VMs, they are not the same as those that VMware runs in a data center, and that creates a management headache for companies trying to run both. By letting companies move to AWS and continue to run their VMware VMs in the public cloud, they get the best of both worlds without the management problems.
2017. Google App Engine gets a firewall
Google App Engine is finally getting a fully featured firewall. Until now, developers couldn’t easily restrict access to their applications on the service to only a small set of IP addresses or address ranges for testing, for example. Instead, they had to hard-code a similar solution into their applications and — because those requests would still hit their applications in some form — even those rejected requests would still incur costs. Now, they’ll be able to use the Google Cloud Console, App Engine Admin API or even the gcloud command-line tool to set up access restrictions that block or allow specific IP addresses. Because the firewall obviously sits in front of the application, rejected requests never touch the application and App Engine never needs to spin up an idle resource only to then reject the request.
2017. Microsoft launched new archival storage option for Azure
Microsoft introduced a new storage option for its Azure cloud computing platform: Azure Archive Blob Storage. This will give developers a cheaper alternative for the long-term storage of large amounts of archival data like logs, raw camera footage, audio recordings, transcripts and medical documents and images. The main difference between the cool and archive tiers is that while archival storage is cheaper, the data retrieval costs are higher. Data that’s stored in the archive tier is also not immediately available for retrieval. The blobs first have to be “rehydrated” and that can take up to 15 hours for blobs that hold less than 50GB of data. It’s worth noting, though, that competing cold storage services Amazon Glacier and Google Nearline have been around for years now.
2017. Google Cloud Platform gets a cheaper, lower-performance networking tier
Google is giving its Cloud Platform users a new, cheaper networking option. Developers can now choose between a premium tier, which routes traffic to their users over Google’s own high-speed networks for as long as possible to minimize hops and distance, and a standard tier, which routes traffic over the public internet, with all the potential slowdowns and extra hops this entails. Pricing for the standard tier is 24-33 percent lower than for the premium tier in North America and Europe. Google uses different pricing models for these two tiers, though. Prices for premium traffic are based on the traffic’s source and destination, so you pay for the distance your traffic travels over Google’s network, while the standard tier’s prices are only based on where the source is.