Amazon Web Services vs Google Cloud Platform
Last updated: August 18, 2022
Amazon Web Services provides reliable, on-demand infrastructure to power your applications, from hosted internal applications to SaaS offerings. Scale to meet your application demands, whether you need one server or a large cluster. Leverage scalable database solutions, and use cost-effective storage for any amount of data, any time, anywhere.
Google Cloud Platform is a set of modular cloud-based services for building anything from simple websites to complex applications. Cloud Platform provides the building blocks so you can develop quickly. Explore how you can make Cloud Platform work for you.
Amazon Web Services vs Google Cloud Platform in our news:
2022. Google Cloud will shutter its IoT Core service next year
Google Cloud announced this week that it’s shutting down its IoT Core service, giving customers a year to move to a partner to manage their IoT devices. It believes that having partners manage the process for customers is a better way to go. “Since launching IoT Core, it has become clear that our customers’ needs could be better served by our network of partners that specialize in IoT applications and services. We have worked extensively to provide customers with migration options and solution alternatives, and are providing a year-long runway before IoT Core is discontinued,” a Google spokesperson explained.
2022. Google expands Vertex, its managed AI service, with new features
Roughly a year ago, Google announced the launch of Vertex AI, a managed AI platform designed to help companies accelerate the deployment of AI models. Today the company announced new features heading to Vertex, including a dedicated server for AI system training and “example-based” explanations. As Google has historically pitched it, the benefit of Vertex is that it brings together Google Cloud services for AI under a unified UI and API. Customers including Ford, Seagate, Wayfair, Cashapp, Cruise and Lowe’s use the service to build, train and deploy machine learning models in a single environment, Google claims, moving models from experimentation to production.
2020. AWS launches Amazon AppFlow, its new SaaS integration service
AWS launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. As with similar services, such as Microsoft’s Power Automate, developers can trigger these flows on specific events, at pre-set times or on demand. Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and, while the data flow can be bi-directional, AWS’s announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.
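To make the source-to-destination shape of a flow concrete, here is a minimal sketch of the kind of request AppFlow’s CreateFlow API accepts, built as a plain dictionary. The flow name, bucket name, and field list are hypothetical, the connector properties are abbreviated, and the API call itself is left commented out:

```python
# Sketch of an on-demand AppFlow flow that copies Salesforce Account
# records into an S3 bucket. Field names follow the CreateFlow API,
# but all concrete values here are illustrative assumptions.
flow_request = {
    "flowName": "salesforce-accounts-to-s3",  # hypothetical flow name
    "triggerConfig": {"triggerType": "OnDemand"},  # or Scheduled / Event
    "sourceFlowConfig": {
        "connectorType": "Salesforce",
        "sourceConnectorProperties": {
            "Salesforce": {"object": "Account"},
        },
    },
    "destinationFlowConfigList": [{
        "connectorType": "S3",
        "destinationConnectorProperties": {
            # hypothetical bucket for downstream analysis
            "S3": {"bucketName": "example-analytics-bucket"},
        },
    }],
    "tasks": [{
        # a simple pass-through mapping of all fields
        "sourceFields": [],
        "taskType": "Map_all",
        "connectorOperator": {"Salesforce": "NO_OP"},
    }],
}

# With credentials configured, this would be submitted via boto3:
# boto3.client("appflow").create_flow(**flow_request)
print(flow_request["flowName"])
```

The request is bi-directional in principle (swap source and destination connectors), but as the announcement notes, the typical direction is SaaS-to-AWS for analysis.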
2019. Google Cloud gets a new family of cheaper general-purpose compute instances
Google Cloud announced the launch of its new E2 family of compute instances. These new instances, which are meant for general-purpose workloads, offer a significant cost benefit, with savings of around 31% compared to the current N1 general-purpose instances. The new system is also smarter about where it places VMs, with the added flexibility to move them to other hosts as necessary. To achieve all of this, Google built a custom CPU scheduler. Google says that “unlike comparable options from other cloud providers, E2 VMs can sustain high CPU load without artificial throttling or complicated pricing.” It’ll be interesting to see some benchmarks that pit the E2 family against similar offerings from AWS and Azure.
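The quoted ~31% figure translates directly into an implied hourly rate. A small sketch, using a placeholder N1 rate rather than current GCP list prices (which vary by region and machine shape):

```python
# Sketch: the E2 hourly price implied by a given savings rate over N1.
# The N1 rate below is an illustrative placeholder, not a real list price.
def discounted_price(n1_hourly: float, savings: float = 0.31) -> float:
    """Return the hourly price after applying the quoted savings rate."""
    return n1_hourly * (1.0 - savings)

n1_rate = 0.05  # hypothetical N1 hourly rate in USD
e2_rate = discounted_price(n1_rate)
print(f"Implied E2 rate: ${e2_rate:.4f}/hour")
```

At a 31% discount, every dollar of N1 spend becomes 69 cents on E2, before any sustained-use or committed-use discounts.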
2019. AWS launches fully-managed backup service for business
Amazon’s AWS cloud platform has added a new service, AWS Backup, which allows companies to back up their data from various AWS services and their on-premises apps. To back up on-premises data, businesses can use the AWS Storage Gateway. The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets. Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that the pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge).
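The policies and retention periods described above are expressed as a backup plan document. A minimal sketch of such a plan, with the structure following the CreateBackupPlan API but the plan name, schedule, and retention values chosen purely for illustration:

```python
# Sketch of an AWS Backup plan: one nightly rule with lifecycle
# transitions. All concrete values are illustrative assumptions.
backup_plan = {
    "BackupPlanName": "nightly-backup-plan",  # hypothetical name
    "Rules": [{
        "RuleName": "nightly",
        "TargetBackupVaultName": "Default",
        "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
        "Lifecycle": {
            "MoveToColdStorageAfterDays": 30,  # cold storage (EFS data)
            "DeleteAfterDays": 120,            # full retention period
        },
    }],
}

# With credentials configured, this would be submitted via boto3:
# boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
print(backup_plan["Rules"][0]["RuleName"])
```

The lifecycle block captures the two behaviors the article mentions: moving backups to cold storage after a warm period, and deleting them entirely once the retention window ends.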
2018. Google Cloud adds new applications performance monitoring tool
Google added a key ingredient for developers building applications on the Google Cloud Platform - a suite of application performance management tools called Stackdriver APM. It is designed for developers to track issues in the applications they have built instead of passing that responsibility onto operations. The thinking is that the developers who built the applications and are closest to the code are therefore best suited to understand the signals coming from it. Stackdriver APM is made up of three main tools: Profiler, Trace and Debugger. Trace and Debugger have already been available, but by putting them together with Profiler, the three tools work together to identify, track and repair code issues.
2017. AWS launched a browser-based IDE for cloud developers
2017. Google Cloud Platform cuts the price of GPUs by up to 36 percent
Google is cutting the price of using Nvidia’s Tesla GPUs through its Compute Engine by up to 36 percent. In U.S. regions, using the somewhat older K80 GPUs will now cost $0.45 per hour while using the newer and more powerful P100 machines will cost $1.46 per hour (all with per-second billing). Google is aiming this pricing at developers who want to run their own machine learning workloads on its cloud, though a number of other applications, including physical simulations and molecular modeling, also benefit greatly from the hundreds of cores now available on these GPUs.
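Per-second billing makes short GPU jobs cheap, which is easy to see from the listed hourly rates. A small arithmetic sketch using the article’s U.S.-region prices:

```python
# Per-second GPU billing, using the hourly rates quoted in the article
# (K80 at $0.45/hour, P100 at $1.46/hour in U.S. regions).
GPU_HOURLY_USD = {"K80": 0.45, "P100": 1.46}

def gpu_cost(gpu: str, seconds: int) -> float:
    """Cost in USD of running one GPU for `seconds`, billed per second."""
    return GPU_HOURLY_USD[gpu] / 3600 * seconds

# A 10-minute K80 experiment: 600 s * $0.45/3600 = $0.075
print(f"${gpu_cost('K80', 600):.3f}")
```

At these rates a quick 10-minute training run on a K80 costs well under a dime, which is the point of combining the price cut with per-second granularity.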
2017. AWS introduced per-second billing for EC2 instances
Over the last few years, some alternative cloud platforms moved to more flexible billing models (mostly per-minute billing) and now AWS is one-upping many of them by moving to per-second billing for its Linux-based EC2 instances. This new per-second billing model will apply to on-demand, reserved and spot instances, as well as provisioned storage for EBS volumes. Amazon EMR and AWS Batch are also moving to this per-second model. It’s worth noting, though, that there is a one-minute minimum charge per instance and that this doesn’t apply to machines that run Windows or some of the Linux distributions that have their own separate hourly charges.
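The one-minute minimum is the only wrinkle in the model, and it is simple to express: runs shorter than 60 seconds are billed as 60 seconds. A sketch with an illustrative hourly rate:

```python
# Per-second EC2 billing with a one-minute minimum charge per instance,
# as described in the announcement. The hourly rate is illustrative.
def ec2_linux_cost(hourly_rate: float, seconds: int) -> float:
    """Cost in USD for a Linux instance run, per-second with a 60 s floor."""
    billed_seconds = max(seconds, 60)
    return hourly_rate / 3600 * billed_seconds

# At $3.60/hour ($0.001/second), a 10-second run bills the 60 s minimum.
print(f"${ec2_linux_cost(3.60, 10):.2f}")
```

Beyond the first minute the charge grows strictly per second, so a 61-second run costs exactly one second more than a 60-second one.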
2017. AWS offers a virtual machine with over 4TB of memory
Amazon’s AWS launched its largest EC2 machine (in terms of memory size) yet: the x1e.32xlarge instance with a whopping 4.19TB of RAM. Previously, EC2’s largest instance topped out at just over 2TB of memory. These machines feature quad-socket Intel Xeon processors running at 2.3 GHz, up to 25 Gbps of network bandwidth and two 1,920GB SSDs. There are obviously only a few applications that need this kind of memory. It’s no surprise, then, that these instances are certified to run SAP’s HANA in-memory database and its various tools and that SAP will offer direct support for running these applications on these instances. It’s worth noting that Microsoft Azure’s largest memory-optimized machine currently tops out at just over 2TB and that Google already calls it quits at 416GB of RAM.