Amazon Web Services vs Cloud Foundry
Last updated: April 23, 2020
Access reliable, on-demand infrastructure to power your applications, from hosted internal applications to SaaS offerings. Scale to meet your application's demands, whether you need one server or a large cluster. Leverage scalable database solutions, and use cost-effective storage for any amount of data, at any time, from anywhere.
Cloud Foundry is an open-source cloud application platform that makes it faster and easier to build, test, deploy, and scale applications, offering a choice of clouds, developer frameworks, and application services. As an open-source project, it is available through a variety of private cloud distributions and public cloud instances.
Amazon Web Services vs Cloud Foundry in our news:
2020 - Cloud Foundry renews its focus on developer experience as it looks beyond the enterprise – TechCrunch
2020 - AWS launches Amazon AppFlow, its new SaaS integration service
AWS launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. As with similar services such as Microsoft's Power Automate, developers can trigger these flows based on specific events, at pre-set times or on demand. Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and, while the data flow can be bi-directional, AWS's announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming the data as it moves through the service.
2019 - AWS launches fully-managed backup service for business
Amazon's AWS cloud platform has added a new service, AWS Backup, that allows companies to back up their data from various AWS services and from their on-premises apps. To back up on-premises data, businesses can use the AWS Storage Gateway. The service allows users to define backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets. Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it's no surprise that the pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which has a per-GB charge).
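The policy model described above — a schedule plus lifecycle rules for moving backups to cold storage and eventually deleting them — maps onto a backup-plan document. Here is a minimal sketch of such a plan; the plan and vault names are made up for illustration, and the field names follow the shape of the AWS Backup CreateBackupPlan request:

```python
# Sketch of an AWS Backup plan: daily backups that move to cold
# storage after 30 days and are deleted after 365 days.
# "demo-plan" and "demo-vault" are hypothetical names.

backup_plan = {
    "BackupPlanName": "demo-plan",
    "Rules": [
        {
            "RuleName": "daily-with-cold-storage",
            "TargetBackupVaultName": "demo-vault",
            "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
            "Lifecycle": {
                "MoveToColdStorageAfterDays": 30,  # cold storage: EFS backups only
                "DeleteAfterDays": 365,            # retention period
            },
        }
    ],
}

# With AWS credentials configured, a plan like this could be submitted
# via boto3:
#   import boto3
#   boto3.client("backup").create_backup_plan(BackupPlan=backup_plan)
```

The key design point is that retention lives in the plan, not in the individual snapshots: one rule governs every resource assigned to the plan.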
2017 - AWS launched browser-based IDE for cloud developers
2017 - Cloud Foundry adds native Kubernetes support to compete with Pivotal
Cloud Foundry, the open-source platform-as-a-service (PaaS) offering for the enterprise, made an early bet on Docker containers. With Kubo, which Pivotal and Google donated to the project last year, it gained a new tool that lets users quickly deploy and manage a Kubernetes cluster (Kubernetes being the Google-backed open-source container orchestration tool that is itself becoming the de facto standard for managing containers). The project is now taking Kubo, renaming it "Cloud Foundry Container Runtime" (because who needs cute names, after all), and making it a core part of the Cloud Foundry platform. Unsurprisingly, Google and Pivotal worked with Cloud Foundry on building this integration.
2017 - AWS introduced per-second billing for EC2 instances. Your move, Skytap!
Over the last few years, some alternative cloud platforms moved to more flexible billing models (mostly per-minute billing), and now AWS is one-upping many of them by moving to per-second billing for its Linux-based EC2 instances. This new per-second billing model will apply to on-demand, reserved and spot instances, as well as provisioned storage for EBS volumes. Amazon EMR and AWS Batch are also moving to this per-second model. It's worth noting, though, that there is a one-minute minimum charge per instance and that this doesn't apply to machines that run Windows or some of the Linux distributions that have their own separate hourly charges.
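The billing arithmetic — seconds of use prorated against an hourly rate, subject to a one-minute minimum — can be sketched in a few lines. The hourly rate here is a made-up figure for illustration, not an actual AWS price:

```python
# Sketch of per-second EC2 billing with a one-minute minimum charge.
# The hourly rate is hypothetical, not a real AWS price.

def ec2_linux_cost(seconds_used: float, hourly_rate: float) -> float:
    """Cost of a Linux on-demand instance under per-second billing."""
    billable = max(seconds_used, 60)      # one-minute minimum per instance
    return billable * hourly_rate / 3600  # rate is quoted per hour

rate = 0.10  # hypothetical $/hour

print(ec2_linux_cost(45, rate))   # a 45-second run is billed as 60 seconds
print(ec2_linux_cost(150, rate))  # beyond one minute, billing is exact
```

A 45-second job and a 59-second job therefore cost the same, while anything past the first minute is billed to the second.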
2017 - AWS offers a virtual machine with over 4TB of memory
Amazon's AWS launched its largest EC2 machine (in terms of memory size) yet: the x1e.32xlarge instance with a whopping 4.19TB of RAM. Previously, EC2's largest instance topped out at just over 2TB of memory. These machines feature quad-socket Intel Xeon processors running at 2.3 GHz, up to 25 Gbps of network bandwidth and two 1,920GB SSDs. There are obviously only a few applications that need this kind of memory. It's no surprise, then, that these instances are certified to run SAP's HANA in-memory database and its various tools, and that SAP will offer direct support for running these applications on these instances. It's worth noting that Microsoft Azure's largest memory-optimized machine currently tops out at just over 2TB and that Google already calls it quits at 416GB of RAM.
2014 - AWS now supports Docker containers to challenge Cloud Foundry
Amazon announced the preview availability of EC2 Container Service, a new service for managing Docker containers that boosts Amazon Web Services' support for hybrid cloud. This brings the benefits of easy development management, portability between environments, lower risk in deployments, smoother maintenance and management of application components, and the ability for it all to work together. AWS isn't the first cloud provider to offer support for Docker's open-source engine. Google has extended its support for Docker containers with its new Google Container Engine, powered by its own Kubernetes, announced just last week during the Google Cloud Platform Live event. And, back in August, Microsoft announced its support for Kubernetes in managing Docker containers in Azure.
2014 - Amazon and Microsoft drop cloud prices
Cloud computing is becoming cheaper and cheaper. So, if you once (say, a year ago) calculated whether it was cost-effective to migrate your IT infrastructure to the cloud and decided that it was still too expensive, recalculate. Since then, the cloud platforms have reduced prices two or three times, and another round is happening now. Starting tomorrow, Amazon S3 cloud storage pricing will decrease by 6-22% (depending on the space used), and the cost of cloud server hard drives (Amazon EBS) will fall by 50%. A month later, Microsoft's cloud platform Windows Azure will reduce its prices by 20% to keep them a little lower than Amazon's. So think once again: why buy an in-house server if the cost of the cloud tends toward zero?
2012 - Google and Amazon reduce cloud storage prices and launch new cloud services
Competition is good for customers. On Monday, Google reduced prices for its Google Cloud Storage by over 20%, and today, in response, Amazon reduced prices for its S3 storage by 25%. Obviously, in the near future Microsoft will also reduce prices for Windows Azure to bring them to a competitive level - about $0.09/month per GB. The same story occurred in March, when Amazon lowered prices and then Microsoft and Google aligned their pricing with Amazon's. In the cloud platform market, a low price is no longer a competitive advantage, but pricing higher than the competition is a big disadvantage. Some experts already doubt that Amazon and its contenders are earning anything by selling gigabytes and gigahertz. As in the mobile market, the main task of cloud vendors is to hook large companies and SaaS providers onto their platforms, even if they have to sell computing resources at a loss.
All the talk about open cloud platforms, open cloud standards and free migration between clouds will most likely remain just talk. OpenStack is trying to build communism in the cloud, but with its communist-like business organization, it will hardly succeed. Meanwhile, Amazon, Google and Microsoft are building cloud platforms with their own standards and unique features, and they can afford to reduce prices for computing resources. They can afford it because customers will stay and pay for the additional features - migrating to another platform will be very difficult.
In addition to the new pricing, Google and Amazon introduced new cloud services. Google launched a clone of Amazon's Glacier: Durable Reduced Availability Storage (cheap storage for very large amounts of data with slow data access). And Amazon flexed its muscles: its new service Redshift can host databases whose size is measured in petabytes. It's hard to gauge the demand for such a service, but it should definitely have a positive impact on Amazon's reputation. If they can handle petabyte-scale databases, then your little project will run on Amazon without a hitch.