Amazon Web Services vs H2O.ai


Amazon Web Services
Access a reliable, on-demand infrastructure to power your applications, from hosted internal applications to SaaS offerings. Scale to meet your application demands, whether one server or a large cluster. Leverage scalable database solutions. Utilize cost-effective solutions for storing and retrieving any amount of data, any time, anywhere.

H2O.ai
H2O is open-source software for big-data analysis. It is produced by the company H2O.ai. H2O allows users to fit thousands of potential models as part of discovering patterns in data.
Amazon Web Services vs H2O.ai in our news:


2020 - AWS launches Amazon AppFlow, its new SaaS integration service



AWS launched Amazon AppFlow, a new integration service that makes it easier for developers to transfer data between AWS and SaaS applications like Google Analytics, Marketo, Salesforce, ServiceNow, Slack, Snowflake and Zendesk. As with similar services, such as Microsoft's Power Automate, developers can trigger these flows based on specific events, at pre-set times or on demand. Unlike some of its competitors, though, AWS is positioning this service more as a data transfer service than a way to automate workflows, and, while the data flow can be bi-directional, AWS's announcement focuses mostly on moving data from SaaS applications to other AWS services for further analysis. For this, AppFlow also includes a number of tools for transforming data as it moves through the service.
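
For a sense of how this looks in practice, here is a minimal, untested sketch using boto3 (AWS's Python SDK) to trigger an existing on-demand flow and check its status; the flow name "salesforce-to-s3" is a placeholder for a flow already configured in the AppFlow console.

```python
# A minimal sketch (boto3, untested): running an existing AppFlow flow
# on demand and inspecting its status. The flow name is a placeholder.
import boto3

appflow = boto3.client("appflow", region_name="us-east-1")

# Kick off an on-demand run of a pre-configured flow.
run = appflow.start_flow(flowName="salesforce-to-s3")
print("execution id:", run["executionId"])

# Inspect the flow's configuration and most recent run.
flow = appflow.describe_flow(flowName="salesforce-to-s3")
print(flow["flowStatus"], flow.get("lastRunExecutionDetails"))
```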

2019 - AWS launches fully managed backup service for businesses



Amazon’s AWS cloud platform has added a new service Backup, that allows companies to back up their data from various AWS services and their on-premises apps. To back up on-premises data, businesses can use the AWS Storage Gateway. The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets. Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that the pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge).

2017 - AWS launches a browser-based IDE for cloud developers



Amazon Web Services launched a new browser-based IDE, AWS Cloud9. It isn't all that different from similar IDEs and editors like Sublime Text, but as AWS stressed during its keynote, it allows for collaborative editing and is deeply integrated into the AWS ecosystem. The tool comes with built-in support for languages like JavaScript, Python and PHP, and includes pre-installed debugging tools. AWS argues that this is the first "cloud native" IDE, though some of its competitors will surely take issue with that description. Either way, Cloud9 is deeply integrated with AWS, and developers can create cloud environments and start new instances right from the tool.
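
As a rough illustration, here is an untested boto3 sketch of creating an environment and granting a collaborator access; the environment name, instance type and user ARN are placeholders, and newer API versions may require additional parameters (such as an image ID).

```python
# A minimal sketch (boto3, untested): a Cloud9 environment plus the
# collaborative-editing angle via an environment membership.
import boto3

cloud9 = boto3.client("cloud9", region_name="us-east-1")

# Spin up an EC2-backed environment that hosts the IDE.
env = cloud9.create_environment_ec2(
    name="demo-env",                  # placeholder name
    instanceType="t2.micro",
    automaticStopTimeMinutes=30,      # stop the instance when idle to save cost
)

# Collaborative editing: grant another IAM user read-write access.
cloud9.create_environment_membership(
    environmentId=env["environmentId"],
    userArn="arn:aws:iam::123456789012:user/teammate",  # placeholder ARN
    permissions="read-write",
)
```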

2017 - AWS introduces per-second billing for EC2 instances. Your move, Skytap!



Over the last few years, some alternative cloud platforms have moved to more flexible billing models (mostly per-minute billing), and now AWS is one-upping many of them by moving to per-second billing for its Linux-based EC2 instances. The new per-second billing model applies to on-demand, reserved and spot instances, as well as provisioned storage for EBS volumes. Amazon EMR and AWS Batch are also moving to this per-second model. It's worth noting, though, that there is a one-minute minimum charge per instance, and that this doesn't apply to machines that run Windows or some of the Linux distributions that have their own separate hourly charges.
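
A toy calculation (not an AWS API) of how per-second billing with a one-minute minimum would work, using a hypothetical hourly rate:

```python
# Illustration only: per-second billing with a one-minute minimum charge.
def ec2_linux_charge(seconds_used: float, hourly_rate: float) -> float:
    """Bill per second, but never less than 60 seconds per instance run."""
    billable = max(seconds_used, 60)          # one-minute minimum
    return billable * (hourly_rate / 3600.0)  # hourly rate prorated per second

# A 75-second job at a hypothetical $0.10/hour rate costs about $0.0021,
# instead of the full $0.10 it would cost under hourly billing.
print(round(ec2_linux_charge(75, 0.10), 4))
```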

2017 - AWS offers a virtual machine with over 4TB of memory



Amazon’s AWS launched its largest EC2 machine (in terms of memory size) yet: the x1e.32xlarge instance with a whopping 4.19TB of RAM. Previously, EC2’s largest instance only featured just over 2TB of memory. These machines feature quad-socket Intel Xeon processors running at 2.3 GHz, up to 25 Gbps of network bandwidth and two 1,920GB SSDs. There are obviously only a few applications that need this kind of memory. It’s no surprise, then, that these instances are certified to run SAP’s HANA in-memory database and its various tools and that SAP will offer direct support for running these applications on these instances. It’s worth noting that Microsoft Azure’s largest memory-optimized machine currently tops out at just over 2TB and that Google already calls it quits at 416GB of RAM.

2014 - AWS now supports Docker containers to defeat Cloud Foundry



Amazon announced the preview availability of EC2 Container Service, a new service for managing Docker containers that boosts Amazon Web Services' support for hybrid cloud. It brings the benefits of easy development management, portability between environments, lower risk in deployments, smoother maintenance and management of application components, and the ability for it all to work together. AWS isn't the first cloud provider to support Docker's open-source engine. Google extended its support for Docker containers with its new Google Container Engine, powered by its own Kubernetes and announced just last week at the Google Cloud Platform Live event. And back in August, Microsoft announced support for Kubernetes for managing Docker containers in Azure.
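
A minimal, untested boto3 sketch of the idea: describe a Docker container as an ECS task definition, then run it on a cluster. The cluster name, image and task family are placeholders, and running a task assumes the cluster already has registered instances.

```python
# A minimal sketch (boto3, untested): a Docker container managed by ECS.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the Docker container ECS should manage.
ecs.register_task_definition(
    family="hello-web",  # placeholder task family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",  # any Docker image
        "memory": 128,            # hard memory limit in MiB
        "portMappings": [{"containerPort": 80}],
    }],
)

# Run one copy of the task on an existing cluster ("default" is a placeholder).
ecs.run_task(cluster="default", taskDefinition="hello-web", count=1)
```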

2014 - Amazon and Microsoft drop cloud prices



Cloud computing is becoming cheaper and cheaper. If you once calculated (say, a year ago) whether it was cost-effective to migrate your IT infrastructure to the cloud and decided it was still too expensive, recalculate. Since then, the cloud platforms have cut prices two or three times, and another round is happening now. Starting tomorrow, Amazon S3 cloud storage pricing will decrease by 6-22% (depending on the space used), and the cost of cloud server hard drives (Amazon EBS) will fall by 50%. A month later, Microsoft's cloud platform Windows Azure will reduce its prices by 20% to keep them a little lower than Amazon's. So think once again: why buy an in-house server when the cost of the cloud tends toward zero?

2012 - Google and Amazon reduce cloud storage prices, launch new cloud services



Competition is good for customers. On Monday, Google reduced prices for its Google Cloud Storage by over 20%, and today, in response, Amazon reduced prices for its S3 storage by 25%. Obviously, in the near future Microsoft will also reduce prices for Windows Azure to bring them to the competitive level of about $0.09 per GB per month. The same story occurred in March, when Amazon lowered prices and then Microsoft and Google aligned their pricing with Amazon's. In the cloud platform market, a low price is no longer a competitive advantage, but a price higher than the competition's is a big disadvantage. Some experts already doubt that Amazon and its contenders earn anything from selling gigabytes and gigahertz. As in the mobile market, the main task of cloud vendors is to hook large companies and SaaS providers on their platforms, even if they have to sell computing resources at a loss.

All the talk about open cloud platforms, open cloud standards and free migration between clouds will most likely remain just talk. OpenStack is trying to build communism in the cloud, but with its communist-like business organization it will hardly succeed. Meanwhile, Amazon, Google and Microsoft are building cloud platforms with their own standards and unique features, and they can afford to reduce prices for computing resources. They can afford it because customers will stay and pay for the additional features, since migrating to another platform will be very difficult.

In addition to the new pricing, Google and Amazon introduced new cloud services. Google launched a clone of Amazon's Glacier, Durable Reduced Availability Storage (cheap storage for very large amounts of data, at reduced availability). And Amazon flexed its muscles: its new service Redshift can host databases measured in petabytes. It's hard to say how much demand there will be for such a service, but it should definitely have a positive impact on Amazon's reputation. If they can handle petabyte databases, then your little project will run on Amazon without a hitch.
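
One practical detail worth knowing: Redshift speaks the PostgreSQL wire protocol, so a standard driver such as psycopg2 can query it. A minimal, untested sketch with placeholder host, credentials and table:

```python
# A minimal sketch (untested): querying Redshift over the standard
# PostgreSQL protocol. Host, credentials and table are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,        # Redshift's default port
    dbname="dev",
    user="awsuser",
    password="...",   # placeholder
)
with conn.cursor() as cur:
    cur.execute("SELECT count(*) FROM events;")  # placeholder table
    print(cur.fetchone()[0])
```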

2012 - Amazon Glacier: cloud storage service using humanoid robots to stand out over Heroku



Humanoid robots are just our assumption, but it's the first idea that comes to mind when looking at the new service Amazon Glacier. It's a solution for the long-term storage of archives and backups that a business needs very rarely, or perhaps never, but must keep because of state or corporate rules. The point is that storing data in Amazon Glacier is very cheap: only 1 cent per GB per month (10 times less than Amazon S3). But if you want to get a file back, you need to order it first and wait 3-5 hours until it becomes available. (We think that during this time a robot could find the hard drive in the data center and bring it to the control panel.) In addition, Amazon Glacier customers will be able to download only 5% of their data per month and will pay $0.12 per GB for data transfer exceeding 1 GB per month.
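
That retrieval dance maps onto the Glacier API roughly as in the following untested boto3 sketch; the vault name and archive ID are placeholders.

```python
# A minimal sketch (boto3, untested): Glacier archives aren't downloaded
# directly; you order a retrieval job and come back hours later.
import boto3

glacier = boto3.client("glacier", region_name="us-east-1")

# Step 1: order the archive (this is the "send the robot" moment).
job = glacier.initiate_job(
    vaultName="backups",  # placeholder vault
    jobParameters={"Type": "archive-retrieval",
                   "ArchiveId": "EXAMPLE-ARCHIVE-ID"},  # placeholder
)

# Step 2: check back later (typically 3-5 hours); in practice you would
# poll this call or subscribe to an SNS notification.
status = glacier.describe_job(vaultName="backups", jobId=job["jobId"])

if status["Completed"]:
    # Step 3: only now can the bytes actually be downloaded.
    output = glacier.get_job_output(vaultName="backups", jobId=job["jobId"])
    data = output["body"].read()
```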

2012 - OpenStack launches. CloudStack departs. Amazon adapts SAP. Azure rebrands to keep up with Amazon Web Services



Here is the news digest from the leading cloud platforms. First of all, the open-source platform OpenStack (aka Linux for the clouds), which had been developed for two years by an alliance of IT giants (Rackspace, NASA, Citrix, Intel, AMD, Cisco, Dell, HP, IBM ...), finally comes to production. On May 1 it was adopted by Rackspace for its Rackspace Cloud Files service, and last week HP launched the public beta of its HP Cloud platform, based on OpenStack. However, a week before the launch, OpenStack ran into the kind of trouble that is common for open-source projects. Citrix, which had been one of the first participants in OpenStack, suddenly decided to donate its own cloud platform, CloudStack, to the Apache Software Foundation. Thus CloudStack did not flow into OpenStack but became a rival project. Citrix explained the decision by OpenStack's slow development and the unwillingness of other parties to integrate with the Amazon Web Services APIs.

As for Amazon, it's immune to such conflicts, which is why it's busy with more useful occupations, namely adapting the world's largest ERP system, SAP All-in-One, to Amazon's cloud. Nothing could be cooler than SAP All-in-One in the cloud, so the appearance of the first customer using this cloud-based SAP will be a great win for the whole cloud industry.

By the way, a year ago SAP planned to port its ERP system not only to AWS but also to the cloud platform of its main partner, Microsoft (Windows Azure). As it now turns out, AWS was first. If SAP for Windows Azure doesn't appear in the near future, it will be a disaster for Microsoft's cloud business.

But maybe Microsoft has more important things to do. For example, rebranding. The company recently announced that it will ditch the Windows Live brand, and now it's Windows Azure's turn. It's already known that a number of services will be renamed as follows: SQL Azure -> SQL Database, Azure Compute -> Cloud Services, Azure Storage -> Storage. It's still unknown whether the Azure brand will remain in the platform's title. Why rename? To erase the boundaries between the cloud and local IT infrastructure, Microsoft says.