I grew up watching a TV show called Shazam! which was based on a comic I also read by the same name. The main protagonist was a superhero called Captain Marvel, who was given his superpowers by a wizard named Shazam. Captain Marvel used the power of Shazam to fight evil and to help save the human race.
At the first keynote for AWS re:Invent 2016, Andy Jassy, CEO of Amazon Web Services, played the part of the wizard who could give everyone cloudy superpowers as he wrapped the keynote around the theme of superpowers. You can view the keynote in its entirety below, or read on for a digest of Jassy’s keynote along with links to more information about the newly announced services.
To set the table, Jassy started the keynote with a business update before delivering what everyone in attendance and tuning in at home was waiting for – a litany of new AWS features and capabilities.
Amazon Web Services continues to grow at an astounding rate with no letup in sight. It is by far the fastest-growing billion-dollar enterprise IT company in the world, suggesting that it is a safe choice for enterprises.
And the growth is not just coming from startups anymore but includes a growing stable of enterprise customers.
While the keynote included something for everyone, Jassy clearly had new enterprise customers in mind as he walked through the value proposition for AWS, explained basic AWS services, unveiled new services and directed his ire at Larry Ellison and Oracle. And to frame the rest of his keynote, Jassy assumed his Shazam wizard persona and explained what AWS can do for customers to give them cloudy superpowers.
The first superpower theme to be highlighted was supersonic speed and how AWS enables customers to move more quickly. This not only refers to customers being able to launch thousands of cloud instances in minutes but the ability to go from conception to realization of an idea by taking advantage of all the many services that AWS has to offer.
While AWS already boasts more services than any other cloud provider, Jassy pointed out that its pace of innovation keeps accelerating, with 1,000+ new features or significant capabilities rolled out in 2016. That works out to an average of nearly three new capabilities per day.
Continuing the focus on supersonic speed, Jassy followed with announcements about new EC2 instance types to add to the already burgeoning compute catalog. In particular, updates to four instance type families, to meet varying compute use cases, were announced.
Two new extra-large sizes were added to the T2 family, doubling and quadrupling, respectively, the resources of the large instance type. T2 instances are suited for general-purpose workloads that require occasional bursting, and the new extra-large sizes give users more bang for their buck while providing even more burst capacity. You can read about the new T2 instance types here.
For memory intensive workloads, a new R4 instance type was announced which effectively doubled the capabilities of the previous R3 instance type. This memory-optimized instance type is suitable for any workload that benefits most from in-memory processing. You can read more about the new R4 instance type here.
A new I3 instance type was introduced that is optimized for I/O intensive workloads. This new instance type will use SSDs to increase IOPS capabilities by orders of magnitude over the current I2 instance type. The I3 will be ideally suited for transaction oriented workloads such as databases and analytics. You can read more about the new I3 instance type here.
Next up was the new C5 compute-optimized instance type using the new Intel Skylake CPU. The C5 will be suitable for CPU-intensive workloads such as machine learning and financial applications requiring fast floating-point calculations. You can read more about the new C5 instance type here.
Another area where speed is important is computational workloads that require a Graphics Processing Unit (GPU) to offload processing from the CPU. Jassy announced that AWS is working on a feature called Elastic GPUs for EC2. This will allow GPUs to be attached to any instance type as workload demands require, similar in concept to Elastic Block Storage. You can read more about Elastic GPUs here.
The last new instance type to be announced was the F1 instance type utilizing customizable FPGAs which will give developers the flexibility to program these instances to meet specific workload demands in a way that could not be done with standard CPUs. You can read more about the new F1 instance type here.
Accelerating how fast users can move goes beyond new hardware and new instance types. There is also the need to simplify complex tasks whenever possible. Cloud providers like Digital Ocean have carved out a strong niche market by specializing in offering no-frills Virtual Private Servers (VPS). A VPS is a low-cost hosted virtual server that is designed to be easy for users to set up and suitable for running applications that do not have high performance requirements.
AWS is taking VPS providers like Digital Ocean head on with its new Amazon Lightsail service. For as little as $5 a month, users can launch new instances in a VPC by walking through minimal configuration steps.
Behind the scenes, Lightsail creates a VPS preconfigured with SSD-based storage, DNS management, and a static IP address – all of these setup steps are performed on behalf of the user. You can read more about Amazon Lightsail here.
Moving on to the next superpower that AWS can give users, Jassy talked about x-ray vision and how it can benefit cloud users. The first benefit was mainly a not-so-subtle dig at Larry Ellison, Oracle, and other legacy vendors.
Jassy’s argument was that on AWS, users can run their own tests and benchmarks in true production-like environments instead of accepting the word of untrustworthy vendors. It was one of many swipes at Oracle during Jassy’s keynote.
Getting back on point, Jassy talked about the benefit for users of being able to perform business analytics on the data they’ve uploaded to AWS as part of the x-ray vision power that AWS gives to them. Jassy then highlighted the breadth of the existing AWS services for doing analytics to help users better understand their customers.
Enhancing this portfolio, Jassy unveiled a new service called Amazon Athena. Athena is a query service for analyzing data stored in S3 using standard SQL. In essence, users can treat S3 as a data lake and run queries against unstructured data to unearth actionable intelligence. You can read more about Amazon Athena here.
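Running Athena itself requires an AWS account and data in S3, but because it accepts standard SQL, the flavor of query it enables can be sketched locally. The snippet below is a minimal, hypothetical illustration – the `access_logs` table, its columns, and the sample rows are all made up for the example, and an in-memory sqlite3 database stands in for S3-backed data:

```python
import sqlite3

# Athena lets you run standard SQL directly against data stored in S3.
# As an offline stand-in, run the same style of query against a local
# in-memory database; the table and columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE access_logs (status INTEGER, bytes_sent INTEGER)")
conn.executemany(
    "INSERT INTO access_logs VALUES (?, ?)",
    [(200, 512), (200, 1024), (404, 128), (500, 256)],
)

# A typical ad hoc analytics query: error counts grouped by status code.
rows = conn.execute(
    "SELECT status, COUNT(*) AS hits "
    "FROM access_logs WHERE status >= 400 "
    "GROUP BY status ORDER BY status"
).fetchall()
print(rows)  # [(404, 1), (500, 1)]
```

In Athena, the same SELECT would run against a table defined over S3 objects, with no servers or data loading required.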
Another benefit of “x-ray vision” which Jassy presented was the ability for users to see meaning inside their data through artificial intelligence. Jassy pointed out that Amazon, the parent company, has been leveraging artificial intelligence and deep learning for their own businesses.
Naturally, AWS is leveraging the learnings and tools of Amazon to create a suite of new services focused on artificial intelligence called Amazon AI.
The first service in the suite is Amazon Rekognition for image recognition and analysis. This service is powered by deep learning technology developed inside Amazon that is already being used to analyze billions of images daily. Users can leverage Rekognition to create applications for use cases such as visual surveillance or user authentication. You can read more about Amazon Rekognition here.
Moving from image to voice AI, Jassy next introduced Amazon Polly, a service for converting text to speech. Polly initially supports 24 different languages and can speak in 47 different voices. Also powered by deep learning technology created by Amazon, Polly can correctly render text with ambiguous pronunciations by understanding the context of the text. Users can leverage Polly to create applications that require any type of computer-generated speech. You can read more about Amazon Polly here.
Rounding out the new AI suite, Jassy introduced Amazon Lex for natural language understanding and automatic speech recognition. Based on the same deep learning technology behind Alexa, which powers the Amazon Echo, Lex lets users build applications such as chatbots or anything else that supports conversational engagement between humans and software. You can read more about Amazon Lex here.
Another superpower trumpeted by Jassy was that of flight, which he used as a metaphor for having the freedom to build fast, to understand data better and, most importantly, to escape from hostile database vendors. To incentivize users to leave their traditional database vendors, AWS had previously introduced the Database Migration Service and the Amazon Aurora MySQL-Compatible database service. As it turned out, enterprises liked Aurora but also wanted support for PostgreSQL. So Jassy took this opportunity to announce a new Amazon Aurora PostgreSQL-Compatible database service.
This new service uses a modified version of the PostgreSQL database that is more scalable and has 2x the performance of the open source version of PostgreSQL but maintains 100% API compatibility. You can read more about PostgreSQL for Aurora here.
The last superpower discussed by Jassy was shape-shifting, which was another metaphor, this time for AWS’ ability to integrate with on-premises infrastructures. To kick off this section of the keynote, Jassy revisited an announcement that had been made previously of a joint service called VMware Cloud on AWS. This service is simply a managed offering, running on AWS, that supports VMware technologies such as vSphere, vSAN and NSX. You can read more about VMware Cloud on AWS here.
Then in perhaps a somewhat tortured attempt to keep to the current theme, Jassy tried to expand the meaning of on-premises infrastructure beyond servers in the data center to sensors and IoT devices.
Making the transition to IoT services, Jassy discussed the challenges of running devices at the edge of the network in order to collect and process data from sensors and a growing number of IoT devices.
To help address these challenges, Jassy announced the new AWS Greengrass service, which embeds AWS services like Lambda in field devices. Manufacturers can OEM Greengrass into their devices, and users can leverage Greengrass to collect data in the field, process the data locally, and forward it to the cloud for long-term storage and further processing. You can read more about AWS Greengrass here.
Of course, any discussion about on-premises infrastructure by AWS ultimately leads back to their desire to move all on-premises workloads to what they consider the only true cloud – AWS. So perhaps it’s no surprise that Jassy would wrap up his keynote with two solutions for expediting the migration of data to AWS.
At the last re:Invent in 2015, AWS announced the Snowball, a 50 TB appliance for importing and exporting data to and from AWS. As these Snowball appliances have been put to use, customers have expressed a desire for additional capabilities such as local processing of data on the appliance. To deliver these new capabilities, Jassy announced the new AWS Snowball Edge.
The Snowball Edge adds more connectivity, doubles the storage capacity, enables clustering of two appliances, adds new storage endpoints that can be accessed from existing S3 and NFS clients and adds Lambda-powered local processing. You can read more about the AWS Snowball Edge here.
Going back to the enterprise and rounding out the keynote, Jassy asked the question, “What about for Exabytes (of data)?” The answer, Jassy proposed, is a bigger box. Then in a demonstration of showmanship worthy of any legacy vendor, out came the new Amazon Snowmobile.
The proposition of the Snowmobile is very simple. Enterprises will be able to move 100 PB of data at a time, so that an exabyte-scale data transfer that would take ~26 years over a dedicated 10 Gbps connection can be completed in ~6 months using Snowmobiles. You can read more about the AWS Snowmobile here.
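The ~26-year figure quoted in the keynote can be sanity-checked with back-of-the-envelope arithmetic, assuming a fully saturated 10 Gbps link and decimal units for the exabyte:

```python
# Sanity check of the transfer-time claim: moving one exabyte
# of data over a dedicated, fully saturated 10 Gbps link.
EXABYTE_BITS = 10**18 * 8          # 1 EB in bits (decimal units)
LINK_BPS = 10 * 10**9              # 10 Gbps in bits per second
SECONDS_PER_YEAR = 365 * 24 * 3600

transfer_years = EXABYTE_BITS / LINK_BPS / SECONDS_PER_YEAR
print(f"{transfer_years:.1f} years")  # 25.4 years, in line with the ~26 cited
```

Real-world transfers would be slower still (protocol overhead, link contention), which is precisely the case AWS makes for shipping the data physically.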
The spectacle of the Snowmobile being driven on stage proved to be an appropriate capper to the morning keynote with Andy Jassy’s turn as the superpower-giving wizard, Shazam.