Amazon Location Service Is Now Generally Available with New Routing and Satellite Imagery Capabilities

In December of 2020, we made Amazon Location Service available in preview form for you to start building web and mobile applications with location-based features. Today I’m pleased to announce that we are making Amazon Location generally available along with two new features: routing and satellite imagery.

I have been a full-stack developer for over 15 years. On multiple occasions, I was tasked with creating location-based applications. The biggest challenges I faced when I worked with location providers were integrating location features into the existing application backend and frontend, and keeping the data shared with the location provider secure. When Amazon Location was made available in preview last year, I was so excited. This service makes it possible to build location-based applications with native integration with AWS services. It uses trusted location providers like Esri and HERE, and customers remain in control of their data.

Amazon Location includes the following features:

  • Maps to visualize location information.
  • Places to enable your application to offer point-of-interest search functionality, convert addresses into geographic coordinates in latitude and longitude (geocoding), and convert a coordinate into a street address (reverse geocoding).
  • Routes to use driving distance, directions, and estimated arrival time in your application.
  • Trackers to allow you to retrieve the current and historical location of the devices running your tracking-enabled application.
  • Geofences to give your application the ability to detect and act when a tracked device enters or exits a geographical boundary you define as a geofence. When a breach of the geofence is detected, Amazon Location will send an event to Amazon EventBridge, which can trigger a downstream set of actions, like invoking an AWS Lambda function or sending a notification using Amazon Simple Notification Service (SNS). This level of integration with AWS services is one of the most powerful features of Amazon Location. It will help shorten your application’s time to production.
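
For example, here is a minimal sketch of how you might wire those geofence events to an AWS Lambda function using EventBridge from Python (boto3). The rule name, function ARN, and event pattern below are assumptions for illustration; check the Amazon Location documentation for the exact event format.

import boto3

events = boto3.client("events")

# Match geofence ENTER/EXIT events emitted by Amazon Location.
# The source and detail-type values are assumptions; verify them against the docs.
events.put_rule(
    Name="geofence-breach-rule",
    EventPattern='{"source": ["aws.geo"], "detail-type": ["Location Geofence Event"]}',
    State="ENABLED",
)

# Send matching events to a (hypothetical) Lambda function that handles the breach.
# The function also needs a resource-based permission allowing events.amazonaws.com to invoke it.
events.put_targets(
    Rule="geofence-breach-rule",
    Targets=[{"Id": "geofence-handler", "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-geofence"}],
)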

In the preview announcement blog post, Jeff introduced the service functionality in a lot of detail. In this blog post, I want to focus on the two new features: satellite imagery and routing.

Satellite Imagery

You can use satellite imagery to pack your maps with information and provide more context to the map users. It helps the map users answer questions like “Is there a swamp in that area?” or “What does that building look like?”

To get started with satellite imagery maps, go to the Amazon Location console. On Create a new map, choose Esri Imagery. 

Creating a new map with satellite imagery
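
If you prefer to script this step, here is a hedged boto3 sketch. The map name is a placeholder, and the style identifier for Esri satellite imagery and the pricing plan value are assumptions to verify against the documentation.

import boto3

location = boto3.client("location")

# Create a map resource backed by Esri satellite imagery.
# The style name and pricing plan are assumptions; check the Amazon Location docs.
location.create_map(
    MapName="MySatelliteMap",
    Configuration={"Style": "RasterEsriImagery"},
    PricingPlan="RequestBasedUsage",
)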

Routing
With Amazon Location Routes, your application can request the travel time, distance, and directions between two locations. This makes it possible for your application users to obtain accurate travel-time estimates based on live road and traffic information.

When you use the route feature, you can provide extra attributes to get more tailored results, including:

  • Waypoints: You can provide a list of ordered intermediate positions to be reached on the route. You can have up to 25 stopover points including the departure and destination.
  • Departure time: When you specify the departure time for this route, you will receive a result optimized for the traffic conditions at that time.
  • Travel mode: The mode of travel you specify affects the speed and the road compatibility, because not all vehicles can travel on all roads. The available travel modes are car, truck, and walking. Depending on which travel mode you select, there are parameters that you can tune. For example, for car and truck, you can specify whether you want a route that avoids ferries or tolls. The most interesting results come from the truck travel mode: you can define the truck dimensions and weight and then get a route that is optimized for those parameters, as shown in the sketch after this list. No more trucks stuck under bridges!
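
As a rough idea of what a truck-mode request could look like with the AWS SDK for Python (boto3), here is a hedged sketch; the calculator name, positions, dimensions, and weight are made-up values:

import boto3

location = boto3.client("location")

# Request a truck route that avoids tolls and accounts for the vehicle's size and weight.
# All names and values below are illustrative only.
response = location.calculate_route(
    CalculatorName="MyExampleCalculator",
    DeparturePosition=[-123.1376951951309, 49.234371474778385],
    DestinationPosition=[-122.83301379875074, 49.235860182576886],
    TravelMode="Truck",
    DepartNow=True,  # or pass DepartureTime=<future datetime> to optimize for that departure
    TruckModeOptions={
        "AvoidTolls": True,
        "Dimensions": {"Height": 4.0, "Length": 15.0, "Width": 2.5, "Unit": "Meters"},
        "Weight": {"Total": 20000, "Unit": "Kilograms"},
    },
)
print(response["Summary"]["Distance"], response["Summary"]["DistanceUnit"])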

Amazon Location Service and its features can be used for interesting use cases with low effort. For example, delivery companies using Amazon Location can optimize the order of the deliveries, monitor the position of the delivery vehicles, and inform the customers when the vehicle is arriving. Amazon Location can also be used to optimize the routing of medical vehicles carrying patients or medical supplies. Logistics companies can use the service to optimize their supply chain by monitoring all the delivery vehicles.

To use the route feature, start by creating a route calculator. In the Amazon Location console, choose Route calculators. For the provider of the route information, choose Esri or HERE.

Screenshot of create a new routing calculator

You can use the route calculator from the AWS SDKs, AWS Command Line Interface (CLI) or the Amazon Location HTTP API.

For example, to calculate a simple route between departure and destination positions using the CLI, you can write something like this:

aws location \
    calculate-route \
        --calculator-name MyExampleCalculator \
        --departure-position -123.1376951951309 49.234371474778385 \
        --destination-position -122.83301379875074 49.235860182576886

The departure-position and destination-position values are specified as longitude, latitude pairs.

This calculation returns a lot of information. Because you didn’t define the travel mode, the service assumes that you are using a car. You can see the total distance of the route (in this case, 29 kilometers). You can change the distance unit when you do the calculation. The service also returns the duration of the trip (in this case, 29 minutes). Because you didn’t define when to depart, Amazon Location will assume that you want to travel when there is the least amount of traffic.

{
    "Legs": [{
        "Distance": 26.549,
        "DurationSeconds": 1711,
        "StartPosition":[-123.1377012, 49.2342994],
        "EndPosition": [-122.833014,49.23592],
        "Steps": [{
            "Distance":0.7,
            "DurationSeconds":52,
            "EndPosition":[-123.1281,49.23395],
            "GeometryOffset":0,
            "StartPosition":[-123.137701,49.234299]},
            ...
        ]
    }],
    "Summary": {
        "DataSource": "Esri",
        "Distance": 29.915115551209176,
        "DistanceUnit": "Kilometers",
        "DurationSeconds": 2275.5813682980006,
        "RouteBBox": [
            -123.13769762299995,
            49.23068000000006,
            -122.83301399999999,
            49.258440000000064
        ]
    }
}

The response also includes an array of steps, which form the directions to get from departure to destination. Each step is represented by a start position and an end position. In this example, there are 11 steps, and the travel mode is car.
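
If you call the API from code instead of the CLI, here is a small sketch of how you might walk through that response in Python; it assumes a response dictionary shaped like the JSON shown above.

# Continuing from a calculate_route response (variable names are illustrative).
summary = response["Summary"]
print(f"{summary['Distance']:.1f} {summary['DistanceUnit']}, {summary['DurationSeconds'] / 60:.0f} minutes")

for leg in response["Legs"]:
    for i, step in enumerate(leg["Steps"], start=1):
        print(f"Step {i}: {step['StartPosition']} -> {step['EndPosition']} "
              f"({step['Distance']} {summary['DistanceUnit']}, {step['DurationSeconds']} s)")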

Screenshot of route drawn in map

The result changes depending on the travel mode you selected. For example, if you do the calculation for the same departure and destination positions but choose walking as the travel mode, you will get a different series of steps, drawn on the map as shown below. The travel time and distance are also different: 24.1 kilometers and 6 hours and 43 minutes.

Map of route when walking

Available Now
Amazon Location Service is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), and Asia Pacific (Tokyo) Regions.

Learn about the pricing models of Amazon Location Service. For more about the service, see Amazon Location Service.

Marcia

In the Works – AWS Region in the United Arab Emirates (UAE)

We are currently building AWS regions in Australia, Indonesia, Spain, India, and Switzerland.

UAE in the Works
I am happy to announce that the AWS Middle East (UAE) Region is in the works and will open in the first half of 2022. The new region is an extension of our existing investment, which already includes two AWS Direct Connect locations and two Amazon CloudFront edge locations, all of which have been in place since 2018. The new region will give AWS customers in the UAE the ability to run workloads and to store data that must remain in-country, in addition to the ability to serve local customers with even lower latency.

The new region will have three Availability Zones, and will be the second AWS Region in the Middle East, joining the existing AWS Region in Bahrain. There are 80 Availability Zones within 25 AWS Regions in operation today, with 15 more Availability Zones and five announced regions underway in the locations that I listed earlier.

As is always the case with an AWS Region, each of the Availability Zones will be a fully isolated part of the AWS infrastructure. The AZs in this region will be connected together via high-bandwidth, low-latency network connections to support applications that need synchronous replication between AZs for availability or redundancy.

AWS in the UAE
In addition to the upcoming AWS Region and the Direct Connect and CloudFront edge locations, we continue to build our team of account managers, partner managers, data center technicians, systems engineers, solutions architects, professional service providers, and more (check out our current positions).

We also plan to continue our ongoing investments in education initiatives, training, and start-up enablement to support the UAE’s plans for economic development and digital transformation.

Our customers in the UAE are already using AWS to drive innovation! For example:

Mohammed Bin Rashid Space Centre (MBRSC) – Founded in 2006, MBRSC is home to the UAE’s National Space Program. The Hope Probe was launched last year and reached Mars in February of this year. Data from the probe’s instruments is processed and analyzed on AWS, and made available to the global scientific community in less than 20 minutes.

Anghami is the leading music platform in the Middle East and North Africa, giving over 70 million users access to 57 million songs. They have been hosting their infrastructure on AWS since their days as a tiny startup, and have benefited from the ability to scale up by as much as 300% when new music is released.

Sarwa is an investment bank and personal finance platform that was born on the AWS cloud in 2017. They grew by a factor of four in 2020 while processing hundreds of thousands of transactions. Recent AWS-powered innovations from Sarwa include the Sarwa App (which went from design to market in three months) and the upcoming Sarwa Trade platform.

Stay Tuned
We’ll be announcing the opening of the Middle East (UAE) Region in a forthcoming blog post, so be sure to stay tuned!

Jeff;

New – AWS App Runner: From Code to a Scalable, Secure Web Application in Minutes

Containers have become the default way that I package my web applications. Although I love the speed, productivity, and consistency that containers provide, there is one aspect of the container development workflow that I do not like: the lengthy routine I go through when I deploy a container image for the first time.

You might recognize this routine: setting up a load balancer, configuring the domain, setting up TLS, creating a CI/CD pipeline, and deploying to a container service.

Over the years, I have tweaked my workflow and I now have a boilerplate AWS Cloud Development Kit project that I use, but it has taken me a long time to get to this stage. Although this boilerplate project is great for larger applications, it does feel like a lot of work when all I want to do is deploy and scale a single container image.

At AWS, we have a number of services that provide granular control over your containerized application, but many customers have asked if AWS can handle the configuration and operations of their container environments. They simply want to point to their existing code or container repository and have their application run and scale in the cloud without having to configure and manage infrastructure services.

Because customers asked us to create something simpler, our engineers have been hard at work creating a new service you are going to love.

Introducing AWS App Runner
AWS App Runner makes it easier for you to deploy web apps and APIs to the cloud, regardless of the language they are written in, even for teams that lack prior experience deploying and managing containers or infrastructure. The service has AWS operational and security best practices built in and automatically scales up or down at a moment’s notice, with no cold starts to worry about.

Deploying from Source
App Runner can deploy your app by either connecting to your source code or to a container registry. I will first show you how it works when connecting to source code. I have a Python web application in a GitHub repository. I will connect App Runner to this project, so you can see how it compiles and deploys my code to AWS.

In the App Runner console, I choose Create an App Runner service.

Screenshot of the App Runner Console

For Repository type, I choose Source code repository and then follow the instructions to connect the service to my GitHub account. For Repository, I choose the repository that contains the application I want to deploy. For Branch, I choose main.

Screenshot of source and deployment section of the console

For Deployment trigger, I choose Automatic. This means that when App Runner discovers a change to my source code, it automatically builds and deploys the updated version to my App Runner service.

Screenshot of the deployment settings section of the console

I can now configure the build. For Runtime, I choose Python 3. The service currently supports two languages: Python and Node.js. If you require other languages, then you will need to use the container registry workflow (which I will demonstrate later). I also complete the Build command, Start command, and Port fields, as shown here:

Screenshot of the Configure build section of the console

I now give my service a name and choose the CPU and memory size that I want my container to have. The choices I make here will affect the price I pay. Because my application requires very little CPU or memory, I choose 1 vCPU and 2 GB to keep my costs low. I can also provide any environment variables here to configure my application.

Screenshot of the configure service section of the console

The console allows me to customize several different settings for my service.

I can configure the auto scaling behavior. By default, my service will have one instance of my container image, but if the service receives more than 80 concurrent requests, it will scale to multiple instances. You can optionally specify a maximum number for cost control.

I can expand Health check, and set a path to which App Runner sends health check requests. If I do not set the path, App Runner attempts to make a TCP connection to verify health. By default, if App Runner receives five consecutive health check failures, it will consider the instance unhealthy and replace it.

I can expand Security, and choose an IAM role to be used by the instance. This will give permission for the container to communicate with other AWS services. App Runner encrypts all stored copies of my application source image or source bundle. If I provide a customer-managed key (CMK), App Runner uses it to encrypt my source. If I do not provide one, App Runner uses an AWS-managed key instead.

Screenshot of the console
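
For reference, the choices made in the console so far can also be expressed through the App Runner API. Here is a hedged boto3 sketch of a source-code deployment; the repository URL, connection ARN, commands, and port are placeholders, and the field names should be checked against the current SDK documentation.

import boto3

apprunner = boto3.client("apprunner")

# Sketch of creating an App Runner service from a GitHub repository.
# All ARNs, URLs, commands, and the port are placeholders.
apprunner.create_service(
    ServiceName="my-python-web-app",
    SourceConfiguration={
        "AuthenticationConfiguration": {
            # ARN of the GitHub connection created in the console
            "ConnectionArn": "arn:aws:apprunner:us-east-1:123456789012:connection/my-github-connection/EXAMPLE",
        },
        "AutoDeploymentsEnabled": True,  # rebuild and redeploy on every push to the branch
        "CodeRepository": {
            "RepositoryUrl": "https://github.com/my-org/my-python-web-app",
            "SourceCodeVersion": {"Type": "BRANCH", "Value": "main"},
            "CodeConfiguration": {
                "ConfigurationSource": "API",
                "CodeConfigurationValues": {
                    "Runtime": "PYTHON_3",
                    "BuildCommand": "pip install -r requirements.txt",
                    "StartCommand": "python app.py",
                    "Port": "8080",
                },
            },
        },
    },
    # 1 vCPU and 2 GB, matching the choices above; an instance IAM role ARN can also be set here.
    InstanceConfiguration={"Cpu": "1 vCPU", "Memory": "2 GB"},
)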

Finally, I review the configuration of the service and then choose Create & deploy.

Screenshot of the review and create section of the console

After a few minutes, my application is deployed, and the service provides a URL that points to my deployed web application. App Runner ensures that HTTPS is configured, so I can share the URL with someone on my team to test the application without them receiving browser security warnings. I do not need to implement the handling of secure HTTPS traffic in my container image; App Runner configures the whole thing.

I now want to set up a custom domain. The service allows me to configure it without leaving the console. I open my service, choose the Custom domains tab, and then choose Add domain.

Screenshot of the custom domain section of the console

In Domain name, I enter the domain I want to use for my app, and then choose Save.

Screenshot of user adding a custom domain

After I prove ownership of the domain, the app will be available at my custom URL. Next, I will show you how App Runner works with container images.

Deploy from a Container Image
I created a .NET web application, built it as a container image, and then pushed this container image to Amazon ECR Public (our public container registry).

Just as I did in the first demo, I choose Create service. On Source and deployment, for Repository type, I choose Container registry. For Provider, I choose Amazon ECR Public. In Container image URI, I enter the URI to the image.

The deployment settings now only provide the option to manually trigger the deployment. This is because I chose Amazon ECR Public. If I wanted to deploy every time the container image changed, then for Provider, I would need to choose Amazon ECR.

Screenshot of source and deployment section of the console

From this point on, the instructions are identical to those in the “Deploying from Source” section. After the deployment, the service provides a URL, and my application is live on the internet.

Screenshot of the app running
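
For the container image case, a hedged sketch of the equivalent API call differs only in the SourceConfiguration; the image URI below is a placeholder.

import boto3

apprunner = boto3.client("apprunner")

# Sketch of creating an App Runner service from a public container image.
apprunner.create_service(
    ServiceName="my-dotnet-web-app",
    SourceConfiguration={
        "ImageRepository": {
            "ImageIdentifier": "public.ecr.aws/example/my-dotnet-web-app:latest",
            "ImageRepositoryType": "ECR_PUBLIC",
            "ImageConfiguration": {"Port": "80"},
        },
        # Automatic deployments require a private Amazon ECR repository.
        "AutoDeploymentsEnabled": False,
    },
    InstanceConfiguration={"Cpu": "1 vCPU", "Memory": "2 GB"},
)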

Things to Know
App Runner implements the file system in your container instance as ephemeral storage. Files are temporary: for example, they do not persist when you pause and resume your App Runner service. More generally, files are not guaranteed to persist beyond the processing of a single request, as part of the stateless nature of your application. Stored files do, however, count toward the storage allocation of your App Runner service for the duration of their lifespan. Although ephemeral files are not guaranteed to persist across requests, they sometimes do, and you can take advantage of that opportunistically. For example, when handling a request, you can cache files that your application downloads if future requests might need them. This might speed up future request handling, but the speed gains are not guaranteed, and your code should not assume that a file downloaded in a previous request still exists. For guaranteed caching, use a high-throughput, low-latency, in-memory data store like Amazon ElastiCache.

Partners in Action
We have been working with partners such as MongoDB, Datadog, and HashiCorp to integrate with App Runner. Here is a flavor of what they have been working on:

MongoDB – “We’re excited to integrate App Runner with MongoDB Atlas so that developers can leverage the scalability and performance of our global, cloud-native database service for their App Runner applications.”

Datadog – “Using AWS App Runner, customers can now more easily deploy and scale their web applications from a container image or source code repository. With our new integration, customers can monitor their App Runner metrics, logs, and events to troubleshoot issues faster, and determine the best resource and scaling settings for their app.”

HashiCorp – “Integrating HashiCorp Terraform with AWS App Runner means developers have a faster, easier way to deploy production cloud applications, with less infrastructure to configure and manage.”

We also have exciting integrations from Pulumi, Logz.io, and Sysdig, which will allow App Runner customers to use the tools and services they already know and trust. As an AWS Consulting Partner, Trek10 can help customers leverage App Runner for cloud-native architecture design.

Availability and Pricing
AWS App Runner is available today in the US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), and Europe (Ireland) Regions. You can use App Runner with the AWS Management Console and the AWS Copilot CLI.

With App Runner, you pay for the compute and memory resources used by your application. App Runner automatically scales the number of active containers up and down to meet the processing requirements of your application. You can set a maximum limit on the number of containers your application uses so that costs do not exceed your budget.

You are only billed for App Runner when it is running and you can pause your application easily and resume it quickly. This is particularly useful in development and test situations as you can switch off an application when you are not using it, helping you manage your costs. For more information, see the App Runner pricing page.

Start using AWS App Runner today and run your web applications at scale, quickly and securely.

— Martin

 

Resolve IT Incidents Faster with Incident Manager, a New Capability of AWS Systems Manager

IT engineers pride themselves on the skill and care they put into building applications and infrastructure. However, as much as we all hate to admit it, there is no such thing as 100% uptime. Everything will fail at some point, often at the worst possible time, leading to many a ruined evening, birthday party, or wedding anniversary (ask me how I know).

As pagers go wild, on-duty engineers scramble to restore service, and every second counts. For example, you need to be able to quickly filter the deluge of monitoring alerts in order to pinpoint the root cause of the incident. Likewise, you can’t afford to waste any time locating and accessing the appropriate runbooks and procedures needed to solve the incident. Imagine yourself at 3:00 A.M., wading in a sea of red alerts and desperately looking for the magic command that was “supposed to be in the doc.” Trust me, it’s not a pleasant feeling.

Serious issues often require escalations. Although it’s great to get help from team members, collaboration and a speedy resolution require efficient communication. Without it, uncoordinated efforts can lead to mishaps that confuse or worsen the situation.

Last but not least, it’s equally important to document the incident and how you responded to it. After the incident has been resolved and everyone has had a good night’s sleep, you can replay it, and work to continuously improve your platform and incident response procedures.

All this requires a lot of preparation based on industry best practices and appropriate tooling. Most companies and organizations simply cannot afford to learn this over the course of repeated incidents. That’s a very frustrating way to build an incident preparation and response practice.

Accordingly, many customers have asked us to help them, and today, I’m extremely happy to announce Incident Manager, a new capability of AWS Systems Manager that helps you prepare and respond efficiently to application and infrastructure incidents.

If you can’t wait to try it, please feel free to jump now to the Incident Manager console. If you’d like to learn more, read on!

Introducing Incident Manager in AWS Systems Manager
Since the launch of Amazon.com in 1995, Amazon teams have been responsible for incident response of their services. Over the years, they’ve accumulated a wealth of experience in responding to application and infrastructure issues at any scale. Distilling these years of experience, the Major Incident Management team at Amazon has designed Incident Manager to help all AWS customers prepare for and resolve incidents faster.

As preparation is key, Incident Manager lets you easily create a collection of incident response resources that are readily available when an alarm goes off. These resources include:

  • Contacts: Team members who may be engaged in solving incidents and how to page them (voice, email, SMS).
  • Escalation plans: Additional contacts who should be paged if the primary on-call responder doesn’t acknowledge the incident.
  • Response plans: Who to engage (contacts and escalation plan), what they should do (the runbook to follow), and where to collaborate (the channel tied to AWS Chatbot).

Incident Manager

In short, creating a response plan lets you prepare for incidents in a standardized way, so you can react as soon as they happen and resolve them quicker. Indeed, response plans can be triggered automatically by an Amazon CloudWatch alarm or an Amazon EventBridge event notification of your choice. If required, you can also launch a response plan manually.
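
If you want to kick off a response plan from your own tooling rather than from an alarm or event, the Incident Manager API can do that too. Here is a hedged boto3 sketch; the response plan ARN and title are placeholders, and the parameter names should be verified against the SDK documentation.

import boto3

incidents = boto3.client("ssm-incidents")

# Manually launch a response plan; the ARN and title are placeholders.
incidents.start_incident(
    responsePlanArn="arn:aws:ssm-incidents::123456789012:response-plan/my-response-plan",
    title="Checkout latency above threshold",
)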

When the response plan is initiated, contacts are paged, and a new dashboard is automatically put in place in the Incident Manager console. This dashboard is the point of reference for all things involved in the incident:

  • An overview of the incident, so that responders have a quick and accurate summary of the situation.
  • CloudWatch metrics and alarm graphs related to the incident.
  • A timeline of the incident that lists all events added by Incident Manager, and any custom event added manually by responders.
  • The runbook included in the response plan, and its current state of execution. Incident Manager provides a default template implementing triage, diagnosis, mitigation and recovery steps.
  • Contacts, and a link to the chat channel.
  • The list of related Systems Manager OpsItems.

Here’s a sample dashboard. As you can see, you can easily access all of the above in a single click.

Incident Dashboard

After the incident has been resolved, you can create a post-incident analysis, using a built-in template (based on the one that Amazon uses for Correction of Error), or one that you’ve created. This analysis will help you understand the root cause of the incident and what could have been done better or faster to resolve it.

By reviewing and editing the incident timeline, you can zoom in on specific events and how they were addressed. To guide you through the process, questions are automatically added to the analysis. Answering them will help you focus on potential improvements, and how to add them to your incident response procedures. Here’s a sample analysis, showing some of these questions.

Incident Analysis

Finally, Incident Manager recommends action items that you can accept or dismiss. If you accept an item, it’s added to a checklist that has to be fully completed before the analysis can be closed. The item is also filed as an OpsItem in AWS Systems Manager OpsCenter, which can sync to ticketing systems like Jira and ServiceNow.

Getting Started
The secret sauce in successfully responding to IT incidents is to prepare, prepare again, and then prepare some more. We encourage you to start planning now for failures that are waiting to happen. When that pager alarm comes in at 3:00 AM, it will make a world of difference.

We believe Incident Manager will help you resolve incidents faster by improving your preparation, resolution and analysis workflows. It’s available today in the following AWS Regions:

  • US East (N. Virginia), US East (Ohio), US West (Oregon)
  • Europe (Ireland), Europe (Frankfurt), Europe (Stockholm)
  • Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney)

Give it a try, and let us know what you think. As always, we look forward to your feedback. You can send it through your usual AWS Support contacts, or post it on the AWS Forum for AWS Systems Manager.

If you want to learn more about Incident Manager, sign up for the AWS Summit Online event taking place on May 12 and 13, 2021.

New Amazon FinSpace Simplifies Data Management and Analytics for Financial Services

Managing data is the core of the Financial Services Industry (FSI). I worked for private banking and fund management companies and helped analysts to collect, aggregate, and analyze hundreds of petabytes of data from internal data sources, such as portfolio management, order management, and accounting systems, but also from external data sources, such as real-time market feeds and historical equities pricing and alternative data systems. During that period, I spent my time trying to access data across organizational silos, to manage permissions, and to build systems to automate recurring tasks in ever-growing and more complex environments.

Today, we are launching a solution that would have reduced the time I spent on such projects: Amazon FinSpace is a data management and analytics solution purpose-built for the financial services industry. Amazon FinSpace reduces the time it takes to find and prepare data from months to minutes so analysts can spend more time on analysis.

What Our Customers Told Us
Before data can be combined and analyzed, analysts spend weeks or months finding and accessing data across multiple departments, each specialized by market, instrument, or geography. In addition to this logical segregation, data is also physically isolated in different IT systems, file systems, or networks. Because access to data is strictly controlled by governance and policy, analysts must prepare and explain access requests to the compliance department. This is a very manual, ad-hoc process.

Once granted access, they often must perform computational logic (such as Bollinger Bands, Exponential Moving Average, or Average True Range) on larger and larger datasets to prepare data for analysis or to derive information out of the data. These computations often run on servers with constrained capacity, as they were not designed to handle the size of workloads in the modern financial world. Even server-side systems are struggling to scale up and keep up with the ever-growing size of the datasets they need to store and analyze.

How Amazon FinSpace Helps
Amazon FinSpace removes the undifferentiated heavy lifting required to store, prepare, manage, and audit access to data. It automates the steps involved in finding data and preparing it for analysis. Amazon FinSpace stores and organizes data using industry and internal data classification conventions. Analysts connect to the Amazon FinSpace web interface to search for data using familiar business terms (“S&P500,” “CAC40,” “private equity funds in euro”).

Analysts can prepare their chosen datasets using a built-in library of more than 100 specialized functions for time series data. They can use the integrated Jupyter notebooks to experiment with data, and parallelize these financial data transformations at the scale of the cloud in minutes. Finally, Amazon FinSpace provides a framework to manage data access and to audit who is accessing what data and when. It tracks usage of data and generates compliance and audit reports.

Amazon FinSpace also makes it easy to work with historical data. Let’s imagine I built a model to calculate credit risk. This model relies on interest rate and inflation rate. These two rates get updated frequently. The risk level associated with a customer is not the same today as it was a few months ago, when inflation and interest rates were different. When data analysts look at data as it is now and as it was in the past, they call it bitemporal modeling. Amazon FinSpace makes it easy to go back in time and compare how models evolve along multiple dimensions.

To show you how Amazon FinSpace works, let’s imagine I have a team of analysts and data scientists and I want to provide them a tool to search, prepare, and analyze data.

How to Create an Amazon FinSpace Environment
As an AWS account administrator, I create a working environment for my team of financial analysts. This is a one-time setup.

I navigate to the Amazon FinSpace console and click Create Environment:

FinSpace Create environment

I give my environment a name. I select a KMS encryption key that will serve to encrypt data at rest. Then I choose either to integrate with AWS Single Sign-On or to manage usernames and passwords in Amazon FinSpace. AWS Single Sign-On integration allows your analysts to authenticate with external systems, such as a corporate Active Directory, to access the Amazon FinSpace environment. For this example, I choose to manage the credentials by myself.

FinSpace create environment details

I create a superuser who will have administration permission on the Amazon FinSpace environment. I click Add Superuser:

Finspace create super user 1

Finspace create super user 2

I take note of the temporary password. I copy the text of the message to send to my superuser. This message includes instructions for the initial connection to the environment.

The superuser has permission to add other users and to manage these users’ permissions in the Amazon FinSpace environment itself.

Finally, and just for the purpose of this demo, I choose to import an initial dataset. This allows me to start with some data in the environment. Doing so is just a single click in the console. The storage cost of this dataset is $41.46 / month and I can delete it at any time.

Under Sample data bundles, Capital Markets sample data, I click Install dataset. This can take several minutes, so it’s a good time to stand up, stretch your legs, and grab a cup of coffee.

FinSpace install sample dataset

How to Use an Amazon FinSpace Environment
In my role as financial analyst, my AWS account administrator sends me an email containing a URL to connect to my Amazon FinSpace Environment along with the related credentials. I connect to the Amazon FinSpace environment.

A couple of points are worth noting on the welcome page. First, on the top right side, I click the gear icon to access the environment settings. This is where I can add other users and manage their permissions. Second, I can browse the data by category on the left side, or search for specific terms by typing a query in the search bar at the top of the screen and then refining the results using the facets on the left.

I can use Amazon FinSpace as my data hub. Data can be fed in through the API, or I can load data directly from my workstation. I use tags to describe datasets. Datasets are containers for data; changes are versioned, and I can create historical views of data or use the auto-updating data view that Amazon FinSpace maintains for me.

For this demo, let’s imagine I received a request from a portfolio manager who wants a chart showing realized volatility using 5 minute time bars for AMZN stock. Let me show you how I use the search bar to locate data and then use a notebook to analyze that data.

First, I search my datasets for a stock price time bar summary with 5-minute intervals. I type “equity” in the search box. I’m lucky: the first result is the one I want. If needed, I could have refined the results using the facets on the left.

finspace search equity

Once I find the dataset, I explore its description, the schema, and other information. Based on these, I decide if this is the correct dataset to answer my portfolio manager’s request.

finspace dataset details

 

I click Analyze in notebook to start a Jupyter notebook where I’ll be able to further explore the data with PySpark. Once the notebook is open, I first check it is correctly configured to use the Amazon FinSpace PySpark kernel (starting the kernel takes 5-8 minutes).

Finspace select kernel

I click “play” on the first code box to connect to the Spark cluster.

finspace connect to cluster

To analyze my dataset and answer the specific question from my PM, I need to type a bit of PySpark code. For the purpose of this demo, I am using sample code from the Amazon FinSpace GitHub repository. You can upload the Notebook to your environment. Click the up arrow as shown on the top left of the screen above to select the file from your local machine.

This notebook pulls data from the Amazon FinSpace catalog “US Equity Time-Bar Summary” data I found earlier, and then uses the Amazon FinSpace built-in analytic function realized_volatility() to compute realized volatility for a group of tickers and exchange event types.

Before creating any graph, let’s get a sense of the dataset. What is the time range of the data? What tickers are in this dataset? I answer these questions with the simple select() or groupby() functions provided by Amazon FinSpace. I prepare my FinSpaceAnalyticsManager with the code below:

# FinSpace PySpark helper for reading catalog data views
from aws.finspace.analytics import FinSpaceAnalyticsManager

# spark and hfs_endpoint are provided by the FinSpace notebook environment
finspace = FinSpaceAnalyticsManager(spark=spark, endpoint=hfs_endpoint)

# dataset_id and view_id identify the dataset and data view found in the catalog
sumDF = finspace.read_data_view(dataset_id=dataset_id, data_view_id=view_id)

Once done, I can start to explore the dataset:
finspace analysis 1

I can see there are 561,778 AMZN trades and price quotes between Oct. 1, 2019, and March 31, 2020.

To plot the realized volatility, I use pandas:

finspace plot realized volatility code

When I execute this code block, I receive:

finspace plot realized volatility graph
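
The plotting code itself appears only as a screenshot above. As a hedged sketch of what it could look like, assuming the realized volatility results are in a Spark DataFrame named volDF with columns "end" and "realized_volatility" (both names are assumptions):

import matplotlib.pyplot as plt

# Convert the (assumed) Spark DataFrame of realized volatility results to pandas and plot it.
vol_pdf = volDF.toPandas()
vol_pdf.plot(x="end", y="realized_volatility", figsize=(12, 6), title="AMZN realized volatility")
plt.show()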

 

Similarly, I can run a Bollinger Bands analysis to check whether the volatility spike created an oversold condition on the AMZN stock. I again use pandas to plot the values.

finspace plot bollinger bandcode

and generate this graph:

finspace plot realized volatility bolinger graph

 

I am ready to answer the portfolio manager’s question. But why was there a spike on Jan 30, 2020? The answer is in the news: “Amazon soars after huge earnings beat.” 🙂

Availability and Pricing
Amazon FinSpace is available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Canada (Central).

As usual, we charge you only for the resources your project uses. Pricing is based on three dimensions: the number of analysts with access to the service, the volume of data ingested, and the compute hours used to apply your transformations. Detailed pricing information is available on the service pricing page.

Give it a try today and let us know your feedback.

— seb

Introducing CloudFront Functions – Run Your Code at the Edge with Low Latency at Any Scale

With Amazon CloudFront, you can securely deliver data, videos, applications, and APIs to your customers globally with low latency and high transfer speeds. To offer a customized experience and the lowest possible latency, many modern applications execute some form of logic at the edge. The use cases for applying logic at the edge can be grouped together in two main categories:

  • First are the complex, compute-heavy operations that are executed when objects are not in the cache. We launched Lambda@Edge in 2017 to offer a fully programmable, serverless edge computing environment for implementing a wide variety of complex customizations. Lambda@Edge functions are executed in a regional edge cache (usually in the AWS Region closest to the CloudFront edge location reached by the client). For example, when you’re streaming video or audio, you can use Lambda@Edge to create and serve the right segments on the fly, reducing the need for origin scalability. Another common use case is to use Lambda@Edge and Amazon DynamoDB to translate shortened, user-friendly URLs to full URL landing pages.
  • The second category of use cases consists of simple HTTP(S) request/response manipulations that can be executed by very short-lived functions. For these use cases, you need a flexible programming experience with the performance, scale, and cost-effectiveness that enable you to execute them on every request.

To help you with this second category of use cases, I am happy to announce the availability of CloudFront Functions, a new serverless scripting platform that allows you to run lightweight JavaScript code at the 218+ CloudFront edge locations at approximately 1/6th the price of Lambda@Edge.

Architectural diagram.

CloudFront Functions are ideal for lightweight processing of web requests, for example:

  • Cache-key manipulations and normalization: Transform HTTP request attributes (such as URL, headers, cookies, and query strings) to construct the cache-key, which is the unique identifier for objects in cache and is used to determine whether an object is already cached. For example, you could cache based on a header that contains the end user’s device type, creating two different versions of the content for mobile and desktop users. By transforming the request attributes, you can also normalize multiple requests to a single cache-key entry and significantly improve your cache-hit ratio.
  • URL rewrites and redirects: Generate a response to redirect requests to a different URL. For example, redirect a non-authenticated user from a restricted page to a login form. URL rewrites can also be used for A/B testing.
  • HTTP header manipulation: View, add, modify, or delete any of the request/response headers. For instance, add HTTP Strict Transport Security (HSTS) headers to your response, or copy the client IP address into a new HTTP header so that it is forwarded to the origin with the request.
  • Access authorization: Implement access control and authorization for the content delivered through CloudFront by creating and validating user-generated tokens, such as HMAC tokens or JSON web tokens (JWT), to allow/deny requests.

To give you the performance and scale that modern applications require, CloudFront Functions uses a new process-based isolation model instead of the virtual machine (VM)-based isolation used by AWS Lambda and Lambda@Edge. To do that, we had to enforce some restrictions, such as avoiding network and file system access. Also, functions run for less than one millisecond. In this way, they can handle millions of requests per second while giving you great performance on every function execution. Functions add almost no perceptible impact to overall content delivery network (CDN) performance.

Similar to Lambda@Edge, CloudFront Functions runs your code in response to events generated by CloudFront. More specifically, CloudFront Functions can be triggered after CloudFront receives a request from a viewer (viewer request) and before CloudFront forwards the response to the viewer (viewer response).

Lambda@Edge can also be triggered before CloudFront forwards the request to the origin (origin request) and after CloudFront receives the response from the origin (origin response). You can use CloudFront Functions and Lambda@Edge together, depending on whether you need to manipulate content before, or after, being cached.

Architectural diagram.

If you need some of the capabilities of Lambda@Edge that are not available with CloudFront Functions, such as network access or a longer execution time, you can still use Lambda@Edge before and after content is cached by CloudFront.

Architectural diagram.

To help you understand the difference between CloudFront Functions and Lambda@Edge, here’s a quick comparison:

  • Runtime support: CloudFront Functions supports JavaScript (ECMAScript 5.1 compliant); Lambda@Edge supports Node.js and Python.
  • Execution location: CloudFront Functions runs at 218+ CloudFront edge locations; Lambda@Edge runs in 13 CloudFront Regional Edge Caches.
  • CloudFront triggers supported: CloudFront Functions supports viewer request and viewer response; Lambda@Edge supports viewer request, viewer response, origin request, and origin response.
  • Maximum execution time: less than 1 millisecond for CloudFront Functions; 5 seconds (viewer triggers) or 30 seconds (origin triggers) for Lambda@Edge.
  • Maximum memory: 2 MB for CloudFront Functions; 128 MB (viewer triggers) or 10 GB (origin triggers) for Lambda@Edge.
  • Total package size: 10 KB for CloudFront Functions; 1 MB (viewer triggers) or 50 MB (origin triggers) for Lambda@Edge.
  • Network access: not available in CloudFront Functions; available in Lambda@Edge.
  • File system access: not available in CloudFront Functions; available in Lambda@Edge.
  • Access to the request body: not available in CloudFront Functions; available in Lambda@Edge.
  • Pricing: CloudFront Functions has a free tier and is charged per request; Lambda@Edge has no free tier and is charged per request and function duration.

Let’s see how this works in practice.

Using CloudFront Functions From the Console
I want to customize the content of my website depending on the country of origin of the viewers. To do so, I use a CloudFront distribution that I created using an S3 bucket as origin. Then, I create a cache policy to include the CloudFront-Viewer-Country header (that contains the two-letter country code of the viewer’s country) in the cache key. CloudFront Functions can see CloudFront-generated headers (like the CloudFront geolocation or device detection headers) only if they are included in an origin policy or cache key policy.
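
As a hedged sketch, the cache policy that adds CloudFront-Viewer-Country to the cache key could also be created through the API like this; the policy name and TTL values are made-up, and the structure should be checked against the SDK documentation.

import boto3

cloudfront = boto3.client("cloudfront")

# Cache policy that includes the CloudFront-Viewer-Country header in the cache key.
# The name and TTL values are illustrative.
cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "cache-by-viewer-country",
        "MinTTL": 1,
        "DefaultTTL": 86400,
        "MaxTTL": 31536000,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "EnableAcceptEncodingBrotli": True,
            "HeadersConfig": {
                "HeaderBehavior": "whitelist",
                "Headers": {"Quantity": 1, "Items": ["CloudFront-Viewer-Country"]},
            },
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "none"},
        },
    }
)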

In the CloudFront console, I select Functions on the left bar and then Create function. I give the function a name and Continue.

Console screenshot.

From here, I can follow the lifecycle of my function with these steps:

  1. Build the function by providing the code.
  2. Test the function with a sample payload.
  3. Publish the function from the development stage to the live stage.
  4. Associate the function with one or more CloudFront distributions.

Console screenshot.

1. In the Build tab, I can access two stages for each function: a Development stage for tests, and a Live stage that can be used by one or more CloudFront distributions. With the development stage selected, I type the code of my function and Save:

function handler(event) {
  var request = event.request;
  var supported_countries = ['de', 'it', 'fr'];
  if (request.uri.substr(3,1) != '/') {
    var headers = request.headers;
    var newUri;
    if (headers['cloudfront-viewer-country']) {
      var countryCode = headers['cloudfront-viewer-country'].value.toLowerCase();
      if (supported_countries.includes(countryCode)) {
        newUri = '/' + countryCode + request.uri;
      }
    }
    if (newUri === undefined) {
      var defaultCountryCode = 'en';
      newUri = '/' + defaultCountryCode + request.uri;
    }
    var response = {
      statusCode: 302,
      statusDescription: 'Found',
      headers: {
        "location": { "value": newUri }
      }
    }
    return response;
  }
  return request;
}

The function looks at the content of the CloudFront-Viewer-Country header set by CloudFront. If it contains one of the supported countries, and the URL does not already contain a country prefix, it adds the country at the beginning of the URL path. Otherwise, it lets the request pass through without changes.

2. In the Test tab, I select the event type (Viewer Request), the Stage (Development, for now) and a sample event.

Console screenshot.

Below, I can customize the Input event by selecting the HTTP method, and then editing the path of the URL, and optionally the client IP to use. I can also add custom headers, cookies, or query strings. In my case, I leave all the default values and add the CloudFront-Viewer-Country header with the value of FR (for France). Optionally, instead of using the visual editor, I can customize the input event by editing the JSON payload that is passed to the function.

Console screenshot.

I click on the Test button and look at the Output. As expected, the request is being redirected (HTTP status code 302). In the Response headers, I see that the location where the request is being redirected starts with /fr/ to provide custom content for viewers based in France. If something doesn’t go as expected in my tests, I can look at the Function Logs. I can also use console.log() in my code to add more debugging information.

Console screenshot.

In the Output, just above the HTTP status, I see the Compute utilization for this execution. Compute utilization is a number between 0 and 100 that indicates the amount of time that the function took to run as a percentage of the maximum allowed time. In my case, a compute utilization of 21 means that the function completed in 21% of the maximum allowed time.

3. I run more tests using different configurations of URL and headers, then I move to the Publish tab to copy the function from the development stage to the live stage. Now, the function is ready to be associated with an existing distribution.
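
The same create/publish flow is also available from the API. Here is a hedged boto3 sketch; the function name and file path are placeholders, and the runtime identifier should be checked against the documentation.

import boto3

cloudfront = boto3.client("cloudfront")

# Create the function from a local JavaScript file (name and path are placeholders).
with open("redirect-by-country.js", "rb") as f:
    created = cloudfront.create_function(
        Name="redirect-by-country",
        FunctionConfig={"Comment": "Redirect viewers by country", "Runtime": "cloudfront-js-1.0"},
        FunctionCode=f.read(),
    )

# Promote the function from the development stage to the live stage using the returned ETag.
cloudfront.publish_function(Name="redirect-by-country", IfMatch=created["ETag"])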

Console screenshot.

4. In the Associate tab, I select the Distribution, the Event type (Viewer Request or Viewer Response) and the Cache behavior (I only have the Default (*) cache behavior for my distribution). I click Add association and confirm in the dialog.

Console screenshot.

Now, I see the function association at the bottom of the Associate tab.

Console screenshot.

To test this configuration from two different locations, I start two Amazon Elastic Compute Cloud (Amazon EC2) instances, one in the US East (N. Virginia) Region and one in the Europe (Paris) Region. I connect using SSH and use cURL to get an object from the CloudFront distribution. Previously, I have uploaded two objects to the S3 bucket that is used as the origin for the distribution: one, for customers based in France, using the fr/ prefix, and one, for customers not in a supported country, using the en/ prefix.

I list the two objects using the AWS Command Line Interface (CLI):

$ aws s3 ls --recursive s3://BUCKET
2021-04-01 13:54:20         13 en/doc.txt
2021-04-01 13:54:20          8 fr/doc.txt

In the EC2 instance in the US East (N. Virginia) Region, I run this command to download the object:

[us-east-1]$ curl -L https://d2wj2l15gt32vo.cloudfront.net/doc.txt 
Good morning

Then I run the same command in the Europe (Paris) Region:

[eu-west-3]$ curl -L https://d2wj2l15gt32vo.cloudfront.net/doc.txt
Bonjour

As expected, I am getting different results from the same URL. I am using the -L option so that cURL is following the redirect it receives. In this way, each command is executing two HTTP requests: the first request receives the HTTP redirect from the CloudFront function, the second request follows the redirect and is not modified by the function because it contains a custom path in the URL (/en/ or /fr/).

To see the actual location of the redirect and all HTTP response headers, I use cURL with the -i option. These are the response headers for the EC2 instance running in the US; the function is executed at an edge location in Virginia:

[us-east-1]$ curl -i https://d2wj2l15gt32vo.cloudfront.net/doc.txt 
HTTP/2 302 
server: CloudFront
date: Thu, 01 Apr 2021 14:39:31 GMT
content-length: 0
location: /en/doc.txt
x-cache: FunctionGeneratedResponse from cloudfront
via: 1.1 cb0868a0a661911b98247aaff77bc898.cloudfront.net (CloudFront)
x-amz-cf-pop: IAD50-C2
x-amz-cf-id: TuaLKKg3YvLKN85fzd2qfcx9jOlfMQrWazpOVmN7NgfmmcXc1wzjmA==

And these are the response headers for the EC2 instance running in France; this time, the function is executed in an edge location near Paris:

[eu-west-3]$ curl -i https://d2wj2l15gt32vo.cloudfront.net/doc.txt
HTTP/2 302 
server: CloudFront
date: Thu, 01 Apr 2021 14:39:26 GMT
content-length: 0
location: /fr/doc.txt
x-cache: FunctionGeneratedResponse from cloudfront
via: 1.1 6fa25eadb94abd73b5efc56a89b2d829.cloudfront.net (CloudFront)
x-amz-cf-pop: CDG53-C1
x-amz-cf-id: jzWcbccJiTDRj22TGwsn_

Availability and Pricing
CloudFront Functions is available today and you can use it with new and existing distributions. You can use CloudFront Functions with the AWS Management Console, AWS Command Line Interface (CLI), AWS SDKs, and AWS CloudFormation. With CloudFront Functions, you pay by the number of invocations. You can get started with CloudFront Functions for free as part of the AWS Free Usage Tier. For more information, please see the CloudFront pricing page.

AWS for the Edge
Amazon CloudFront and AWS edge networking capabilities are part of the AWS for the Edge portfolio. AWS edge services improve performance by moving compute, data processing, and storage closer to end-user devices. This includes deploying AWS managed services, APIs, and tools to locations outside AWS data centers, and even onto customer-owned infrastructure and devices.

AWS offers you a consistent experience and portfolio of capabilities from the edge to the cloud. Using AWS, you have access to the broadest and deepest capabilities for edge use cases, like edge networking, hybrid architectures, connected devices, 5G, and multi-access edge computing.

Start using CloudFront Functions today to add custom logic at the edge for your applications.

Danilo

Amazon Nimble Studio – Build a Creative Studio in the Cloud

Amazon Nimble Studio is a new service that creative studios can use to produce visual effects, animations, and interactive content entirely in the cloud with AWS, from the storyboard sketch to the final deliverable. Nimble Studio provides customers with on-demand access to virtual workstations, elastic file storage, and render farm capacity. It also provides built-in automation tools for IP security, permissions, and collaboration.

Traditional creative studios often face the challenge of attracting and onboarding talent. Talent is not localized, and it’s important to speed up the hiring of remote artists and make them as productive as if they were in the same physical space. It’s also important to allow distributed teams to use the same workflow, software, licenses, and tools that they use today, while keeping all assets secure.

In addition, traditional customers resort to buying more hardware for their local render farms, renting hardware at a premium, or extending their capacity with cloud resources. Those who decide to render in the cloud must navigate these options and the large breadth of AWS offerings, and the perceived complexity can lead them to fall back on legacy approaches.

Screenshot of maya lighting

 

Today, we are happy to announce the launch of Amazon Nimble Studio, a service that addresses these concerns by providing ready-made IT infrastructure as a service, including workstations, storage, and rendering.

Nimble Studio is part of a broader portfolio of purpose-built AWS capabilities for media and entertainment use cases, including AWS services, AWS and Partner Solutions, and over 400 AWS Partners. You can learn more about our AWS for Media & Entertainment initiative, also launched today, which helps media and entertainment customers easily identify industry-specific AWS capabilities across five key business areas: Content production, Direct-to-consumer and over-the-top (OTT) streaming, Broadcast, Media supply chain and archive, and Data science and Analytics.

Benefits of using Nimble Studio
Nimble Studio is built on top of AWS infrastructure, ensuring performant computing power and security of your data. It also enables an elastic production, allowing studios to scale the artists’ workstations, storage, and rendering needs based on demand.

When using Nimble Studio, artists can use their favorite creative software tools on an Amazon Elastic Compute Cloud (Amazon EC2) virtual workstation. Nimble Studio uses EC2 for compute, Amazon FSx for storage, AWS Single Sign-On for user management, and AWS Thinkbox Deadline for render farm management.

Nimble Studio is open and compatible with most storage solutions that support the NFS/SMB protocol, such as Weka.io and Qumulo, so you can bring your own storage.

For high-performance remote display, the service uses the NICE DCV protocol and provisions G4dn instances that are available with NVIDIA GPUs.

By providing the ability to share assets using globally accessible data storage, Nimble Studio makes it possible for studios to onboard global talent. And Nimble Studio centrally manages licenses for many software packages, reduces the friction of onboarding new artists, and enables predictable budgeting.

In addition, Nimble Studio comes with an artist-friendly console. Artists don’t need to have an AWS account or use the AWS Management Console to do their work.

But my favorite thing about Nimble Studio is that it’s intuitive to set up and use. In this post, I will show you how to create a cloud-based studio.

screenshot of the intro of nimble studio

 

Set up a cloud studio using Nimble Studio
To get started with Nimble Studio, go to the AWS Management Console and create your cloud studio. You can decide if you want to bring your existing resources, such as file storage, render farm, software license service, Amazon Machine Image (AMI), or a directory in Active Directory. Studios that migrate existing projects can use AWS Snowball for fast and secure physical data migration.

Configure AWS SSO
For this configuration, choose the option where you don’t have any existing resources, so you can see how easy it is to get started. The first step is to configure AWS SSO in your account. You will use AWS SSO to assign the artists permissions based on roles to access the studio. To set up AWS SSO, follow these steps in the AWS Directory Service console.

Configure your studio with StudioBuilder
StudioBuilder is a tool that helps you deploy your studio simply by answering some configuration questions. StudioBuilder is available as an AMI, which you can get for free from AWS Marketplace.

After you get the AMI from AWS Marketplace, you can find it in the Public images list in the EC2 console. Launch an EC2 instance from that AMI; I recommend a t3.medium instance. Set the Auto-assign Public IP field to Enable to ensure that your instance receives a public IP address. You will use that IP address later to connect to the instance.

Screenshot of launching the StudioBuilder AMI
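
If you prefer to launch the StudioBuilder instance from code, here is a hedged boto3 sketch; the AMI ID, subnet ID, and key pair name are placeholders you would replace with values from your account.

import boto3

ec2 = boto3.client("ec2")

# Launch the StudioBuilder AMI on a t3.medium instance with a public IP address.
# ImageId, SubnetId, and KeyName are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",
        "AssociatePublicIpAddress": True,
    }],
)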

As soon as you connect to the instance, you are directed to the StudioBuilder setup.

Screenshot of studio builder setup

The setup guides you, step by step, through the configuration of networking, storage, Active Directory, and the render farm. Simply answer the questions to build the right studio for your needs.

Screenshot of the studio building

The setup takes around 90 minutes. You can monitor the progress using the AWS CloudFormation console. Your studio is ready when four new CloudFormation stacks have been created in your account: Network, Data, Service, and Compute.
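If you prefer to watch the progress from a script instead of the CloudFormation console, a small boto3 sketch like this one lists the stacks and their statuses. The stack names will vary depending on how StudioBuilder names them in your account.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

# Print the name and status of every stack in the account and Region,
# so you can watch for the four StudioBuilder stacks to reach CREATE_COMPLETE.
paginator = cfn.get_paginator("describe_stacks")
for page in paginator.paginate():
    for stack in page["Stacks"]:
        print(f'{stack["StackName"]}: {stack["StackStatus"]}')
```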

Screenshot of stacks completed in CloudFormation

Now terminate the StudioBuilder instance and return to the Nimble Studio console. In Studio setup, you can see that you’ve completed four of the five steps.

Screenshot of the studio setup tutorial

Assign access to studio admin and users
To complete the last step, Step 5: Allow studio access, you will use the Active Directory directory created during the StudioBuilder step and the AWS SSO configuration from earlier to give administrators and artists access to the studio.

Follow the instructions in the Nimble Studio console to connect the directory to AWS SSO. Then you can add administrators to your studio. Administrators have control over what users can do in the studio.

At this point, you can add users to the studio, but because your directory doesn’t have users yet, move to the next step. You can return to this step later to add users to the studio.

Accessing the studio for the first time
To open the studio for the first time, find the Studio URL in the Studio Manager console. You will share this URL with your artists when the studio is ready. To sign in to the studio, you need the user name and password of the admin user you added earlier.

Screenshot of studio name

Launch profiles
When the studio opens, you will see two launch profiles: one for the render farm and the other for a workstation. StudioBuilder created them when you set up the studio. Launch profiles control access to resources in your studio, such as compute farms, shared file systems, instance types, and AMIs. You can use the Nimble Studio console to create as many launch profiles as you need to customize your team’s access to studio resources.

Screenshot of the studio

When you launch a profile, you launch a workstation that includes all the software provided in the installed AMI. It takes a few minutes for the instance to launch the first time. When it is ready, you will see the workstation in your browser. You can sign in to it with the same user name and password you use to sign in to the studio.

Screenshot of launching a new instance

Now your studio is ready! Before you share it with your artists, you might want to configure more launch profiles, add your artist users to the directory, and give those users permissions to access the studio and launch profiles.

Here’s a short video that describes how Nimble Studio works and how it can help you:

Available now
Nimble Studio is now available in the US West (Oregon), US East (N. Virginia), Canada (Central), Europe (London), Asia Pacific (Sydney) Regions and the US West (Los Angeles) Local Zone.

Learn more about Amazon Nimble Studio and get started building your studio in the cloud.

Marcia

Decrease Your Machine Learning Costs with Instance Price Reductions and Savings Plans for Amazon SageMaker

Launched at AWS re:Invent 2017, Amazon SageMaker is a fully-managed service that has already helped tens of thousands of customers quickly build and deploy their machine learning (ML) workflows on AWS.

To help them get the most ML bang for their buck, we’ve added a string of cost-optimization services and capabilities, such as Managed Spot Training, Multi-Model Endpoints, Amazon Elastic Inference, and AWS Inferentia. In fact, customers find that the Total Cost of Ownership (TCO) for SageMaker over a three-year horizon is 54% lower compared to other cloud-based options, such as self-managed Amazon EC2 and AWS-managed Amazon EKS.

Since there’s nothing we like more than making customers happy by saving them money, I’m delighted to announce:

  • A price reduction for CPU and GPU instances in Amazon SageMaker,
  • The availability of Savings Plans for Amazon SageMaker.

Reducing Instance Prices in Amazon SageMaker
Effective today, we are dropping the price of several instance families in Amazon SageMaker by up to 14.2%.

This applies to several CPU and GPU instance families across Amazon SageMaker components. Detailed pricing information is available on the Amazon SageMaker pricing page.

As welcome as price reductions are, many customers have also asked us for a simple and flexible way to optimize SageMaker costs for all instance-related activities, from data preparation to model training to model deployment. In fact, as a lot of customers are already optimizing their compute costs with Savings Plans, they told us that they’d love to do the same for their Amazon SageMaker costs.

Introducing SageMaker Savings Plans
Savings Plans for AWS Compute Services were launched in November 2019 to help customers optimize their compute costs. They offer up to 72% savings over the on-demand price, in exchange for your commitment to use a specific amount of compute power (measured in $ per hour) for a one- or three-year period. In the spirit of self-service, you have full control over setting up your plans, thanks to recommendations based on your past consumption, to usage reports, and to budget coverage and utilization alerts.

SageMaker Savings Plans follow in these footsteps, and they apply to your eligible SageMaker ML instance usage across the workflow.

Savings Plans don’t distinguish between instance families, instance types, or AWS regions. This makes it easy for you to maximize savings regardless of how your use cases and consumption evolve over time, and you can save up to 64% compared to the on-demand price.

For example, you could start with small instances in order to experiment with different algorithms on a fraction of your dataset. Then, you could move on to preparing data and training at scale with larger instances on your full dataset. Finally, you could deploy your models in several AWS regions to serve low-latency predictions to your users. All these activities would be covered by the same Savings Plan, without any management required on your side.

Understanding Savings Plans Recommendations
Savings Plans provides you with recommendations that make it easy to find the right plan. These recommendations are based on:

  • Your SageMaker usage in the last 7, 30 or 60 days. You should select the time period that best represents your future usage.
  • The term of your plan: 1-year or 3-year.
  • Your payment option: no upfront, partial upfront (50% or more), or all upfront. Some customers prefer (or must use) this last option, as it gives them a clear and predictable view of their SageMaker bill.

Instantly, you’ll see what your optimized spend would be, and how much you could start saving per month. Savings Plans also suggest an hourly commitment that maximizes your savings. Of course, you’re completely free to use a different commitment, starting as low as $0.001 per hour!
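If you prefer to explore these recommendations programmatically, the Cost Explorer API exposes them as well. Here’s a rough boto3 sketch, assuming the same options I pick in the console demo below (payer-level scope, a 1-year term, no upfront, and 7 days of past usage); treat the exact response fields as something to verify against the Cost Explorer API reference.

```python
import boto3

ce = boto3.client("ce")  # AWS Cost Explorer

# Ask for a SageMaker Savings Plans purchase recommendation, using the same
# options as in the console demo below.
response = ce.get_savings_plans_purchase_recommendation(
    SavingsPlansType="SAGEMAKER_SP",
    AccountScope="PAYER",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
    LookbackPeriodInDays="SEVEN_DAYS",
)

# The summary includes the suggested hourly commitment and estimated savings.
summary = response["SavingsPlansPurchaseRecommendation"][
    "SavingsPlansPurchaseRecommendationSummary"
]
print(summary)
```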

Once you’ve made up your mind, you can add the plan to your cart, submit it, and start enjoying your savings.

Now, let’s do a quick demo, and see how I could optimize my own SageMaker spend.

Recommending Savings Plans for Amazon SageMaker
Opening the AWS Cost Management Console, I see a Savings Plans menu on the left.

Cost management console

Clicking on Recommendations, I select SageMaker Savings Plans.

Looking at the available options, I select Payer to optimize cost at the Organizations level, a 1-year term, a No upfront payment, and 7 days of past usage (as I’ve just ramped up my SageMaker usage).

SageMaker Savings Plan

Immediately, I see that I could reduce my SageMaker costs by 20%, saving $897.63 every month. This would only require a 1-year commitment of $3.804 per hour.

SageMaker Savings Plan

The monthly charge on my AWS bill would be $2,776 ($3.804 * 24 hours * 365 days / 12 months), plus any additional on-demand costs should my actual usage exceed the commitment. Pretty tempting, especially with no upfront required at all.

Moving to a 3-year plan (still no upfront), I could save $1,790.19 per month, and enjoy 41% savings thanks to a $2.765 per hour commitment.

SageMaker Savings Plan

I could add this plan to the cart as is, and complete my purchase. Every month for 3 years, I would be charged $2,018 ($2.765 * 24 * 365 / 12), plus additional on-demand cost.
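If you want to sanity-check these figures yourself, converting an hourly commitment into an estimated monthly charge is simple arithmetic; here’s a tiny sketch that reproduces the two numbers above.

```python
def monthly_charge(hourly_commitment: float) -> float:
    """Estimated monthly charge, using the same 24 * 365 / 12 convention as above."""
    return hourly_commitment * 24 * 365 / 12

print(f"{monthly_charge(3.804):.2f}")  # 2776.92 -> about $2,776 per month (1-year plan)
print(f"{monthly_charge(2.765):.2f}")  # 2018.45 -> about $2,018 per month (3-year plan)
```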

As mentioned earlier, I can also create my own plan in just a few clicks. Let me show you how.

Creating Savings Plans for Amazon SageMaker
In the left-hand menu, I click on Purchase Savings Plans and I select SageMaker Savings Plans.

SageMaker Savings Plan

I pick a 1-year term without any upfront. As I expect to rationalize my SageMaker usage a bit in the coming months, I go for a commitment of $3 per hour, instead of the $3.804 recommendation. Then, I add the plan to the cart.

SageMaker Savings Plan

Confirming that I’m fine with an optimized monthly payment of $2,190, I submit my order.

SageMaker Savings Plan

The plan is now active, and I’ll see the savings on my next AWS bill. Thanks to utilization reports available in the Savings Plans console, I’ll also see the percentage of my commitment that I’ve actually used. Likewise, coverage reports will show me how much of my eligible spend has been covered by the plan.

Getting Started
Thanks to price reductions for CPU and GPU instances and to SageMaker Savings Plans, you can now further optimize your SageMaker costs in an easy and predictable way. ML on AWS has never been more cost effective.

Price reductions and SageMaker Savings Plans are available today in the following AWS regions:

  • Americas: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), AWS GovCloud (US-West), Canada (Central), South America (São Paulo).
  • Europe, Middle East and Africa: Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Africa (Cape Town), Middle East (Bahrain).
  • Asia Pacific: Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Seoul), Asia Pacific (Mumbai), and Asia Pacific (Hong Kong).

Give them a try, and let us know what you think. As always, we’re looking forward to your feedback. You can send it to your usual AWS Support contacts, or on the AWS Forum for Amazon SageMaker.

– Julien

Modern Apps Live: Learn Serverless, Containers and More in May

Modern Apps Live is a series of events about modern application development that will be live-streaming on Twitch in May. Session topics include serverless, containers, and mobile and front-end development.

If you’re not familiar, modern applications are those that:

  • Can scale quickly to millions of users.
  • Have global availability.
  • Manage a lot of data (we’re talking exabytes of data).
  • Respond in milliseconds.

These applications are built using a combination of microservices architectures, serverless operational models, and agile developer processes. Modern applications allow organizations to innovate faster and reduce risk, time to market, and total cost of ownership.

Modern Apps Live is a series of four virtual events spread across the month of May.

If you’re a developer, solutions architect, or IT and DevOps professional who wants to build and design modern applications, these sessions are for you, whether you’re just starting out or are a more experienced cloud practitioner. There will be time in each session for Q&A, and AWS experts will be in the Twitch chat, ready to answer your questions.

If you cannot attend all four events, here are some sessions you definitely shouldn’t miss:

  • Keynotes are a must! AWS leadership will share product roadmaps and make some important announcements.
  • There are security best practices sessions scheduled on May 4 and May 19, during the Container Day x KubeCon and Serverless Live events. Security should be a top priority when you develop modern applications.
  • If you’re just getting started with serverless, don’t miss the “Building a Serverless Application Backend” session on May 19. Eric Johnson and Justin Pirtle will show you how to pick the right serverless pattern for your workload and share information about security, observability, and simplifying your deployments.
  • On May 25, in the “API modernization with GraphQL” session, Brice Pelle will show how you can use GraphQL in your client applications.
  • Bring any burning questions about containers to the “Open Q&A and Whiteboarding” session during the Container Day x DockerCon event on May 26.

For more information or to register for the events, see the Modern Apps Live webpage.

I hope to see you there.

Marcia

Amazon CodeGuru Reviewer Updates: New Predictable Pricing Model Up To 90% Lower and Python Support Moves to GA

Amazon CodeGuru helps you automate code reviews and improve code quality with recommendations powered by machine learning and automated reasoning. You can use CodeGuru Reviewer to detect potential defects and bugs that are hard to find, and CodeGuru Profiler to fine-tune the performance of your applications based on live data. The service has been generally available since June 2020; you can read more about how to get started with CodeGuru here.

While working with many customers over the last few months, we introduced security detectors, Python support in preview, and memory profiling to help customers improve code quality and save hours of developer time. We also heard clear feedback in areas such as pricing structure and language coverage. We’ve decided to address that feedback and make it even easier to adopt Amazon CodeGuru at scale within your organization.

Today I’m happy to announce two major updates for CodeGuru Reviewer:

  • There’s a brand new, easy to estimate pricing model with lower and fixed monthly rates based on the size of your repository, with up to 90% price reductions
  • Python Support is now generally available (GA) with wider recommendation coverage and four updates related to Python detectors

New Predictable Pricing for CodeGuru Reviewer
CodeGuru Reviewer allows you to run full scans of your repository stored in GitHub, GitHub Enterprise, AWS CodeCommit, or Bitbucket. Also, every time you submit a pull request, CodeGuru Reviewer starts a new code review and suggests recommendations and improvements in the form of comments.

The previous pricing structure was based on the number of lines of code (LoC) analyzed per month, $0.75 per 100 LoC. We’ve heard your feedback: As a developer, you’d like to analyze your code as often as possible, create as many pull requests and branches as needed without thinking about cost, and maximize the chances of catching errors and defects before they hit production.

That’s why with the new pricing you’ll only pay a fixed monthly rate based on the total size of your repositories: $10 per month for the first 100k lines of code across all connected repositories, and $30 per month for each additional 100k lines of code. Please note that only the largest branch in each repository counts, and that empty lines and comments aren’t counted at all.

This price structure not only makes the cost more predictable and transparent, it also helps simplify the way you scale CodeGuru Reviewer across different teams in your organization. You can still perform full repository scans on demand and incremental reviews on each pull request. The monthly rate includes all incremental reviews, and you get up to two full scans per repository per month; additional full scans are charged $10 per 100k lines of code.

Basically you get all the benefits of Amazon CodeGuru and all the new detectors and integrations, but it’s up to 90% cheaper. Also, you can get started at no cost with the Free Tier for the first 90 days, up to 100k LoC. When the Free Tier expires or if you exceed the 100k LoC, you simply pay the standard rates. Check out the updated pricing page here.

Let me share a few practical examples (excluding the Free Tier); a short sketch that reproduces these calculations follows the list:

  1. Medium-size repository of 150k LoC: In this case, the line count is rounded up to 200k lines of code, for a total of $40 per month ($10 + $30). The number of LoC is always rounded up. By the way, you could also connect one or more additional repositories totaling up to 50k lines of code (bringing the total to that 200k) for the same price. Up to 2 full scans are included for each repository.
  2. Three repositories of 300k LoC each (the equivalent of a large repository of 900k LoC): In this case, your monthly rate is $250 ($10 for the first 100k lines of code, plus $240 for the remaining 800k lines of code). Up to 2 full scans are included for each repository.
  3. Small repository of 70k LoC, with 10 full scans every month: In this case, your monthly rate is only $10 for the number of LoC, plus $60 for the eight additional full scans (8 × 70k = 560k LoC, rounded up and charged at $10 per 100k LoC), for a total of $70 per month.
  4.  Small repository with 50 active branches, the largest branch containing 10k LoC, and 300 pull requests every month: Simple, just $10 per month. Up to 2 full scans are included for each repository.
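To make the arithmetic behind these examples explicit, here’s a back-of-the-envelope sketch that reproduces the four figures above. It reflects my reading of the pricing description (LoC rounded up to 100k blocks, two full scans per repository included, extra scan volume charged at $10 per 100k LoC) and is not an official pricing calculator.

```python
import math

BLOCK = 100_000  # pricing is expressed in blocks of 100k lines of code

def monthly_rate(total_loc: int) -> int:
    """Fixed monthly rate: $10 for the first 100k LoC, $30 for each additional 100k."""
    blocks = max(1, math.ceil(total_loc / BLOCK))
    return 10 + 30 * (blocks - 1)

def extra_scan_charge(repo_loc: int, full_scans: int, included: int = 2) -> int:
    """Full scans beyond the included two, charged at $10 per 100k LoC scanned."""
    extra_scans = max(0, full_scans - included)
    return 10 * math.ceil(extra_scans * repo_loc / BLOCK)

print(monthly_rate(150_000))                                  # example 1: 40
print(monthly_rate(3 * 300_000))                              # example 2: 250
print(monthly_rate(70_000) + extra_scan_charge(70_000, 10))   # example 3: 70
print(monthly_rate(10_000))                                   # example 4: 10
```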

The new pricing will apply starting in April, for new repositories connected on April 6th, 2021 or later, as well as for repositories already connected up to April 5th, 2021. We expect the vast majority of use cases to see a considerable cost reduction. Unless you need to perform full repository scans multiple times a day (quite an edge case), most small repositories up to 100k LoC will be charged only a predictable and affordable fee of $10 per month after the 90-day trial, regardless of the number of branches, contributors, or pull requests. Now you can develop and iterate on your code repositories, knowing that CodeGuru will review your code and find potential issues, without worrying about unpredictable costs.

Give CodeGuru Reviewer a try and connect your first repository in the CodeGuru console.
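If you’d rather connect a repository from code than from the console, here’s a minimal boto3 sketch for an AWS CodeCommit repository; the repository name is a placeholder, and GitHub, GitHub Enterprise, and Bitbucket repositories are more easily connected through the console flow mentioned above.

```python
import boto3

reviewer = boto3.client("codeguru-reviewer", region_name="us-east-1")

# Associate a CodeCommit repository with CodeGuru Reviewer.
# "my-repo" is a placeholder for one of your own repositories.
response = reviewer.associate_repository(
    Repository={"CodeCommit": {"Name": "my-repo"}}
)

association = response["RepositoryAssociation"]
print(association["AssociationArn"], association["State"])
```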

Python Support for CodeGuru Reviewer Now Generally Available
Python Support for CodeGuru Reviewer has been available in preview since December 2020. It allows you to improve your Python applications by suggesting optimal usage of data structures and concurrency. It helps you follow Python best practices for control flow, error handling, and the standard library. Last but not least, it offers recommendations about scientific/math operations and AWS best practices. These suggestions are useful for both beginners and expert developers, and for small and large teams who are passionate about code quality.

With today’s announcement of general availability, you’ll find four main updates related to Python detectors:

  • Increased coverage and precision for existing detectors: Many Python best practices have been integrated into the existing detectors related to standard library, data structures, control flow, and error handling. For example, CodeGuru Reviewer will warn you in case your code is creating a temporary file insecurely, using float numbers instead of decimal for scientific computation that requires maximum precision, or confusing equality with identity. These warnings will help you avoid security vulnerabilities, performance issues, and bugs in general.
  • Improved detector for resource leaks: During the preview, this detector focused only on open file descriptors; it now generates recommendations about a broader set of potential resource leaks, such as connections, sessions, sockets, and multiprocessing thread pools. For example, you may implement a Python function that opens a new socket and then forget to close it. This is quite common and doesn’t immediately turn into a problem, but resource leaks can cause your system to slow down or even crash in the long run. CodeGuru Reviewer will suggest closing the socket or wrapping it in a with statement (a minimal before/after sketch follows this list). As a reminder, this detector is available for Java as well.
  • New code maintainability detector: This new detector helps you identify code issues that usually make code bases harder to read, understand, and maintain, such as high complexity and tight coupling. For example, imagine that you’ve spent a couple of hours implementing a simple prototype and then decide to integrate it with your production code as is. Because you were rushing a bit, this prototype may include a huge Python function with 50+ lines of code spanning input validation, data preparation, some API calls, and a final write to disk. CodeGuru Reviewer will suggest refactoring this code into smaller, reusable, and loosely coupled functions that are easier to test and maintain.
  • New input validation detector: This new detector helps you identify situations where functions or classes may benefit from additional validation of input parameters, especially when these parameters are likely to be user-generated or dynamic. For example, you may have implemented a Python function that takes an environment name as CLI input (e.g. dev, stage, prod), performs an API call, and then returns a resource ARN as output. If you haven’t validated the environment name, CodeGuru Reviewer might suggest implementing some additional validation; in this example, you might add a few lines of code to check that the environment name is not empty and that it’s a valid environment for this project.
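To make the resource-leak recommendation concrete, here’s the minimal before/after sketch mentioned above; the function names are made up for illustration. In the first version, the socket leaks if sendall() raises before close() runs; in the second, the with statement closes it deterministically, which is the kind of fix CodeGuru Reviewer suggests.

```python
import socket

# Before: if sendall() raises, close() never runs and the socket leaks.
def send_ping_leaky(host: str, port: int) -> None:
    conn = socket.create_connection((host, port))
    conn.sendall(b"ping")
    conn.close()

# After: the with statement closes the socket even when an exception occurs.
def send_ping(host: str, port: int) -> None:
    with socket.create_connection((host, port)) as conn:
        conn.sendall(b"ping")
```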

Please note that Python Support for CodeGuru Profiler is still in preview. CodeGuru Profiler allows you to collect runtime performance data and identify how your code runs on the CPU and where time is spent, so you can tune your Python application starting from the most expensive parts, with the goal of reducing cost and improving performance.
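To give you an idea of what enabling the Profiler looks like in a Python application, here’s a short sketch based on the preview agent. Since Python support for the Profiler is still in preview, treat the package name and parameters as assumptions to verify against the CodeGuru Profiler documentation; the profiling group name is a placeholder you’d create beforehand.

```python
# pip install codeguru_profiler_agent   (preview agent at the time of writing)
from codeguru_profiler_agent import Profiler

# "MyPythonApp" is a placeholder profiling group created in the CodeGuru
# Profiler console (or via the API) before starting the agent.
Profiler(profiling_group_name="MyPythonApp").start()

# ... the rest of your application runs as usual; the agent samples the CPU
# in the background and submits profiling data to CodeGuru Profiler.
```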

You can get started with CodeGuru Reviewer and CodeGuru Profiler for Python in the CodeGuru console.

Conclusions
Amazon CodeGuru is available in 10 AWS Regions and it supports Python and Java applications. We’re looking forward to publishing even more detectors and support for more programming languages to help more developers and customers improve their application code quality and performance.

If you’d like to learn more about Amazon CodeGuru, check out the CodeGuru Reviewer documentation and CodeGuru Profiler documentation.

Alex