Category: security

Publishing CBL-Mariner CVEs on the Security Update Guide CVRF API

Microsoft is pleased to announce that beginning January 11, 2023, we will publish CBL-Mariner CVEs in the Security Update Guide (SUG) Common Vulnerability Reporting Framework (CVRF) API. CBL-Mariner is a Linux distribution built by Microsoft to power Azure’s cloud and edge products and services and is currently in preview as an AKS Container Host. Sharing …

Amazon S3 Encrypts New Objects By Default

At AWS, security is the top priority. Starting today, Amazon Simple Storage Service (Amazon S3) encrypts all new objects by default. Now, S3 automatically applies server-side encryption (SSE-S3) for each new object, unless you specify a different encryption option. SSE-S3 was first launched in 2011. As Jeff wrote at the time: “Amazon S3 server-side encryption handles all encryption, decryption, and key management in a totally transparent fashion. When you PUT an object, we generate a unique key, encrypt your data with the key, and then encrypt the key with a [root] key.”
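The quote describes classic envelope encryption: a unique data key per object, itself protected by a root key. As a conceptual illustration only (not AWS's actual implementation), here is a minimal Python sketch of that pattern using the cryptography package; the root key below is a local stand-in for the key material AWS manages.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Local stand-in for the root key that, in S3, AWS manages for you.
root_key = AESGCM.generate_key(bit_length=256)

def encrypt_object(data: bytes):
    object_key = AESGCM.generate_key(bit_length=256)  # unique key per object
    nonce = os.urandom(12)
    ciphertext = AESGCM(object_key).encrypt(nonce, data, None)
    # Wrap (encrypt) the per-object key with the root key.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(root_key).encrypt(wrap_nonce, object_key, None)
    return ciphertext, nonce, wrapped_key, wrap_nonce

ct, nonce, wrapped_key, wrap_nonce = encrypt_object(b"hello")
# Decryption first unwraps the object key, then decrypts the data.
object_key = AESGCM(root_key).decrypt(wrap_nonce, wrapped_key, None)
print(AESGCM(object_key).decrypt(nonce, ct, None))  # b'hello'
```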

This change puts another security best practice into effect automatically—with no impact on performance and no action required on your side. S3 buckets that do not use default encryption will now automatically apply SSE-S3 as the default setting. Existing buckets currently using S3 default encryption will not change.

As always, you can choose to encrypt your objects using one of the three encryption options we provide: S3 default encryption (SSE-S3, the new default), customer-provided encryption keys (SSE-C), or AWS Key Management Service keys (SSE-KMS). To have an additional layer of encryption, you might also encrypt objects on the client side, using client libraries such as the Amazon S3 encryption client.

While it was simple to enable, the opt-in nature of SSE-S3 meant that you had to be certain that it was always configured on new buckets and verify that it remained configured properly over time. For organizations that require all their objects to remain encrypted at rest with SSE-S3, this update helps meet their encryption compliance requirements without any additional tools or client configuration changes.

With today’s announcement, we have now made it “zero click” for you to apply this base level of encryption on every S3 bucket.

Verify Your Objects Are Encrypted
The change is visible today in AWS CloudTrail data event logs. You will see the changes in the S3 section of the AWS Management Console, Amazon S3 Inventory, Amazon S3 Storage Lens, and as an additional header in the AWS CLI and in the AWS SDKs over the next few weeks. We will update this blog post and documentation when the encryption status is available in these tools in all AWS Regions.

To verify that the change is effective on your buckets today, you can configure CloudTrail to log data events. By default, trails do not log data events, and enabling them incurs an extra cost. Data events show the resource operations performed on or within a resource, such as when a user uploads a file to an S3 bucket. You can log data events for Amazon S3 buckets, AWS Lambda functions, Amazon DynamoDB tables, or a combination of those.
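As a sketch of how you might enable this with boto3, the following adds an S3 data event selector to an existing trail; the trail name and bucket ARN are placeholders.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Log write-type data events (such as PutObject) for one bucket.
# "my-trail" and "private-sst" are placeholder names.
cloudtrail.put_event_selectors(
    TrailName="my-trail",
    EventSelectors=[
        {
            "ReadWriteType": "WriteOnly",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::private-sst/"]}
            ],
        }
    ],
)
```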

Once enabled, search for PutObject API calls for file uploads or InitiateMultipartUpload for multipart uploads. When Amazon S3 automatically encrypts an object using the default encryption settings, the log includes the following field as a name-value pair: "SSEApplied":"Default_SSE_S3". Here is an example of a CloudTrail log (with data event logging enabled) from when I uploaded a file to one of my buckets using the AWS CLI command aws s3 cp backup.sh s3://private-sst.

[Screenshot: CloudTrail log for S3 with default encryption enabled]
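If you would rather not wait for CloudTrail log delivery, a quick spot check with head_object shows the encryption that was applied to an object; this minimal sketch reuses the bucket and file from the upload above.

```python
import boto3

s3 = boto3.client("s3")

# Inspect the encryption S3 applied to the object uploaded above.
resp = s3.head_object(Bucket="private-sst", Key="backup.sh")
print(resp.get("ServerSideEncryption"))  # "AES256" indicates SSE-S3
```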

Amazon S3 Encryption Options
As I wrote earlier, SSE-S3 is now the new base level of encryption when no other encryption type is specified. SSE-S3 uses Advanced Encryption Standard (AES) encryption with 256-bit keys managed by AWS.

You can choose to encrypt your objects using SSE-C or SSE-KMS rather than with SSE-S3, either as “one click” default encryption settings on the bucket, or for individual objects in PUT requests.
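As a minimal boto3 sketch of both approaches with SSE-KMS, the bucket name and KMS key alias below are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Bucket-level default: new objects without an explicit header get SSE-KMS.
s3.put_bucket_encryption(
    Bucket="private-sst",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/my-key",  # placeholder alias
                },
                "BucketKeyEnabled": True,
            }
        ]
    },
)

# Per-object choice: specify the encryption type in an individual PUT request.
s3.put_object(
    Bucket="private-sst",
    Key="report.csv",
    Body=b"col1,col2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/my-key",
)
```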

SSE-C lets Amazon S3 perform the encryption and decryption of your objects while you retain control of the keys used to encrypt objects. With SSE-C, you don’t need to implement or use a client-side library to perform the encryption and decryption of objects you store in Amazon S3, but you do need to manage the keys that you send to Amazon S3 to encrypt and decrypt objects.
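Here is a minimal SSE-C sketch with boto3. Because S3 does not store your key, the same key must accompany every subsequent read; the bucket, object key, and payload are placeholders.

```python
import os

import boto3

s3 = boto3.client("s3")

# A 256-bit key that you generate and manage yourself (placeholder here).
customer_key = os.urandom(32)

s3.put_object(
    Bucket="private-sst",
    Key="secret.bin",
    Body=b"sensitive payload",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)

# S3 does not keep your key, so every read must present it again.
obj = s3.get_object(
    Bucket="private-sst",
    Key="secret.bin",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
print(obj["Body"].read())
```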

With SSE-KMS, AWS Key Management Service (AWS KMS) manages your encryption keys. Using AWS KMS to manage your keys provides several additional benefits. With AWS KMS, there are separate permissions for the use of the KMS key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data.

When using an encryption client library, such as the Amazon S3 encryption client, you retain control of the keys and complete the encryption and decryption of objects client-side using an encryption library of your choice. You encrypt the objects before they are sent to Amazon S3 for storage. The Java, .NET, Ruby, PHP, Go, and C++ AWS SDKs support client-side encryption.
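The sketch below illustrates the general idea with the cryptography package rather than the Amazon S3 encryption client itself: encrypt locally, upload only ciphertext, and keep the key on the client. Treat it as a minimal illustration, not a substitute for a vetted client library.

```python
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

s3 = boto3.client("s3")

key = AESGCM.generate_key(bit_length=256)  # never leaves the client
nonce = os.urandom(12)

# Encrypt locally and upload only ciphertext; prepend the nonce for reads.
ciphertext = AESGCM(key).encrypt(nonce, b"client-side secret", None)
s3.put_object(Bucket="private-sst", Key="cse.bin", Body=nonce + ciphertext)

# Download and decrypt on the client.
blob = s3.get_object(Bucket="private-sst", Key="cse.bin")["Body"].read()
print(AESGCM(key).decrypt(blob[:12], blob[12:], None))
```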

You can follow the instructions in this blog post if you want to retroactively encrypt existing objects in your buckets.

Available Now
This change is effective now, in all AWS Regions, including the AWS GovCloud (US) and AWS China Regions. There is no additional cost for default object-level encryption.

— seb

Happy 10th Birthday – AWS Identity and Access Management

Amazon S3 turned 15 earlier this year, and Amazon EC2 will do the same in a couple of months. Today we are celebrating the tenth birthday of AWS Identity and Access Management (IAM).

The First Decade
Let’s take a walk through the last decade and revisit some of the most significant IAM launches:

[Screenshot: an IAM policy in text form, shown in Windows Notepad.]

May 2011 – We launched IAM, with the ability to create users, groups of users, and to attach policy documents to either one, with support for fifteen AWS services. The AWS Policy Generator could be used to build policies from scratch, and there was also a modest collection of predefined policy templates. This launch set the standard for IAM, with fine-grained permissions for actions and resources, and the use of conditions to control when a policy is in effect. This model has scaled along with AWS, and remains central to IAM today.

August 2011 – We introduced the ability for you to use existing identities by federating into your AWS Account, including support for short-term temporary AWS credentials.

June 2012 – With the introduction of IAM Roles for EC2 instances, we made it easier for code running on an EC2 instance to make calls to AWS services.

February 2015 – We launched Managed Policies, and simultaneously turned the existing IAM policies into first-class objects that could be created, named, and used for multiple IAM users, groups, or roles.

[Screenshot: AWS Organizations, with a root account and three accounts inside.]

February 2017 – We launched AWS Organizations, and gave you the ability to implement policy-based management that spanned multiple AWS accounts, grouped into a hierarchy of Organizational Units. This launch also marked the debut of Service Control Policies (SCPs) that gave you the power to place guardrails around the level of access allowed within the accounts of an Organization.

April 2017 – Building on the IAM Roles for EC2 Instances, we introduced service-linked roles. This gave you the power to delegate permissions to AWS services, and made it easier for you to work with AWS services that needed to call other AWS services on your behalf.

December 2017 – We introduced AWS Single Sign-On to make it easier for you to centrally manage access to AWS accounts and your business applications. SSO is built on top of IAM and takes advantage of roles, temporary credentials, and other foundational IAM features.

November 2018 – We introduced Attribute-Based Access Control (ABAC) as a complement to the original Role-Based Access Control, allowing you to use various types of user, resource, and environment attributes to drive policy and permission decisions (a minimal policy sketch follows this timeline). This launch allowed you to tag IAM users and roles, which in turn let you match identity attributes and resource attributes in your policies. After this launch, we followed up with support for the use of ABAC in conjunction with AWS SSO and Cognito.

[Screenshot: IAM Access Analyzer, showing some active findings.]

December 2019 – We introduced IAM Access Analyzer to analyze your policies and determine which resources can be accessed publicly or from other accounts.

March 2021 – We added policy validation (over 100 policy checks) and actionable recommendations to IAM Access Analyzer in order to help you to construct IAM policies and SCPs that take advantage of time-tested AWS best practices.

April 2021 – We made it possible for you to generate least-privilege IAM policy templates based on access activity.
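To make the ABAC entry from November 2018 concrete, here is a hedged boto3 sketch that creates a policy whose condition matches the caller's tag against the resource's tag; the policy name, tag key, and actions are illustrative examples, not recommendations.

```python
import json

import boto3

iam = boto3.client("iam")

# Illustrative ABAC policy: allow the actions only when the caller's
# "project" tag matches the resource's "project" tag.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
                }
            },
        }
    ],
}

iam.create_policy(
    PolicyName="abac-project-match-example",
    PolicyDocument=json.dumps(policy_document),
)
```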

Then and Now
In the early days, a typical customer might use IAM to control access to a handful of S3 buckets, EC2 instances, and SQS queues, all in a single AWS account. These days, some of our customers use IAM to control access to billions of objects that span multiple AWS accounts!

Because every call to an AWS API must call upon IAM to check permissions, the IAM team has focused on availability and scalability from the get-go. Back in 2011 the “can the caller do this?” function handled a couple of thousand requests per second. Today, as new services continue to appear and the AWS customer base continues to climb, this function now handles more than 400 million API calls per second worldwide.

As you can see from my summary, IAM has come quite a long way from its simple yet powerful beginnings just a decade ago. While much of what was true a decade ago remains true today, I would like to call your attention to a few best practices that have evolved over time.

Multiple Accounts – Originally, customers generally used a single AWS account and multiple users. Today, in order to accommodate multiple business units and workloads, we recommend the use of AWS Organizations and multiple accounts. Even if your AWS usage is relatively simple and straightforward at first, your usage is likely to grow in scale and complexity, and it is always good to plan for this up front. To learn more, read Establishing Your Best Practice AWS Environment.

Users & SSO – In a related vein, we recommend that you use AWS SSO to create and manage users centrally, and then grant them access to one or more AWS accounts. To learn more, read the AWS Single Sign-On User Guide.

Happy Birthday, IAM
In line with our well-known penchant for Customer Obsession, your feedback is always welcome! What new IAM features and capabilities would you like to see in the decade to come? Leave us a comment and I will make sure that the team sees it.

And with that, happy 10th birthday, IAM!

Jeff;