AWS: SAM Introduction

Serverless is one of the most exciting ways to build modern cloud applications — and AWS SAM makes it even easier.

What is AWS SAM?

AWS Serverless Application Model (SAM) is an open-source framework that helps you build and deploy serverless applications on AWS.

It’s designed to simplify your infrastructure-as-code, especially when working with:

  • AWS Lambda
  • API Gateway
  • DynamoDB
  • EventBridge, SQS, Step Functions, and more

At its core, SAM is just a shorthand syntax for AWS CloudFormation — making your templates cleaner, easier to write, and faster to iterate.
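
Concretely, what turns an ordinary CloudFormation template into a SAM template is a single Transform declaration at the top; at deploy time, CloudFormation expands the AWS::Serverless::* shorthand into standard resources (Lambda functions, IAM roles, API Gateway APIs, and so on). A minimal skeleton, with a placeholder description, looks like this:

AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31   # this line is what makes it a SAM template
Description: Minimal SAM template skeleton

Resources:
  # AWS::Serverless::* resources go here (see the function example further below)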

Why Use AWS SAM?

You should consider SAM when:

  • You’re building serverless applications using AWS services like Lambda and API Gateway.
  • You want to define infrastructure as code but find raw CloudFormation too verbose.
  • You want to test Lambda functions locally using Docker.
  • You prefer guided deployment over manually zipping and uploading code.

SAM is ideal for:

  • Quick prototyping of serverless apps
  • Developer teams who want simplicity without giving up AWS-native IaC
  • Learning how serverless works with real AWS infrastructure

SAM vs. CloudFormation: what’s the difference?

SAM is built on top of CloudFormation, so it inherits all the benefits of CloudFormation while providing a simpler syntax for serverless applications. Here are some key differences:

Feature           | AWS CloudFormation            | AWS SAM
Purpose           | Define any AWS infrastructure | Focused on serverless apps
Syntax            | YAML/JSON (verbose)           | Simplified YAML with shorthand
Testing           | ❌ No built-in local testing   | ✅ Local testing with Docker
Deployment CLI    | aws cloudformation deploy     | sam deploy --guided
Abstraction Layer | Base layer                    | Built on top of CloudFormation

In short: SAM is CloudFormation — just way easier for serverless.

You still get all the benefits of CloudFormation (rollback, drift detection, etc.), but with less effort and boilerplate.

SAM Main Components

  1. template.yaml

Your SAM template is the blueprint of your application — it declares all the AWS resources your app needs.

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.12
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get

  2. samconfig.toml: Configuration for Reusability & Environments

When you run sam deploy --guided, SAM generates a samconfig.toml file. This file stores deployment settings like your S3 bucket, stack name, region, and parameter overrides — so you don’t need to type them every time.

But beyond that, you can define multiple environments using named configurations. Example:

version = 0.1
[staging.deploy.parameters]
stack_name = "my-sam-app-staging"
region = "ap-southeast-1"
s3_bucket = "my-sam-artifacts-staging"
capabilities = "CAPABILITY_IAM"
parameter_overrides = "Environment=staging"

[prod.deploy.parameters]
stack_name = "my-sam-app-prod"
region = "ap-southeast-1"
s3_bucket = "my-sam-artifacts-prod"
capabilities = "CAPABILITY_IAM"
parameter_overrides = "Environment=prod"

Now you can deploy using:

sam deploy --config-env staging
sam deploy --config-env prod

This allows:

  • Cleaner separation between dev/staging/prod
  • Safer deployment practices
  • Per-env overrides for Lambda environment variables, tags, etc. (see the sketch below)
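
For instance, the Environment value passed through parameter_overrides can be declared as a template parameter and wired into a function's environment variables. A minimal sketch, assuming a parameter named Environment and an illustrative APP_ENV variable:

Parameters:
  Environment:
    Type: String
    Default: staging

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      # CodeUri, Handler, Runtime as in the template above
      Environment:
        Variables:
          APP_ENV: !Ref Environment
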
  3. SAM CLI

A command-line tool that simplifies development and deployment (a typical workflow is sketched after this list):

  • sam init – scaffold a new project
  • sam build – package your code
  • sam deploy – push it to AWS
  • sam local invoke – test individual functions locally
  • sam local start-api – emulate full API locally
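
Putting these together, a typical local development loop might look like the sketch below (the function name and event file follow the Hello World example above; adjust them for your own project):

sam build                                                       # package code and resolve the template
sam local invoke HelloWorldFunction --event events/event.json   # run one function against a sample event
sam local start-api                                             # serve the API locally on http://127.0.0.1:3000
curl http://127.0.0.1:3000/hello                                # hit the emulated /hello endpoint
sam deploy --guided                                             # push to AWS (and write samconfig.toml)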

If you’re starting your journey into serverless with AWS, SAM is one of the best tools to learn and use. It removes the friction of writing raw CloudFormation, supports local development, and lets you ship your ideas quickly.

It’s not just beginner-friendly — it’s also powerful enough to be used in production systems, especially when paired with other AWS services like DynamoDB, Step Functions, and EventBridge.


AWS: Lambda Basics

AWS Lambda is one of the most exciting services in the serverless world. It lets you write code that automatically responds to events — without needing to worry about provisioning servers or managing infrastructure.

In this post, I will cover the basics:

  • What is Lambda?
  • What are the core components?
  • Why use it?
  • A real use case: processing SQS messages in TypeScript
  • Common limitations

What is AWS Lambda?

AWS Lambda is a serverless compute service that lets you run code in response to events without provisioning or managing servers. You can use Lambda to run code for virtually any type of application or backend service with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability.

You don’t manage servers. You just focus on the code, and Lambda takes care of:

  • Running it
  • Scaling it automatically
  • Charging you only when it runs

Key Components of AWS Lambda

  1. Handlers: The entry point for your Lambda function. It’s the method that AWS Lambda calls to start execution.
  2. Events: Lambda functions are triggered by events, which can come from various AWS services like S3, DynamoDB, API Gateway, or even custom events.
  3. Context: Provides runtime information to your Lambda function, such as the function name, version, and remaining execution time.
  4. IAM Roles: AWS Identity and Access Management (IAM) roles define the permissions for your Lambda function, allowing it to access other AWS services securely.
  5. Environment Variables: Key-value pairs that you can use to pass configuration settings to your Lambda function at runtime.
  6. Timeouts and Memory: You can configure the maximum execution time and memory allocated to your Lambda function, which affects performance and cost.
  7. CloudWatch Logs: Automatically logs the output of your Lambda function, which you can use for debugging and monitoring. (A short TypeScript sketch after this list ties several of these pieces together.)
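
A minimal sketch in TypeScript, assuming an API Gateway trigger and a hypothetical GREETING environment variable, that shows how the handler, event, context, environment variables, and CloudWatch logging fit together:

import { APIGatewayProxyEvent, APIGatewayProxyResult, Context } from "aws-lambda";

// Handler: the entry point Lambda calls on every invocation
export const handler = async (
  event: APIGatewayProxyEvent, // Event: here, an incoming API Gateway request
  context: Context // Context: runtime info such as function name and remaining time
): Promise<APIGatewayProxyResult> => {
  // Environment variables: configuration injected at deploy time (GREETING is illustrative)
  const greeting = process.env.GREETING ?? "Hello";

  // Anything logged here ends up in CloudWatch Logs
  console.log(`${context.functionName}: ${context.getRemainingTimeInMillis()} ms remaining`);

  return {
    statusCode: 200,
    body: JSON.stringify({ message: `${greeting} from Lambda` }),
  };
};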

Why Use AWS Lambda?

  • Cost-Effective: You only pay for the compute time you consume. There are no charges when your code is not running.
  • Scalability: Automatically scales your application by running code in response to each event, so you don’t have to worry about scaling your infrastructure.
  • Flexibility: Supports multiple programming languages (Node.js, Python, Java, Go, C#, Ruby, and custom runtimes) and can be used for a wide range of applications, from simple scripts to complex microservices.
  • Event-Driven: Easily integrates with other AWS services, allowing you to build event-driven architectures that respond to changes in your data or system state.
  • Zero Administration: No need to manage servers or runtime environments. AWS handles all the infrastructure management tasks, including patching, scaling, and availability.

Real Use Case: TypeScript Lambda to Process SQS → DynamoDB

In my current role, we use AWS Lambda to process messages from SQS queues. Here’s a simple example of how you can set up a Lambda function in TypeScript to process messages from an SQS queue and store them in DynamoDB.

Let’s say we receive messages in SQS that contain user data, and we want to store this data in DynamoDB.

import { SQSHandler, SQSEvent, Context } from "aws-lambda";
import { DynamoDB } from "aws-sdk";

// Create the DocumentClient once, outside the handler, so it is reused across warm invocations
const dynamoDb = new DynamoDB.DocumentClient();

export const handler: SQSHandler = async (
  event: SQSEvent,
  context: Context
) => {
  for (const record of event.Records) {
    const userData = JSON.parse(record.body);
    const params = {
      TableName: "Users",
      Item: userData,
    };
    await dynamoDb.put(params).promise();
  }
};

In this example:

  • We import necessary types from aws-lambda and the DynamoDB client from aws-sdk.
  • The handler function processes each message in the SQS event.
  • We parse the message body and store it in a DynamoDB table named Users.

This function will be uploaded to AWS Lambda, and you can configure it to trigger whenever new messages arrive in the SQS queue.
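
One note on the SDK choice: the example above uses the v2 aws-sdk, but newer Node.js Lambda runtimes (Node.js 18 and later) bundle AWS SDK for JavaScript v3 instead. A roughly equivalent handler using @aws-sdk/lib-dynamodb might look like this (same hypothetical Users table):

import { SQSHandler, SQSEvent } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

// Create the clients once, outside the handler, so warm invocations reuse them
const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: SQSHandler = async (event: SQSEvent) => {
  for (const record of event.Records) {
    const userData = JSON.parse(record.body);
    await docClient.send(new PutCommand({ TableName: "Users", Item: userData }));
  }
};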

Common Limitations of AWS Lambda

  • Execution Time: Lambda functions have a maximum execution time of 15 minutes. If your task takes longer, you may need to break it into smaller functions or use other services.
  • Cold Starts: When a Lambda function is invoked after being idle, it may take longer to start due to the initialization time (cold start). This can affect performance, especially for latency-sensitive applications.
  • Limited Resources: Each Lambda function can be allocated at most 10,240 MB of memory, and deployment packages are limited to 50 MB zipped for direct uploads and 250 MB unzipped (including layers). This can be a constraint for resource-intensive applications.
  • Limited Runtime Environment: While Lambda supports multiple programming languages, you may encounter limitations with certain libraries or dependencies that require a specific runtime environment.
  • State Management: Lambda functions are stateless, meaning they do not retain any state between invocations. If you need to maintain state, you will have to use external storage solutions like DynamoDB or S3.
  • Concurrency Limits: There are limits on the number of concurrent executions for Lambda functions (1,000 per Region by default, which can be raised). If your application experiences a sudden spike in traffic, you may hit these limits, leading to throttling of requests.
  • Vendor Lock-In: Using AWS Lambda ties you to the AWS ecosystem, which can make it challenging to migrate to other cloud providers or on-premises in the future.

Wrap-Up

AWS Lambda is a powerful tool for building serverless applications that can scale automatically and respond to events without the need for server management. By understanding its core components and limitations, you can effectively leverage Lambda to build efficient, cost-effective applications that meet your business needs.

Whether you’re processing SQS messages, building APIs with API Gateway, or integrating with other AWS services, Lambda provides a flexible and scalable solution that can adapt to your application’s requirements.
