aws-cloud-services by manutej/luxor-claude-marketplace
npx skills add https://github.com/manutej/luxor-claude-marketplace --skill aws-cloud-services
A comprehensive skill for building, deploying, and managing cloud infrastructure on Amazon Web Services (AWS). Master S3 object storage, Lambda serverless functions, DynamoDB NoSQL databases, EC2 compute instances, RDS relational databases, IAM security, CloudFormation infrastructure as code, and enterprise-grade cloud architecture patterns.
Use this skill when:
AWS is Amazon's comprehensive cloud computing platform offering 200+ services across compute, storage, databases, networking, security, and more.
Regions and Availability Zones
AWS Account Structure
Service Categories
The AWS SDK v3 is modular, tree-shakable, and optimized for modern JavaScript/TypeScript applications.
Modular Architecture
// v2 (monolithic)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// v3 (modular)
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
const client = new S3Client({ region: 'us-east-1' });
Command Pattern
Middleware Stack
IAM controls authentication and authorization across all AWS services.
Users
Groups
Roles
Policies
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-bucket/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": "203.0.113.0/24"
}
}
}
]
}
Policy Elements
Always grant minimum permissions necessary:
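As an illustration of least privilege, a small helper (hypothetical, not part of any AWS SDK) can build a policy document scoped to exactly the actions and resource a workload needs:

```javascript
// Hypothetical helper: build a least-privilege IAM policy document.
// Only the actions explicitly listed are allowed, and only on `resource`.
const buildLeastPrivilegePolicy = (actions, resource, condition) => ({
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: actions,
      Resource: resource,
      ...(condition ? { Condition: condition } : {})
    }
  ]
});

// Example: read/write access to one bucket, restricted to one CIDR range
const policy = buildLeastPrivilegePolicy(
  ['s3:GetObject', 's3:PutObject'],
  'arn:aws:s3:::my-bucket/*',
  { IpAddress: { 'aws:SourceIp': '203.0.113.0/24' } }
);
console.log(JSON.stringify(policy, null, 2));
```

The output matches the policy document shown above; omitting the third argument simply drops the `Condition` block.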
The SDK searches for credentials in this order:
import { S3Client } from '@aws-sdk/client-s3';
// Specify region explicitly
const client = new S3Client({
region: 'us-west-2',
endpoint: 'https://s3.us-west-2.amazonaws.com' // Optional custom endpoint
});
// Use default region from environment/config
const defaultClient = new S3Client({}); // Uses AWS_REGION or default region
S3 is AWS's object storage service for storing and retrieving any amount of data from anywhere.
S3 Standard
S3 Intelligent-Tiering
S3 Standard-IA (Infrequent Access)
S3 One Zone-IA
S3 Glacier
S3 Glacier Deep Archive
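The storage classes above are typically combined with lifecycle rules that transition objects to cheaper tiers as they age. A sketch of the input shape for `PutBucketLifecycleConfigurationCommand` — the bucket name, prefix, and day thresholds here are illustrative:

```javascript
// Illustrative lifecycle configuration: move objects under `logs/` to
// Standard-IA after 30 days, to Glacier after 90, and expire after 365.
const lifecycleConfig = {
  Bucket: 'my-bucket', // hypothetical bucket name
  LifecycleConfiguration: {
    Rules: [
      {
        ID: 'archive-logs',
        Status: 'Enabled',
        Filter: { Prefix: 'logs/' },
        Transitions: [
          { Days: 30, StorageClass: 'STANDARD_IA' },
          { Days: 90, StorageClass: 'GLACIER' }
        ],
        Expiration: { Days: 365 }
      }
    ]
  }
};
console.log(lifecycleConfig.LifecycleConfiguration.Rules[0].ID);
```

It would be applied with `client.send(new PutBucketLifecycleConfigurationCommand(lifecycleConfig))`.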
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';
const client = new S3Client({ region: 'us-east-1' });
// Simple upload
const uploadFile = async (bucketName, key, filePath) => {
const fileContent = readFileSync(filePath);
const command = new PutObjectCommand({
Bucket: bucketName,
Key: key,
Body: fileContent,
ContentType: 'image/jpeg', // Optional
Metadata: { // Optional custom metadata
'uploaded-by': 'user-123',
'upload-date': new Date().toISOString()
},
ServerSideEncryption: 'AES256', // Enable encryption
ACL: 'private' // Access control
});
const response = await client.send(command);
return response;
};
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { writeFileSync } from 'fs';
const downloadFile = async (bucketName, key, destinationPath) => {
const command = new GetObjectCommand({
Bucket: bucketName,
Key: key
});
const response = await client.send(command);
// Convert stream to buffer
const chunks = [];
for await (const chunk of response.Body) {
chunks.push(chunk);
}
const buffer = Buffer.concat(chunks);
writeFileSync(destinationPath, buffer);
return response.Metadata;
};
import { ListObjectsV2Command } from '@aws-sdk/client-s3';
const listObjects = async (bucketName, prefix = '') => {
const command = new ListObjectsV2Command({
Bucket: bucketName,
Prefix: prefix, // Filter by prefix
MaxKeys: 1000, // Max 1000 per request
Delimiter: '/' // Treat / as folder separator
});
const response = await client.send(command);
return response.Contents; // Array of objects
};
// Pagination for large buckets
const listAllObjects = async (bucketName) => {
let allObjects = [];
let continuationToken;
do {
const command = new ListObjectsV2Command({
Bucket: bucketName,
ContinuationToken: continuationToken
});
const response = await client.send(command);
allObjects = allObjects.concat(response.Contents || []);
continuationToken = response.NextContinuationToken;
} while (continuationToken);
return allObjects;
};
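The continuation-token loop above generalizes to any paginated AWS API. A generic version of that loop (an assumption for illustration, not an SDK export — v3 also ships per-client `paginate*` helpers) might look like this, shown here against a stubbed two-page fetcher:

```javascript
// Generic pagination: `fetchPage(token)` must return
// { items: [...], nextToken: string | undefined }.
const collectAllPages = async (fetchPage) => {
  const all = [];
  let token;
  do {
    const { items, nextToken } = await fetchPage(token);
    all.push(...(items ?? []));
    token = nextToken;
  } while (token);
  return all;
};

// Stubbed two-page API standing in for ListObjectsV2
const fetchPage = async (token) =>
  token === undefined
    ? { items: [1, 2], nextToken: 'p2' }
    : { items: [3] };

const result = await collectAllPages(fetchPage);
console.log(result); // [1, 2, 3]
```

With a real client, `fetchPage` would wrap `client.send(new ListObjectsV2Command({ Bucket, ContinuationToken: token }))` and map `NextContinuationToken` to `nextToken`.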
import { DeleteObjectCommand, DeleteObjectsCommand } from '@aws-sdk/client-s3';
// Delete single object
const deleteObject = async (bucketName, key) => {
const command = new DeleteObjectCommand({
Bucket: bucketName,
Key: key
});
await client.send(command);
};
// Delete multiple objects (up to 1000 at once)
const deleteMultipleObjects = async (bucketName, keys) => {
const command = new DeleteObjectsCommand({
Bucket: bucketName,
Delete: {
Objects: keys.map(key => ({ Key: key })),
Quiet: false // Return list of deleted objects
}
});
const response = await client.send(command);
return response.Deleted;
};
Generate temporary URLs for secure file uploads/downloads without AWS credentials.
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3';
// Presigned URL for upload
const createUploadUrl = async (bucketName, key, expiresIn = 3600) => {
const command = new PutObjectCommand({
Bucket: bucketName,
Key: key,
ContentType: 'image/jpeg'
});
const url = await getSignedUrl(client, command, { expiresIn });
return url; // Client can PUT to this URL
};
// Presigned URL for download
const createDownloadUrl = async (bucketName, key, expiresIn = 3600) => {
const command = new GetObjectCommand({
Bucket: bucketName,
Key: key
});
const url = await getSignedUrl(client, command, { expiresIn });
return url; // Client can GET from this URL
};
For large files (> 100MB), use multipart upload for better performance and reliability.
import {
CreateMultipartUploadCommand,
UploadPartCommand,
CompleteMultipartUploadCommand,
AbortMultipartUploadCommand
} from '@aws-sdk/client-s3';
const multipartUpload = async (bucketName, key, fileBuffer, partSize = 5 * 1024 * 1024) => {
// 1. Initiate multipart upload
const createCommand = new CreateMultipartUploadCommand({
Bucket: bucketName,
Key: key
});
const { UploadId } = await client.send(createCommand);
try {
// 2. Upload parts
const parts = [];
const numParts = Math.ceil(fileBuffer.length / partSize);
for (let i = 0; i < numParts; i++) {
const start = i * partSize;
const end = Math.min(start + partSize, fileBuffer.length);
const partBody = fileBuffer.slice(start, end);
const uploadCommand = new UploadPartCommand({
Bucket: bucketName,
Key: key,
UploadId,
PartNumber: i + 1,
Body: partBody
});
const { ETag } = await client.send(uploadCommand);
parts.push({ PartNumber: i + 1, ETag });
}
// 3. Complete multipart upload
const completeCommand = new CompleteMultipartUploadCommand({
Bucket: bucketName,
Key: key,
UploadId,
MultipartUpload: { Parts: parts }
});
const result = await client.send(completeCommand);
return result;
} catch (error) {
// Abort on error to avoid storage charges for incomplete uploads
const abortCommand = new AbortMultipartUploadCommand({
Bucket: bucketName,
Key: key,
UploadId
});
await client.send(abortCommand);
throw error;
}
};
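The part-splitting arithmetic in step 2 can be factored into a pure helper, which also makes the S3 constraint easy to enforce: every part except the last must be at least 5 MiB.

```javascript
const MIN_PART_SIZE = 5 * 1024 * 1024; // S3 minimum for all parts but the last

// Compute [start, end) byte ranges and part numbers for a multipart upload.
const computePartRanges = (fileSize, partSize = MIN_PART_SIZE) => {
  if (partSize < MIN_PART_SIZE) {
    throw new Error(`partSize must be at least ${MIN_PART_SIZE} bytes`);
  }
  const ranges = [];
  for (let start = 0; start < fileSize; start += partSize) {
    ranges.push({
      PartNumber: ranges.length + 1,
      start,
      end: Math.min(start + partSize, fileSize)
    });
  }
  return ranges;
};

// A 12 MiB file with 5 MiB parts splits into parts of 5, 5, and 2 MiB
const ranges = computePartRanges(12 * 1024 * 1024);
console.log(ranges.length); // 3
```

Each range maps directly to one `UploadPartCommand` body via `fileBuffer.subarray(start, end)`.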
AWS Lambda is a serverless compute service that runs code in response to events without provisioning servers.
// Lambda handler signature
export const handler = async (event, context) => {
// event: Input data (API request, S3 event, etc.)
// context: Runtime information (request ID, remaining time, etc.)
console.log('Event:', JSON.stringify(event, null, 2));
console.log('Request ID:', context.awsRequestId);
console.log('Remaining time:', context.getRemainingTimeInMillis());
// Process event
const result = await processEvent(event);
// Return response
return {
statusCode: 200,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(result)
};
};
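Because a handler is just an async function, it can be exercised locally without deploying. A minimal self-contained example — the event shape and fake context here are made up for the local test:

```javascript
// A trivial handler that echoes part of its input and the request ID.
export const handler = async (event, context) => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      received: event.name,
      requestId: context.awsRequestId
    })
  };
};

// Invoke locally with a hand-built event and a stub context
const response = await handler(
  { name: 'test-event' },
  { awsRequestId: 'local-123', getRemainingTimeInMillis: () => 30000 }
);
console.log(response.statusCode); // 200
```

The same pattern drives unit tests: build an event, call the handler, assert on the returned object.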
Synchronous (RequestResponse)
Asynchronous (Event)
Poll-based (Stream)
// Using AWS SDK to create/update Lambda function
import {
LambdaClient,
CreateFunctionCommand,
UpdateFunctionCodeCommand,
UpdateFunctionConfigurationCommand
} from '@aws-sdk/client-lambda';
const lambdaClient = new LambdaClient({ region: 'us-east-1' });
const createFunction = async () => {
const command = new CreateFunctionCommand({
FunctionName: 'myFunction',
Runtime: 'nodejs20.x',
Role: 'arn:aws:iam::123456789012:role/lambda-execution-role',
Handler: 'index.handler',
Code: {
ZipFile: zipBuffer // Or S3Bucket/S3Key for S3-stored code
},
Environment: {
Variables: {
'BUCKET_NAME': 'my-bucket',
'TABLE_NAME': 'my-table'
}
},
MemorySize: 512, // MB
Timeout: 30, // seconds
Tags: {
'Environment': 'production',
'Team': 'backend'
}
});
const response = await lambdaClient.send(command);
return response.FunctionArn;
};
// Lambda function for API Gateway
export const handler = async (event) => {
// Parse request
const { httpMethod, path, queryStringParameters, body } = event;
const requestBody = body ? JSON.parse(body) : null;
// Route based on HTTP method and path
if (httpMethod === 'GET' && path === '/users') {
const users = await getUsers();
return {
statusCode: 200,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(users)
};
}
if (httpMethod === 'POST' && path === '/users') {
const newUser = await createUser(requestBody);
return {
statusCode: 201,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(newUser)
};
}
// Not found
return {
statusCode: 404,
body: JSON.stringify({ message: 'Not found' })
};
};
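As the number of endpoints grows, the if-chain above is often replaced by a lookup table keyed on method and path. A sketch of that refactor, with inline stub handlers standing in for `getUsers`/`createUser`:

```javascript
// Route table: "METHOD path" -> async handler returning { statusCode, body }
const routes = {
  'GET /users': async () => ({
    statusCode: 200,
    body: JSON.stringify([{ id: 1 }])
  }),
  'POST /users': async (body) => ({
    statusCode: 201,
    body: JSON.stringify(body)
  })
};

export const handler = async (event) => {
  const route = routes[`${event.httpMethod} ${event.path}`];
  if (!route) {
    return { statusCode: 404, body: JSON.stringify({ message: 'Not found' }) };
  }
  const requestBody = event.body ? JSON.parse(event.body) : null;
  return route(requestBody);
};

// Local check with hand-built events
const ok = await handler({ httpMethod: 'GET', path: '/users' });
const missing = await handler({ httpMethod: 'GET', path: '/nope' });
console.log(ok.statusCode, missing.statusCode); // 200 404
```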
// Lambda function triggered by S3 events
export const handler = async (event) => {
// Process each S3 event record
for (const record of event.Records) {
const bucket = record.s3.bucket.name;
const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
const eventName = record.eventName;
console.log(`Event: ${eventName}, Bucket: ${bucket}, Key: ${key}`);
if (eventName.startsWith('ObjectCreated:')) {
await processNewFile(bucket, key);
} else if (eventName.startsWith('ObjectRemoved:')) {
await handleFileDeleted(bucket, key);
}
}
return { statusCode: 200 };
};
// Lambda function for DynamoDB Streams
import { unmarshall } from '@aws-sdk/util-dynamodb'; // v3 replacement for AWS.DynamoDB.Converter
export const handler = async (event) => {
for (const record of event.Records) {
const { eventName, dynamodb } = record;
// INSERT, MODIFY, REMOVE
console.log(`Event: ${eventName}`);
if (eventName === 'INSERT') {
const newItem = unmarshall(dynamodb.NewImage);
await handleNewItem(newItem);
}
if (eventName === 'MODIFY') {
const oldItem = unmarshall(dynamodb.OldImage);
const newItem = unmarshall(dynamodb.NewImage);
await handleItemUpdate(oldItem, newItem);
}
if (eventName === 'REMOVE') {
const oldItem = unmarshall(dynamodb.OldImage);
await handleItemDeleted(oldItem);
}
}
};
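`unmarshall` from `@aws-sdk/util-dynamodb` converts DynamoDB's typed attribute format into plain objects. A simplified re-implementation covering the common type tags, just to show what stream record images contain:

```javascript
// Simplified unmarshall for the S, N, BOOL, NULL, L, and M type tags.
// The real @aws-sdk/util-dynamodb version also handles sets and binary.
const simpleUnmarshall = (image) => {
  const convert = (av) => {
    if ('S' in av) return av.S;
    if ('N' in av) return Number(av.N);
    if ('BOOL' in av) return av.BOOL;
    if ('NULL' in av) return null;
    if ('L' in av) return av.L.map(convert);
    if ('M' in av) return simpleUnmarshall(av.M);
    throw new Error(`Unsupported attribute value: ${JSON.stringify(av)}`);
  };
  return Object.fromEntries(
    Object.entries(image).map(([k, v]) => [k, convert(v)])
  );
};

// A NewImage as it appears in a DynamoDB Streams record
const item = simpleUnmarshall({
  userId: { S: 'user-123' },
  total: { N: '99.99' },
  shipped: { BOOL: true },
  tags: { L: [{ S: 'priority' }] }
});
console.log(item); // { userId: 'user-123', total: 99.99, shipped: true, tags: ['priority'] }
```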
Cold Start Optimization
Error Handling
export const handler = async (event, context) => {
try {
// Process event
const result = await processEvent(event);
return { statusCode: 200, body: JSON.stringify(result) };
} catch (error) {
console.error('Error processing event:', error);
// Log to CloudWatch
console.error('Error details:', {
message: error.message,
stack: error.stack,
event
});
// Return error response
return {
statusCode: 500,
body: JSON.stringify({
error: 'Internal server error',
requestId: context.awsRequestId
})
};
}
};
Environment Variables
// Access environment variables
const BUCKET_NAME = process.env.BUCKET_NAME;
const TABLE_NAME = process.env.TABLE_NAME;
const API_KEY = process.env.API_KEY; // Use Secrets Manager for sensitive data
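A missing environment variable usually surfaces later as a confusing `undefined` deep inside a request. A tiny guard — a common pattern, not an AWS API — fails fast at cold start instead:

```javascript
// Read a required environment variable, failing loudly if it is unset.
const requireEnv = (name) => {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
};

// Resolve configuration once, at module load (outside the handler)
process.env.BUCKET_NAME = 'my-bucket'; // set here only for the demo
const BUCKET_NAME = requireEnv('BUCKET_NAME');
console.log(BUCKET_NAME); // my-bucket
```

Running the check at module scope means a misconfigured function fails on the first cold start rather than on an arbitrary later request.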
DynamoDB is a fully managed NoSQL database service for single-digit millisecond performance at any scale.
Table: Collection of items (like a table in SQL)
Item: Individual record (like a row), max 400KB
Attribute: Key-value pair (like a column)
Primary Key: Uniquely identifies each item
Partition Key (Simple Primary Key)
User Table:
- userId (Partition Key) -> "user-123"
- name -> "John Doe"
- email -> "john@example.com"
Partition Key + Sort Key (Composite Primary Key)
Order Table:
- userId (Partition Key) -> "user-123"
- orderId (Sort Key) -> "order-456"
- total -> 99.99
- status -> "shipped"
Global Secondary Index (GSI)
Local Secondary Index (LSI)
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';
const client = new DynamoDBClient({ region: 'us-east-1' });
const docClient = DynamoDBDocumentClient.from(client);
const putItem = async (tableName, item) => {
const command = new PutCommand({
TableName: tableName,
Item: item,
ConditionExpression: 'attribute_not_exists(userId)', // Prevent overwrite
ReturnValues: 'ALL_OLD' // Return previous item if existed
});
try {
const response = await docClient.send(command);
return response;
} catch (error) {
if (error.name === 'ConditionalCheckFailedException') {
console.log('Item already exists');
}
throw error;
}
};
// Example usage
await putItem('Users', {
userId: 'user-123',
name: 'John Doe',
email: 'john@example.com',
createdAt: new Date().toISOString(),
preferences: {
theme: 'dark',
notifications: true
}
});
import { GetCommand } from '@aws-sdk/lib-dynamodb';
const getItem = async (tableName, key) => {
const command = new GetCommand({
TableName: tableName,
Key: key,
ConsistentRead: true, // Strong consistency (default: false)
ProjectionExpression: 'userId, #n, email', // Return specific attributes
ExpressionAttributeNames: {
'#n': 'name' // name is reserved word, use placeholder
}
});
const response = await docClient.send(command);
return response.Item;
};
// Example usage
const user = await getItem('Users', { userId: 'user-123' });
import { UpdateCommand } from '@aws-sdk/lib-dynamodb';
const updateItem = async (tableName, key, updates) => {
const command = new UpdateCommand({
TableName: tableName,
Key: key,
UpdateExpression: 'SET #n = :name, email = :email, updatedAt = :now',
ExpressionAttributeNames: {
'#n': 'name'
},
ExpressionAttributeValues: {
':name': updates.name,
':email': updates.email,
':now': new Date().toISOString()
},
ConditionExpression: 'attribute_exists(userId)', // Only update if exists
ReturnValues: 'ALL_NEW' // Return updated item
});
const response = await docClient.send(command);
return response.Attributes;
};
// Atomic counter increment
const incrementCounter = async (tableName, key, counterAttribute) => {
const command = new UpdateCommand({
TableName: tableName,
Key: key,
UpdateExpression: 'ADD #counter :inc',
ExpressionAttributeNames: {
'#counter': counterAttribute
},
ExpressionAttributeValues: {
':inc': 1
},
ReturnValues: 'UPDATED_NEW'
});
const response = await docClient.send(command);
return response.Attributes[counterAttribute];
};
Query items with same partition key (efficient).
import { QueryCommand } from '@aws-sdk/lib-dynamodb';
const queryItems = async (tableName, partitionKeyValue) => {
const command = new QueryCommand({
TableName: tableName,
KeyConditionExpression: 'userId = :userId AND orderId BETWEEN :start AND :end',
FilterExpression: 'orderStatus = :status', // Applied after the key query; filtered items still consume read capacity
ExpressionAttributeValues: { // All placeholders in one object (a duplicate key would silently overwrite the first)
':userId': partitionKeyValue,
':start': 'order-100',
':end': 'order-200',
':status': 'completed'
},
Limit: 100, // Max items evaluated (before the filter is applied)
ScanIndexForward: false // Sort descending (default: ascending)
});
const response = await docClient.send(command);
return response.Items;
};
// Pagination
const queryAllItems = async (tableName, partitionKeyValue) => {
let allItems = [];
let lastEvaluatedKey;
do {
const command = new QueryCommand({
TableName: tableName,
KeyConditionExpression: 'userId = :userId',
ExpressionAttributeValues: {
':userId': partitionKeyValue
},
ExclusiveStartKey: lastEvaluatedKey
});
const response = await docClient.send(command);
allItems = allItems.concat(response.Items);
lastEvaluatedKey = response.LastEvaluatedKey;
} while (lastEvaluatedKey);
return allItems;
};
Scan entire table (inefficient, avoid in production).
import { ScanCommand } from '@aws-sdk/lib-dynamodb';
const scanTable = async (tableName) => {
const command = new ScanCommand({
TableName: tableName,
FilterExpression: 'age > :minAge', // Applied after items are read; does not reduce consumed capacity
ExpressionAttributeValues: {
':minAge': 18
},
Limit: 1000 // Max items evaluated per request
});
const response = await docClient.send(command);
return response.Items;
};
// Parallel scan for performance
const parallelScan = async (tableName, totalSegments = 4) => {
const scanSegment = async (segment) => {
const command = new ScanCommand({
TableName: tableName,
Segment: segment,
TotalSegments: totalSegments
});
const response = await docClient.send(command);
return response.Items;
};
// Scan all segments in parallel
const promises = [];
for (let i = 0; i < totalSegments; i++) {
promises.push(scanSegment(i));
}
const results = await Promise.all(promises);
return results.flat();
};
import { DeleteCommand } from '@aws-sdk/lib-dynamodb';
const deleteItem = async (tableName, key) => {
const command = new DeleteCommand({
TableName: tableName,
Key: key,
ConditionExpression: 'attribute_exists(userId)', // Only delete if exists
ReturnValues: 'ALL_OLD' // Return deleted item
});
const response = await docClient.send(command);
return response.Attributes;
};
import { BatchGetCommand, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';
// Batch get (up to 100 items)
const batchGetItems = async (tableName, keys) => {
const command = new BatchGetCommand({
RequestItems: {
[tableName]: {
Keys: keys // Array of key objects
}
}
});
const response = await docClient.send(command);
return response.Responses[tableName];
};
// Batch write (up to 25 items)
const batchWriteItems = async (tableName, items) => {
const command = new BatchWriteCommand({
RequestItems: {
[tableName]: items.map(item => ({
PutRequest: { Item: item }
}))
}
});
await docClient.send(command);
};
// Batch delete
const batchDeleteItems = async (tableName, keys) => {
const command = new BatchWriteCommand({
RequestItems: {
[tableName]: keys.map(key => ({
DeleteRequest: { Key: key }
}))
}
});
await docClient.send(command);
};
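BatchWriteCommand accepts at most 25 put/delete requests per call (and BatchGet caps at 100 keys), so larger arrays must be split into chunks before being handed to the batch helpers above, and any `UnprocessedItems` in the response should be retried. A minimal chunking helper, independent of the SDK:

```javascript
// Split an array into chunks of at most `size` items (25 is the BatchWrite limit)
const chunk = (items, size = 25) => {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
};

// Each chunk can then be passed to batchWriteItems as defined above:
// for (const part of chunk(allItems)) { await batchWriteItems(tableName, part); }
```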
Use one table with overloaded keys for complex data models.
// User entity
{
PK: "USER#user-123",
SK: "METADATA",
type: "user",
name: "John Doe",
email: "john@example.com"
}
// User's order
{
PK: "USER#user-123",
SK: "ORDER#order-456",
type: "order",
total: 99.99,
status: "shipped"
}
// Access patterns:
// 1. Get user: PK = "USER#user-123", SK = "METADATA"
// 2. Get all user's orders: PK = "USER#user-123", SK begins_with "ORDER#"
// 3. Get specific order: PK = "USER#user-123", SK = "ORDER#order-456"
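The three access patterns above map directly to key-builder helpers and Query parameters; the `begins_with` condition on the sort key retrieves all of a user's orders. As a sketch, these helpers build the plain objects you would pass to GetCommand and QueryCommand:

```javascript
// Key builders for the single-table layout shown above
const userKey = (userId) => ({ PK: `USER#${userId}`, SK: 'METADATA' });
const orderKey = (userId, orderId) => ({ PK: `USER#${userId}`, SK: `ORDER#${orderId}` });

// Pattern 2: all of a user's orders via begins_with on the sort key
const userOrdersParams = (tableName, userId) => ({
  TableName: tableName,
  KeyConditionExpression: 'PK = :pk AND begins_with(SK, :prefix)',
  ExpressionAttributeValues: { ':pk': `USER#${userId}`, ':prefix': 'ORDER#' }
});
```

Usage: `docClient.send(new GetCommand({ TableName: table, Key: userKey('user-123') }))` for pattern 1, and `new QueryCommand(userOrdersParams(table, 'user-123'))` for pattern 2.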
EC2 provides resizable compute capacity in the cloud with virtual machines (instances).
General Purpose (T3, M6i)
Compute Optimized (C6i)
Memory Optimized (R6i, X2idn)
Storage Optimized (I4i, D3)
Accelerated Computing (P4, G5)
AMIs (Amazon Machine Images) are pre-configured templates for instances, containing the operating system, application server, and applications.
import {
EC2Client,
RunInstancesCommand,
DescribeInstancesCommand,
StartInstancesCommand,
StopInstancesCommand,
TerminateInstancesCommand
} from '@aws-sdk/client-ec2';
const ec2Client = new EC2Client({ region: 'us-east-1' });
// Launch instance
const launchInstance = async () => {
const command = new RunInstancesCommand({
ImageId: 'ami-0c55b159cbfafe1f0', // Amazon Linux 2 AMI
InstanceType: 't3.micro',
MinCount: 1,
MaxCount: 1,
KeyName: 'my-key-pair',
SecurityGroupIds: ['sg-0123456789abcdef0'],
SubnetId: 'subnet-0123456789abcdef0',
IamInstanceProfile: {
Name: 'ec2-instance-profile'
},
UserData: Buffer.from(`#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "Hello from EC2" > /var/www/html/index.html
`).toString('base64'),
TagSpecifications: [{
ResourceType: 'instance',
Tags: [
{ Key: 'Name', Value: 'WebServer' },
{ Key: 'Environment', Value: 'production' }
]
}]
});
const response = await ec2Client.send(command);
return response.Instances[0].InstanceId;
};
// Describe instances
const describeInstances = async (instanceIds) => {
const command = new DescribeInstancesCommand({
InstanceIds: instanceIds,
Filters: [
{ Name: 'instance-state-name', Values: ['running'] }
]
});
const response = await ec2Client.send(command);
return response.Reservations.flatMap(r => r.Instances);
};
// Stop instance
const stopInstance = async (instanceId) => {
const command = new StopInstancesCommand({
InstanceIds: [instanceId]
});
await ec2Client.send(command);
};
// Terminate instance
const terminateInstance = async (instanceId) => {
const command = new TerminateInstancesCommand({
InstanceIds: [instanceId]
});
await ec2Client.send(command);
};
RDS provides managed relational databases (PostgreSQL, MySQL, MariaDB, Oracle, SQL Server, Aurora).
import {
RDSClient,
CreateDBInstanceCommand,
DescribeDBInstancesCommand,
ModifyDBInstanceCommand,
DeleteDBInstanceCommand
} from '@aws-sdk/client-rds';
const rdsClient = new RDSClient({ region: 'us-east-1' });
// Create database instance
const createDatabase = async () => {
const command = new CreateDBInstanceCommand({
DBInstanceIdentifier: 'mydb',
DBInstanceClass: 'db.t3.micro',
Engine: 'postgres',
EngineVersion: '15.3',
MasterUsername: 'admin',
MasterUserPassword: 'SecurePassword123!', // For production, store credentials in Secrets Manager rather than in code
AllocatedStorage: 20, // GB
StorageType: 'gp3',
BackupRetentionPeriod: 7, // days
MultiAZ: true, // High availability
PubliclyAccessible: false,
VpcSecurityGroupIds: ['sg-0123456789abcdef0'],
DBSubnetGroupName: 'my-db-subnet-group',
StorageEncrypted: true,
Tags: [
{ Key: 'Environment', Value: 'production' },
{ Key: 'Application', Value: 'api' }
]
});
const response = await rdsClient.send(command);
return response.DBInstance;
};
// Describe database
const describeDatabase = async (dbInstanceId) => {
const command = new DescribeDBInstancesCommand({
DBInstanceIdentifier: dbInstanceId
});
const response = await rdsClient.send(command);
return response.DBInstances[0];
};
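The DBInstance object returned by describeDatabase carries the connection endpoint under `Endpoint.Address` and `Endpoint.Port`. A small helper to assemble a PostgreSQL connection string from it (illustrative only: the default database name is an assumption, and real credentials should come from Secrets Manager, not code):

```javascript
// Build a PostgreSQL connection string from a DescribeDBInstances result
const connectionString = (dbInstance, user, password, database = 'postgres') => {
  const { Address, Port } = dbInstance.Endpoint;
  return `postgresql://${user}:${password}@${Address}:${Port}/${database}`;
};

// Usage: connectionString(await describeDatabase('mydb'), user, password)
```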
Infrastructure as Code (IaC) service for defining and provisioning AWS resources.
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Full-stack web application infrastructure'
Parameters:
Environment:
Type: String
Default: production
AllowedValues:
- development
- staging
- production
Resources:
# S3 Bucket for static assets
AssetsBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub '${AWS::StackName}-assets-${Environment}'
VersioningConfiguration:
Status: Enabled
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
# DynamoDB Table
UsersTable:
Type: AWS::DynamoDB::Table
Properties:
TableName: !Sub '${AWS::StackName}-users-${Environment}'
BillingMode: PAY_PER_REQUEST
AttributeDefinitions:
- AttributeName: userId
AttributeType: S
- AttributeName: email
AttributeType: S
KeySchema:
- AttributeName: userId
KeyType: HASH
GlobalSecondaryIndexes:
- IndexName: EmailIndex
KeySchema:
- AttributeName: email
KeyType: HASH
Projection:
ProjectionType: ALL
StreamSpecification:
StreamViewType: NEW_AND_OLD_IMAGES
# Lambda Execution Role
LambdaExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service: lambda.amazonaws.com
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Policies:
- PolicyName: DynamoDBAccess
PolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Action:
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:UpdateItem
- dynamodb:Query
Resource: !GetAtt UsersTable.Arn
# Lambda Function
ApiFunction:
Type: AWS::Lambda::Function
Properties:
FunctionName: !Sub '${AWS::StackName}-api-${Environment}'
Runtime: nodejs20.x
Handler: index.handler
Role: !GetAtt LambdaExecutionRole.Arn
Code:
ZipFile: |
exports.handler = async (event) => {
return {
statusCode: 200,
body: JSON.stringify({ message: 'Hello from Lambda!' })
};
};
Environment:
Variables:
TABLE_NAME: !Ref UsersTable
BUCKET_NAME: !Ref AssetsBucket
ENVIRONMENT: !Ref Environment
Timeout: 30
MemorySize: 512
# API Gateway
RestApi:
Type: AWS::ApiGateway::RestApi
Properties:
Name: !Sub '${AWS::StackName}-api-${Environment}'
Description: REST API for application
ApiResource:
Type: AWS::ApiGateway::Resource
Properties:
RestApiId: !Ref RestApi
ParentId: !GetAtt RestApi.RootResourceId
PathPart: users
ApiMethod:
Type: AWS::ApiGateway::Method
Properties:
RestApiId: !Ref RestApi
ResourceId: !Ref ApiResource
HttpMethod: GET
AuthorizationType: NONE
Integration:
Type: AWS_PROXY
IntegrationHttpMethod: POST
Uri: !Sub 'arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${ApiFunction.Arn}/invocations'
ApiDeployment:
Type: AWS::ApiGateway::Deployment
DependsOn: ApiMethod
Properties:
RestApiId: !Ref RestApi
StageName: !Ref Environment
LambdaApiPermission:
Type: AWS::Lambda::Permission
Properties:
FunctionName: !Ref ApiFunction
Action: lambda:InvokeFunction
Principal: apigateway.amazonaws.com
SourceArn: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${RestApi}/*'
Outputs:
ApiUrl:
Description: API Gateway URL
Value: !Sub 'https://${RestApi}.execute-api.${AWS::Region}.amazonaws.com/${Environment}'
Export:
Name: !Sub '${AWS::StackName}-api-url'
BucketName:
Description: S3 Bucket Name
Value: !Ref AssetsBucket
Export:
Name: !Sub '${AWS::StackName}-bucket-name'
TableName:
Description: DynamoDB Table Name
Value: !Ref UsersTable
Export:
Name: !Sub '${AWS::StackName}-table-name'
import {
CloudFormationClient,
CreateStackCommand,
DescribeStacksCommand,
UpdateStackCommand,
DeleteStackCommand
} from '@aws-sdk/client-cloudformation';
import { readFileSync } from 'fs';
const cfClient = new CloudFormationClient({ region: 'us-east-1' });
// Create stack
const createStack = async (stackName, templatePath, parameters = {}) => {
const templateBody = readFileSync(templatePath, 'utf8');
const command = new CreateStackCommand({
StackName: stackName,
TemplateBody: templateBody,
Parameters: Object.entries(parameters).map(([key, value]) => ({
ParameterKey: key,
ParameterValue: value
})),
Capabilities: ['CAPABILITY_IAM'],
Tags: [
{ Key: 'ManagedBy', Value: 'CloudFormation' },
{ Key: 'Application', Value: 'MyApp' }
]
});
const response = await cfClient.send(command);
return response.StackId;
};
// Get stack status
const getStackStatus = async (stackName) => {
const command = new DescribeStacksCommand({
StackName: stackName
});
const response = await cfClient.send(command);
const stack = response.Stacks[0];
return {
status: stack.StackStatus,
outputs: stack.Outputs || []
};
};
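Stack creation is asynchronous, so getStackStatus is typically wrapped in a polling loop until the stack reaches a terminal state. The SDK also ships built-in waiters (e.g. `waitUntilStackCreateComplete` from `@aws-sdk/client-cloudformation`); the sketch below shows the underlying idea with the status-fetching function injected, so it runs without AWS access:

```javascript
// Poll a status-returning function until the stack completes or fails
const waitForStack = async (getStatus, { intervalMs = 5000, maxAttempts = 60 } = {}) => {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await getStatus();
    if (/ROLLBACK|_FAILED$/.test(status)) throw new Error(`Stack failed: ${status}`);
    if (/_COMPLETE$/.test(status)) return status; // e.g. CREATE_COMPLETE
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for stack');
};

// Usage with the helper above:
// await waitForStack(async () => (await getStackStatus('my-stack')).status);
```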
IAM Best Practices
Data Encryption
Network Security
Compute
Storage
Database
Application Design
Monitoring and Optimization
High Availability
Disaster Recovery
Error Handling
Infrastructure as Code
Monitoring and Logging
Automation
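Several of the topics above (error handling, monitoring, automation) come down to retrying throttled API calls gracefully. SDK v3 clients retry automatically and the attempt count is configurable via `maxAttempts` on the client constructor; for retries outside the SDK, a hand-rolled full-jitter backoff delay looks like the sketch below (the base and cap values are illustrative):

```javascript
// Full-jitter exponential backoff delay in milliseconds
const backoffDelay = (attempt, baseMs = 100, capMs = 5000) => {
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * ceiling); // random delay in [0, ceiling)
};

// Usage: await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
```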
Skill Version: 1.0.0
Last Updated: October 2025
Skill Category: Cloud Infrastructure, Serverless, Database, DevOps
Compatible With: AWS SDK v3, CloudFormation, AWS CLI, Terraform
Weekly Installs: 50
GitHub Stars: 47
First Seen: Jan 22, 2026
Security Audits: Gen Agent Trust Hub: Fail; Socket: Pass; Snyk: Fail
Installed on: codex (40), gemini-cli (39), opencode (39), cursor (37), claude-code (37), github-copilot (35)