npx skills add https://github.com/itsmostafa/aws-agent-skills --skill s3
Amazon Simple Storage Service (S3) provides scalable object storage with industry-leading durability (99.999999999%). S3 is fundamental to AWS—used for data lakes, backups, static websites, and as storage for many other AWS services.
Buckets: containers for objects. Bucket names must be globally unique across all AWS accounts.
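Because bucket names share a global namespace and follow strict rules, it helps to validate a name before a create call fails. A minimal sketch of the general-purpose naming rules (3-63 characters; lowercase letters, digits, dots, and hyphens; must start and end with a letter or digit; no adjacent periods; must not look like an IP address). The function name is illustrative:

```python
import re

# Name shape: starts/ends alphanumeric, 3-63 chars total,
# only lowercase letters, digits, dots, and hyphens in between.
_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
# Names formatted like an IPv4 address are rejected by S3.
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    """Check the basic S3 bucket naming rules (illustrative helper)."""
    return (
        bool(_NAME_RE.match(name))
        and not _IP_RE.match(name)
        and ".." not in name
    )
```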
Objects: files stored in S3, consisting of data, metadata, and a unique key (path). Maximum object size: 5 TB.
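The 5 TB ceiling interacts with multipart upload limits: an upload may have at most 10,000 parts, each at least 5 MiB (except the last), so larger objects force larger part sizes. A quick sketch of the arithmetic:

```python
import math

S3_MAX_PARTS = 10_000            # hard limit on parts per multipart upload
S3_MIN_PART_SIZE = 5 * 1024**2   # 5 MiB minimum part size (except the last part)

def min_part_size(object_size: int) -> int:
    """Smallest part size that fits object_size into at most 10,000 parts."""
    return max(S3_MIN_PART_SIZE, math.ceil(object_size / S3_MAX_PARTS))

# A 5 TB object requires parts of at least 500 MB:
print(min_part_size(5 * 10**12))  # 500000000
```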
| Class | Use Case | Durability | Availability |
|---|---|---|---|
| Standard | Frequently accessed | 99.999999999% | 99.99% |
| Intelligent-Tiering | Unknown access patterns | 99.999999999% | 99.9% |
| Standard-IA | Infrequent access | 99.999999999% | 99.9% |
| Glacier Instant | Archive with instant retrieval | 99.999999999% | 99.9% |
| Glacier Flexible | Archive (minutes to hours) | 99.999999999% | 99.99% |
| Glacier Deep Archive | Long-term archive | 99.999999999% | 99.99% |
Versioning: keeps multiple versions of an object. Essential for data protection and recovery.
AWS CLI:
# Create bucket (us-east-1 doesn't need LocationConstraint)
aws s3api create-bucket \
--bucket my-secure-bucket-12345 \
--region us-west-2 \
--create-bucket-configuration LocationConstraint=us-west-2
# Enable versioning
aws s3api put-bucket-versioning \
--bucket my-secure-bucket-12345 \
--versioning-configuration Status=Enabled
# Block public access
aws s3api put-public-access-block \
--bucket my-secure-bucket-12345 \
--public-access-block-configuration \
BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
# Enable encryption
aws s3api put-bucket-encryption \
--bucket my-secure-bucket-12345 \
--server-side-encryption-configuration '{
"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
}'
boto3:
import boto3
s3 = boto3.client('s3', region_name='us-west-2')
# Create bucket
s3.create_bucket(
Bucket='my-secure-bucket-12345',
CreateBucketConfiguration={'LocationConstraint': 'us-west-2'}
)
# Enable versioning
s3.put_bucket_versioning(
Bucket='my-secure-bucket-12345',
VersioningConfiguration={'Status': 'Enabled'}
)
# Block public access
s3.put_public_access_block(
Bucket='my-secure-bucket-12345',
PublicAccessBlockConfiguration={
'BlockPublicAcls': True,
'IgnorePublicAcls': True,
'BlockPublicPolicy': True,
'RestrictPublicBuckets': True
}
)
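With versioning enabled, deleting an object only adds a delete marker; removing the latest marker restores the object. A sketch of locating that marker in a `list_object_versions`-shaped response (the helper name and sample calls are illustrative):

```python
def find_delete_marker(response: dict, key: str):
    """Return the version ID of the latest delete marker for `key`,
    given a response shaped like boto3's list_object_versions output."""
    for marker in response.get("DeleteMarkers", []):
        if marker["Key"] == key and marker.get("IsLatest"):
            return marker["VersionId"]
    return None

# Restore by deleting the marker (sketch, not run here):
# resp = s3.list_object_versions(Bucket="my-bucket", Prefix="path/to/file.txt")
# version_id = find_delete_marker(resp, "path/to/file.txt")
# s3.delete_object(Bucket="my-bucket", Key="path/to/file.txt", VersionId=version_id)
```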
# Upload a single file
aws s3 cp myfile.txt s3://my-bucket/path/myfile.txt
# Upload with metadata
aws s3 cp myfile.txt s3://my-bucket/path/myfile.txt \
--metadata "environment=production,version=1.0"
# Download a file
aws s3 cp s3://my-bucket/path/myfile.txt ./myfile.txt
# Sync a directory
aws s3 sync ./local-folder s3://my-bucket/prefix/ --delete
# Copy between buckets
aws s3 cp s3://source-bucket/file.txt s3://dest-bucket/file.txt
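`aws s3 sync` uploads only files that differ and, with `--delete`, removes remote objects that have no local counterpart. The real tool compares size and modification time; the hash-based toy model below is just to illustrate the plan it computes:

```python
def plan_sync(local_files: dict, remote_files: dict, delete: bool = False):
    """Toy model of `aws s3 sync`: dicts map relative path -> content hash.
    Returns (paths to upload, remote paths to delete)."""
    upload = [p for p, h in local_files.items() if remote_files.get(p) != h]
    remove = [p for p in remote_files if p not in local_files] if delete else []
    return sorted(upload), sorted(remove)
```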
import boto3
from botocore.config import Config
s3 = boto3.client('s3', config=Config(signature_version='s3v4'))
# Generate presigned URL for download (GET)
url = s3.generate_presigned_url(
'get_object',
Params={'Bucket': 'my-bucket', 'Key': 'path/to/file.txt'},
ExpiresIn=3600 # URL valid for 1 hour
)
# Generate presigned URL for upload (PUT)
upload_url = s3.generate_presigned_url(
'put_object',
Params={
'Bucket': 'my-bucket',
'Key': 'uploads/newfile.txt',
'ContentType': 'text/plain'
},
ExpiresIn=3600
)
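SigV4 presigned URLs are capped at 7 days (604,800 seconds), and a request made after expiry fails, so it is worth validating `ExpiresIn` up front. A small guard (the function name is illustrative):

```python
MAX_SIGV4_EXPIRY = 7 * 24 * 3600  # SigV4 presigned URLs cap at 604,800 seconds

def clamp_expiry(seconds: int) -> int:
    """Clamp a requested ExpiresIn to the SigV4 maximum."""
    if seconds <= 0:
        raise ValueError("expiry must be positive")
    return min(seconds, MAX_SIGV4_EXPIRY)
```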
cat > lifecycle.json << 'EOF'
{
"Rules": [
{
"ID": "MoveToGlacierAfter90Days",
"Status": "Enabled",
"Filter": {"Prefix": "logs/"},
"Transitions": [
{"Days": 90, "StorageClass": "GLACIER"}
],
"Expiration": {"Days": 365}
},
{
"ID": "DeleteOldVersions",
"Status": "Enabled",
"Filter": {},
"NoncurrentVersionExpiration": {"NoncurrentDays": 30}
}
]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration file://lifecycle.json
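The same rules can be applied from boto3, which takes the configuration as a dict in the same shape as `lifecycle.json`. A sketch (the builder function is illustrative):

```python
def build_lifecycle_config(prefix: str, glacier_after: int, expire_after: int,
                           noncurrent_days: int) -> dict:
    """Build lifecycle rules in the shape boto3's
    put_bucket_lifecycle_configuration expects."""
    return {
        "Rules": [
            {
                "ID": f"MoveToGlacierAfter{glacier_after}Days",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [{"Days": glacier_after, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": expire_after},
            },
            {
                "ID": "DeleteOldVersions",
                "Status": "Enabled",
                "Filter": {},
                "NoncurrentVersionExpiration": {"NoncurrentDays": noncurrent_days},
            },
        ]
    }

# Sketch of applying it (not run here):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",
#     LifecycleConfiguration=build_lifecycle_config("logs/", 90, 365, 30))
```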
aws s3api put-bucket-notification-configuration \
--bucket my-bucket \
--notification-configuration '{
"LambdaFunctionConfigurations": [
{
"LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:ProcessS3Upload",
"Events": ["s3:ObjectCreated:*"],
"Filter": {
"Key": {
"FilterRules": [
{"Name": "prefix", "Value": "uploads/"},
{"Name": "suffix", "Value": ".jpg"}
]
}
}
}
]
}'
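On the Lambda side, the notification event nests bucket and key under `Records[].s3`, and keys arrive URL-encoded (spaces become `+`), so decode them before calling S3. A minimal handler sketch:

```python
from urllib.parse import unquote_plus

def parse_s3_event(event: dict) -> list[tuple[str, str]]:
    """Extract (bucket, key) pairs from an S3 notification event,
    decoding the URL-encoded object keys."""
    pairs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        pairs.append((s3["bucket"]["name"], unquote_plus(s3["object"]["key"])))
    return pairs

def handler(event, context):
    for bucket, key in parse_s3_event(event):
        print(f"New object: s3://{bucket}/{key}")
```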
| Command | Description |
|---|---|
| `aws s3 ls` | List buckets or objects |
| `aws s3 cp` | Copy files |
| `aws s3 mv` | Move files |
| `aws s3 rm` | Delete files |
| `aws s3 sync` | Sync directories |
| `aws s3 mb` | Make bucket |
| `aws s3 rb` | Remove bucket |
| Command | Description |
|---|---|
| `aws s3api create-bucket` | Create bucket with options |
| `aws s3api put-object` | Upload with full control |
| `aws s3api get-object` | Download with options |
| `aws s3api delete-object` | Delete single object |
| `aws s3api put-bucket-policy` | Set bucket policy |
| `aws s3api put-bucket-versioning` | Enable versioning |
| `aws s3api list-object-versions` | List all versions |
Useful flags:
- `--recursive`: Process all objects under a prefix
- `--exclude` / `--include`: Filter objects
- `--dryrun`: Preview changes without making them
- `--storage-class`: Set the storage class
- `--acl`: Set access control (prefer bucket policies instead)

Cost optimization:
- Use lifecycle policies to transition objects to cheaper storage classes
- Enable Intelligent-Tiering for unpredictable access patterns
- Delete incomplete multipart uploads:

{
  "Rules": [{
    "ID": "AbortIncompleteMultipartUpload",
    "Status": "Enabled",
    "Filter": {},
    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}
  }]
}
Use S3 Storage Lens to analyze storage patterns
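The `--exclude`/`--include` flags mentioned above are order-sensitive: every pattern is checked against each path and the last matching pattern wins, with all files included by default (so `--exclude "*" --include "*.jpg"` keeps only `.jpg` files). A toy model using fnmatch-style globbing, whose syntax is similar but not identical to the AWS CLI's:

```python
from fnmatch import fnmatch

def is_included(path: str, filters: list[tuple[str, str]]) -> bool:
    """Toy model of aws s3 --exclude/--include semantics:
    filters apply in order, the last matching pattern wins,
    and files are included by default."""
    included = True
    for kind, pattern in filters:
        if fnmatch(path, pattern):
            included = (kind == "include")
    return included
```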
Causes: typically missing IAM permissions, a restrictive bucket policy, Block Public Access settings, or an object ownership mismatch.
Debug steps:
# Check your identity
aws sts get-caller-identity
# Check bucket policy
aws s3api get-bucket-policy --bucket my-bucket
# Check public access block
aws s3api get-public-access-block --bucket my-bucket
# Check object ownership
aws s3api get-object-attributes \
--bucket my-bucket \
--key myfile.txt \
--object-attributes ObjectOwner
Symptom: The browser blocks cross-origin requests.
Fix:
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration '{
"CORSRules": [{
"AllowedOrigins": ["https://myapp.com"],
"AllowedMethods": ["GET", "PUT", "POST"],
"AllowedHeaders": ["*"],
"ExposeHeaders": ["ETag"],
"MaxAgeSeconds": 3600
}]
}'
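S3 evaluates the request's `Origin` header against `AllowedOrigins`, where each entry may contain at most one `*` wildcard. A sketch of that matching logic (the function name is illustrative):

```python
def origin_allowed(origin: str, allowed: list[str]) -> bool:
    """Toy model of S3 CORS origin matching: exact match, or a single
    '*' wildcard anywhere in the AllowedOrigin entry."""
    for pattern in allowed:
        if pattern == origin:
            return True
        if "*" in pattern:
            head, _, tail = pattern.partition("*")
            if (origin.startswith(head) and origin.endswith(tail)
                    and len(origin) >= len(head) + len(tail)):
                return True
    return False
```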
Solutions:
- Use `aws s3 cp` with `--expected-size` for large files

Fix: Ensure the signing identity has the required permissions and that the client is configured for the bucket's correct region.
Weekly Installs: 107
Repository: https://github.com/itsmostafa/aws-agent-skills
GitHub Stars: 1.1K
First Seen: Jan 22, 2026
Security Audits: Gen Agent Trust Hub (Pass), Socket (Pass), Snyk (Pass)
Installed on: opencode (94), codex (93), gemini-cli (91), cursor (88), github-copilot (83), amp (72)