databricks-lakebase by databricks/databricks-agent-skills
npx skills add https://github.com/databricks/databricks-agent-skills --skill databricks-lakebase
FIRST: Use the parent databricks skill for CLI basics, authentication, and profile selection.
Lakebase is Databricks' serverless Postgres-compatible database (similar to Neon). It provides fully managed OLTP storage with autoscaling, branching, and scale-to-zero.
Manage Lakebase Postgres projects, branches, endpoints, and databases via databricks postgres CLI commands.
Project (top-level container)
└── Branch (isolated database environment, copy-on-write)
├── Endpoint (read-write or read-only)
├── Database (standard Postgres DB)
└── Role (Postgres role)
A new project is auto-provisioned with a production branch and a primary read-write endpoint. Branch states: READY, ARCHIVED. Endpoint types: ENDPOINT_TYPE_READ_WRITE, ENDPOINT_TYPE_READ_ONLY (read replica). Default database: databricks_postgres. Roles: see databricks postgres create-role -h.

Resource name formats:

| Resource | Format |
|---|---|
| Project | projects/{project_id} |
| Branch | projects/{project_id}/branches/{branch_id} |
| Endpoint | projects/{project_id}/branches/{branch_id}/endpoints/{endpoint_id} |
| Database | projects/{project_id}/branches/{branch_id}/databases/{database_id} |
All IDs: 1-63 characters, start with lowercase letter, lowercase letters/numbers/hyphens only (RFC 1123).
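The ID rules above can be expressed as a quick shell check (the function name is illustrative, not part of the CLI):

```shell
# Check a candidate ID: 1-63 chars, starts with a lowercase letter,
# and contains only lowercase letters, digits, and hyphens.
is_valid_lakebase_id() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]{0,62}$'
}

is_valid_lakebase_id "my-project-01" && echo "valid"
is_valid_lakebase_id "1-bad-id" || echo "invalid: must start with a letter"
```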
Note: "Lakebase" is the product name; the CLI command group is postgres. All commands use databricks postgres ....
Do NOT guess command syntax. Discover available commands and their usage dynamically:
# List all postgres subcommands
databricks postgres -h
# Get detailed usage for any subcommand (flags, args, JSON fields)
databricks postgres <subcommand> -h
Run databricks postgres -h before constructing any command. Run databricks postgres <subcommand> -h to discover exact flags, positional arguments, and JSON spec fields for that subcommand.
databricks postgres create-project <PROJECT_ID> \
--json '{"spec": {"display_name": "<DISPLAY_NAME>"}}' \
--profile <PROFILE>
This auto-provisions a production branch and a primary read-write endpoint (1 CU min/max, scale-to-zero). Use --no-wait to return immediately. Run databricks postgres create-project -h for all available spec fields (e.g. pg_version). After creation, verify the auto-provisioned resources:
databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>
databricks postgres list-endpoints projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
Endpoints use compute units (CU) for autoscaling. Configure min/max CU via create-endpoint or update-endpoint. Run databricks postgres create-endpoint -h to see all spec fields.
Scale-to-zero is enabled by default. When idle, compute scales down to zero; it resumes in seconds on next connection.
Branches are copy-on-write snapshots of an existing branch. Use them for experimentation: testing schema migrations, trying queries, or previewing data changes, all without affecting production.
databricks postgres create-branch projects/<PROJECT_ID> <BRANCH_ID> \
--json '{
"spec": {
"source_branch": "projects/<PROJECT_ID>/branches/<SOURCE_BRANCH_ID>",
"no_expiry": true
}
}' --profile <PROFILE>
Branches require an expiration policy: use "no_expiry": true for permanent branches.
When done experimenting, delete the branch. Protected branches must be unprotected first: use update-branch to set spec.is_protected to false, then delete:
# Step 1 — unprotect
databricks postgres update-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
--json '{"spec": {"is_protected": false}}' --profile <PROFILE>
# Step 2 — delete (run -h to confirm positional arg format for your CLI version)
databricks postgres delete-branch projects/<PROJECT_ID>/branches/<BRANCH_ID> \
--profile <PROFILE>
Never delete the production branch; it is the authoritative branch auto-provisioned at project creation.
After creating a Lakebase project, scaffold a Databricks App connected to it.
Step 1 — Discover branch name (use .name from a READY branch):
databricks postgres list-branches projects/<PROJECT_ID> --profile <PROFILE>
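Selecting the .name of a READY branch from the list output can be sketched with jq. The JSON shape below is an assumption for illustration; check your CLI version's actual output:

```shell
# Assumed output shape -- the real list-branches JSON may nest or name
# fields differently; inspect the command's output before relying on this.
branches='{"branches":[
  {"name":"projects/my-project/branches/production","state":"READY"},
  {"name":"projects/my-project/branches/old-experiment","state":"ARCHIVED"}]}'

# Pick the .name of a READY branch:
printf '%s' "$branches" | jq -r '.branches[] | select(.state=="READY") | .name'
# → projects/my-project/branches/production
```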
Step 2 — Discover database name (use .name from the desired database; <BRANCH_ID> is the branch ID, not the full resource name):
databricks postgres list-databases projects/<PROJECT_ID>/branches/<BRANCH_ID> --profile <PROFILE>
Step 3 — Scaffold the app with the lakebase feature:
databricks apps init --name <APP_NAME> \
--features lakebase \
--set "lakebase.postgres.branch=<BRANCH_NAME>" \
--set "lakebase.postgres.database=<DATABASE_NAME>" \
--profile <PROFILE>
Where <BRANCH_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>) and <DATABASE_NAME> is the full resource name (e.g. projects/<PROJECT_ID>/branches/<BRANCH_ID>/databases/<DB_ID>).
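The full resource names are mechanical compositions of the bare IDs; a minimal sketch (helper names are hypothetical, not CLI commands):

```shell
# Hypothetical helpers -- compose full resource names from bare IDs,
# matching the formats in the resource-name table above.
branch_name()   { printf 'projects/%s/branches/%s' "$1" "$2"; }
database_name() { printf 'projects/%s/branches/%s/databases/%s' "$1" "$2" "$3"; }

branch_name my-project production
# → projects/my-project/branches/production
```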
For the full app development workflow, use the databricks-apps skill.
Connect a Postgres client: Get the connection string from the endpoint, then connect with psql, DBeaver, or any standard Postgres client.
databricks postgres get-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID>/endpoints/<ENDPOINT_ID> --profile <PROFILE>
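As a sketch, a connection URI can be assembled from the endpoint details. The host placeholder, port, and sslmode=require below are assumptions, not confirmed by this document; use whatever get-endpoint actually returns:

```shell
# Hypothetical values -- substitute the host and role returned for your
# endpoint. Port 5432 and sslmode=require are assumptions.
make_pg_uri() { printf 'postgresql://%s@%s:5432/%s?sslmode=require' "$1" "$2" "$3"; }

make_pg_uri my-role ep-host.example.com databricks_postgres
# then: psql "$(make_pg_uri my-role ep-host.example.com databricks_postgres)"
```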
Manage roles and permissions: Create Postgres roles and grant access to databases or schemas.
databricks postgres create-role -h # discover role spec fields
Add a read-only endpoint: Create a read replica for analytics or reporting workloads to avoid contention on the primary read-write endpoint.
databricks postgres create-endpoint projects/<PROJECT_ID>/branches/<BRANCH_ID> <ENDPOINT_ID> \
--json '{"spec": {"type": "ENDPOINT_TYPE_READ_ONLY"}}' --profile <PROFILE>
| Error | Solution |
|---|---|
| cannot configure default credentials | Use the --profile flag or authenticate first |
| PERMISSION_DENIED | Check workspace permissions |
| Protected branch cannot be deleted | Use update-branch to set spec.is_protected to false first |
| Long-running operation timeout | Use --no-wait and poll with get-operation |
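The --no-wait plus get-operation pattern can be sketched as a generic poll loop. The state field and the DONE value are assumptions; run databricks postgres get-operation -h for the real shape:

```shell
# Poll a state-printing command until it reports DONE (state value assumed).
poll_until_done() {
  while state=$("$@"); [ "$state" != "DONE" ]; do
    sleep 5
  done
  echo "operation complete"
}

# In practice "$@" would wrap something like:
#   databricks postgres get-operation <OPERATION_NAME> --profile <PROFILE>
# piped through a JSON filter to extract the operation state.
fake_op() { echo DONE; }      # stub for illustration
poll_until_done fake_op       # prints: operation complete
```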