npx skills add https://github.com/incept5/eve-skillpacks --skill eve-fullstack-app-design
Architect applications where the manifest is the blueprint, the platform handles infrastructure, and every design decision is intentional.
Load this skill when:
This skill teaches design thinking for Eve's PaaS layer. For CLI usage and operational detail, load the corresponding eve-se skills (eve-manifest-authoring, eve-deploy-debugging, eve-auth-and-secrets, eve-pipelines-workflows).
The manifest (.eve/manifest.yaml) is the single source of truth for your application's shape. Treat it as an architectural document, not just configuration.
| Concern | Manifest Section | Design Decision |
|---|---|---|
| Service topology | services | What processes run, how they connect |
| Infrastructure | services[].x-eve | Managed DB, ingress, roles |
| Build strategy | services[].build + registry | What gets built, where images live |
| Release pipeline | pipelines | How code flows from commit to production |
| Environment shape | environments | Which environments exist, what pipelines they use |
| Agent configuration | x-eve.agents, x-eve.chat | Agent profiles, team dispatch, chat routing |
| Runtime defaults | x-eve.defaults | Harness, workspace, git policies |
Design principle: if an agent or operator can't understand your app's shape by reading the manifest, the manifest is incomplete.
Most Eve apps follow one of these patterns:
API + Database (simplest):
services:
  api: # HTTP service with ingress
  db:  # managed Postgres
API + Worker + Database:
services:
  api:    # HTTP service (user-facing)
  worker: # background processor (jobs, queues)
  db:     # managed Postgres
Multi-Service:
services:
  web:    # frontend/SSR
  api:    # backend API
  worker: # background jobs
  db:     # managed Postgres
  redis:  # external cache (x-eve.external: true)
- Use x-eve.role: managed_db and let the platform provision, connect, and inject credentials. No manual connection strings.
- Use x-eve.external: true with x-eve.connection_url for services hosted outside Eve (Redis, third-party APIs).
- Use x-eve.role: job for one-off tasks. Migrations, seeds, and data backfills are job services, not persistent processes.
- Only user-facing services get x-eve.ingress.public: true. Internal services communicate via cluster networking.
Apps that need to store files (uploads, avatars, exports) can declare object store buckets in the manifest:
services:
api:
x-eve:
object_store:
buckets:
- name: uploads
visibility: private
- name: avatars
visibility: public
Note: the database schema for app object stores exists, but automatic provisioning from the manifest is not yet wired. See references/object-store-filesystem.md for current status.
When wired, the platform injects STORAGE_ENDPOINT, STORAGE_ACCESS_KEY, STORAGE_SECRET_KEY, STORAGE_BUCKET, and STORAGE_FORCE_PATH_STYLE into the service container.
For document-oriented storage, use cloud FS mounts. Each org connects its own Google Drive via BYOA OAuth credentials, then mounts folders into the org filesystem:
eve integrations configure google-drive --client-id "..." --client-secret "..."
eve integrations connect google-drive
eve cloud-fs mount --org org_xxx --provider google-drive --folder-id <id> --label "Shared Drive"
Apps can browse and search mounted Drive content through Eve's Cloud FS surface (eve cloud-fs ls, eve cloud-fs search, and the per-mount Cloud FS API routes). This is complementary to object store buckets: use cloud FS for shared documents and collaboration, and object store for app-managed binary assets.
Every deployed service receives EVE_API_URL, EVE_PUBLIC_API_URL, EVE_PROJECT_ID, EVE_ORG_ID, and EVE_ENV_NAME. Use EVE_API_URL for server-to-server calls and EVE_PUBLIC_API_URL for browser-facing code. Design your app to read these rather than hardcoding URLs.
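Reading these once at startup keeps the rest of the app free of platform-specific lookups. A sketch (variable names are the ones listed above; the localhost fallbacks are illustrative assumptions for local dev):

```typescript
// eve-config.ts — read platform-injected variables once at startup
// (sketch; the fallback values are assumptions for local development)
export interface EveRuntimeConfig {
  apiUrl: string;       // server-to-server calls
  publicApiUrl: string; // browser-facing code
  envName: string;
}

export function loadEveConfig(
  env: Record<string, string | undefined>,
): EveRuntimeConfig {
  return {
    apiUrl: env.EVE_API_URL ?? 'http://localhost:3000',
    publicApiUrl: env.EVE_PUBLIC_API_URL ?? env.EVE_API_URL ?? 'http://localhost:3000',
    envName: env.EVE_ENV_NAME ?? 'local',
  };
}
```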
The most common Eve fullstack pattern: an nginx-fronted SPA proxies API calls to an internal backend, with managed Postgres and eve-migrate for schema management.
services:
web:     # nginx SPA (public ingress, proxies /api/ → api service)
api:     # NestJS/Express backend (internal, no public ingress)
db:      # managed Postgres 16
migrate: # eve-migrate job (runs SQL migrations)
Why an nginx proxy? The web service's nginx reverse-proxies /api/ to the internal API service. This eliminates CORS issues, removes the need for hard-coded API hostnames, and gives the SPA same-origin access to the backend. The API service has no public ingress — it is only reachable inside the cluster.
services:
api:
build:
context: ./apps/api
dockerfile: ./apps/api/Dockerfile
ports: [3000]
environment:
NODE_ENV: production
DATABASE_URL: ${managed.db.url}
CORS_ORIGIN: "https://myapp.eh1.incept5.dev"
# No x-eve.ingress — API is internal only
web:
build:
context: ./apps/web
dockerfile: ./apps/web/Dockerfile
ports: [80]
environment:
API_SERVICE_HOST: ${ENV_NAME}-api # k8s service DNS for nginx proxy
depends_on:
api:
condition: service_healthy
x-eve:
ingress:
public: true
port: 80
alias: myapp # https://myapp.{org}-{project}-{env}.eh1.incept5.dev
migrate:
image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
environment:
DATABASE_URL: ${managed.db.url}
MIGRATIONS_DIR: /migrations
x-eve:
role: job
files:
- source: db/migrations
target: /migrations
db:
x-eve:
role: managed_db
managed:
class: db.p1
engine: postgres
engine_version: "16"
The web service Dockerfile builds the SPA with Vite, then serves it via nginx. The nginx config uses envsubst to resolve ${API_SERVICE_HOST} at container startup:
server {
listen 80;
root /usr/share/nginx/html;
index index.html;
location /api/ {
proxy_pass http://${API_SERVICE_HOST}:3000/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering off;
}
location / {
try_files $uri $uri/ /index.html;
}
location /health {
return 200 "ok";
add_header Content-Type text/plain;
}
}
In the manifest, API_SERVICE_HOST: ${ENV_NAME}-api resolves to the k8s service name (e.g., sandbox-api), giving nginx a stable internal DNS target.
Eve provides a purpose-built migration runner at public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest. It uses plain SQL files with timestamp prefixes, tracked in a schema_migrations table (idempotent, checksummed, transactional).
db/
migrations/
20260312000000_initial_schema.sql
20260312100000_seed_data.sql
20260315000000_add_status_column.sql
Mount migrations into the container via x-eve.files. The migrate step in the pipeline runs after deploy (the managed DB must be provisioned first).
Do not use TypeORM, Knex, or Flyway migrations — they add complexity and diverge from the Eve platform's migration tracking. The eve-migrate runner gives parity between local dev and staging.
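The timestamp prefix convention can be generated rather than typed by hand. A small helper (a local convenience sketch, not part of eve-migrate):

```typescript
// new-migration.ts — produce a filename matching the convention above:
// YYYYMMDDHHMMSS_snake_case_name.sql (UTC timestamp)
export function migrationFilename(name: string, now: Date = new Date()): string {
  const pad = (n: number): string => String(n).padStart(2, '0');
  const stamp = [
    now.getUTCFullYear(),
    pad(now.getUTCMonth() + 1),
    pad(now.getUTCDate()),
    pad(now.getUTCHours()),
    pad(now.getUTCMinutes()),
    pad(now.getUTCSeconds()),
  ].join('');
  // Lowercase the name and collapse non-alphanumeric runs to underscores
  const slug = name.toLowerCase().replace(/[^a-z0-9]+/g, '_').replace(/^_|_$/g, '');
  return `${stamp}_${slug}.sql`;
}
```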
API Dockerfile (NestJS/Node):
FROM node:22-slim AS base
WORKDIR /app
ENV PNPM_HOME="/pnpm" PATH="$PNPM_HOME:$PATH"
RUN corepack enable && corepack prepare pnpm@latest --activate
FROM base AS deps
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile 2>/dev/null || pnpm install
FROM deps AS build
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:22-slim AS production
WORKDIR /app
RUN groupadd --gid 1000 node || true && useradd --uid 1000 --gid node --shell /bin/bash --create-home node || true
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json ./
USER node
ENV NODE_ENV=production PORT=3000
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD node -e "fetch('http://localhost:3000/health').then(r => r.ok ? process.exit(0) : process.exit(1)).catch(() => process.exit(1))"
CMD ["node", "dist/main.js"]
Web Dockerfile (Vite SPA + nginx):
FROM node:22-slim AS build
WORKDIR /app
ENV PNPM_HOME="/pnpm" PATH="$PNPM_HOME:$PATH"
RUN corepack enable && corepack prepare pnpm@latest --activate
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile 2>/dev/null || pnpm install
COPY tsconfig.json vite.config.ts index.html ./
COPY src ./src
RUN pnpm build
FROM nginx:alpine AS production
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/templates/default.conf.template
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost/health || exit 1
CMD ["nginx", "-g", "daemon off;"]
Conventions: node:22-slim base, pnpm via corepack, frozen lockfiles, non-root user (API), health checks on both services.
Declare a managed database in the manifest:
services:
db:
x-eve:
role: managed_db
managed:
class: db.p1
engine: postgres
engine_version: "16"
Reference the connection URL in other services: ${managed.db.url}.
- Create timestamp-prefixed SQL files in db/migrations/ (e.g., 20260312000000_initial.sql) and run them via eve-migrate (see the reference architecture above). Never modify production schemas by hand.
- Design for multi-tenancy from the start: org_id TEXT NOT NULL, RLS policies, and a DatabaseService that sets the session context (see below). Retrofitting row-level security is painful.
- Use eve db schema to examine the current schema, and eve db sql --env <env> for ad-hoc queries during development.
- For agent memory storage patterns, see eve-agent-memory.
The proven pattern for multi-tenant RLS in NestJS uses a raw pg.Pool (not an ORM) with a request-scoped transaction wrapper:
db.ts — Pool configuration with startup health check:
import { Pool } from 'pg';
const databaseUrl = process.env.DATABASE_URL || 'postgresql://app:app@localhost:5432/myapp';
const parsed = new URL(databaseUrl);
const isLocal = ['localhost', '127.0.0.1'].includes(parsed.hostname);
export const pool = new Pool({
connectionString: databaseUrl,
ssl: !isLocal ? { rejectUnauthorized: false } : undefined,
});
database.service.ts — Transaction wrapper with RLS context:
import { Injectable } from '@nestjs/common';
import type { PoolClient, QueryResult, QueryResultRow } from 'pg';
import { pool } from '../db';
export interface DbContext {
org_id: string;
user_id?: string;
}
@Injectable()
export class DatabaseService {
async withClient<T>(context: DbContext | null, fn: (client: PoolClient) => Promise<T>): Promise<T> {
const client = await pool.connect();
try {
await client.query('BEGIN');
if (context?.org_id) {
await client.query("SELECT set_config('app.org_id', $1, true)", [context.org_id]);
}
if (context?.user_id) {
await client.query("SELECT set_config('app.user_id', $1, true)", [context.user_id]);
}
const result = await fn(client);
await client.query('COMMIT');
return result;
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
async query<T extends QueryResultRow>(ctx: DbContext | null, sql: string, params?: unknown[]): Promise<QueryResult<T>> {
return this.withClient(ctx, (client) => client.query<T>(sql, params));
}
async queryOne<T extends QueryResultRow>(ctx: DbContext | null, sql: string, params?: unknown[]): Promise<T | null> {
const result = await this.query<T>(ctx, sql, params);
return result.rows[0] ?? null;
}
}
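A repository built on this service passes the tenant context on every call; RLS then does the filtering, so queries need no explicit org_id predicate. A sketch (TasksRepository and the tasks table are hypothetical; only the query() surface of the DatabaseService above is assumed):

```typescript
// tasks.repository.ts — consuming the DatabaseService's query() surface
// (TasksRepository and the tasks table are illustrative, not from the platform)
interface QueryResultLike<T> { rows: T[] }
interface DbLike {
  query<T>(ctx: { org_id: string } | null, sql: string, params?: unknown[]): Promise<QueryResultLike<T>>;
}

export interface Task { id: string; title: string }

export class TasksRepository {
  constructor(private readonly db: DbLike) {}

  // No WHERE org_id needed: the RLS policy filters rows once
  // app.org_id is set by the transaction wrapper.
  async listForOrg(orgId: string): Promise<Task[]> {
    const res = await this.db.query<Task>(
      { org_id: orgId },
      'SELECT id, title FROM tasks ORDER BY created_at DESC',
    );
    return res.rows;
  }
}
```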
Why this pattern?
- set_config('app.org_id', $1, true) is transaction-scoped — it automatically clears when the connection returns to the pool.
- All database access goes through withClient, guaranteeing the RLS context is set before any query.
- The DbContext object is derived from req.user (set by Eve auth middleware).
RLS policy template (applied per table in migration SQL):
ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;
CREATE POLICY my_table_select ON my_table FOR SELECT
USING (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
CREATE POLICY my_table_insert ON my_table FOR INSERT
WITH CHECK (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
CREATE POLICY my_table_update ON my_table FOR UPDATE
USING (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true))
WITH CHECK (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
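Since every tenant table gets the same three policies, some teams generate the SQL from the table name when authoring migrations. A sketch of such a generator (a local convenience, not an eve-migrate feature):

```typescript
// rls.ts — render the RLS policy template above for a given table
// (convenience sketch for writing migrations; not part of eve-migrate)
export function rlsPoliciesSql(table: string): string {
  const orgMatch =
    `current_setting('app.org_id', true) IS NOT NULL\n` +
    `  AND org_id = current_setting('app.org_id', true)`;
  return [
    `ALTER TABLE ${table} ENABLE ROW LEVEL SECURITY;`,
    `CREATE POLICY ${table}_select ON ${table} FOR SELECT\n  USING (${orgMatch});`,
    `CREATE POLICY ${table}_insert ON ${table} FOR INSERT\n  WITH CHECK (${orgMatch});`,
    `CREATE POLICY ${table}_update ON ${table} FOR UPDATE\n  USING (${orgMatch})\n  WITH CHECK (${orgMatch});`,
  ].join('\n');
}
```

Paste the output into the table's migration file alongside its CREATE TABLE statement.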
Table conventions: every table gets id UUID PRIMARY KEY DEFAULT gen_random_uuid(), org_id TEXT NOT NULL, created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), and updated_at TIMESTAMPTZ (with a trigger) on mutable tables. Enable the pgcrypto extension in the first migration.
| Who Queries | How | Auth |
|---|---|---|
| App service | ${managed.db.url} in service env | Connection string injected at deploy |
| Agent via CLI | eve db sql --env <env> | Job token scopes access |
| Agent via RLS | SQL with app.current_user_id() | Session context set by runtime |
Every production app should follow build → release → deploy → migrate → smoke-test:
pipelines:
deploy:
steps:
- name: build
action:
type: build # creates BuildSpec + BuildRun, produces image digests
- name: release
depends_on: [build]
action:
type: release # creates an immutable release from build artifacts
- name: deploy
depends_on: [release]
action:
type: deploy # deploys the release to the target environment
- name: migrate
depends_on: [deploy]
action:
type: job
service: migrate # runs eve-migrate against the managed DB
- name: smoke-test
depends_on: [migrate]
script:
run: ./scripts/smoke-test.sh
timeout: 300
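The smoke-test step's script can be as small as a health probe against the deployed ingress. A sketch of what scripts/smoke-test.sh might invoke (the baseUrl parameter and the /health path are assumptions about the deployed app; fetch is injected so the check is testable):

```typescript
// smoke-test.ts — minimal end-to-end check for the smoke-test step
// (the baseUrl argument and /health path are assumptions, not platform contracts)
type FetchLike = (url: string) => Promise<{ ok: boolean; status: number }>;

export async function smokeTest(baseUrl: string, fetchFn: FetchLike): Promise<number> {
  // Normalize a trailing slash so the probe URL is stable
  const res = await fetchFn(`${baseUrl.replace(/\/$/, '')}/health`);
  if (!res.ok) {
    throw new Error(`health check failed with status ${res.status}`);
  }
  return res.status;
}
```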
Why this order matters:
- build produces SHA256 image digests. release pins those exact digests. deploy uses the pinned release. You deploy exactly what you built — no tag drift, no "latest" surprises.
- migrate runs after deploy because the managed DB must be provisioned first. The eve-migrate job applies any pending SQL migrations.
- smoke-test validates the deployed services end-to-end before the pipeline reports success.
| Option | When to Use |
|---|---|
| registry: "eve" | Default. Internal registry with JWT auth. Simplest setup. |
| BYO registry (GHCR, ECR) | When you need images accessible outside Eve, or have existing CI. |
| registry: "none" | Public base images only. No custom builds. |
For GHCR, add OCI labels to Dockerfiles for automatic repository linking:
LABEL org.opencontainers.image.source="https://github.com/YOUR_ORG/YOUR_REPO"
Every service with a custom image needs a build section:
services:
api:
build:
context: ./apps/api
dockerfile: Dockerfile
image: ghcr.io/org/my-api
Use multi-stage Dockerfiles. BuildKit handles them natively. Place the OCI label on the final stage.
| Environment | Type | Purpose | Pipeline |
|---|---|---|---|
| staging | persistent | Integration testing, demos | deploy |
| production | persistent | Live traffic | deploy (with promotion) |
| preview-* | temporary | PR previews, feature branches | deploy (auto-cleanup) |
Link each environment to a pipeline in the manifest:
environments:
staging:
pipeline: deploy
production:
pipeline: deploy
Standard deploy: eve env deploy staging --ref main --repo-dir . triggers the linked pipeline.
Direct deploy (bypass pipeline): eve env deploy staging --ref <sha> --direct for emergencies or simple setups.
Promotion: build once in staging, then promote the same release artifacts to production. The build step's digests carry forward, guaranteeing identical images.
When a deploy fails:
- eve env diagnose <project> <env> — shows health, recent deploys, service status.
- eve env logs <project> <env> — container output.
- eve env reset <project> <env> — nuclear option, reprovisions from scratch.
Design your app to be rollback-safe: migrations should be forward-compatible, and services should handle schema version mismatches gracefully during rolling deploys.
Apps that integrate with Google Drive, Slack, or other OAuth providers use per-org credentials (BYOA, Bring Your Own App). Each org registers its own OAuth app, giving it control over branding, scopes, rate limits, and credential rotation.
eve integrations configure google-drive --client-id "..." --client-secret "..."
eve integrations connect google-drive
Design implications: apps that consume Google Drive data or Slack messages should reference integration tokens through the Eve API, not store OAuth credentials themselves. The platform handles token refresh using the org's registered OAuth app credentials.
Workflows can be triggered by platform events, enabling reactive automation:
workflows:
on-deploy:
trigger:
system.event: environment.deployed
steps:
- name: smoke-test
script:
run: ./scripts/smoke-test.sh
on-ingest:
trigger:
system.event: doc.ingest.completed
steps:
- name: process
agent: doc-processor
Event sources include GitHub webhooks, Slack events, system events (deploy, build, ingest), cron schedules, and manual triggers. See eve-pipelines-workflows for trigger syntax and references/events.md for the full event catalog.
Apps can ship agent-friendly CLIs that replace raw REST/curl interactions. Declare the CLI in the manifest:
services:
api:
x-eve:
cli:
name: myapp
bin: cli/bin/myapp
The platform symlinks the bundled binary onto $PATH in agent workspaces. Agents invoke myapp --help to discover capabilities, eliminating URL construction, auth headers, and JSON quoting. See eve-manifest-authoring for declaration details and references/app-cli.md for the full implementation pattern.
Manage the full lifecycle of environments and projects:
# Undeploy services (stops pods, keeps env record and history)
eve env undeploy <project> <env>
# Delete environment entirely (cascades to managed DB, secrets)
eve env delete <project> <env>
# Delete project (cascades to all environments, artifacts, history)
eve project delete <project-id>
Design your app for clean teardown: migrations should be idempotent, managed DB deletion is irreversible, and pipeline history is preserved in audit logs even after environment deletion.
Secrets resolve with cascading precedence: project > user > org > system. A project-level API_KEY overrides an org-level API_KEY.
- Set secrets with eve secrets set KEY "value" --project proj_xxx. Keep project secrets self-contained.
- Reference secrets as ${secret.KEY} in service environment blocks. The platform resolves them at deploy time.
- Run eve manifest validate --validate-secrets to catch missing secret references before they cause deploy failures.
- Use .eve/dev-secrets.yaml for local development, mirroring the production secret keys with local values. This file is gitignored.
- Always use ${secret.KEY} interpolation. This ensures secrets flow through the platform's resolution and audit chain.
Agents need repository access. Set either github_token (HTTPS) or ssh_key (SSH) as project secrets. The worker injects these automatically during git operations.
Eve provides shared auth packages that eliminate boilerplate. Add Eve SSO login in roughly 25 lines of code.
Backend (@eve-horizon/auth):
import { eveUserAuth, eveAuthGuard, eveAuthConfig } from '@eve-horizon/auth';
app.use(eveUserAuth()); // parses the token (non-blocking)
app.get('/auth/config', eveAuthConfig()); // serves SSO discovery
app.get('/auth/me', eveAuthGuard(), (req, res) => {
res.json(req.eveUser); // { id, email, orgId, role }
});
app.use('/api', eveAuthGuard()); // protect all API routes
Frontend (@eve-horizon/auth-react):
import { EveAuthProvider, EveLoginGate } from '@eve-horizon/auth-react';
function App() {
return (
<EveAuthProvider apiUrl="/api">
<EveLoginGate>
<ProtectedApp />
</EveLoginGate>
</EveAuthProvider>
);
}
For authenticated API calls in components, use createEveClient:
import { createEveClient } from '@eve-horizon/auth-react';
const client = createEveClient('/api');
const res = await client.fetch('/data');
Custom auth gate: when you need control over loading and login states (a custom login page, richer loading UI), use useEveAuth() directly instead of EveLoginGate:
import { EveAuthProvider, useEveAuth } from '@eve-horizon/auth-react';
function AuthGate() {
const { user, loading, loginWithToken, loginWithSso, logout } = useEveAuth();
if (loading) return <Spinner />;
if (!user) return <LoginPage onSso={loginWithSso} onToken={loginWithToken} />;
return <AppShell user={user} onLogout={logout}><Routes /></AppShell>;
}
export default function App() {
return (
<EveAuthProvider apiUrl={API_BASE}>
<AuthGate />
</EveAuthProvider>
);
}
On mount, EveAuthProvider checks sessionStorage for a cached token, then falls back to the SSO session (root-domain cookie); authenticated requests carry Authorization: Bearer <token>. Apply eveUserAuth() as global middleware in main.ts. If existing controllers expect req.user rather than req.eveUser, add a thin bridge in one place that maps Eve roles to app-specific roles:
import { eveUserAuth } from '@eve-horizon/auth';
app.use(eveUserAuth());
app.use((req, _res, next) => {
if (req.eveUser) {
req.user = { ...req.eveUser, role: req.eveUser.role === 'member' ? 'viewer' : 'admin' };
}
next();
});
The platform injects EVE_SSO_URL, EVE_API_URL, and EVE_ORG_ID into deployed containers; no manual configuration is needed. Use ${SSO_URL} in the manifest environment block for the frontend-accessible SSO URL.
Apply eveUserAuth() globally, then eveAuthGuard() on protected routes; this enables mixed public/private routing. The /auth/config endpoint is the handshake: the frontend discovers the SSO URL by calling the backend's eveAuthConfig() endpoint, which decouples the frontend from platform environment variables and works identically in local dev and deployed environments. The orgs JWT claim reflects membership at mint time (1-day TTL); if you need immediate revocation, use strategy: 'remote'. For the full SDK reference, see references/auth-sdk.md in the eve-read-eve-docs skill.
Escalate through these stages in order:
1. Status → eve env show <project> <env>
2. Diagnose → eve env diagnose <project> <env>
3. Logs → eve env logs <project> <env>
4. Pipeline → eve pipeline logs <pipeline> <run-id> --follow
5. Recover → eve env deploy (rollback) or eve env reset
Start at the top. Each stage adds more detail and more cost. Most issues resolve at stages 1-2.
Monitor pipeline execution in real time:
eve pipeline logs <pipeline> <run-id> --follow              # stream all steps
eve pipeline logs <pipeline> <run-id> --follow --step build # stream one step
Failed steps include failure hints and link to build diagnostics where applicable.
When a build fails:
eve build list --project <project_id>
eve build diagnose <build_id>
eve build logs <build_id>
Common causes: missing registry credentials, Dockerfile path mismatches, oversized build contexts.
Design services with health endpoints. Eve polls health to determine deploy readiness. A deploy is complete when ready === true and active_pipeline_run === null.
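That completion rule is simple enough to encode directly, for example in a deploy-watching script. A sketch (the EnvStatus shape is an assumption built from the two fields named above):

```typescript
// readiness.ts — the deploy-completion rule described above
// (EnvStatus is an assumed shape; only the two named fields are used)
export interface EnvStatus {
  ready: boolean;
  active_pipeline_run: string | null;
}

export function deployComplete(status: EnvStatus): boolean {
  // Healthy AND no pipeline run still in flight
  return status.ready && status.active_pipeline_run === null;
}
```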
Service topology:
- Services, roles, and ingress declared in the manifest
- External services marked x-eve.external: true
Database:
- Timestamp-prefixed SQL migrations in db/migrations/
- An eve-migrate job service with migrations mounted via x-eve.files
- A DatabaseService wrapping all DB access with RLS context (set_config)
- RLS policies on every table with org_id
- pgcrypto extension, UUID primary keys, updated_at triggers
Pipelines:
- A build → release → deploy → migrate → smoke-test pipeline
Environments:
- Each environment linked to a pipeline
Secrets:
- Set per project with eve secrets set
- Referenced via ${secret.KEY} interpolation
- eve manifest validate --validate-secrets passes
- .eve/dev-secrets.yaml for local development
- Git credentials (github_token or ssh_key) for agents
Auth:
- @eve-horizon/auth middleware on the backend (eveUserAuth + eveAuthGuard)
- SSO discovery served (eveAuthConfig)
- Frontend wrapped with @eve-horizon/auth-react (EveAuthProvider + EveLoginGate, or a custom useEveAuth gate)
- createEveClient for authenticated API calls
- Platform-injected env vars used (EVE_SSO_URL, EVE_ORG_ID)
Observability:
- Pipeline logs via eve pipeline logs --follow
Related skills:
- eve-manifest-authoring
- eve-deploy-debugging
- eve-auth-and-secrets
- eve-pipelines-workflows
- eve-local-dev-loop
- eve-agentic-app-design
- eve-read-eve-docs → references/auth-sdk.md
- eve-read-eve-docs → references/object-store-filesystem.md
- eve-read-eve-docs → references/integrations.md
| Concern | Manifest Section | Design Decision |
|---|---|---|
| Service topology | services | What processes run, how they connect |
| Infrastructure | services[].x-eve | Managed DB, ingress, roles |
| Build strategy | services[].build + registry | What gets built, where images live |
| Release pipeline | pipelines | How code flows from commit to production |
| Environment shape | environments | Which environments exist, what pipelines they use |
| Agent configuration | x-eve.agents, x-eve.chat | Agent profiles, team dispatch, chat routing |
| Runtime defaults | x-eve.defaults | Harness, workspace, git policies |
Design principle : If an agent or operator can't understand your app's shape by reading the manifest, the manifest is incomplete.
Most Eve apps follow one of these patterns:
API + Database (simplest):
services:
api: # HTTP service with ingress
db: # managed Postgres
API + Worker + Database :
services:
api: # HTTP service (user-facing)
worker: # Background processor (jobs, queues)
db: # managed Postgres
Multi-Service :
services:
web: # Frontend/SSR
api: # Backend API
worker: # Background jobs
db: # managed Postgres
redis: # external cache (x-eve.external: true)
x-eve.role: managed_db and let the platform provision, connect, and inject credentials. No manual connection strings.x-eve.external: true with x-eve.connection_url for services hosted outside Eve (Redis, third-party APIs).x-eve.role: job for one-off tasks. Migrations, seeds, and data backfills are job services, not persistent processes.x-eve.ingress.public: true. Internal services communicate via cluster networking.Apps that need to store files (uploads, avatars, exports) can declare object store buckets in the manifest:
services:
api:
x-eve:
object_store:
buckets:
- name: uploads
visibility: private
- name: avatars
visibility: public
Note: The database schema for app object stores exists, but automatic provisioning from the manifest is not yet wired. See
references/object-store-filesystem.mdfor current status.
When wired, the platform injects STORAGE_ENDPOINT, STORAGE_ACCESS_KEY, STORAGE_SECRET_KEY, STORAGE_BUCKET, and STORAGE_FORCE_PATH_STYLE into the service container.
For document-oriented storage, use cloud FS mounts. Each org connects its own Google Drive via BYOA OAuth credentials, then mounts folders into the org filesystem:
eve integrations configure google-drive --client-id "..." --client-secret "..."
eve integrations connect google-drive
eve cloud-fs mount --org org_xxx --provider google-drive --folder-id <id> --label "Shared Drive"
Apps can browse and search mounted Drive content through Eve's Cloud FS surface (eve cloud-fs ls, eve cloud-fs search, and the per-mount Cloud FS API routes). This is complementary to object store buckets -- use cloud FS for shared documents and collaboration, use object store for app-managed binary assets.
Every deployed service receives EVE_API_URL, EVE_PUBLIC_API_URL, EVE_PROJECT_ID, EVE_ORG_ID, and EVE_ENV_NAME. Use EVE_API_URL for server-to-server calls. Use EVE_PUBLIC_API_URL for browser-facing code. Design your app to read these rather than hardcoding URLs.
The most common Eve fullstack pattern. A nginx-fronted SPA proxies API calls to an internal backend, with managed Postgres and eve-migrate for schema management.
services:
web: # nginx SPA (public ingress, proxies /api/ → api service)
api: # NestJS/Express backend (internal, no public ingress)
db: # managed Postgres 16
migrate: # eve-migrate job (runs SQL migrations)
Why nginx proxy? The web service's nginx reverse-proxies /api/ to the internal API service. This eliminates CORS, removes the need for hard-coded API hostnames, and gives the SPA same-origin access to the backend. The API service has no public ingress — it's only reachable inside the cluster.
services:
api:
build:
context: ./apps/api
dockerfile: ./apps/api/Dockerfile
ports: [3000]
environment:
NODE_ENV: production
DATABASE_URL: ${managed.db.url}
CORS_ORIGIN: "https://myapp.eh1.incept5.dev"
# No x-eve.ingress — API is internal only
web:
build:
context: ./apps/web
dockerfile: ./apps/web/Dockerfile
ports: [80]
environment:
API_SERVICE_HOST: ${ENV_NAME}-api # k8s service DNS for nginx proxy
depends_on:
api:
condition: service_healthy
x-eve:
ingress:
public: true
port: 80
alias: myapp # https://myapp.{org}-{project}-{env}.eh1.incept5.dev
migrate:
image: public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest
environment:
DATABASE_URL: ${managed.db.url}
MIGRATIONS_DIR: /migrations
x-eve:
role: job
files:
- source: db/migrations
target: /migrations
db:
x-eve:
role: managed_db
managed:
class: db.p1
engine: postgres
engine_version: "16"
The web service Dockerfile builds the SPA with Vite, then serves it via nginx. The nginx config uses envsubst to resolve ${API_SERVICE_HOST} at container startup:
server {
listen 80;
root /usr/share/nginx/html;
index index.html;
location /api/ {
proxy_pass http://${API_SERVICE_HOST}:3000/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_buffering off;
}
location / {
try_files $uri $uri/ /index.html;
}
location /health {
return 200 "ok";
add_header Content-Type text/plain;
}
}
In the manifest, API_SERVICE_HOST: ${ENV_NAME}-api resolves to the k8s service name (e.g., sandbox-api), giving nginx a stable internal DNS target.
Eve provides a purpose-built migration runner at public.ecr.aws/w7c4v0w3/eve-horizon/migrate:latest. It uses plain SQL files with timestamp prefixes, tracked in a schema_migrations table (idempotent, checksummed, transactional).
db/
migrations/
20260312000000_initial_schema.sql
20260312100000_seed_data.sql
20260315000000_add_status_column.sql
Mount migrations into the container via x-eve.files. The migrate step in the pipeline runs after deploy (the managed DB must be provisioned first).
Do not use TypeORM, Knex, or Flyway migrations — they add complexity and diverge from the Eve platform's migration tracking. The eve-migrate runner gives parity between local dev and staging.
API Dockerfile (NestJS/Node):
FROM node:22-slim AS base
WORKDIR /app
ENV PNPM_HOME="/pnpm" PATH="$PNPM_HOME:$PATH"
RUN corepack enable && corepack prepare pnpm@latest --activate
FROM base AS deps
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile 2>/dev/null || pnpm install
FROM deps AS build
COPY tsconfig.json ./
COPY src ./src
RUN pnpm build
FROM node:22-slim AS production
WORKDIR /app
RUN groupadd --gid 1000 node || true && useradd --uid 1000 --gid node --shell /bin/bash --create-home node || true
COPY --from=deps /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
COPY package.json ./
USER node
ENV NODE_ENV=production PORT=3000
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
CMD node -e "fetch('http://localhost:3000/health').then(r => r.ok ? process.exit(0) : process.exit(1)).catch(() => process.exit(1))"
CMD ["node", "dist/main.js"]
Web Dockerfile (Vite SPA + nginx):
FROM node:22-slim AS build
WORKDIR /app
ENV PNPM_HOME="/pnpm" PATH="$PNPM_HOME:$PATH"
RUN corepack enable && corepack prepare pnpm@latest --activate
COPY package.json pnpm-lock.yaml ./
RUN pnpm install --frozen-lockfile 2>/dev/null || pnpm install
COPY tsconfig.json vite.config.ts index.html ./
COPY src ./src
RUN pnpm build
FROM nginx:alpine AS production
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/templates/default.conf.template
EXPOSE 80
HEALTHCHECK --interval=30s --timeout=5s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost/health || exit 1
CMD ["nginx", "-g", "daemon off;"]
Conventions : node:22-slim base, pnpm via corepack, frozen lockfiles, non-root user (API), health checks on both services.
Declare a managed database in the manifest:
services:
db:
x-eve:
role: managed_db
managed:
class: db.p1
engine: postgres
engine_version: "16"
Reference the connection URL in other services: ${managed.db.url}.
db/migrations/ (e.g., 20260312000000_initial.sql). Run via eve-migrate (see Reference Architecture above). Never modify production schemas by hand.org_id TEXT NOT NULL, RLS policies, and a DatabaseService that sets the session context (see below). Retrofitting row-level security is painful.eve db schema to examine current schema. Use eve db sql --env <env> for ad-hoc queries during development.eve-agent-memory for storage patterns).The proven pattern for multi-tenant RLS in NestJS uses raw pg.Pool (not an ORM) with a request-scoped transaction wrapper:
db.ts — Pool configuration with startup health check:
import { Pool } from 'pg';
const databaseUrl = process.env.DATABASE_URL || 'postgresql://app:app@localhost:5432/myapp';
const parsed = new URL(databaseUrl);
const isLocal = ['localhost', '127.0.0.1'].includes(parsed.hostname);
export const pool = new Pool({
connectionString: databaseUrl,
ssl: !isLocal ? { rejectUnauthorized: false } : undefined,
});
database.service.ts — Transaction wrapper with RLS context:
import { Injectable } from '@nestjs/common';
import type { PoolClient, QueryResult, QueryResultRow } from 'pg';
import { pool } from '../db';
export interface DbContext {
org_id: string;
user_id?: string;
}
@Injectable()
export class DatabaseService {
async withClient<T>(context: DbContext | null, fn: (client: PoolClient) => Promise<T>): Promise<T> {
const client = await pool.connect();
try {
await client.query('BEGIN');
if (context?.org_id) {
await client.query("SELECT set_config('app.org_id', $1, true)", [context.org_id]);
}
if (context?.user_id) {
await client.query("SELECT set_config('app.user_id', $1, true)", [context.user_id]);
}
const result = await fn(client);
await client.query('COMMIT');
return result;
} catch (error) {
await client.query('ROLLBACK');
throw error;
} finally {
client.release();
}
}
async query<T extends QueryResultRow>(ctx: DbContext | null, sql: string, params?: unknown[]): Promise<QueryResult<T>> {
return this.withClient(ctx, (client) => client.query<T>(sql, params));
}
async queryOne<T extends QueryResultRow>(ctx: DbContext | null, sql: string, params?: unknown[]): Promise<T | null> {
const result = await this.query<T>(ctx, sql, params);
return result.rows[0] ?? null;
}
}
Why this pattern?
set_config('app.org_id', $1, true) is transaction-scoped — it automatically clears when the connection returns to the pool.withClient, guaranteeing RLS context is set before any query.DbContext object is derived from req.user (set by Eve auth middleware).RLS policy template (applied per table in migration SQL):
ALTER TABLE my_table ENABLE ROW LEVEL SECURITY;
CREATE POLICY my_table_select ON my_table FOR SELECT
USING (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
CREATE POLICY my_table_insert ON my_table FOR INSERT
WITH CHECK (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
CREATE POLICY my_table_update ON my_table FOR UPDATE
USING (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true))
WITH CHECK (current_setting('app.org_id', true) IS NOT NULL
AND org_id = current_setting('app.org_id', true));
Table conventions : Every table gets id UUID PRIMARY KEY DEFAULT gen_random_uuid(), org_id TEXT NOT NULL, created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), and updated_at TIMESTAMPTZ (with a trigger) on mutable tables. Enable pgcrypto extension in the first migration.
| Who Queries | How | Auth |
|---|---|---|
| App service | ${managed.db.url} in service env | Connection string injected at deploy |
| Agent via CLI | eve db sql --env <env> | Job token scopes access |
| Agent via RLS | SQL with app.current_user_id() | Session context set by runtime |
Every production app should follow build → release → deploy → migrate → smoke-test:
pipelines:
deploy:
steps:
- name: build
action:
type: build # Creates BuildSpec + BuildRun, produces image digests
- name: release
depends_on: [build]
action:
type: release # Creates immutable release from build artifacts
- name: deploy
depends_on: [release]
action:
type: deploy # Deploys release to target environment
- name: migrate
depends_on: [deploy]
action:
type: job
service: migrate # Runs eve-migrate against the managed DB
- name: smoke-test
depends_on: [migrate]
script:
run: ./scripts/smoke-test.sh
timeout: 300
Why this order matters :
build produces SHA256 image digests. release pins those exact digests. deploy uses the pinned release. You deploy exactly what you built — no tag drift, no "latest" surprises.migrate runs after deploy because the managed DB must be provisioned first. The eve-migrate job applies any pending SQL migrations.smoke-test validates the deployed services end-to-end before the pipeline reports success.| Option | When to Use |
|---|---|
registry: "eve" | Default. Internal registry with JWT auth. Simplest setup. |
| BYO registry (GHCR, ECR) | When you need images accessible outside Eve, or have existing CI. |
registry: "none" | Public base images only. No custom builds. |
For GHCR, add OCI labels to Dockerfiles for automatic repository linking:
LABEL org.opencontainers.image.source="https://github.com/YOUR_ORG/YOUR_REPO"
Every service with a custom image needs a build section:
services:
api:
build:
context: ./apps/api
dockerfile: Dockerfile
image: ghcr.io/org/my-api
Use multi-stage Dockerfiles. BuildKit handles them natively. Place the OCI label on the final stage.
| Environment | Type | Purpose | Pipeline |
|---|---|---|---|
staging | persistent | Integration testing, demos | deploy |
production | persistent | Live traffic | deploy (with promotion) |
preview-* | temporary | PR previews, feature branches | deploy (auto-cleanup) |
Link each environment to a pipeline in the manifest:
environments:
staging:
pipeline: deploy
production:
pipeline: deploy
Standard deploy : eve env deploy staging --ref main --repo-dir . triggers the linked pipeline.
Direct deploy (bypass pipeline): eve env deploy staging --ref <sha> --direct for emergencies or simple setups.
Promotion : Build once in staging, then promote the same release artifacts to production. The build step's digests carry forward, guaranteeing identical images.
When a deploy fails:
eve env diagnose <project> <env> — shows health, recent deploys, service status.eve env logs <project> <env> — container output.eve env reset <project> <env> — nuclear option, reprovisions from scratch.Design your app to be rollback-safe: migrations should be forward-compatible, and services should handle schema version mismatches gracefully during rolling deploys.
Apps that integrate with Google Drive, Slack, or other OAuth providers use per-org credentials (BYOA -- Bring Your Own App). Each org registers its own OAuth app, giving it control over branding, scopes, rate limits, and credential rotation.
```shell
eve integrations configure google-drive --client-id "..." --client-secret "..."
eve integrations connect google-drive
```
Design implications : Apps that consume Google Drive data or Slack messages should reference integration tokens through the Eve API, not store OAuth credentials themselves. The platform handles token refresh using the org's registered OAuth app credentials.
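A sketch of what "reference tokens through the Eve API" might look like in app code. The route path and response shape here are assumptions for illustration, not the documented Eve API; check `references/integrations.md` for the real contract:

```typescript
// Hypothetical route builder — the actual Eve API path may differ.
function integrationTokenUrl(baseUrl: string, provider: string): string {
  return `${baseUrl}/integrations/${encodeURIComponent(provider)}/token`;
}

async function getIntegrationToken(
  baseUrl: string, // use the platform-injected EVE_API_URL in a deployed container
  authToken: string,
  provider: string,
): Promise<string> {
  const res = await fetch(integrationTokenUrl(baseUrl, provider), {
    headers: { Authorization: `Bearer ${authToken}` },
  });
  if (!res.ok) throw new Error(`integration token fetch failed: ${res.status}`);
  // The platform refreshes the underlying OAuth token using the org's
  // registered app credentials, so the app never stores client secrets.
  const body = (await res.json()) as { access_token: string };
  return body.access_token;
}
```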
Workflows can be triggered by platform events, enabling reactive automation:
```yaml
workflows:
  on-deploy:
    trigger:
      system.event: environment.deployed
    steps:
      - name: smoke-test
        script:
          run: ./scripts/smoke-test.sh
  on-ingest:
    trigger:
      system.event: doc.ingest.completed
    steps:
      - name: process
        agent: doc-processor
```
Event sources include: GitHub webhooks, Slack events, system events (deploy, build, ingest), cron schedules, and manual triggers. See eve-pipelines-workflows for trigger syntax and references/events.md for the full event catalog.
Apps can ship agent-friendly CLIs that replace raw REST/curl interactions. Declare the CLI in the manifest:
```yaml
services:
  api:
    x-eve:
      cli:
        name: myapp
        bin: cli/bin/myapp
```
The platform symlinks the bundled binary onto `$PATH` in agent workspaces. Agents invoke `myapp --help` to discover capabilities, eliminating URL construction, auth headers, and JSON quoting. See eve-manifest-authoring for declaration details and references/app-cli.md for the full implementation pattern.
Manage the full lifecycle of environments and projects:
```shell
# Undeploy services (stops pods, keeps env record and history)
eve env undeploy <project> <env>

# Delete environment entirely (cascades to managed DB, secrets)
eve env delete <project> <env>

# Delete project (cascades to all environments, artifacts, history)
eve project delete <project-id>
```
Design your app for clean teardown: migrations should be idempotent, managed DB deletion is irreversible, and pipeline history is preserved in audit logs even after environment deletion.
Secrets resolve with cascading precedence: project > user > org > system. A project-level API_KEY overrides an org-level API_KEY.
- Set project secrets with `eve secrets set KEY "value" --project proj_xxx`. Keep project secrets self-contained.
- Reference `${secret.KEY}` in service environment blocks. The platform resolves at deploy time.
- Run `eve manifest validate --validate-secrets` to catch missing secret references before they cause deploy failures.
- Use `.eve/dev-secrets.yaml` for local development. Mirror the production secret keys with local values. This file is gitignored.
- Always use `${secret.KEY}` interpolation. This ensures secrets flow through the platform's resolution and audit chain.

Agents need repository access. Set either `github_token` (HTTPS) or `ssh_key` (SSH) as project secrets. The worker injects these automatically during git operations.
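In the manifest, the interpolation looks like this sketch (key names are illustrative, and the `environment` block is assumed to follow the compose-style shape used elsewhere in the manifest):

```yaml
services:
  api:
    environment:
      DATABASE_URL: ${secret.DATABASE_URL}      # resolved by the platform at deploy time
      STRIPE_API_KEY: ${secret.STRIPE_API_KEY}  # project-level value wins over org-level
```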
Eve provides shared auth packages that eliminate boilerplate. Add Eve SSO login in ~25 lines of code.
Backend (@eve-horizon/auth):
```typescript
import { eveUserAuth, eveAuthGuard, eveAuthConfig } from '@eve-horizon/auth';

app.use(eveUserAuth());                          // Parse tokens (non-blocking)
app.get('/auth/config', eveAuthConfig());        // Serve SSO discovery
app.get('/auth/me', eveAuthGuard(), (req, res) => {
  res.json(req.eveUser);                         // { id, email, orgId, role }
});
app.use('/api', eveAuthGuard());                 // Protect all API routes
```
Frontend (@eve-horizon/auth-react):
```tsx
import { EveAuthProvider, EveLoginGate } from '@eve-horizon/auth-react';

function App() {
  return (
    <EveAuthProvider apiUrl="/api">
      <EveLoginGate>
        <ProtectedApp />
      </EveLoginGate>
    </EveAuthProvider>
  );
}
```
For authenticated API calls from components, use createEveClient:
```typescript
import { createEveClient } from '@eve-horizon/auth-react';

const client = createEveClient('/api');
const res = await client.fetch('/data');
```
Custom auth gate — When you need control over loading and login states (custom login page, richer loading UI), use useEveAuth() directly instead of EveLoginGate:
```tsx
import { EveAuthProvider, useEveAuth } from '@eve-horizon/auth-react';

function AuthGate() {
  const { user, loading, loginWithToken, loginWithSso, logout } = useEveAuth();
  if (loading) return <Spinner />;
  if (!user) return <LoginPage onSso={loginWithSso} onToken={loginWithToken} />;
  return <AppShell user={user} onLogout={logout}><Routes /></AppShell>;
}

export default function App() {
  return (
    <EveAuthProvider apiUrl={API_BASE}>
      <AuthGate />
    </EveAuthProvider>
  );
}
```
- `EveAuthProvider` checks sessionStorage for a cached token, then the session (root-domain cookie)
- Authenticated requests carry `Authorization: Bearer <token>`

Apply `eveUserAuth()` as global middleware in `main.ts`. If existing controllers expect `req.user` rather than `req.eveUser`, add a thin bridge that maps Eve roles to app-specific roles in one place:
```typescript
import { eveUserAuth } from '@eve-horizon/auth';

app.use(eveUserAuth());
app.use((req, _res, next) => {
  if (req.eveUser) {
    req.user = { ...req.eveUser, role: req.eveUser.role === 'member' ? 'viewer' : 'admin' };
  }
  next();
});
```
The platform injects `EVE_SSO_URL`, `EVE_API_URL`, and `EVE_ORG_ID` into deployed containers. No manual configuration needed. Use `${SSO_URL}` in manifest env blocks for frontend-accessible SSO URLs.
- Apply `eveUserAuth()` globally, then `eveAuthGuard()` on protected routes. This enables mixed public/private routes.
- The `/auth/config` endpoint is the handshake. The frontend discovers the SSO URL by calling the backend's `eveAuthConfig()` endpoint. This decouples the frontend from platform env vars and works identically in local dev and deployed environments.
- The `orgs` JWT claim reflects membership at mint time (1-day TTL). Use `strategy: 'remote'` for immediate revocation if needed.

For full SDK reference, see references/auth-sdk.md in the eve-read-eve-docs skill.
Escalate through these stages:
1. Status → `eve env show <project> <env>`
2. Diagnose → `eve env diagnose <project> <env>`
3. Logs → `eve env logs <project> <env>`
4. Pipeline → `eve pipeline logs <pipeline> <run-id> --follow`
5. Recover → `eve env deploy` (rollback) or `eve env reset`

Start at the top. Each stage provides more detail at higher cost. Most issues resolve at stages 1-2.
Monitor pipeline execution in real time:
```shell
eve pipeline logs <pipeline> <run-id> --follow                # stream all steps
eve pipeline logs <pipeline> <run-id> --follow --step build   # stream one step
```
Failed steps include failure hints and link to build diagnostics when applicable.
When builds fail:
```shell
eve build list --project <project_id>
eve build diagnose <build_id>
eve build logs <build_id>
```
Common causes: missing registry credentials, Dockerfile path mismatch, build context too large.
Design services with health endpoints. Eve polls health to determine deployment readiness. A deploy is complete when `ready === true` and `active_pipeline_run === null`.
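A minimal health-endpoint sketch using Node's built-in `http` module. The `/health` path and response shape are common conventions, not an Eve-mandated contract:

```typescript
import http from "node:http";

// Compute the health response from a set of dependency checks. Returning
// 503 while any dependency is down keeps the platform from marking the new
// release ready, so the previous release continues serving traffic.
function healthStatus(checks: Record<string, boolean>): {
  code: number;
  body: { ok: boolean; checks: Record<string, boolean> };
} {
  const ok = Object.values(checks).every(Boolean);
  return { code: ok ? 200 : 503, body: { ok, checks } };
}

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // Swap the literal for real checks (DB ping, queue connection, etc.).
    const { code, body } = healthStatus({ db: true });
    res.writeHead(code, { "content-type": "application/json" });
    res.end(JSON.stringify(body));
  } else {
    res.writeHead(404);
    res.end();
  }
});
// server.listen(Number(process.env.PORT ?? 3000));
```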
Service Topology:
- External services declared with `x-eve.external: true`

Database:
- Migrations live in `db/migrations/` with timestamp prefixes
- `eve-migrate` job service declared in the manifest with an `x-eve.files` mount
- `DatabaseService` wraps all DB access with RLS context (`set_config`)
- Tenant data scoped by `org_id`
- `pgcrypto` extension enabled, UUID primary keys, `updated_at` triggers

Pipeline:
- `build → release → deploy → migrate → smoke-test` pipeline defined

Environments:

Secrets:
- Project secrets set with `eve secrets set`
- Manifest references secrets via `${secret.KEY}` interpolation
- `eve manifest validate --validate-secrets` passes
- `.eve/dev-secrets.yaml` exists for local development
- Git credentials (`github_token` or `ssh_key`) configured

Authentication:
- `@eve-horizon/auth` middleware added to backend (`eveUserAuth` + `eveAuthGuard`)
- SSO discovery served (`eveAuthConfig`)
- `@eve-horizon/auth-react` wraps frontend (`EveAuthProvider` + `EveLoginGate` or custom `useEveAuth` gate)
- `createEveClient` used for authenticated API calls from frontend
- Platform-injected env vars used (`EVE_SSO_URL`, `EVE_ORG_ID`)

Observability:
- Pipeline runs streamed with `eve pipeline logs --follow`

Related skills:
- eve-manifest-authoring
- eve-deploy-debugging
- eve-auth-and-secrets
- eve-pipelines-workflows
- eve-local-dev-loop
- eve-agentic-app-design
- eve-read-eve-docs → references/auth-sdk.md
- eve-read-eve-docs → references/object-store-filesystem.md
- eve-read-eve-docs → references/integrations.md