streamlit-snowflake by jezweb/claude-skills
npx skills add https://github.com/jezweb/claude-skills --skill streamlit-snowflake
Build and deploy Streamlit apps natively within Snowflake, including Marketplace publishing as Native Apps.
Copy the templates to your project:
# Create project directory
mkdir my-streamlit-app && cd my-streamlit-app
# Copy templates (Claude will provide these)
Update placeholders in snowflake.yml:
definition_version: 2
entities:
my_app:
type: streamlit
identifier: my_streamlit_app # ← Your app name
stage: my_app_stage # ← Your stage name
query_warehouse: my_warehouse # ← Your warehouse
main_file: streamlit_app.py
pages_dir: pages/
artifacts:
- common/
- environment.yml
# Deploy to Snowflake
snow streamlit deploy --replace
# Open in browser
snow streamlit deploy --replace --open
Use when:
Don't use when:
Snowflake offers two runtime options for Streamlit apps:
- Warehouse runtime: environment.yml with the Snowflake Anaconda Channel
- Container runtime: requirements.txt or pyproject.toml with PyPI packages

Container Runtime Configuration:
CREATE STREAMLIT my_app
FROM '@my_stage/app_folder'
MAIN_FILE = 'streamlit_app.py'
RUNTIME_NAME = 'SYSTEM$ST_CONTAINER_RUNTIME_PY3_11'
COMPUTE_POOL = my_compute_pool
QUERY_WAREHOUSE = my_warehouse;
Key difference: the container runtime allows external PyPI packages; it is not limited to the Snowflake Anaconda Channel.
See: Runtime Environments
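With the container runtime, dependencies come straight from PyPI. A hedged sketch of a requirements.txt; the package choices are illustrative, not taken from this skill:

```text
# requirements.txt (container runtime only; the warehouse runtime ignores this)
streamlit==1.49.0
pandas
polars   # example of a PyPI package that may not be in the Anaconda channel
```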
Streamlit apps support two privilege models:
Security implications:
Use caller's rights when:
See Caller's Rights Connection pattern below.
my-streamlit-app/
├── snowflake.yml # Project definition (required)
├── environment.yml # Package dependencies (required)
├── streamlit_app.py # Main entry point
├── pages/ # Multi-page apps
│ └── data_explorer.py
├── common/ # Shared utilities
│ └── utils.py
└── .gitignore
import streamlit as st
# Get Snowpark session (native SiS connection)
conn = st.connection("snowflake")
session = conn.session()
# Query data
df = session.sql("SELECT * FROM my_table LIMIT 100").to_pandas()
st.dataframe(df)
Execute queries with viewer's privileges instead of owner's privileges:
import streamlit as st
# Use caller's rights for data isolation
conn = st.connection("snowflake", type="callers_rights")
# Each viewer sees only data they have permission to access
df = conn.query("SELECT * FROM sensitive_customer_data")
st.dataframe(df)
Security comparison:
| Connection Type | Privilege Model | Use Case |
|---|---|---|
| type="snowflake" (default) | Owner's rights | Internal tools, trusted users |
| type="callers_rights" (v1.53.0+) | Caller's rights | Public apps, data isolation |
Source: Streamlit v1.53.0 Release
@st.cache_data(ttl=600) # Cache for 10 minutes
def load_data(query: str):
conn = st.connection("snowflake")
return conn.session().sql(query).to_pandas()
# Use cached function
df = load_data("SELECT * FROM large_table")
Warning: in Streamlit v1.22.0-1.53.0, the params argument is not included in the cache key. Use ttl=0 to disable caching for parametrized queries, or upgrade to 1.54.0+ when available (Issue #13644).
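The bug class is easy to reproduce in plain Python, with no Snowflake connection. This is a hedged illustration of why a cache key must include query parameters, not Streamlit's actual implementation:

```python
# Sketch of the v1.22.0-1.53.0 failure mode: the cache key omits `params`.
_cache: dict = {}

def run_query(sql: str, params: tuple) -> str:
    return f"rows for {sql} with {params}"  # stand-in for a real query

def cached_query_buggy(sql: str, params: tuple) -> str:
    if sql not in _cache:               # key is the SQL text only (the bug)
        _cache[sql] = run_query(sql, params)
    return _cache[sql]

def cached_query_fixed(sql: str, params: tuple) -> str:
    key = (sql, params)                 # params are part of the key (1.54.0+)
    if key not in _cache:
        _cache[key] = run_query(sql, params)
    return _cache[key]

sql = "SELECT * FROM t WHERE id = ?"
stale = cached_query_buggy(sql, (1,))
assert cached_query_buggy(sql, (2,)) == stale  # wrong data: params ignored
assert cached_query_fixed(sql, (1,)) != cached_query_fixed(sql, (2,))
```

In a Streamlit app, the practical workaround is to make the parameter an argument of the @st.cache_data function, since function arguments are always part of the cache key.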
When using Snowpark DataFrames with charts or tables, select only required columns to avoid fetching unnecessary data:
# ❌ Fetches all 50 columns even though chart only needs 2
df = session.table("wide_table") # 50 columns
st.line_chart(df, x="date", y="value")
# ✅ Fetch only needed columns for better performance
df = session.table("wide_table").select("date", "value")
st.line_chart(df, x="date", y="value")
# 5-10x faster for wide tables
Why it matters: st.dataframe() and chart components call df.to_pandas(), which evaluates ALL columns even if the visualization only needs a few. Pre-selecting columns reduces data transfer and improves performance (Issue #11701).
environment.yml (required format):
name: sf_env
channels:
- snowflake # REQUIRED - only supported channel
dependencies:
- streamlit=1.35.0 # Explicit version (default is old 1.22.0)
- pandas
- plotly
- altair=4.0 # Version 4.0 supported in SiS
- snowflake-snowpark-python
This skill prevents 14 documented errors:
| Error | Cause | Prevention |
|---|---|---|
| PackageNotFoundError | Using conda-forge or an external channel | Use channels: - snowflake (or Container Runtime for PyPI) |
| Missing Streamlit features | Default version 1.22.0 | Explicitly set streamlit=1.35.0 (or use Container Runtime for 1.49+) |
| ROOT_LOCATION deprecated | Old CLI syntax | Use Snowflake CLI 3.14.0+ with FROM source_location |
| Auth failures (2026+) | Password-only authentication | Use key-pair or OAuth (see references/authentication.md) |
| File upload fails | File >200MB | Keep uploads under the 200MB limit |
| DataFrame display fails | Data >32MB | Paginate or limit data before display |
| page_title not supported | SiS limitation | Don't use page_title, page_icon, or menu_items in st.set_page_config() |
| Custom component error | SiS limitation | Only components without external service calls work |
| _snowflake module not found | Container Runtime migration | Use from snowflake.snowpark.context import get_active_session instead of from _snowflake import get_active_session (Migration Guide) |
| Cached query returns wrong data with different params | params not in cache key (v1.22.0-1.53.0) | Use ttl=0 to disable caching for parametrized queries, or upgrade to 1.54.0+ when available (Issue #13644) |
| Invalid connection_name 'default' with kwargs only | Missing secrets.toml or connections.toml | Create a minimal .streamlit/secrets.toml with a [connections.snowflake] section (Issue #9016) |
| Native App upgrades unexpectedly | Implicit default Streamlit version (BCR-1857) | Explicitly set streamlit=1.35.0 in environment.yml to prevent automatic version changes |
| File paths fail in Container Runtime subdirectories | Some commands use entrypoint-relative paths | Use pathlib to resolve absolute paths: Path(__file__).parent / "assets/logo.png" (Runtime Docs) |
| Slow performance with wide Snowpark DataFrames | st.dataframe() fetches all columns even if unused | Pre-select only the needed columns: df.select("col1", "col2") before passing to Streamlit (Issue #11701) |
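When running locally, st.connection("snowflake") resolves credentials from .streamlit/secrets.toml (or connections.toml). A minimal hedged sketch; the account and user values are placeholders:

```toml
# .streamlit/secrets.toml
[connections.snowflake]
account = "myorg-myaccount"
user = "app_user"
# Inside Snowflake (SiS) the session is ambient; for local development,
# add authentication settings (e.g. key-pair) to this section as well.
```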
# Deploy and replace existing
snow streamlit deploy --replace
# Deploy and open in browser
snow streamlit deploy --replace --open
# Deploy specific entity (if multiple in snowflake.yml)
snow streamlit deploy my_app --replace
See references/ci-cd.md for GitHub Actions workflow template.
To publish your Streamlit app to Snowflake Marketplace:
Use the templates-native-app/ templates. See templates-native-app/README.md for the complete workflow.
my-native-app/
├── manifest.yml # Native App manifest
├── setup.sql # Installation script
├── streamlit/
│ ├── environment.yml
│ ├── streamlit_app.py
│ └── pages/
└── README.md
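The manifest.yml ties the pieces together. A minimal hedged sketch; the schema and object names are illustrative, so check the Native Apps docs for the required fields:

```yaml
manifest_version: 1          # Native App manifest schema version
artifacts:
  setup_script: setup.sql    # runs when the app is installed
  readme: README.md
  default_streamlit: core.my_streamlit_app   # object created by setup.sql
```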
Only packages from the Snowflake Anaconda Channel are available:
-- Query available packages
SELECT * FROM information_schema.packages
WHERE language = 'python'
ORDER BY package_name;
-- Search for specific package
SELECT * FROM information_schema.packages
WHERE language = 'python'
AND package_name ILIKE '%plotly%';
Common available packages:
Not available:
See: Snowpark Python Packages Explorer
SiS limitations:
- st.dataframe: display limited to 32MB of data
- st.file_uploader: files limited to 200MB
- .so files: native compiled libraries are unsupported
- st.set_page_config: page_title, page_icon, and menu_items are not supported
- st.bokeh_chart: not supported
- eval() blocked: CSP prevents unsafe JavaScript execution
- st.cache_data and st.cache_resource do not persist across users

Password-only authentication is being deprecated:
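For the 32MB display limit, a common workaround is paginating before display. The slicing helper below is plain pandas and runs anywhere; the widget wiring in the comment is a sketch, and the default page size is an assumption:

```python
import pandas as pd

def paginate(df: pd.DataFrame, page: int, page_size: int = 1000) -> pd.DataFrame:
    """Return a single page of rows so st.dataframe stays under the 32MB limit."""
    start = page * page_size
    return df.iloc[start:start + page_size]

# In the app (sketch):
#   page = st.number_input("Page", min_value=0, step=1, value=0)
#   st.dataframe(paginate(df, int(page)))
```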
| Milestone | Date | Requirement |
|---|---|---|
| Milestone 1 | Sept 2025 - Jan 2026 | MFA required for Snowsight users |
| Milestone 2 | May - July 2026 | All new users must use MFA |
| Milestone 3 | Aug - Oct 2026 | All users must use MFA or key-pair/OAuth |
Recommended authentication methods:
See references/authentication.md for implementation patterns.
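Key-pair authentication for the Snowflake CLI and connector can be configured in connections.toml. A hedged sketch: account, user, and key path are placeholders, and the parameter names follow recent connector versions:

```toml
# ~/.snowflake/connections.toml
[default]
account = "myorg-myaccount"
user = "deploy_user"
authenticator = "SNOWFLAKE_JWT"          # key-pair (JWT) authentication
private_key_file = "/secure/rsa_key.p8"  # PKCS#8 private key
```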