modly-image-to-3d by aradotso/trending-skills
npx skills add https://github.com/aradotso/trending-skills --skill modly-image-to-3d
Skill by ara.so, from the Daily 2026 Skills collection.
Modly is a local, open-source desktop application (Windows/Linux) that converts photos into 3D mesh models using AI models running entirely on your GPU, with no cloud services and no API keys required.
modly/
├── src/                 # Electron + TypeScript frontend
│   ├── main/            # Electron main process
│   ├── renderer/        # React UI (renderer process)
│   └── preload/         # IPC bridge
├── api/                 # Python FastAPI backend
│   ├── generator.py     # Core generation logic
│   └── requirements.txt
├── resources/
│   └── icons/
├── launcher.bat         # Windows quick-start
├── launcher.sh          # Linux quick-start
└── package.json
The app runs as an Electron shell over a local Python FastAPI server. Extensions are GitHub repos with a manifest.json + generator.py that plug into the extension system.
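As an illustration of this shell-over-server layout, the frontend only needs the backend to answer HTTP on its local port before enabling the UI. A minimal readiness probe might look like the sketch below; the port and the /extensions route are assumptions based on the endpoint list later in this document, not confirmed Modly internals.

```python
import time
import urllib.error
import urllib.request

def wait_for_backend(port: int, timeout_s: float = 30.0) -> bool:
    """Poll the local FastAPI server until it answers, or give up after timeout_s."""
    url = f"http://127.0.0.1:{port}/extensions"  # assumed readiness route
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(0.5)  # backend not up yet; retry shortly
    return False
```

The same probe doubles as a troubleshooting tool: if it never returns True, the Python backend is not coming up at all.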
# Windows
launcher.bat
# Linux
chmod +x launcher.sh
./launcher.sh
# 1. Clone
git clone https://github.com/lightningpixel/modly
cd modly
# 2. Install JS dependencies
npm install
# 3. Set up the Python backend
cd api
python -m venv .venv
# Activate (Windows)
.venv\Scripts\activate
# Activate (Linux/macOS)
source .venv/bin/activate
pip install -r requirements.txt
cd ..
# 4. Run dev mode (starts Electron + Python backend)
npm run dev
# Build installers for the current platform
npm run build
# Output goes to dist/
npm run dev        # Start the app in development mode (hot reload)
npm run build      # Package the app for distribution
npm run lint       # Run ESLint
npm run typecheck  # TypeScript type checking
Extensions are GitHub repositories containing:
- manifest.json: metadata and model variants
- generator.py: generation logic implementing the Modly extension interface

Example manifest.json:
{
"name": "My 3D Extension",
"id": "my-extension-id",
"description": "Generates 3D models using XYZ model",
"version": "1.0.0",
"author": "Your Name",
"repository": "https://github.com/yourname/my-modly-extension",
"variants": [
{
"id": "model-small",
"name": "Small (faster)",
"description": "Lighter variant for faster generation",
"size_gb": 4.2,
"vram_gb": 6,
"files": [
{
"url": "https://huggingface.co/yourorg/yourmodel/resolve/main/weights.safetensors",
"filename": "weights.safetensors",
"sha256": "abc123..."
}
]
}
]
}
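Before publishing an extension, it helps to sanity-check the manifest. Modly's exact schema is not documented here, so the validator below is a sketch, not part of Modly: it only checks the fields shown in the example above, including that each sha256 is 64 hex characters (note that the `abc123...` placeholder above would fail that check).

```python
import json
import re

REQUIRED_TOP = {"name", "id", "version", "variants"}
REQUIRED_VARIANT = {"id", "name", "size_gb", "vram_gb", "files"}

def validate_manifest(text: str) -> list:
    """Return a list of problems found in a manifest.json string (empty = looks ok)."""
    try:
        m = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    problems = [f"missing top-level key: {k}" for k in sorted(REQUIRED_TOP - m.keys())]
    for v in m.get("variants", []):
        problems += [f"variant missing key: {k}" for k in sorted(REQUIRED_VARIANT - v.keys())]
        for f in v.get("files", []):
            # Weights are verified by hash, so the digest must be a real sha256 hex string
            if not re.fullmatch(r"[0-9a-f]{64}", f.get("sha256", "")):
                problems.append(f"{f.get('filename', '?')}: sha256 is not 64 hex chars")
    return problems
```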
# api/extensions/<extension-id>/generator.py
# Required interface every extension must implement
import sys
import json
from pathlib import Path
def generate(
image_path: str,
output_path: str,
variant_id: str,
models_dir: str,
**kwargs
) -> dict:
"""
Required entry point for all Modly extensions.
Args:
image_path: Path to input image file
output_path: Path where output .glb/.obj should be saved
variant_id: Which model variant to use
models_dir: Directory where downloaded model weights live
Returns:
dict with keys:
success (bool)
output_file (str) — path to generated mesh
error (str, optional)
"""
try:
# Load your model weights
weights = Path(models_dir) / variant_id / "weights.safetensors"
# Run your inference
mesh = run_inference(str(weights), image_path)
# Save output
mesh.export(output_path)
return {
"success": True,
"output_file": output_path
}
except Exception as e:
return {
"success": False,
"error": str(e)
}
| Extension | Model | Repository |
|---|---|---|
| modly-hunyuan3d-mini-extension | Hunyuan3D 2 Mini | https://github.com/lightningpixel/modly-hunyuan3d-mini-extension |
The backend runs locally. Key endpoints used by the Electron frontend:
# Typical backend route patterns (api/main.py or similar)
# GET  /extensions          - list installed extensions
# GET  /extensions/{id}     - get extension details + variants
# POST /extensions/install  - install an extension from a GitHub URL
# POST /generate            - trigger 3D generation
# GET  /generate/status     - poll generation progress
# GET  /models              - list downloaded model variants
# POST /models/download     - download a model variant
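A client driving these routes follows a simple pattern: POST /generate, then poll GET /generate/status until done. The sketch below assumes the request and status field names shown elsewhere in this document; the `client` object is a hypothetical thin HTTP wrapper, injected so the control flow can be shown (and tested) without a live server.

```python
import time

def generate_and_wait(client, image_path: str, extension_id: str,
                      variant_id: str, poll_interval: float = 1.0) -> dict:
    """Drive POST /generate, then poll GET /generate/status until finished.

    `client` is anything with post(path, payload) -> dict and get(path) -> dict.
    The field names below are assumptions based on the route list above.
    """
    client.post("/generate", {
        "image_path": image_path,
        "extension_id": extension_id,
        "variant_id": variant_id,
    })
    while True:
        status = client.get("/generate/status")
        # Assumed status shape: {"progress": 0-100, "output_file": ..., "error": ...}
        if status.get("error") or status.get("progress", 0) >= 100:
            return status
        time.sleep(poll_interval)
```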
// src/preload/index.ts — 将后端调用暴露给渲染器
import { contextBridge, ipcRenderer } from 'electron'
contextBridge.exposeInMainWorld('modly', {
generate: (imagePath: string, extensionId: string, variantId: string) =>
ipcRenderer.invoke('generate', { imagePath, extensionId, variantId }),
installExtension: (repoUrl: string) =>
ipcRenderer.invoke('install-extension', { repoUrl }),
listExtensions: () =>
ipcRenderer.invoke('list-extensions'),
})
// src/main/ipc-handlers.ts — 主进程处理
import { ipcMain } from 'electron'
ipcMain.handle('generate', async (_event, { imagePath, extensionId, variantId }) => {
const response = await fetch('http://localhost:PORT/generate', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ image_path: imagePath, extension_id: extensionId, variant_id: variantId }),
})
return response.json()
})
// src/renderer/components/GenerateButton.tsx — UI 使用
declare global {
interface Window {
modly: {
generate: (imagePath: string, extensionId: string, variantId: string) => Promise<{ success: boolean; output_file?: string; error?: string }>
installExtension: (repoUrl: string) => Promise<{ success: boolean }>
listExtensions: () => Promise<Extension[]>
}
}
}
async function handleGenerate(imagePath: string) {
const result = await window.modly.generate(
imagePath,
'modly-hunyuan3d-mini-extension',
'hunyuan3d-mini-turbo'
)
if (result.success) {
console.log('Mesh saved to:', result.output_file)
} else {
console.error('Generation failed:', result.error)
}
}
my-modly-extension/
├── manifest.json
└── generator.py
# generator.py
import torch
from PIL import Image
from pathlib import Path
def generate(image_path, output_path, variant_id, models_dir, **kwargs):
device = "cuda" if torch.cuda.is_available() else "cpu"
weights_dir = Path(models_dir) / variant_id
try:
# Load model (example pattern)
from your_model_lib import ImageTo3DPipeline
pipe = ImageTo3DPipeline.from_pretrained(
str(weights_dir),
torch_dtype=torch.float16
).to(device)
image = Image.open(image_path).convert("RGB")
with torch.no_grad():
mesh = pipe(image).mesh
mesh.export(output_path)
return {"success": True, "output_file": output_path}
except Exception as e:
return {"success": False, "error": str(e)}
Modly runs fully locally: no environment variables or API keys are needed. GPU/CUDA is auto-detected by PyTorch in extensions.
Relevant configuration lives in:
package.json            # Electron app metadata, build targets
api/requirements.txt    # Python dependencies for the backend
If you need to configure the backend port or extensions directory, check the Electron main process config (typically src/main/index.ts) for constants like API_PORT or EXTENSIONS_DIR.
import torch
def get_device():
if torch.cuda.is_available():
print(f"Using GPU: {torch.cuda.get_device_name(0)}")
return "cuda"
print("No GPU found, falling back to CPU (slow)")
return "cpu"
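The detected device can also drive variant selection: pick the largest variant whose declared vram_gb fits in free memory, and fall back to the smallest one on CPU. This helper is an illustrative sketch, not part of the Modly API; it works directly on the `variants` entries from manifest.json.

```python
from typing import Optional

def pick_variant(variants: list, free_vram_gb: Optional[float]) -> dict:
    """Choose a manifest variant by its declared vram_gb requirement.

    free_vram_gb is None when running on CPU; then the smallest variant wins.
    Otherwise, return the largest variant that fits, or the smallest as a fallback.
    """
    ordered = sorted(variants, key=lambda v: v["vram_gb"])
    if free_vram_gb is None:
        return ordered[0]
    fitting = [v for v in ordered if v["vram_gb"] <= free_vram_gb]
    return fitting[-1] if fitting else ordered[0]
```

On CUDA, free memory can be read with torch.cuda.mem_get_info(), which returns (free, total) in bytes.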
import sys
import json
def report_progress(percent: int, message: str):
"""Write progress to stdout so Modly can display it."""
print(json.dumps({"progress": percent, "message": message}), flush=True)
def generate(image_path, output_path, variant_id, models_dir, **kwargs):
report_progress(0, "Loading model...")
# ... load model ...
report_progress(30, "Processing image...")
# ... inference ...
report_progress(90, "Exporting mesh...")
# ... export ...
report_progress(100, "Done")
return {"success": True, "output_file": output_path}
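On the host side, whatever launches generator.py has to separate these JSON progress records from ordinary log output on stdout. The exact mechanism Modly uses is not shown in this document; a tolerant line parser might look like this:

```python
import json

def parse_progress_line(line: str):
    """Return (percent, message) if the line is a progress record, else None."""
    line = line.strip()
    if not line.startswith("{"):
        return None  # plain log output, not a progress record
    try:
        rec = json.loads(line)
    except json.JSONDecodeError:
        return None
    if isinstance(rec, dict) and "progress" in rec:
        return int(rec["progress"]), str(rec.get("message", ""))
    return None
```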
// src/renderer/pages/MyPage.tsx
import React, { useEffect, useState } from 'react'
interface Extension {
id: string
name: string
description: string
}
export default function MyPage() {
const [extensions, setExtensions] = useState<Extension[]>([])
useEffect(() => {
window.modly.listExtensions().then(setExtensions)
}, [])
return (
<div>
<h1>Installed Extensions</h1>
{extensions.map(ext => (
<div key={ext.id}>
<h2>{ext.name}</h2>
<p>{ext.description}</p>
</div>
))}
</div>
)
}
| Problem | Fix |
|---|---|
| Python backend does not start with npm run dev | Ensure the venv is set up: cd api && python -m venv .venv && pip install -r requirements.txt |
| CUDA out of memory | Use a smaller model variant or close other GPU processes |
| Extension install fails | Verify the GitHub URL is HTTPS and the repo contains manifest.json at its root |
| Generation hangs | Check that your GPU drivers and CUDA toolkit match the PyTorch version in requirements.txt |
| App won't launch on Linux | Make launcher.sh executable: chmod +x launcher.sh |
| Model download stalls | Check disk space; large models (4–10 GB) need adequate free space |
| torch not found in an extension | Ensure PyTorch is listed in api/requirements.txt, not only in the extension's own dependencies |
cd api
source .venv/bin/activate  # or on Windows: .venv\Scripts\activate
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'no GPU')"
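Because the manifest ships a sha256 per file, a stalled or corrupted model download can be detected by re-hashing the file on disk. This standalone check assumes only the manifest fields shown earlier in this document:

```python
import hashlib
from pathlib import Path

def verify_file(path, expected_sha256: str, chunk_size: int = 1 << 20) -> bool:
    """Stream-hash a model file in 1 MiB chunks and compare to the manifest sha256."""
    h = hashlib.sha256()
    with open(Path(path), "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A mismatch after a download that appeared to finish usually means the transfer was truncated, often by running out of disk space.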
Weekly installs: 116
Repository: https://github.com/lightningpixel/modly
GitHub stars: 10
First seen: 3 days ago
Security audits: Gen Agent Trust Hub: Fail, Socket: Warn, Snyk: Fail
Installed on: github-copilot (116), codex (116), warp (116), kimi-cli (116), amp (116), cline (116)