ios-device-automation by web-infra-dev/midscene-skills
npx skills add https://github.com/web-infra-dev/midscene-skills --skill ios-device-automation
CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:
- Never run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
- Run only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
- Allow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex act commands may need even longer.
- Always report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user, including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.
Automate iOS devices using npx @midscene/ios@1. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots.
What act can do

Inside a single act call on iOS, Midscene can tap, double-tap, long-press, type, clear text, scroll, drag items, zoom with two fingers, press keys, and use system navigation such as Home or the app switcher, all based on the currently visible screen.
Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a .env file in the current working directory (Midscene loads .env automatically):
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
Example: Gemini (Gemini-3-Flash)
MIDSCENE_MODEL_API_KEY="your-google-api-key"
MIDSCENE_MODEL_NAME="gemini-3-flash"
MIDSCENE_MODEL_BASE_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
MIDSCENE_MODEL_FAMILY="gemini"
Example: Qwen 3.5
MIDSCENE_MODEL_API_KEY="your-aliyun-api-key"
MIDSCENE_MODEL_NAME="qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
MIDSCENE_MODEL_FAMILY="qwen3.5"
MIDSCENE_MODEL_REASONING_ENABLED="false"
# If using OpenRouter, set:
# MIDSCENE_MODEL_API_KEY="your-openrouter-api-key"
# MIDSCENE_MODEL_NAME="qwen/qwen3.5-plus"
# MIDSCENE_MODEL_BASE_URL="https://openrouter.ai/api/v1"
Example: Doubao Seed 2.0 Lite
MIDSCENE_MODEL_API_KEY="your-doubao-api-key"
MIDSCENE_MODEL_NAME="doubao-seed-2-0-lite"
MIDSCENE_MODEL_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
MIDSCENE_MODEL_FAMILY="doubao-seed"
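If you prefer to create the .env file from the shell rather than by hand, a minimal sketch follows. All values are placeholders from the generic template above; substitute your provider's real credentials:

```shell
# Write a placeholder .env into the current working directory so Midscene
# can load it automatically. All values below are placeholders, not real keys.
cat > .env <<'EOF'
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
EOF
```

Midscene reads this file from the directory where you run the npx commands, so create it in your working directory.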
Commonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.
If the model is not configured, ask the user to set it up. See Model Configuration for supported providers.
npx @midscene/ios@1 connect
Use the built-in launch capability when you want to start from a known app or route before the rest of the task. Give it the most specific target you have, such as a bundle ID, web URL, deep link, or phone/mail link. Typical targets include com.apple.Preferences, https://www.apple.com, myapp://profile/user/123, and tel:+1234567890.
Use this when the task needs lower-level device control instead of a normal visible UI interaction:
npx @midscene/ios@1 runwdarequest --method GET --endpoint /wda/screen
This does not run an ADB command. On iOS, the underlying operation is an HTTP request to WebDriverAgent, typically GET http://<wdaHost>:<wdaPort>/session/<sessionId>/wda/screen.
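To make the shape of that underlying request concrete, here is a sketch that only assembles and prints the URL. The host, port, and session ID are placeholder values, not taken from a real WebDriverAgent session:

```shell
# Assemble the WDA endpoint URL; all three values below are placeholders.
WDA_HOST="127.0.0.1"
WDA_PORT="8100"
SESSION_ID="example-session"
URL="http://${WDA_HOST}:${WDA_PORT}/session/${SESSION_ID}/wda/screen"
echo "$URL"
```

A real invocation would issue an HTTP GET against that URL; the runwdarequest command handles host, port, and session management for you.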
npx @midscene/ios@1 take_screenshot
After taking a screenshot, read the saved image file to understand the current screen state before deciding the next action.
Use act to interact with the device and get the result. It autonomously handles all UI interactions internally — tapping, typing, scrolling, swiping, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe what you want to do and the desired effect in natural language:
# specific instructions
npx @midscene/ios@1 act --prompt "type hello world in the search field and press Enter"
npx @midscene/ios@1 act --prompt "tap Delete, then confirm in the alert dialog"
# or target-driven instructions
npx @midscene/ios@1 act --prompt "open Settings and navigate to Wi-Fi, tell me the connected network name"
npx @midscene/ios@1 disconnect
Since CLI commands are stateless between invocations, follow this pattern:
- Use act to perform the desired action or target-driven instructions.
- Describe on-screen targets specifically: "the Settings icon in the top-right corner" instead of "the icon"; similarly, "the search icon at the top right" or "the third item in the list".
- Prefer a single combined act command: when performing consecutive operations within the same app, combine them into one act prompt instead of splitting them into separate commands. For example, "open Settings, tap Wi-Fi, and check the connected network" should be a single act call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.

Example — Alert dialog interaction:
npx @midscene/ios@1 act --prompt "tap the Delete button and confirm in the alert dialog"
npx @midscene/ios@1 take_screenshot
Example — Form interaction:
npx @midscene/ios@1 act --prompt "fill in the username field with 'testuser' and the password field with 'pass123', then tap the Login button"
npx @midscene/ios@1 take_screenshot
Symptom: Connection refused or timeout errors. Solution:
Symptom: No device detected or connection errors. Solution:
Symptom: Authentication or model errors. Solution: verify that the .env file contains MIDSCENE_MODEL_API_KEY=<your-key>.

Weekly Installs: 422
Repository: web-infra-dev/midscene-skills
GitHub Stars: 137
First Seen: Mar 6, 2026
Security Audits: Gen Agent Trust Hub: Pass; Socket: Pass; Snyk: Fail
Installed on: openclaw (262), codex (199), cursor (197), opencode (197), cline (195), gemini-cli (194)