stream-transcript-processor by aj47/agent-skills
npx skills add https://github.com/aj47/agent-skills --skill stream-transcript-processor
Process @techfren stream transcripts into actionable content assets: clips, script notes, X posts, YouTube metadata, and analytics.
This skill orchestrates existing skills and adds new capabilities:
Activate when the user requests any of the following:
| User Request | Action |
|---|---|
| "Process this stream [URL]" | Full pipeline: fetch → analyze → clips → posts → YouTube metadata |
| "Find clips from [transcript]" | Delegate to clipper skill |
| "Generate script notes for [timestamp]" | Create recording notes |
| "Draft X posts from [stream]" | Delegate to x-post-extractor or use draft script |
| "Generate YouTube description" | Create title, description with timestamps, tags |
| "Analyze this stream" | Content analytics report |
When the user provides a stream URL:
Phase 1: Fetch Transcript
python .claude/skills/stream-transcript-processor/scripts/fetch_transcript.py "<URL>" transcript.txt
This creates:
- transcript.txt - Timestamped text file
- transcript.json - Structured JSON for processing
Phase 2: Parse for Clipper Compatibility
python .claude/skills/clipper/scripts/parse_transcription.py transcript.json > parsed.json
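The parse step above converts the fetched captions into structured entries. A minimal sketch of that conversion, assuming caption lines of the form `[H:MM:SS] text` (the actual format consumed by parse_transcription.py may differ):

```python
import re

# Assumed caption-line format: "[0:06:12] some text".
LINE_RE = re.compile(r"\[(\d+):(\d{2}):(\d{2})\]\s*(.*)")

def parse_line(line):
    """Turn one timestamped caption line into a structured entry, or None."""
    m = LINE_RE.match(line.strip())
    if m is None:
        return None
    h, mnt, s, text = m.groups()
    seconds = int(h) * 3600 + int(mnt) * 60 + int(s)
    return {"start": seconds, "text": text}

def parse_transcript(lines):
    """Keep only the lines that parse as timestamped captions."""
    return [e for e in (parse_line(l) for l in lines) if e is not None]
```

Non-caption lines (ads, headers) are silently dropped, which keeps downstream tools from choking on noise.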
Phase 3: Find Clips (Use Clipper Skill)
Follow the clipper skill workflow:
- Analyze parsed.json in windows
- Write the identified clips to segments.json
Phase 4: Generate Content Assets
For each high-priority clip:
Script Notes:
python .claude/skills/stream-transcript-processor/scripts/generate_script_notes.py parsed.json <start_ts> <end_ts>
X Post Drafts:
python .claude/skills/stream-transcript-processor/scripts/draft_x_posts.py parsed.json <timestamp> --style discovery
Phase 5: Fetch Media (Optional)
Use media-fetcher for screenshots of tools mentioned:
python .claude/skills/media-fetcher/scripts/capture_screenshot.py "<url>" --output media/
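Taken together, Phases 1–5 can be driven by a small wrapper that builds each command before running it. This is an illustrative sketch, not the skill's actual entry point; only the script paths come from this document (Phase 3 and the Phase 4/5 timestamps depend on analysis output, so they are omitted):

```python
import subprocess

SKILLS = ".claude/skills"

def build_pipeline(url):
    """Build the shell commands for Phases 1 and 2 as argument lists."""
    return [
        ["python", f"{SKILLS}/stream-transcript-processor/scripts/fetch_transcript.py",
         url, "transcript.txt"],
        ["python", f"{SKILLS}/clipper/scripts/parse_transcription.py",
         "transcript.json"],
    ]

def run_pipeline(url):
    """Run Phase 1, then Phase 2 with its stdout redirected to parsed.json."""
    fetch, parse = build_pipeline(url)
    subprocess.run(fetch, check=True)  # stop immediately if fetching fails
    # Phase 2 writes parsed.json via stdout redirection ("> parsed.json").
    with open("parsed.json", "wb") as out:
        subprocess.run(parse, check=True, stdout=out)
```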
If parsed.json or out.json already exists, reuse it rather than re-running the earlier phases.
When the user asks for script notes for a specific segment:
python .claude/skills/stream-transcript-processor/scripts/generate_script_notes.py \
<transcript_file> <start_timestamp> <end_timestamp> \
--output script_notes.md
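Internally, selecting the segment amounts to picking the transcript entries inside the start/end window. A sketch of that selection, assuming entries of the form `{"start": seconds, "text": ...}` (the real generate_script_notes.py may represent them differently):

```python
def to_seconds(ts):
    """Accept 'SS', 'MM:SS', or 'H:MM:SS' timestamp strings."""
    secs = 0
    for part in ts.split(":"):
        secs = secs * 60 + int(part)
    return secs

def entries_in_window(entries, start, end):
    """Select transcript entries whose start time falls in [start, end)."""
    return [e for e in entries if start <= e["start"] < end]
```

With this, "the segment at 1:23:45" becomes `entries_in_window(entries, to_seconds("1:23:45") - 30, to_seconds("1:23:45") + 30)` for a 60-second window.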
Output format (sentence starters for recording):
## HOOK
- [Bold claim or result]...
- This is [Tool] and it...
## VALUE STACK
- Free, open source...
- Runs locally...
## DEMO
- Let's try it...
- [React naturally]
## CLOSE
- [Quick recap]...
- Try it out...
When the user asks to draft X posts from a stream:
Option 1: Use x-post-extractor skill (recommended for full workflow)
Follow x-post-extractor SKILL.md for comprehensive extraction.
Option 2: Quick draft from timestamp
python .claude/skills/stream-transcript-processor/scripts/draft_x_posts.py \
<transcript_file> <timestamp> --style <style>
Styles: discovery, result, comparison, tutorial
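The four styles can be thought of as templates filled in from the transcript excerpt. The templates below are purely hypothetical stand-ins; the real draft_x_posts.py styles presumably read quite differently:

```python
# Hypothetical style templates -- illustrative only.
STYLES = {
    "discovery": "Just found {tool} on stream -- {takeaway}",
    "result": "{tool} result from today's stream: {takeaway}",
    "comparison": "{tool} vs the alternatives: {takeaway}",
    "tutorial": "How I set up {tool} live: {takeaway}",
}

def draft_post(style, tool, takeaway):
    """Render a one-line post draft for a known style; reject unknown styles."""
    if style not in STYLES:
        raise ValueError(f"unknown style: {style}")
    return STYLES[style].format(tool=tool, takeaway=takeaway)
```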
Generate SEO-optimized YouTube title, description with timestamps, and tags:
python .claude/skills/stream-transcript-processor/scripts/generate_youtube_metadata.py \
<transcript_file> [--title "Custom Title"] [--output youtube_metadata.md]
Output includes the title, the timestamped description, and the tags. Example output:
## Title
Claude Haiku Benchmark: New Record! | Live Testing
## Description
🎯 Claude Haiku Benchmark: New Record! | Live Testing
In this stream, I test and demo various AI tools and share my findings live. Watch to see real-time benchmarks, comparisons, and honest reactions.
⏱️ TIMESTAMPS
0:00 - Intro
6:12 - Setting up Playwright MCP
11:05 - Claude Code $200 plan discussion
47:35 - Testing Playrider MCP
1:01:31 - Playrider speed results
1:04:29 - Haiku 43 second record!
🔗 TOOLS MENTIONED
• Claude: https://claude.ai
• Playwright: https://playwright.dev
• Augment: https://augmentcode.com
📱 CONNECT
• Twitter/X: https://x.com/techfren
• GitHub: https://github.com/aj47
• Website: https://techfren.net
#AI #Coding #Programming #Claude #Benchmark #TechStream
## Tags
claude, haiku, benchmark, ai coding, playwright mcp, live coding, ai tools, programming, developer tools, automation
See youtube-description-template.md for full template and SEO guidelines.
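The TIMESTAMPS block in the example above is mechanical to produce from identified segments. A sketch, assuming segments arrive as `(start_seconds, label)` pairs (how generate_youtube_metadata.py actually receives them is not specified here):

```python
def fmt_ts(seconds):
    """Format seconds as YouTube-style 'M:SS' or 'H:MM:SS'."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"

def timestamps_block(segments):
    """Render a sorted chapter list for a YouTube description."""
    lines = ["⏱️ TIMESTAMPS"]
    for start, label in sorted(segments):
        lines.append(f"{fmt_ts(start)} - {label}")
    return "\n".join(lines)
```

Sorting by start time matters: YouTube only recognizes chapters when the list is ascending and begins at 0:00.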
When the user asks to "analyze" a stream, generate insights:
# Stream Analytics: [Title/Date]
## Overview
- Duration: X hours
- Total transcript entries: N
- High-energy moments detected: N
## Topics Covered
| Topic | Mentions | Peak Timestamp |
|-------|----------|----------------|
| Cursor | 15 | 01:23:45 |
| Claude | 12 | 02:15:30 |
| ...
## Energy Peaks
Timestamps with highest concentration of reaction words:
1. [01:23:45] "Dude, this is insane..." (Score: 12)
2. [02:15:30] "Whoa, that actually worked..." (Score: 10)
3. ...
## Clip Potential
- High priority clips: N
- Medium priority clips: N
- Estimated short-form content: N minutes
## Value Props Mentioned
- Free: X times
- Open source: X times
- Local/offline: X times
## Recommended Extractions
1. [Timestamp] - [Brief description] - **Clip + X Post**
2. [Timestamp] - [Brief description] - **Script Notes**
3. ...
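The "Energy Peaks" section of the report scores each entry by its concentration of reaction words. A sketch of that scoring; the reaction vocabulary here is an assumption (the real list lives in voice-patterns.md):

```python
# Assumed reaction vocabulary -- see voice-patterns.md for the real signals.
REACTION_WORDS = {"insane", "whoa", "dude", "wow", "crazy", "unbelievable"}

def energy_score(text):
    """Count reaction words in one transcript entry."""
    cleaned = text.lower().replace(",", " ").replace("!", " ").replace(".", " ")
    return sum(1 for w in cleaned.split() if w in REACTION_WORDS)

def top_peaks(entries, n=3):
    """Return up to n entries with the highest nonzero energy scores."""
    scored = sorted(entries, key=lambda e: energy_score(e["text"]), reverse=True)
    return [e for e in scored[:n] if energy_score(e["text"]) > 0]
```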
For full clip extraction, delegate to clipper:
# Claude internally:
# 1. Ensure parsed.json exists
# 2. Follow clipper SKILL.md workflow
# 3. Create segments.json
# 4. Extract clips
The clipper skill handles the window analysis, segments.json creation, and clip extraction steps above.
For comprehensive X post extraction:
# Claude internally:
# 1. Ensure parsed.json exists
# 2. Follow x-post-extractor SKILL.md workflow
# 3. Create x_posts_output/ folder
# 4. Include media capture
The x-post-extractor skill handles the post extraction, the x_posts_output/ folder, and the media capture steps above.
For screenshots and supplementary media:
# Screenshots of tools/websites
python .claude/skills/media-fetcher/scripts/capture_screenshot.py "<url>"
# Fetch logos
python .claude/skills/media-fetcher/scripts/fetch_logos.py "<domain>"
# Extract video clips
ffmpeg -ss <start> -i "<video>" -t <duration> clip.mp4
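When Claude assembles the ffmpeg invocation above programmatically, building it as an argument list avoids shell-quoting issues with titles and paths. A sketch:

```python
def ffmpeg_clip_cmd(video, start, duration, out):
    """Build the ffmpeg clip-extraction command as an argument list.

    Placing -ss before -i makes ffmpeg seek in the input before decoding,
    which is fast on long stream recordings (at the cost of keyframe-
    granularity seek accuracy).
    """
    return ["ffmpeg", "-ss", start, "-i", video, "-t", str(duration), out]
```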
When analyzing transcripts, look for these signals (see voice-patterns.md):
TOOLS = [
'cursor', 'claude', 'gpt', 'copilot', 'windsurf', 'augment',
'ollama', 'llama', 'mistral', 'groq', 'anthropic', 'openai'
]
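The TOOLS list above feeds the "Topics Covered" table. A minimal sketch of counting mentions across parsed entries (entry shape `{"text": ...}` is an assumption):

```python
from collections import Counter

# Tool names from this skill's signal list.
TOOLS = ['cursor', 'claude', 'gpt', 'copilot', 'windsurf', 'augment',
         'ollama', 'llama', 'mistral', 'groq', 'anthropic', 'openai']

def count_tool_mentions(entries):
    """Count case-insensitive tool mentions across transcript entries."""
    counts = Counter()
    for entry in entries:
        low = entry["text"].lower()
        for tool in TOOLS:
            if tool in low:
                counts[tool] += 1
    return counts
```

Note this is substring matching, so "llama" also fires on "ollama"; a production version would want word-boundary matching.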
From clip-criteria.md:
When running the full pipeline:
./
├── transcript.txt # Fetched auto-captions
├── transcript.json # Structured transcript
├── parsed.json # Clipper-compatible format
├── segments.json # Identified clips
├── youtube_metadata.md # Title, description, tags
├── clips/ # Extracted video clips
│ ├── 001_Tool_Demo.mp4
│ ├── 002_Discovery.mp4
│ └── cleaned/
│ └── ...
├── script_notes/ # Recording notes
│ ├── 01_tool_demo.md
│ └── 02_discovery.md
├── x_posts_output/ # X post drafts
│ ├── 01_tool.txt
│ ├── 02_discovery.txt
│ └── README.md
└── media/ # Screenshots, logos
├── cursor.com.png
└── github_tool.png
- yt-dlp - Download auto-captions (brew install yt-dlp)
- ffmpeg - Video processing (brew install ffmpeg)
- playwright - Browser automation (pip install playwright && playwright install chromium)

User: "Process this stream https://youtube.com/watch?v=abc123"
Claude:
1. Fetches transcript → transcript.txt
2. Parses for clipper → parsed.json
3. Analyzes and finds 15 clips
4. Generates script notes for top 5
5. Drafts 3 X posts
6. Captures screenshots for tools mentioned
7. Reports summary with links to all outputs
User: "Generate script notes for the segment at 1:23:45"
Claude:
1. Loads transcript
2. Extracts 60-second window around timestamp
3. Generates HOOK/VALUE/DEMO/CLOSE structure
4. Outputs to terminal or saves to file
User: "Analyze this stream for clip potential"
Claude:
1. Scans transcript for voice patterns
2. Counts topics, energy peaks, value props
3. Identifies top clip candidates
4. Generates analytics report