npx skills add https://github.com/agentlyhq/skills --skill google-drive-knowledge-bank
Transform a folder of Google Docs meeting notes into a queryable knowledge bank. This skill ingests meeting notes, parses them into a structured format, and stores them for fast, accurate retrieval when answering questions.
Pull meeting notes from Google Drive, parse their content, and store it as structured data.
Search the stored knowledge bank to answer questions accurately, always grounding responses in the actual meeting content.
This skill requires the Google Workspace CLI (gws):
# Check if installed
gws --version
# If not installed
npm install -g @googleworkspace/cli
# Authenticate
gws auth login
# Verify access
gws drive files list --max-results 5
Ask the user for their meeting notes folder. They can provide either a folder name or a folder URL.
Find folder by name:
gws drive files search \
--query "name = 'Meeting Notes' and mimeType = 'application/vnd.google-apps.folder'" \
--format json
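If the search succeeds, the folder ID can be pulled out with jq. The JSON below is a hypothetical example consistent with the `files` array shape this skill assumes elsewhere; verify against your gws version's actual output:

```shell
# Hypothetical search result; the real gws output shape may differ.
cat > /tmp/folder_search.json <<'EOF'
{"files": [{"id": "abc123xyz", "name": "Meeting Notes"}]}
EOF

# Take the first matching folder's ID for the listing/export steps below.
FOLDER_ID=$(jq -r '.files[0].id' /tmp/folder_search.json)
echo "Folder ID: $FOLDER_ID"
```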
Extract the folder ID from a URL. URL format: https://drive.google.com/drive/folders/FOLDER_ID
# Get all Google Docs in the folder
gws drive files list \
--parent-id "FOLDER_ID" \
--query "mimeType = 'application/vnd.google-apps.document'" \
--format json > /tmp/meeting_files_list.json
# Parse to see what we got
jq -r '.files[] | "\(.name) | \(.id) | \(.modifiedTime)"' /tmp/meeting_files_list.json
# Create knowledge bank directory
mkdir -p /tmp/knowledge_bank
# Parse file list and export each document
FILE_IDS=($(cat /tmp/meeting_files_list.json | jq -r '.files[].id'))
for file_id in "${FILE_IDS[@]}"; do
# Get file metadata
FILE_NAME=$(cat /tmp/meeting_files_list.json | jq -r ".files[] | select(.id == \"$file_id\") | .name")
MODIFIED=$(cat /tmp/meeting_files_list.json | jq -r ".files[] | select(.id == \"$file_id\") | .modifiedTime")
# Export as plain text
echo "Ingesting: $FILE_NAME"
gws drive files export \
--file-id "$file_id" \
--mime-type "text/plain" \
--output-file "/tmp/knowledge_bank/${file_id}.txt"
# Store metadata
echo "$file_id|$FILE_NAME|$MODIFIED" >> /tmp/knowledge_bank/metadata.txt
done
echo "✓ Ingested ${#FILE_IDS[@]} meeting notes"
For each meeting note, extract structured information:
# Create parsed knowledge base
mkdir -p /tmp/knowledge_bank/parsed
for file in /tmp/knowledge_bank/*.txt; do
if [ "$file" = "/tmp/knowledge_bank/metadata.txt" ]; then
continue
fi
FILE_ID=$(basename "$file" .txt)
# Read full content
CONTENT=$(cat "$file")
# Parse metadata from content (filename pattern: "Meeting Type - YYYY-MM-DD")
METADATA=$(grep "^${FILE_ID}|" /tmp/knowledge_bank/metadata.txt | tail -1)
FILE_NAME=$(echo "$METADATA" | cut -d'|' -f2)
MODIFIED=$(echo "$METADATA" | cut -d'|' -f3)
# Extract meeting date from filename if present
MEETING_DATE=$(echo "$FILE_NAME" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}' || echo "unknown")
# Create structured JSON entry
cat > "/tmp/knowledge_bank/parsed/${FILE_ID}.json" <<EOF
{
"file_id": "$FILE_ID",
"filename": "$FILE_NAME",
"meeting_date": "$MEETING_DATE",
"modified_time": "$MODIFIED",
"content": $(echo "$CONTENT" | jq -Rs .),
"word_count": $(echo "$CONTENT" | wc -w),
"indexed_at": "$(date --iso-8601=seconds)"
}
EOF
done
echo "✓ Parsed and structured all meeting notes"
Build a searchable index with keywords:
# Create index file
echo "# Meeting Notes Knowledge Bank Index" > /tmp/knowledge_bank/INDEX.md
echo "Generated: $(date)" >> /tmp/knowledge_bank/INDEX.md
echo "" >> /tmp/knowledge_bank/INDEX.md
# List all meetings
echo "## Meetings Indexed" >> /tmp/knowledge_bank/INDEX.md
echo "" >> /tmp/knowledge_bank/INDEX.md
for json_file in /tmp/knowledge_bank/parsed/*.json; do
FILE_NAME=$(jq -r '.filename' "$json_file")
MEETING_DATE=$(jq -r '.meeting_date' "$json_file")
WORD_COUNT=$(jq -r '.word_count' "$json_file")
FILE_ID=$(jq -r '.file_id' "$json_file")
echo "- **$FILE_NAME** ($MEETING_DATE) - $WORD_COUNT words - ID: $FILE_ID" >> /tmp/knowledge_bank/INDEX.md
done
echo "" >> /tmp/knowledge_bank/INDEX.md
echo "Total meetings: $(ls /tmp/knowledge_bank/parsed/*.json | wc -l)" >> /tmp/knowledge_bank/INDEX.md
cat /tmp/knowledge_bank/INDEX.md
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "✓ KNOWLEDGE BANK READY"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Location: /tmp/knowledge_bank/"
echo "Meetings indexed: $(ls /tmp/knowledge_bank/parsed/*.json | wc -l)"
echo "Ready to answer questions!"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
Tell the user: "I've ingested and indexed your meeting notes. You can now ask me questions about them!"
Always follow this workflow:
if [ ! -d "/tmp/knowledge_bank/parsed" ]; then
echo "ERROR: Knowledge bank not found. Please run ingestion first."
exit 1
fi
if [ "$(ls /tmp/knowledge_bank/parsed/*.json 2>/dev/null | wc -l)" -eq 0 ]; then
echo "ERROR: Knowledge bank is empty. Please ingest meeting notes first."
exit 1
fi
Search by keyword in content:
# User asks: "What did we decide about API redesign?"
KEYWORD="API redesign"
# Search all parsed notes
grep -l "$KEYWORD" /tmp/knowledge_bank/*.txt 2>/dev/null | while read file; do
FILE_ID=$(basename "$file" .txt)
if [ -f "/tmp/knowledge_bank/parsed/${FILE_ID}.json" ]; then
FILE_NAME=$(jq -r '.filename' "/tmp/knowledge_bank/parsed/${FILE_ID}.json")
MEETING_DATE=$(jq -r '.meeting_date' "/tmp/knowledge_bank/parsed/${FILE_ID}.json")
echo "Found in: $FILE_NAME ($MEETING_DATE) - ID: $FILE_ID"
fi
done
Search by meeting date:
# User asks: "What happened in the March 1st meeting?"
TARGET_DATE="2026-03-01"
for json_file in /tmp/knowledge_bank/parsed/*.json; do
MEETING_DATE=$(jq -r '.meeting_date' "$json_file")
if [ "$MEETING_DATE" = "$TARGET_DATE" ]; then
FILE_NAME=$(jq -r '.filename' "$json_file")
FILE_ID=$(jq -r '.file_id' "$json_file")
echo "Found: $FILE_NAME - ID: $FILE_ID"
fi
done
Search by meeting type:
# User asks: "Show me all engineering syncs"
MEETING_TYPE="Engineering Sync"
for json_file in /tmp/knowledge_bank/parsed/*.json; do
FILE_NAME=$(jq -r '.filename' "$json_file")
if echo "$FILE_NAME" | grep -q "$MEETING_TYPE"; then
MEETING_DATE=$(jq -r '.meeting_date' "$json_file")
FILE_ID=$(jq -r '.file_id' "$json_file")
echo "Found: $FILE_NAME ($MEETING_DATE) - ID: $FILE_ID"
fi
done
Once you've identified relevant meetings, read their FULL content:
# Read from parsed JSON
CONTENT=$(jq -r '.content' "/tmp/knowledge_bank/parsed/${FILE_ID}.json")
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Reading: $(jq -r '.filename' "/tmp/knowledge_bank/parsed/${FILE_ID}.json")"
echo "Date: $(jq -r '.meeting_date' "/tmp/knowledge_bank/parsed/${FILE_ID}.json")"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "$CONTENT"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
CRITICAL RULES:
Response Format:
**Answer**: [Direct answer from meeting content]
**Source**: [Meeting Name] - [Meeting Date]
**Full Context**: [Relevant excerpt from notes]
**File ID**: [file_id for reference]
If multiple meetings discuss the topic:
# Find all relevant meetings
RELEVANT_IDS=()
for file in /tmp/knowledge_bank/*.txt; do
if grep -q "$KEYWORD" "$file"; then
FILE_ID=$(basename "$file" .txt)
RELEVANT_IDS+=("$FILE_ID")
fi
done
echo "Found ${#RELEVANT_IDS[@]} meetings discussing '$KEYWORD'"
# Read and synthesize
for file_id in "${RELEVANT_IDS[@]}"; do
FILE_NAME=$(jq -r '.filename' "/tmp/knowledge_bank/parsed/${file_id}.json")
MEETING_DATE=$(jq -r '.meeting_date' "/tmp/knowledge_bank/parsed/${file_id}.json")
CONTENT=$(jq -r '.content' "/tmp/knowledge_bank/parsed/${file_id}.json")
echo "═══════════════════════════════════════════════"
echo "$FILE_NAME - $MEETING_DATE"
echo "═══════════════════════════════════════════════"
echo "$CONTENT"
echo ""
done
Then synthesize across all meetings chronologically.
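One way to get that chronological order is to sort on the `meeting_date` field before reading each note, since ISO dates sort correctly as plain text. A minimal sketch, using the hypothetical dates from the example later in this document; in practice the lines would come from `jq -r '"\(.meeting_date) \(.file_id)"' /tmp/knowledge_bank/parsed/*.json`:

```shell
# ISO dates (YYYY-MM-DD) sort chronologically under plain lexical sort.
printf '%s\n' \
  '2026-03-01 All Hands' \
  '2026-02-15 Engineering Sync' \
  '2026-02-20 Product Planning' | sort
```

The earliest meeting comes out first, so a loop over the sorted list reads the discussion in the order it happened.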
User: "What was decided about pricing?"
Workflow:
1. Search for "pricing" in all meeting notes
2. Find matching meetings
3. Read full content
4. Extract decision statements
5. Cite source meeting(s)
User: "How did the API discussion evolve?"
Workflow:
1. Search for "API" across all notes
2. Sort matches by meeting date
3. Read each in chronological order
4. Show progression of discussion
5. Cite each meeting in timeline
User: "What are my action items?"
Workflow:
1. Search for "action" OR "@" OR "TODO" across notes
2. Extract lines mentioning user's name
3. Group by meeting
4. List with source and date
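The OR search in step 1 can be a single extended-regex grep. A sketch against a hypothetical note file; in practice, run the same grep over /tmp/knowledge_bank/*.txt:

```shell
# Hypothetical note for demonstration only.
cat > /tmp/demo_note.txt <<'EOF'
Action items:
- @alice: draft the pricing proposal
- TODO: schedule the follow-up
EOF

# -i: case-insensitive, -n: line numbers, -E: extended regex for the OR.
grep -inE 'action|@|TODO' /tmp/demo_note.txt
```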
User: "Summarize discussions about hiring"
Workflow:
1. Search for "hiring" OR "recruitment" across notes
2. Read all matching meetings
3. Synthesize key points
4. Cite each source
When new meetings are added:
# Re-run ingestion to pull new notes
# It will overwrite existing files with same ID
# New files will be added
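A sketch of what that incremental behavior can look like: skip any file whose export already exists, so only new notes are fetched. The IDs below are hypothetical; in the real flow, `FILE_IDS` comes from the listing step.

```shell
# Hypothetical IDs; in practice:
#   FILE_IDS=($(jq -r '.files[].id' /tmp/meeting_files_list.json))
FILE_IDS=(file123 file456)

mkdir -p /tmp/knowledge_bank
touch /tmp/knowledge_bank/file123.txt   # pretend file123 was ingested earlier

for file_id in "${FILE_IDS[@]}"; do
  if [ -f "/tmp/knowledge_bank/${file_id}.txt" ]; then
    echo "Up to date: $file_id"
    continue
  fi
  echo "Would export: $file_id"
  # gws drive files export --file-id "$file_id" ... (as in the ingestion step)
done
```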
echo "Knowledge Bank Status:"
echo "Location: /tmp/knowledge_bank/"
echo "Total meetings: $(ls /tmp/knowledge_bank/parsed/*.json 2>/dev/null | wc -l)"
echo "Last updated: $(stat -c %y /tmp/knowledge_bank/INDEX.md 2>/dev/null)"
echo ""
echo "Recent meetings:"
ls -t /tmp/knowledge_bank/parsed/*.json | head -5 | while read file; do
jq -r '"\(.filename) - \(.meeting_date)"' "$file"
done
# Show most common keywords
cat /tmp/knowledge_bank/*.txt | tr '[:space:]' '\n' | \
tr '[:upper:]' '[:lower:]' | \
grep -v '^$' | \
sort | uniq -c | sort -rn | head -20
if [ ! -d "/tmp/knowledge_bank" ]; then
echo "⚠ Knowledge bank not initialized."
echo "Please run: 'ingest my meeting notes from [folder]'"
exit 1
fi
if [ ${#RELEVANT_IDS[@]} -eq 0 ]; then
echo "No meetings found discussing '$KEYWORD'"
echo ""
echo "Try:"
echo " - Alternative keywords"
echo " - Broader search terms"
echo " - Checking meeting date range"
echo ""
echo "Available meetings:"
grep "^- " /tmp/knowledge_bank/INDEX.md
fi
"I found partial information about X in the [date] meeting, but the discussion appears incomplete. The notes mention [what's there] but don't include [what's missing]."
Before answering any question, run the knowledge-bank validation checks shown above.
# ═══════════════════════════════════════════════════
# PHASE 1: INGESTION
# ═══════════════════════════════════════════════════
User: "Ingest my meeting notes from the 'Team Meetings' folder"
Claude:
$ gws drive files search --query "name = 'Team Meetings' and mimeType = 'application/vnd.google-apps.folder'" --format json
# Found folder ID: abc123xyz
$ gws drive files list --parent-id "abc123xyz" --query "mimeType = 'application/vnd.google-apps.document'" --format json > /tmp/meeting_files_list.json
# Found 23 meeting notes
$ mkdir -p /tmp/knowledge_bank/parsed
# Exporting all 23 documents...
# [Shows progress]
✓ Knowledge bank built with 23 meeting notes
✓ Ready to answer questions!
# ═══════════════════════════════════════════════════
# PHASE 2: QUERYING
# ═══════════════════════════════════════════════════
User: "What was decided about the Q2 roadmap?"
Claude:
$ grep -l "Q2 roadmap" /tmp/knowledge_bank/*.txt
# Found in 3 meetings
$ # Reading Engineering Sync - 2026-02-15
$ jq -r '.content' /tmp/knowledge_bank/parsed/file123.json
$ # Reading Product Planning - 2026-02-20
$ jq -r '.content' /tmp/knowledge_bank/parsed/file456.json
$ # Reading All Hands - 2026-03-01
$ jq -r '.content' /tmp/knowledge_bank/parsed/file789.json
**Answer**: The Q2 roadmap was finalized with three priorities:
1. Mobile app redesign (priority #1)
2. API v2 launch (priority #2)
3. Analytics dashboard (moved to Q3)
**Sources**:
- Engineering Sync - February 15, 2026
- Product Planning - February 20, 2026
- All Hands - March 1, 2026
**Timeline**:
- Feb 15: Initial proposal discussed
- Feb 20: Priorities debated, mobile chosen as #1
- Mar 1: Final roadmap announced to company
**Key Decision**: Analytics dashboard deprioritized due to
resource constraints (noted in Feb 20 meeting).
Knowledge bank stored at: /tmp/knowledge_bank/
Structure:
/tmp/knowledge_bank/
├── metadata.txt # File ID to name mapping
├── INDEX.md # Human-readable index
├── *.txt # Raw exported content
└── parsed/
└── *.json # Structured parsed notes
Note: This lives in /tmp/ for session-scoped storage. For persistent storage across sessions, use a different location or implement saving/loading.
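A minimal save/load sketch along those lines; the `~/.knowledge_bank` location is an assumption, not part of the skill:

```shell
# Persistent copy location (hypothetical; pick whatever suits your setup).
BANK_DIR="${KNOWLEDGE_BANK_DIR:-$HOME/.knowledge_bank}"

# Save: copy the session bank to the persistent location, if it exists.
mkdir -p "$BANK_DIR"
if [ -d /tmp/knowledge_bank ]; then
  cp -r /tmp/knowledge_bank/. "$BANK_DIR"/
fi

# Load (in a later session): restore the bank into /tmp before querying.
# mkdir -p /tmp/knowledge_bank && cp -r "$BANK_DIR"/. /tmp/knowledge_bank/
echo "Persistent bank at: $BANK_DIR"
```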
🎯 Core Principle: This skill is a two-phase system: ingest once, then query the stored bank.
The knowledge bank is your source of truth. Always read from it, never make things up.