axiom-swift-performance by charleswiltgen/axiom
npx skills add https://github.com/charleswiltgen/axiom --skill axiom-swift-performance

Core Principle: Optimize Swift code by understanding language-level performance characteristics—value semantics, ARC behavior, generic specialization, and memory layout—to write fast, efficient code without premature micro-optimization.
Swift Version: Swift 6.2+ (for InlineArray, Span, @concurrent) Xcode: 16+ Platforms: iOS 18+, macOS 15+
Related Skills:
axiom-performance-profiling — Use Instruments to measure (do this first!)
axiom-swiftui-performance — SwiftUI-specific optimizations
axiom-build-performance — Compilation speed
axiom-swift-concurrency — Correctness-focused concurrency patterns
Performance issue identified?
│
├─ Profiler shows excessive copying?
│ └─ → Part 1: Noncopyable Types
│ └─ → Part 2: Copy-on-Write
│
├─ Retain/release overhead in Time Profiler?
│ └─ → Part 4: ARC Optimization
│
├─ Generic code in hot path?
│ └─ → Part 5: Generics & Specialization
│
├─ Collection operations slow?
│ └─ → Part 7: Collection Performance
│
├─ Async/await overhead visible?
│ └─ → Part 8: Concurrency Performance
│
├─ Struct vs class decision?
│ └─ → Part 3: Value vs Reference
│
└─ Memory layout concerns?
└─ → Part 9: Memory Layout
From WWDC 2024-10217: Swift's low-level performance characteristics come down to four areas. Each maps to a Part in this skill.
| Principle | What It Costs | Skill Coverage |
|---|---|---|
| Function calls | Dispatch overhead, optimization barriers | Part 5 (Generics), Part 6 (Inlining) |
| Memory allocation | Stack vs heap, allocation frequency | Part 3 (Value vs Reference), Part 7 (Collections) |
| Memory layout | Cache locality, padding, contiguity | Part 9 (Memory Layout), Part 11 (Span) |
| Value copying | COW triggers, defensive copies, ARC traffic | Part 1 (Noncopyable), Part 2 (COW), Part 4 (ARC) |
Understanding which principle is causing your bottleneck determines which Part to use.
Swift 6.0+ introduces noncopyable types for performance-critical scenarios where you want to avoid implicit copies.
// Noncopyable type
struct FileHandle: ~Copyable {
private let fd: Int32
init(path: String) throws {
self.fd = open(path, O_RDONLY)
guard fd != -1 else { throw FileError.openFailed }
}
deinit {
close(fd)
}
// Must explicitly consume
consuming func close() {
_ = consume self
}
}
// Usage
func processFile() throws {
let handle = try FileHandle(path: "/data.txt")
// handle is automatically consumed at end of scope
// Cannot accidentally copy handle
}
// consuming - takes ownership; the caller cannot use the value afterward
func process(consuming data: [UInt8]) {
// data is consumed
}
// borrowing - temporary access without ownership
func validate(borrowing data: [UInt8]) -> Bool {
// data can still be used by caller
return data.count > 0
}
// inout - mutable access (note: inout goes after the label, not before it)
func modify(data: inout [UInt8]) {
data.append(0)
}
Swift collections use COW for efficient memory sharing. Understanding when copies happen is critical for performance.
var array1 = [1, 2, 3] // Single allocation
var array2 = array1 // Share storage (no copy)
array2.append(4) // Now copies (storage was shared with array1)
For a custom COW implementation, see Copy-Paste Pattern 1 (COW Wrapper) below.
// ❌ Accidental copy in loop
for i in 0..<array.count {
array[i] = transform(array[i]) // Copy on first mutation if shared!
}
// ✅ Reserve capacity first (ensures unique storage)
array.reserveCapacity(array.count)
for i in 0..<array.count {
array[i] = transform(array[i])
}
// ❌ Multiple mutations trigger multiple uniqueness checks
array.append(1)
array.append(2)
array.append(3)
// ✅ Single reservation
array.reserveCapacity(array.count + 3)
array.append(contentsOf: [1, 2, 3])
From WWDC 2024-10217: Swift sometimes inserts defensive copies when it cannot prove a value won't be mutated through a shared reference.
class DataStore {
var items: [Item] = [] // COW type stored in class
}
func process(_ store: DataStore) {
for item in store.items {
// Swift may defensively copy `items` because:
// 1. store.items is a class property (another reference could mutate it)
// 2. The loop needs a stable snapshot
handle(item)
}
}
How to avoid: Copy to a local variable first — one explicit copy instead of repeated defensive copies:
func process(_ store: DataStore) {
let items = store.items // One copy
for item in items {
handle(item) // No more defensive copies
}
}
In the profiler: Defensive copies appear as unexpected swift_retain/swift_release pairs or Array.__allocating_init calls where you didn't expect allocation.
Choosing between struct and class has significant performance implications.
| Factor | Use Struct | Use Class |
|---|---|---|
| Size | ≤ 64 bytes | > 64 bytes or contains large data |
| Identity | No identity needed | Needs identity (===) |
| Inheritance | Not needed | Inheritance required |
| Mutation | Infrequent | Frequent in-place updates |
| Sharing | No sharing needed | Must be shared across scopes |
// ✅ Fast - fits in registers, no heap allocation
struct Point {
var x: Double // 8 bytes
var y: Double // 8 bytes
} // Total: 16 bytes - excellent for struct
struct Color {
var r, g, b, a: UInt8 // 4 bytes total - perfect for struct
}
// ❌ Slow - excessive copying
struct HugeData {
var buffer: [UInt8] // 1MB
var metadata: String
}
func process(_ data: HugeData) { // Copies 1MB!
// ...
}
// ✅ Use reference semantics for large data
final class HugeData {
var buffer: [UInt8]
var metadata: String
}
func process(_ data: HugeData) { // Only copies pointer (8 bytes)
// ...
}
For large data that needs value semantics externally with reference storage internally, use the COW Wrapper pattern — see Copy-Paste Pattern 1 below.
Automatic Reference Counting adds overhead. Minimize it where possible.
class Parent {
var child: Child?
}
class Child {
// ❌ Weak adds overhead (optional, thread-safe zeroing)
weak var parent: Parent?
}
// ✅ Use unowned when you know the lifetime guarantees
class Child {
    unowned let parent: Parent // No atomic zeroing overhead; traps if parent deallocated
}
Performance: unowned is ~2x faster than weak (no atomic operations).
Use when: the child's lifetime is guaranteed to be shorter than the parent's.
class DataProcessor {
var data: [Int]
// ❌ Captures self strongly, then uses weak - unnecessary weak overhead
func process(completion: @escaping () -> Void) {
DispatchQueue.global().async { [weak self] in
guard let self else { return }
self.data.forEach { print($0) }
completion()
}
}
// ✅ Capture only what you need
func process(completion: @escaping () -> Void) {
let data = self.data // Copy value type
DispatchQueue.global().async {
data.forEach { print($0) } // No self captured
completion()
}
}
}
From WWDC 2024-10217: Closures have different performance profiles depending on whether they escape.
// Non-escaping closure — stack-allocated context, zero ARC overhead
func processItems(_ items: [Item], using transform: (Item) -> Result) -> [Result] {
items.map(transform) // Closure context lives on stack
}
// Escaping closure — heap-allocated context, ARC on every captured reference
func processItemsLater(_ items: [Item], transform: @escaping (Item) -> Result) {
// Closure context heap-allocated as anonymous class instance
// Each captured reference gets retain/release
self.pending = { items.map(transform) }
}
Why this matters: @Sendable closures are always escaping, meaning every Task closure heap-allocates its capture context.
In hot paths: Prefer non-escaping closures. If you see swift_allocObject for closure contexts in Time Profiler, look for escaping closures that could be made non-escaping.
From WWDC 2021-10216: Object lifetimes end at last use, not at the closing brace.
// ❌ Relying on observed lifetime is fragile
class Traveler {
weak var account: Account?
deinit {
print("Deinitialized") // May run BEFORE expected with ARC optimizations!
}
}
func test() {
let traveler = Traveler()
let account = Account(traveler: traveler)
// traveler's last use is above - may deallocate here!
account.printSummary() // weak reference may be nil!
}
// ✅ Explicitly extend lifetime when needed
func test() {
let traveler = Traveler()
let account = Account(traveler: traveler)
withExtendedLifetime(traveler) {
account.printSummary() // traveler guaranteed to live
}
}
Object lifetimes can change between Xcode versions, between Debug and Release, and with unrelated code changes. Enable "Optimize Object Lifetimes" (Xcode 13+) during development to expose hidden lifetime bugs early.
Generic code can be fast or slow depending on specialization.
// Generic function
func process<T>(_ value: T) {
print(value)
}
// Calling with concrete type
process(42) // Compiler specializes: process_Int(42)
process("hello") // Compiler specializes: process_String("hello")
protocol Drawable {
func draw()
}
// ❌ Existential container - expensive (heap allocation, indirection)
func drawAll(shapes: [any Drawable]) {
for shape in shapes {
shape.draw() // Dynamic dispatch through witness table
}
}
// ✅ Generic with constraint - can specialize
func drawAll<T: Drawable>(shapes: [T]) {
for shape in shapes {
shape.draw() // Static dispatch after specialization
}
}
Performance: the generic version is ~10x faster (eliminates witness table overhead).
From WWDC 2016-416: any Protocol uses a 40-byte existential container (5 words on 64-bit). The container stores type metadata + the protocol witness table (16 bytes) plus a 24-byte inline value buffer. Types ≤24 bytes are stored directly in the buffer (fast, ~5ns access); larger types require a heap allocation with pointer indirection (slower, ~15ns). some Protocol eliminates all container overhead (~2ns).
When some isn't available (heterogeneous collections require any):
Reduce type sizes to ≤24 bytes — keep protocol-conforming types small enough for inline storage (3 words: e.g., Point { x, y, z: Double } fits exactly)
Use enum dispatch instead — eliminates containers entirely, trading open extensibility for performance:
// ❌ Existential: 40 bytes per element, witness table dispatch
let shapes: [any Drawable] = [circle, rect]
// ✅ Enum: value-sized, static dispatch via switch
enum Shape { case circle(Circle), rect(Rect) }
func draw(_ shape: Shape) {
    switch shape {
    case .circle(let c): c.draw()
    case .rect(let r): r.draw()
    }
}
Batch operations — amortize per-element existential overhead by processing in chunks rather than one at a time
Measure first — existential overhead (~10ns per access) only matters in tight loops; it's negligible in UI-level code
The @_specialize attribute forces specialization for common types when the compiler doesn't do it automatically:
@_specialize(where T == Int)
@_specialize(where T == String)
func process<T: Comparable>(_ value: T) -> T { value }
// Generates specialized versions + generic fallback
Inlining eliminates function call overhead but increases code size.
// ✅ Small, frequently called functions
@inlinable
public func fastAdd(_ a: Int, _ b: Int) -> Int {
return a + b
}
// ❌ Large functions - code bloat
@inlinable // Don't do this!
public func complexAlgorithm() {
// 100 lines of code...
}
// Framework code
public struct Point {
public var x: Double
public var y: Double
// ✅ Inlinable for cross-module optimization
@inlinable
public func distance(to other: Point) -> Double {
let dx = x - other.x
let dy = y - other.y
return sqrt(dx*dx + dy*dy)
}
}
// Client code
let p1 = Point(x: 0, y: 0)
let p2 = Point(x: 3, y: 4)
let d = p1.distance(to: p2) // Inlined across module boundary
@usableFromInline
// Internal helper that can be inlined
@usableFromInline
internal func helperFunction() { }
// Public API that uses it
@inlinable
public func publicAPI() {
helperFunction() // Can inline internal function
}
Trade-off: @inlinable exposes the implementation, preventing future optimization without breaking clients.
Choosing the right collection and using it correctly matters.
// ❌ Array<T> - may use NSArray bridging (Swift/ObjC interop)
let array: Array<Int> = [1, 2, 3]
// ✅ ContiguousArray<T> - guaranteed contiguous memory (no bridging)
let array: ContiguousArray<Int> = [1, 2, 3]
Use ContiguousArray when: no ObjC bridging is needed (pure Swift); ~15% faster.
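The ~15% figure is workload-dependent, so verify it on your own data. A minimal micro-benchmark sketch using ContinuousClock (Swift 5.7+); timings vary by platform, so no expected output is shown:

```swift
// Micro-benchmark sketch: append throughput, Array vs ContiguousArray.
// Illustrative only — compile with -O; Debug-build numbers are not meaningful.
let clock = ContinuousClock()
let n = 1_000_000

let arrayTime = clock.measure {
    var a: [Int] = []
    a.reserveCapacity(n)
    for i in 0..<n { a.append(i) }
}

let contiguousTime = clock.measure {
    var a = ContiguousArray<Int>()
    a.reserveCapacity(n)
    for i in 0..<n { a.append(i) }
}

print("Array: \(arrayTime), ContiguousArray: \(contiguousTime)")
```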
// ❌ Multiple reallocations
var array: [Int] = []
for i in 0..<10000 {
array.append(i) // Reallocates ~14 times
}
// ✅ Single allocation
var array: [Int] = []
array.reserveCapacity(10000)
for i in 0..<10000 {
array.append(i) // No reallocations
}
struct BadKey: Hashable {
var data: [Int]
// ❌ Expensive hash (iterates the entire array)
func hash(into hasher: inout Hasher) {
for element in data {
hasher.combine(element)
}
}
}
struct GoodKey: Hashable {
var id: UUID // Fast hash
var data: [Int] // Not hashed
// ✅ Hash only the unique identifier
func hash(into hasher: inout Hasher) {
hasher.combine(id)
}
}
Fixed-size arrays stored directly on the stack—no heap allocation, no COW overhead. Value generics encode the size in the type.
// Traditional Array - heap allocated, COW overhead
var sprites: [Sprite] = Array(repeating: .default, count: 40)
// InlineArray - stack allocated, no COW (value generic syntax)
var sprites = InlineArray<40, Sprite>(repeating: .default)
Conformances: RandomAccessCollection, MutableCollection, BitwiseCopyable, Sendable. Supports ~Copyable element types.
When to use InlineArray:
InlineArray is stack-allocated (no heap), eagerly copied (not COW), and provides .span/.mutableSpan for zero-copy access. Benchmark the allocation/copy/mutation trade-offs against Array yourself.
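A sketch of such a benchmark, contrasting Array's deferred COW copy with InlineArray's eager copy (requires a Swift 6.2 toolchain; illustrative, not authoritative):

```swift
// Copy cost: Array defers the copy until first mutation (COW);
// InlineArray copies all elements eagerly on assignment.
let clock = ContinuousClock()

let heap: [UInt8] = Array(repeating: 0, count: 4096)
let cowTime = clock.measure {
    for _ in 0..<100_000 {
        var copy = heap   // O(1) reference share...
        copy[0] = 1       // ...the real element copy happens here
    }
}

let inline = InlineArray<4096, UInt8>(repeating: 0)
let eagerTime = clock.measure {
    for _ in 0..<100_000 {
        var copy = inline // eager copy of all 4096 bytes
        copy[0] = 1
    }
}

print("COW: \(cowTime), eager: \(eagerTime)")
```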
Copy semantics warning:
// ❌ Unexpected: InlineArray copies eagerly
func processLarge(_ data: InlineArray<1000, UInt8>) {
// Copies all 1000 bytes on call!
}
// ✅ Use Span to avoid the copy
func processLarge(_ data: Span<UInt8>) {
// Zero-copy view, no matter the size
}
// Best practice: Store InlineArray, pass Span
struct Buffer {
var storage = InlineArray<1000, UInt8>(repeating: 0)
func process() {
helper(storage.span) // Pass view, not copy
}
}
When NOT to use InlineArray:
// ❌ Eager evaluation - processes the entire array
let result = array
.map { expensive($0) }
.filter { $0 > 0 }
.first // Only need first element!
// ✅ Lazy evaluation - stops at first match
let result = array
.lazy
.map { expensive($0) }
.filter { $0 > 0 }
.first // Only evaluates until first match
Async/await and actors add overhead. Use them appropriately.
actor Counter {
private var value = 0
// ❌ Actor call overhead for a simple operation
func increment() {
value += 1
}
}
// Calling from different isolation domain
for _ in 0..<10000 {
await counter.increment() // 10,000 actor hops!
}
// ✅ Batch operations to reduce actor overhead
actor Counter {
private var value = 0
func incrementBatch(_ count: Int) {
value += count
}
}
await counter.incrementBatch(10000) // Single actor hop
Each async suspension costs ~20-30μs. Keep synchronous operations synchronous—don't mark a function async if it doesn't need to await.
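That advice in code form (checksum is a hypothetical example function):

```swift
// ❌ Needlessly async: nothing awaits inside, yet every caller must be
// async and pays for a potential suspension at each call site
func checksumAsync(_ bytes: [UInt8]) async -> UInt32 {
    bytes.reduce(UInt32(0)) { ($0 &* 31) &+ UInt32($1) }
}

// ✅ Synchronous: same work, callable from sync and async contexts,
// zero suspension overhead
func checksum(_ bytes: [UInt8]) -> UInt32 {
    bytes.reduce(UInt32(0)) { ($0 &* 31) &+ UInt32($1) }
}
```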
// ❌ Creating a task per item (~100μs overhead each)
for item in items {
Task {
await process(item)
}
}
// ✅ Single task for the batch
Task {
for item in items {
await process(item)
}
}
// ✅ Or use TaskGroup for parallelism
await withTaskGroup(of: Void.self) { group in
for item in items {
group.addTask {
await process(item)
}
}
}
The @concurrent attribute (Swift 6.2)
// Force background execution
@concurrent
func expensiveComputation() -> Int {
// Always runs on background thread, even if called from MainActor
return complexCalculation()
}
// Safe to call from main actor without blocking
@MainActor
func updateUI() async {
let result = await expensiveComputation() // Guaranteed off main thread
label.text = "\(result)"
}
For nonisolated performance patterns and detailed actor isolation guidance, see axiom-swift-concurrency.
Understanding memory layout helps optimize cache performance and reduce allocations.
// ❌ Poor layout (24 bytes due to padding)
struct BadLayout {
var a: Bool // 1 byte + 7 padding
var b: Int64 // 8 bytes
var c: Bool // 1 byte + 7 padding
}
print(MemoryLayout<BadLayout>.stride) // 24 (size is 17; stride pads to alignment)
// ✅ Optimized layout (16 bytes)
struct GoodLayout {
var b: Int64 // 8 bytes
var a: Bool // 1 byte
var c: Bool // 1 byte + 6 padding
}
print(MemoryLayout<GoodLayout>.stride) // 16 (size is 10)
// Query alignment
print(MemoryLayout<Double>.alignment) // 8
print(MemoryLayout<Int32>.alignment) // 4
// Structs align to largest member
struct Mixed {
var int32: Int32 // 4 bytes, 4-byte aligned
var double: Double // 8 bytes, 8-byte aligned
}
print(MemoryLayout<Mixed>.alignment) // 8 (largest member)
// ❌ Poor cache locality
struct PointerBased {
var next: UnsafeMutablePointer<Node>? // Pointer chasing
}
// ✅ Array-based for cache locality
struct ArrayBased {
var data: ContiguousArray<Int> // Contiguous memory
}
// Array iteration ~10x faster due to cache prefetching
From WWDC 2025-312: Runtime exclusivity enforcement (swift_beginAccess/swift_endAccess) appears in Time Profiler when the compiler cannot prove memory safety statically.
What it is: Swift enforces that two accesses to the same variable cannot overlap if one is a write. For struct properties this is checked at compile time. For stored class properties, runtime checks are inserted.
How to spot it: Look for swift_beginAccess and swift_endAccess in Time Profiler or Processor Trace flame graphs.
// ❌ Class properties require runtime exclusivity checks
class Parser {
var state: ParserState
var cache: [Int: Pixel]
func parse() {
state.advance() // swift_beginAccess / swift_endAccess
cache[key] = pixel // swift_beginAccess / swift_endAccess
}
}
// ✅ Struct properties are checked at compile time — zero runtime cost
struct Parser {
var state: ParserState
var cache: InlineArray<64, Pixel>
mutating func parse() {
state.advance() // No runtime check
cache[key] = pixel // No runtime check
}
}
Real-world impact: In the QOI image parser from WWDC 2025-312, moving properties from a class into a struct eliminated all runtime exclusivity checks, contributing a measurable speedup as part of a >700x total improvement.
Typed throws can be faster than untyped throws by avoiding existential overhead.
// Untyped - existential container for error
func fetchData() throws -> Data {
// Can throw any Error
throw NetworkError.timeout
}
// Typed - concrete error type
func fetchData() throws(NetworkError) -> Data {
// Can only throw NetworkError
throw NetworkError.timeout
}
// Measure with tight loop
func untypedThrows() throws -> Int {
throw GenericError.failed
}
func typedThrows() throws(GenericError) -> Int {
throw GenericError.failed
}
// Benchmark: typed ~5-10% faster (no existential overhead)
Swift 6.2+ introduces Span—a non-escaping, non-owning view of memory that provides safe, efficient access to contiguous data.
Span is the modern replacement for UnsafeBufferPointer, providing:
Spatial safety: bounds-checked operations prevent out-of-bounds access
Temporal safety: lifetime is inherited from the source, preventing use-after-free
Zero overhead: no heap allocation, no reference counting
Non-escaping: cannot outlive the data it references
// Traditional unsafe approach
func processUnsafe(_ data: UnsafeMutableBufferPointer<UInt8>) {
    data[100] = 0 // Crashes if out of bounds!
}
// Safe Span approach
func processSafe(_ data: MutableSpan<UInt8>) {
    data[100] = 0 // Traps with a clear error if out of bounds
}
| Use Case | Recommendation |
|---|---|
| Owned data | Array (full ownership, COW) |
| Temporary view for reading | Span (safe, fast) |
| Temporary view for writing | MutableSpan (safe, fast) |
| C interop, performance-critical | RawSpan (untyped bytes) |
| Unsafe performance | UnsafeBufferPointer (legacy, avoid) |
let array = [1, 2, 3, 4, 5]
let span = array.span // Read-only view
print(span[0]) // Subscript access
for element in span { } // Safe iteration
let slice = span[1..<3] // Span slice, no copy
var array = [10, 20, 30, 40, 50]
var mutableSpan = array.mutableSpan
mutableSpan[0] = 100 // Modifies array in-place, bounds-checked
func parsePacket(_ data: RawSpan) -> PacketHeader? {
guard data.count >= MemoryLayout<PacketHeader>.size else { return nil }
// Safe byte-level access via subscript
return PacketHeader(version: data[0], flags: data[1],
length: UInt16(data[3]) << 8 | UInt16(data[2]))
}
let header = parsePacket(bytes.rawSpan) // .rawSpan on any [UInt8]
All Swift 6.2 collections provide .span and .mutableSpan properties, including Array, ContiguousArray, and UnsafeBufferPointer (a migration path). Span access is comparable in speed to UnsafeBufferPointer (~2ns), with bounds checking.
A Span's lifetime is tied to its source. The compiler prevents returning a Span from a function where the source would be deallocated—unlike UnsafeBufferPointer, which lets this bug happen silently.
func dangerousSpan() -> Span<Int> {
let array = [1, 2, 3]
return array.span // ❌ Error: Cannot return non-escapable value
}
InlineArray also provides .span/.mutableSpan — see Part 7 for InlineArray usage and avoiding copies via Span.
// ❌ Old: unsafe, no bounds checking
func parseLegacy(_ buffer: UnsafeBufferPointer<UInt8>) -> Header {
Header(magic: buffer[0], version: buffer[1]) // Silent OOB crash
}
// ✅ New: safe, bounds-checked, same performance
func parseModern(_ span: Span<UInt8>) -> Header {
Header(magic: span[0], version: span[1]) // Traps on OOB
}
// Bridge: existing UnsafeBufferPointer → Span
let span = buffer.span // Wrap unsafe in safe span
parseModern(span)
OutputSpan/OutputRawSpan replace UnsafeMutableBufferPointer for initializing new collections without intermediate allocations.
// Binary serialization: write header bytes safely
@lifetime(&output)
func writeHeader(to output: inout OutputRawSpan) {
output.append(0x01) // version
output.append(0x00) // flags
output.append(UInt16(42)) // length (type-safe)
}
Use these for building byte arrays, binary serialization, and image pixel data. Apple's open-source Swift Binary Parsing library is built entirely on the Span types.
Copy-Paste Pattern 1: COW Wrapper
final class Storage<T> {
var value: T
init(_ value: T) { self.value = value }
}
struct COWWrapper<T> {
private var storage: Storage<T>
init(_ value: T) {
storage = Storage(value)
}
var value: T {
get { storage.value }
set {
if !isKnownUniquelyReferenced(&storage) {
storage = Storage(newValue)
} else {
storage.value = newValue
}
}
}
}
func processLargeArray(_ input: [Int]) -> [Int] {
var result = ContiguousArray<Int>()
result.reserveCapacity(input.count)
for element in input {
result.append(transform(element))
}
return Array(result)
}
private var cache: [Key: Value] = [:]
@inlinable
func getCached(_ key: Key) -> Value? {
return cache[key] // Inlined across modules
}
| Anti-Pattern | Problem | Fix |
|---|---|---|
| Premature optimization | Complex COW/ContiguousArray without profiling data | Start simple, profile, optimize what actually matters |
| Weak everywhere | weak on every delegate (atomic operation overhead) | Use unowned when lifetime is guaranteed (see Part 4) |
| Actor everything | Actor isolation on a simple counter (~100μs per call) | Use lock-free atomics (ManagedAtomic) for simple synchronized data |
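A sketch of the ManagedAtomic fix from the last row, using the swift-atomics package (an external dependency; API per that package):

```swift
import Atomics // from the swift-atomics package (apple/swift-atomics)

// ❌ Actor: every increment from another isolation domain is an async hop
actor ActorCounter {
    private var value = 0
    func increment() { value += 1 }
}

// ✅ Lock-free atomic: callable synchronously from any thread
final class AtomicCounter: @unchecked Sendable {
    private let value = ManagedAtomic<Int>(0)
    func increment() { value.wrappingIncrement(ordering: .relaxed) }
    var current: Int { value.load(ordering: .relaxed) }
}
```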
Quick checklist:
- isKnownUniquelyReferenced
- reserveCapacity
- some instead of any
- @_specialize
- ContiguousArray instead of Array
- hash(into:) implementation
- async
- @concurrent (Swift 6.2)
Pressure: a manager sees "slow" in the profiler and demands…
Core Principle : Optimize Swift code by understanding language-level performance characteristics—value semantics, ARC behavior, generic specialization, and memory layout—to write fast, efficient code without premature micro-optimization.
Swift Version : Swift 6.2+ (for InlineArray, Span, @concurrent) Xcode : 16+ Platforms : iOS 18+, macOS 15+
Related Skills :
axiom-performance-profiling — Use Instruments to measure (do this first!)axiom-swiftui-performance — SwiftUI-specific optimizationsaxiom-build-performance — Compilation speedaxiom-swift-concurrency — Correctness-focused concurrency patternsPerformance issue identified?
│
├─ Profiler shows excessive copying?
│ └─ → Part 1: Noncopyable Types
│ └─ → Part 2: Copy-on-Write
│
├─ Retain/release overhead in Time Profiler?
│ └─ → Part 4: ARC Optimization
│
├─ Generic code in hot path?
│ └─ → Part 5: Generics & Specialization
│
├─ Collection operations slow?
│ └─ → Part 7: Collection Performance
│
├─ Async/await overhead visible?
│ └─ → Part 8: Concurrency Performance
│
├─ Struct vs class decision?
│ └─ → Part 3: Value vs Reference
│
└─ Memory layout concerns?
└─ → Part 9: Memory Layout
From WWDC 2024-10217: Swift's low-level performance characteristics come down to four areas. Each maps to a Part in this skill.
| Principle | What It Costs | Skill Coverage |
|---|---|---|
| Function Calls | Dispatch overhead, optimization barriers | Part 5 (Generics), Part 6 (Inlining) |
| Memory Allocation | Stack vs heap, allocation frequency | Part 3 (Value vs Reference), Part 7 (Collections) |
| Memory Layout | Cache locality, padding, contiguity | Part 9 (Memory Layout), Part 11 (Span) |
| Value Copying | COW triggers, defensive copies, ARC traffic | Part 1 (Noncopyable), Part 2 (COW), Part 4 (ARC) |
Understanding which principle is causing your bottleneck determines which Part to use.
Swift 6.0+ introduces noncopyable types for performance-critical scenarios where you want to avoid implicit copies.
// Noncopyable type
struct FileHandle: ~Copyable {
private let fd: Int32
init(path: String) throws {
self.fd = open(path, O_RDONLY)
guard fd != -1 else { throw FileError.openFailed }
}
deinit {
close(fd)
}
// Must explicitly consume
consuming func close() {
_ = consume self
}
}
// Usage
func processFile() throws {
let handle = try FileHandle(path: "/data.txt")
// handle is automatically consumed at end of scope
// Cannot accidentally copy handle
}
// consuming - takes ownership, caller cannot use after
func process(consuming data: [UInt8]) {
// data is consumed
}
// borrowing - temporary access without ownership
func validate(borrowing data: [UInt8]) -> Bool {
// data can still be used by caller
return data.count > 0
}
// inout - mutable access
func modify(inout data: [UInt8]) {
data.append(0)
}
Swift collections use COW for efficient memory sharing. Understanding when copies happen is critical for performance.
var array1 = [1, 2, 3] // Single allocation
var array2 = array1 // Share storage (no copy)
array2.append(4) // Now copies (array1 modified array2)
For custom COW implementation, see Copy-Paste Pattern 1 (COW Wrapper) below.
// ❌ Accidental copy in loop
for i in 0..<array.count {
array[i] = transform(array[i]) // Copy on first mutation if shared!
}
// ✅ Reserve capacity first (ensures unique)
array.reserveCapacity(array.count)
for i in 0..<array.count {
array[i] = transform(array[i])
}
// ❌ Multiple mutations trigger multiple uniqueness checks
array.append(1)
array.append(2)
array.append(3)
// ✅ Single reservation
array.reserveCapacity(array.count + 3)
array.append(contentsOf: [1, 2, 3])
From WWDC 2024-10217: Swift sometimes inserts defensive copies when it cannot prove a value won't be mutated through a shared reference.
class DataStore {
var items: [Item] = [] // COW type stored in class
}
func process(_ store: DataStore) {
for item in store.items {
// Swift may defensively copy `items` because:
// 1. store.items is a class property (another reference could mutate it)
// 2. The loop needs a stable snapshot
handle(item)
}
}
How to avoid : Copy to a local variable first — one explicit copy instead of repeated defensive copies:
func process(_ store: DataStore) {
let items = store.items // One copy
for item in items {
handle(item) // No more defensive copies
}
}
In profiler : Defensive copies appear as unexpected swift_retain/swift_release pairs or Array.__allocating_init calls when you didn't expect allocation.
Choosing between struct and class has significant performance implications.
| Factor | Use Struct | Use Class |
|---|---|---|
| Size | ≤ 64 bytes | > 64 bytes or contains large data |
| Identity | No identity needed | Needs identity (===) |
| Inheritance | Not needed | Inheritance required |
| Mutation | Infrequent | Frequent in-place updates |
| Sharing | No sharing needed | Must be shared across scope |
// ✅ Fast - fits in registers, no heap allocation
struct Point {
var x: Double // 8 bytes
var y: Double // 8 bytes
} // Total: 16 bytes - excellent for struct
struct Color {
var r, g, b, a: UInt8 // 4 bytes total - perfect for struct
}
// ❌ Slow - excessive copying
struct HugeData {
var buffer: [UInt8] // 1MB
var metadata: String
}
func process(_ data: HugeData) { // Copies 1MB!
// ...
}
// ✅ Use reference semantics for large data
final class HugeData {
var buffer: [UInt8]
var metadata: String
}
func process(_ data: HugeData) { // Only copies pointer (8 bytes)
// ...
}
For large data that needs value semantics externally with reference storage internally, use the COW Wrapper pattern — see Copy-Paste Pattern 1 below.
Automatic Reference Counting adds overhead. Minimize it where possible.
class Parent {
var child: Child?
}
class Child {
// ❌ Weak adds overhead (optional, thread-safe zeroing)
weak var parent: Parent?
}
// ✅ Unowned when you know lifetime guarantees
class Child {
unowned let parent: Parent // No overhead, crashes if parent deallocated
}
Performance : unowned is ~2x faster than weak (no atomic operations).
Use when : Child lifetime < Parent lifetime (guaranteed).
class DataProcessor {
var data: [Int]
// ❌ Captures self strongly, then uses weak - unnecessary weak overhead
func process(completion: @escaping () -> Void) {
DispatchQueue.global().async { [weak self] in
guard let self else { return }
self.data.forEach { print($0) }
completion()
}
}
// ✅ Capture only what you need
func process(completion: @escaping () -> Void) {
let data = self.data // Copy value type
DispatchQueue.global().async {
data.forEach { print($0) } // No self captured
completion()
}
}
}
From WWDC 2024-10217: Closures have different performance profiles depending on whether they escape.
// Non-escaping closure — stack-allocated context, zero ARC overhead
func processItems(_ items: [Item], using transform: (Item) -> Result) -> [Result] {
items.map(transform) // Closure context lives on stack
}
// Escaping closure — heap-allocated context, ARC on every captured reference
func processItemsLater(_ items: [Item], transform: @escaping (Item) -> Result) {
// Closure context heap-allocated as anonymous class instance
// Each captured reference gets retain/release
self.pending = { items.map(transform) }
}
Why this matters : @Sendable closures are always escaping, meaning every Task closure heap-allocates its capture context.
In hot paths : Prefer non-escaping closures. If you see swift_allocObject in Time Profiler for closure contexts, look for escaping closures that could be non-escaping.
From WWDC 2021-10216 : Object lifetimes end at last use , not at closing brace.
// ❌ Relying on observed lifetime is fragile
class Traveler {
weak var account: Account?
deinit {
print("Deinitialized") // May run BEFORE expected with ARC optimizations!
}
}
func test() {
let traveler = Traveler()
let account = Account(traveler: traveler)
// traveler's last use is above - may deallocate here!
account.printSummary() // weak reference may be nil!
}
// ✅ Explicitly extend lifetime when needed
func test() {
let traveler = Traveler()
let account = Account(traveler: traveler)
withExtendedLifetime(traveler) {
account.printSummary() // traveler guaranteed to live
}
}
Object lifetimes can change between Xcode versions, Debug vs Release, and unrelated code changes. Enable "Optimize Object Lifetimes" (Xcode 13+) during development to expose hidden lifetime bugs early.
Generic code can be fast or slow depending on specialization.
// Generic function
func process<T>(_ value: T) {
print(value)
}
// Calling with concrete type
process(42) // Compiler specializes: process_Int(42)
process("hello") // Compiler specializes: process_String("hello")
protocol Drawable {
func draw()
}
// ❌ Existential container - expensive (heap allocation, indirection)
func drawAll(shapes: [any Drawable]) {
for shape in shapes {
shape.draw() // Dynamic dispatch through witness table
}
}
// ✅ Generic with constraint - can specialize
func drawAll<T: Drawable>(shapes: [T]) {
for shape in shapes {
shape.draw() // Static dispatch after specialization
}
}
Performance : Generic version ~10x faster (eliminates witness table overhead).
From WWDC 2016-416 : any Protocol uses a 40-byte existential container (5 words on 64-bit). The container stores type metadata + protocol witness table (16 bytes) plus a 24-byte inline value buffer. Types ≤24 bytes are stored directly in the buffer (fast, ~5ns access); larger types require a heap allocation with pointer indirection (slower, ~15ns). some Protocol eliminates all container overhead (~2ns).
Whensome isn't available (heterogeneous collections require any):
Reduce type sizes to ≤24 bytes — keep protocol-conforming types small enough for inline storage (3 words: e.g., Point { x, y, z: Double } fits exactly)
Use enum dispatch instead — eliminates containers entirely, trades open extensibility for performance:
// ❌ Existential: 40 bytes/element, witness table dispatch let shapes: [any Drawable] = [circle, rect]
// ✅ Enum: value-sized, static dispatch via switch enum Shape { case circle(Circle), rect(Rect) } func draw(_ shape: Shape) { switch shape { case .circle(let c): c.draw() case .rect(let r): r.draw() } }
Batch operations — amortize per-element existential overhead by processing in chunks rather than one-at-a-time
Measure first — existential overhead (~10ns/access) only matters in tight loops; for UI-level code it's negligible
@_specialize AttributeForce specialization for common types when the compiler doesn't do it automatically:
@_specialize(where T == Int)
@_specialize(where T == String)
func process<T: Comparable>(_ value: T) -> T { value }
// Generates specialized versions + generic fallback
Inlining eliminates function call overhead but increases code size.
// ✅ Small, frequently called functions
@inlinable
public func fastAdd(_ a: Int, _ b: Int) -> Int {
return a + b
}
// ❌ Large functions - code bloat
@inlinable // Don't do this!
public func complexAlgorithm() {
// 100 lines of code...
}
// Framework code
public struct Point {
public var x: Double
public var y: Double
// ✅ Inlinable for cross-module optimization
@inlinable
public func distance(to other: Point) -> Double {
let dx = x - other.x
let dy = y - other.y
return sqrt(dx*dx + dy*dy)
}
}
// Client code
let p1 = Point(x: 0, y: 0)
let p2 = Point(x: 3, y: 4)
let d = p1.distance(to: p2) // Inlined across module boundary
@usableFromInline// Internal helper that can be inlined
@usableFromInline
internal func helperFunction() { }
// Public API that uses it
@inlinable
public func publicAPI() {
helperFunction() // Can inline internal function
}
Trade-off : @inlinable exposes implementation, prevents future optimization.
Choosing the right collection and using it correctly matters.
// ❌ Array<T> - may use NSArray bridging (Swift/ObjC interop)
let array: Array<Int> = [1, 2, 3]
// ✅ ContiguousArray<T> - guaranteed contiguous memory (no bridging)
let array: ContiguousArray<Int> = [1, 2, 3]
UseContiguousArray when: No ObjC bridging needed (pure Swift), ~15% faster.
// ❌ Multiple reallocations
var array: [Int] = []
for i in 0..<10000 {
array.append(i) // Reallocates ~14 times
}
// ✅ Single allocation
var array: [Int] = []
array.reserveCapacity(10000)
for i in 0..<10000 {
array.append(i) // No reallocations
}
struct BadKey: Hashable {
var data: [Int]
// ❌ Expensive hash (iterates entire array)
func hash(into hasher: inout Hasher) {
for element in data {
hasher.combine(element)
}
}
}
struct GoodKey: Hashable {
var id: UUID // Fast hash
var data: [Int] // Not hashed
// ✅ Hash only the unique identifier
func hash(into hasher: inout Hasher) {
hasher.combine(id)
}
}
Fixed-size arrays stored directly on the stack—no heap allocation, no COW overhead. Uses value generics to encode size in the type.
// Traditional Array - heap allocated, COW overhead
var sprites: [Sprite] = Array(repeating: .default, count: 40)
// InlineArray - stack allocated, no COW (value generic syntax)
var sprites = InlineArray<40, Sprite>(repeating: .default)
Conformances : RandomAccessCollection, MutableCollection, BitwiseCopyable, Sendable. Supports ~Copyable element types.
When to Use InlineArray :
InlineArray is stack-allocated (no heap), eagerly copied (not COW), and provides .span/.mutableSpan for zero-copy access. Measure your own benchmarks for allocation/copy/mutation trade-offs vs Array.
Copy Semantics Warning :
// ❌ Unexpected: InlineArray copies eagerly
func processLarge(_ data: InlineArray<1000, UInt8>) {
// Copies all 1000 bytes on call!
}
// ✅ Use Span to avoid copy
func processLarge(_ data: Span<UInt8>) {
// Zero-copy view, no matter the size
}
// Best practice: Store InlineArray, pass Span
struct Buffer {
var storage = InlineArray<1000, UInt8>(repeating: 0)
func process() {
helper(storage.span) // Pass view, not copy
}
}
When NOT to Use InlineArray :
// ❌ Eager evaluation - processes entire array
let result = array
.map { expensive($0) }
.filter { $0 > 0 }
.first // Only need first element!
// ✅ Lazy evaluation - stops at first match
let result = array
.lazy
.map { expensive($0) }
.filter { $0 > 0 }
.first // Only evaluates until first match
Async/await and actors add overhead. Use appropriately.
actor Counter {
private var value = 0
// ❌ Actor call overhead for simple operation
func increment() {
value += 1
}
}
// Calling from different isolation domain
for _ in 0..<10000 {
await counter.increment() // 10,000 actor hops!
}
// ✅ Batch operations to reduce actor overhead
actor Counter {
private var value = 0
func incrementBatch(_ count: Int) {
value += count
}
}
await counter.incrementBatch(10000) // Single actor hop
Each async suspension costs ~20-30μs. Keep synchronous operations synchronous—don't mark a function async if it doesn't need to await.
// ❌ Creating task per item (~100μs overhead each)
for item in items {
Task {
await process(item)
}
}
// ✅ Single task for batch
Task {
for item in items {
await process(item)
}
}
// ✅ Or use TaskGroup for parallelism
await withTaskGroup(of: Void.self) { group in
for item in items {
group.addTask {
await process(item)
}
}
}
@concurrent Attribute (Swift 6.2)
// Force background execution
@concurrent
func expensiveComputation() -> Int {
// Always runs on background thread, even if called from MainActor
return complexCalculation()
}
// Safe to call from main actor without blocking
@MainActor
func updateUI() async {
let result = await expensiveComputation() // Guaranteed off main thread
label.text = "\(result)"
}
For nonisolated performance patterns and detailed actor isolation guidance, see axiom-swift-concurrency.
Understanding memory layout helps optimize cache performance and reduce allocations.
// ❌ Poor layout (24-byte stride due to padding)
struct BadLayout {
var a: Bool // 1 byte + 7 bytes padding
var b: Int64 // 8 bytes
var c: Bool // 1 byte (+ 7 bytes trailing padding in stride)
}
print(MemoryLayout<BadLayout>.stride) // 24 (size is 17 — trailing padding excluded)
// ✅ Optimized layout (16-byte stride)
struct GoodLayout {
var b: Int64 // 8 bytes
var a: Bool // 1 byte
var c: Bool // 1 byte (+ 6 bytes trailing padding in stride)
}
print(MemoryLayout<GoodLayout>.stride) // 16 (size is 10)
// Query alignment
print(MemoryLayout<Double>.alignment) // 8
print(MemoryLayout<Int32>.alignment) // 4
// Structs align to largest member
struct Mixed {
var int32: Int32 // 4 bytes, 4-byte aligned
var double: Double // 8 bytes, 8-byte aligned
}
print(MemoryLayout<Mixed>.alignment) // 8 (largest member)
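On a 64-bit platform the padding arithmetic can be checked directly. The structs are repeated here so the snippet runs standalone; note that `MemoryLayout.size` excludes trailing padding while `.stride` (the spacing of array elements) includes it:

```swift
// Field order changes padding: verify layout on a 64-bit platform.
struct BadLayout {
    var a: Bool   // offset 0, then 7 bytes padding
    var b: Int64  // offset 8
    var c: Bool   // offset 16
}

struct GoodLayout {
    var b: Int64  // offset 0
    var a: Bool   // offset 8
    var c: Bool   // offset 9
}

// size excludes trailing padding; stride rounds up to alignment.
assert(MemoryLayout<BadLayout>.size == 17)
assert(MemoryLayout<BadLayout>.stride == 24)
assert(MemoryLayout<GoodLayout>.size == 10)
assert(MemoryLayout<GoodLayout>.stride == 16)
assert(MemoryLayout<BadLayout>.alignment == 8)
```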
// ❌ Poor cache locality
struct PointerBased {
var next: UnsafeMutablePointer<Node>? // Pointer chasing
}
// ✅ Array-based for cache locality
struct ArrayBased {
var data: ContiguousArray<Int> // Contiguous memory
}
// Array iteration ~10x faster due to cache prefetching
From WWDC 2025-312: Runtime exclusivity enforcement (swift_beginAccess/swift_endAccess) appears in Time Profiler when the compiler cannot prove memory safety statically.
What they are : Swift enforces that no two accesses to the same variable overlap if one is a write. For struct properties, this is checked at compile time. For class stored properties, runtime checks are inserted.
How to identify : Look for swift_beginAccess and swift_endAccess in Time Profiler or Processor Trace flame graphs.
// ❌ Class properties require runtime exclusivity checks
class Parser {
var state: ParserState
var cache: [Int: Pixel]
func parse() {
state.advance() // swift_beginAccess / swift_endAccess
cache[key] = pixel // swift_beginAccess / swift_endAccess
}
}
// ✅ Struct properties checked at compile time — zero runtime cost
struct Parser {
var state: ParserState
var cache: InlineArray<64, Pixel>
mutating func parse() {
state.advance() // No runtime check
cache[key] = pixel // No runtime check
}
}
Real-world impact : In WWDC 2025-312's QOI image parser, moving properties from a class to a struct eliminated all runtime exclusivity checks, contributing to a measurable speedup as part of a >700x total improvement.
Typed throws can be faster than untyped by avoiding existential overhead.
// Untyped - existential container for error
func fetchData() throws -> Data {
// Can throw any Error
throw NetworkError.timeout
}
// Typed - concrete error type
func fetchData() throws(NetworkError) -> Data {
// Can only throw NetworkError
throw NetworkError.timeout
}
// Measure with tight loop
func untypedThrows() throws -> Int {
throw GenericError.failed
}
func typedThrows() throws(GenericError) -> Int {
throw GenericError.failed
}
// Benchmark: typed ~5-10% faster (no existential overhead)
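Beyond speed, typed throws make the error concrete at the catch site. A sketch using a hypothetical NetworkError (Swift 6+ infers the catch variable's type when the do block can only throw one error type):

```swift
// Hypothetical error type for illustration.
enum NetworkError: Error {
    case timeout
    case badStatus(Int)
}

// throws(NetworkError): callers know the exact error type at compile time.
func fetchCount() throws(NetworkError) -> Int {
    throw NetworkError.timeout
}

func describeFailure() -> String {
    do {
        _ = try fetchCount()
        return "ok"
    } catch {
        // `error` is inferred as NetworkError — no `any Error` box, no downcast.
        switch error {
        case .timeout: return "timed out"
        case .badStatus(let code): return "status \(code)"
        }
    }
}

assert(describeFailure() == "timed out")
```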
Swift 6.2+ introduces Span—a non-escapable, non-owning view into memory that provides safe, efficient access to contiguous data.
Span is a modern replacement for UnsafeBufferPointer that provides:
Spatial safety : Bounds-checked operations prevent out-of-bounds access
Temporal safety : Lifetime inherited from source, preventing use-after-free
Zero overhead : No heap allocation, no reference counting
Non-escapable : Cannot outlive the data it references
// Traditional unsafe approach
func processUnsafe(_ data: UnsafeMutableBufferPointer<UInt8>) {
data[100] = 0 // Undefined behavior if out of bounds!
}
// Safe Span approach
func processSafe(_ data: inout MutableSpan<UInt8>) {
data[100] = 0 // Traps with a clear error if out of bounds
}
| Use Case | Recommendation |
|---|---|
| Own the data | Array (full ownership, COW) |
| Temporary view for reading | Span (safe, fast) |
| Temporary view for writing | MutableSpan (safe, fast) |
| C interop, performance-critical | RawSpan (untyped bytes) |
| Unsafe performance | UnsafeBufferPointer (legacy, avoid) |
let array = [1, 2, 3, 4, 5]
let span = array.span // Read-only view
print(span[0]) // Subscript access
for i in span.indices { _ = span[i] } // Iterate via indices (Span is not a Sequence)
let slice = span.extracting(1..<3) // Sub-span view, no copy
var array = [10, 20, 30, 40, 50]
var mutableSpan = array.mutableSpan
mutableSpan[0] = 100 // Modifies array in-place, bounds-checked
func parsePacket(_ data: RawSpan) -> PacketHeader? {
guard data.byteCount >= MemoryLayout<PacketHeader>.size else { return nil }
// Offset-checked byte loads ("unsafe" only in that the loaded
// bit pattern must be valid for the destination type)
let version = data.unsafeLoadUnaligned(fromByteOffset: 0, as: UInt8.self)
let flags = data.unsafeLoadUnaligned(fromByteOffset: 1, as: UInt8.self)
let low = data.unsafeLoadUnaligned(fromByteOffset: 2, as: UInt8.self)
let high = data.unsafeLoadUnaligned(fromByteOffset: 3, as: UInt8.self)
return PacketHeader(version: version, flags: flags,
length: UInt16(high) << 8 | UInt16(low))
}
let header = parsePacket(bytes.span.bytes) // RawSpan via Span's .bytes on [UInt8]
All Swift 6.2 collections provide .span and .mutableSpan properties, including Array, ContiguousArray, and UnsafeBufferPointer (migration path). Span access speed matches UnsafeBufferPointer (~2ns) with bounds checking.
Span's lifetime is bound to its source. The compiler prevents returning a Span from a function where the source would be deallocated — unlike UnsafeBufferPointer, which allows this bug silently.
func dangerousSpan() -> Span<Int> {
let array = [1, 2, 3]
return array.span // ❌ Error: Cannot return non-escapable value
}
InlineArray also provides .span/.mutableSpan — see Part 7 for InlineArray usage and copy-avoidance via Span.
// ❌ Old: unsafe, no bounds checking
func parseLegacy(_ buffer: UnsafeBufferPointer<UInt8>) -> Header {
Header(magic: buffer[0], version: buffer[1]) // No bounds check — OOB is undefined behavior
}
// ✅ New: safe, bounds-checked, same performance
func parseModern(_ span: Span<UInt8>) -> Header {
Header(magic: span[0], version: span[1]) // Traps on OOB
}
// Bridge: existing UnsafeBufferPointer → Span
let span = buffer.span // Wrap unsafe in safe span
parseModern(span)
OutputSpan/OutputRawSpan replace UnsafeMutableBufferPointer for initializing new collections without intermediate allocations.
// Binary serialization: write header bytes safely
@lifetime(&output)
func writeHeader(to output: inout OutputRawSpan) {
output.append(0x01) // version
output.append(0x00) // flags
output.append(UInt16(42)) // length (type-safe)
}
Use for building byte arrays, binary serialization, image pixel data. Apple's open-source Swift Binary Parsing library is built entirely on Span types.
.span access via computed property
final class Storage<T> {
var value: T
init(_ value: T) { self.value = value }
}
struct COWWrapper<T> {
private var storage: Storage<T>
init(_ value: T) {
storage = Storage(value)
}
var value: T {
get { storage.value }
set {
if !isKnownUniquelyReferenced(&storage) {
storage = Storage(newValue)
} else {
storage.value = newValue
}
}
}
}
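A usage sketch for the wrapper above, showing that sharing is free until the first write through a shared reference (the wrapper is repeated so the snippet runs standalone):

```swift
// Repeats the COW wrapper above so the snippet is self-contained.
final class Storage<T> {
    var value: T
    init(_ value: T) { self.value = value }
}

struct COWWrapper<T> {
    private var storage: Storage<T>
    init(_ value: T) { storage = Storage(value) }
    var value: T {
        get { storage.value }
        set {
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(newValue)  // first shared write: allocate a copy
            } else {
                storage.value = newValue     // unique: mutate in place
            }
        }
    }
}

var a = COWWrapper(1)
var b = a            // b shares a's storage — no allocation yet
b.value = 2          // storage is shared → b gets fresh storage
assert(a.value == 1) // original untouched: value semantics preserved
assert(b.value == 2)
a.value = 3          // a's storage is now unique → mutated in place
assert(a.value == 3)
```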
func processLargeArray(_ input: [Int]) -> [Int] {
var result = ContiguousArray<Int>()
result.reserveCapacity(input.count)
for element in input {
result.append(transform(element))
}
return Array(result)
}
// Storage referenced from @inlinable code must be visible across modules
@usableFromInline
internal var cache: [Key: Value] = [:]
@inlinable
func getCached(_ key: Key) -> Value? {
return cache[key] // Inlined across modules
}
| Anti-Pattern | Problem | Fix |
|---|---|---|
| Premature optimization | Complex COW/ContiguousArray with no profiling data | Start simple, profile, optimize what matters |
| Weak everywhere | weak on every delegate (atomic overhead) | Use unowned when lifetime is guaranteed (see Part 4) |
| Actor for everything | Actor isolation on simple counters (~100μs/call) | Use lock-free atomics (ManagedAtomic) for simple sync data |
- isKnownUniquelyReferenced before mutation
- reserveCapacity when size is known
- some instead of any where possible
- @_specialize for hot generic paths
- ContiguousArray over Array
- Efficient hash(into:) implementations
- Avoid unnecessary async
- @concurrent (Swift 6.2)

The Pressure : Manager sees "slow" in profiler, demands immediate action.
Red Flags :
Time Cost Comparison :
How to Push Back Professionally :
"I want to optimize effectively. Let me spend 30 minutes with Instruments
to find the actual bottleneck. This prevents wasting time on code that's
not the problem. I've seen this save days of work."
The Pressure : Team adopts Swift 6, decides "everything should be an actor."
Red Flags :
Time Cost Comparison :
How to Push Back Professionally :
"Actors are great for isolation, but they add overhead. For this simple
counter, lock-free atomics are 10x faster. Let's use actors where we need
them—shared mutable state—and avoid them for pure value types."
The Pressure : Someone reads that inlining is faster, marks everything @inlinable.
Red Flags :
- @inlinable applied everywhere
Time Cost Comparison :
How to Push Back Professionally :
"Inlining trades code size for speed. The compiler already inlines when
beneficial. Manual @inlinable should be for small, frequently called
functions. Let's profile and inline the 3 actual hotspots, not everything."
Problem : Processing 1000 images takes 30 seconds.
Investigation :
// Original code
func processImages(_ images: [UIImage]) -> [ProcessedImage] {
var results: [ProcessedImage] = []
for image in images {
results.append(expensiveProcess(image)) // Reallocations!
}
return results
}
Solution :
func processImages(_ images: [UIImage]) -> [ProcessedImage] {
var results = ContiguousArray<ProcessedImage>()
results.reserveCapacity(images.count) // Single allocation
for image in images {
results.append(expensiveProcess(image))
}
return Array(results)
}
Result : 30s → 8s (a 73% reduction) by eliminating reallocations.
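The effect of reserveCapacity can be observed directly by counting capacity changes, a proxy for buffer reallocations (a sketch; exact growth steps are an implementation detail):

```swift
// Count capacity changes as a proxy for buffer reallocations.
func countReallocations(reserve: Bool, appends: Int) -> Int {
    var array: [Int] = []
    if reserve { array.reserveCapacity(appends) }
    var reallocations = 0
    var lastCapacity = array.capacity
    for i in 0..<appends {
        array.append(i)
        if array.capacity != lastCapacity {
            reallocations += 1
            lastCapacity = array.capacity
        }
    }
    return reallocations
}

let without = countReallocations(reserve: false, appends: 1_000)
let with = countReallocations(reserve: true, appends: 1_000)
assert(with == 0)      // capacity reserved up front never changes
assert(without > 0)    // geometric growth triggers several reallocations
```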
Problem : Protocol-based rendering is slow.
Investigation :
// Original - existential overhead
func render(shapes: [any Shape]) {
for shape in shapes {
shape.draw() // Dynamic dispatch
}
}
Solution :
// Specialized generic
func render<S: Shape>(shapes: [S]) {
for shape in shapes {
shape.draw() // Static dispatch after specialization
}
}
// Or force specialization for known concrete types
// (underscored attribute — not officially supported API)
@_specialize(where S == Circle)
@_specialize(where S == Rectangle)
func render<S: Shape>(shapes: [S]) {
for shape in shapes { shape.draw() }
}
Result : 100ms → 10ms (10x faster) by eliminating witness table overhead.
WWDC : 2025-312, 2024-10217, 2024-10170, 2021-10216, 2016-416
Docs : /swift/inlinearray, /swift/span, /swift/outputspan
Skills : axiom-performance-profiling, axiom-swift-concurrency, axiom-swiftui-performance