add-uint-support by pytorch/pytorch
npx skills add https://github.com/pytorch/pytorch --skill add-uint-support
This skill helps add support for unsigned integer types (uint16, uint32, uint64) to PyTorch operators by updating their AT_DISPATCH macros.
Use this skill when an operator's dispatch macros need to cover the unsigned integer dtypes. Adding unsigned types to an existing dispatch:
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES));
// After (method 1: add unsigned types explicitly)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));
// After (method 2: use V2 integral types if AT_INTEGRAL_TYPES present)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
Unsigned type groups:
AT_BAREBONES_UNSIGNED_TYPES: kUInt16, kUInt32, kUInt64
AT_INTEGRAL_TYPES_V2: AT_INTEGRAL_TYPES + AT_BAREBONES_UNSIGNED_TYPES
Relationship:
AT_INTEGRAL_TYPES // kByte, kChar, kInt, kLong, kShort
AT_BAREBONES_UNSIGNED_TYPES // kUInt16, kUInt32, kUInt64
AT_INTEGRAL_TYPES_V2 // INTEGRAL_TYPES + BAREBONES_UNSIGNED_TYPES
Check if the file uses AT_DISPATCH_V2. If it still uses the old AT_DISPATCH macros, convert it with the at-dispatch-v2 skill first. If it already uses AT_DISPATCH_V2, identify what type groups are currently in use:
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
// body
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
^^^^^^^^^^^^^^^^^^^^^^^^^
Current type coverage
Common patterns:
AT_EXPAND(AT_ALL_TYPES) → includes AT_INTEGRAL_TYPES + AT_FLOATING_TYPES
AT_EXPAND(AT_INTEGRAL_TYPES) → signed integers only
AT_EXPAND(AT_FLOATING_TYPES) → floating point types
Two approaches:
Method 1: Add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) explicitly to the type list.
Method 2: Substitute AT_INTEGRAL_TYPES with AT_INTEGRAL_TYPES_V2 wherever the dispatch uses AT_EXPAND(AT_INTEGRAL_TYPES).
Method 1 example:
// Before
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
kBFloat16, kHalf, kBool
);
// After (add unsigned types)
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
kBFloat16, kHalf, kBool
);
Method 2 example:
// Before
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES)
);
// After (substitute with V2)
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES_V2)
);
If the dispatch uses AT_EXPAND(AT_ALL_TYPES):
AT_ALL_TYPES = AT_INTEGRAL_TYPES + AT_FLOATING_TYPES, so add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to the list.
If the dispatch separately lists INTEGRAL and FLOATING:
// Before
AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES)
// After (Method 2 preferred)
AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES)
Check the file for ALL dispatch macros that need uint support:
Check that every composite type group in each call is wrapped in AT_EXPAND():
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES));
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
// Before (needs v2 conversion first)
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op", [&]() {
kernel<scalar_t>();
});
// After v2 conversion
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After adding uint support
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
For a file with multiple functions:
void min_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support
}
void min_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
gpu_reduce_kernel<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support here too
}
Use this decision tree to determine the approach:
Is the file using AT_DISPATCH_V2?
├─ No → Use at-dispatch-v2 skill first, then continue
└─ Yes
└─ Does it use AT_EXPAND(AT_INTEGRAL_TYPES)?
├─ Yes → Replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)
└─ No → Add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to type list
If the operator only supports floating point types, don't add uint support:
// Leave as-is - floating point only operator
AT_DISPATCH_V2(dtype, "float_op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf);
Unsigned types work alongside complex types:
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
AT_EXPAND(AT_COMPLEX_TYPES),
kHalf, kBFloat16);
Check if uint types are already present:
AT_INTEGRAL_TYPES_V2 is used → already has uint support
AT_BAREBONES_UNSIGNED_TYPES is already in the list → already has uint support
When asked to add uint support, apply the decision tree above.
After adding uint support, the operator should accept uint16, uint32, and uint64 tensors. The user is responsible for functional testing.