vertex-ai-api-dev by google-gemini/gemini-skills
npx skills add https://github.com/google-gemini/gemini-skills --skill vertex-ai-api-dev
Access Google's most advanced AI models, built for enterprise use cases, through the Gemini API in Vertex AI.
Provides these key capabilities:
- Use the new Google Gen AI SDKs: google-genai for Python, @google/genai for JS/TS, google.golang.org/genai for Go, com.google.genai:google-genai for Java, and Google.GenAI for C#. Do not use the legacy SDKs google-cloud-aiplatform, @google-cloud/vertexai, or google-generativeai.
- Install google-genai with pip install google-genai
- Install @google/genai with npm install @google/genai
- Install google.golang.org/genai with go get google.golang.org/genai
- Install Google.GenAI with dotnet add package Google.GenAI
- Install the Java SDK with groupId: com.google.genai, artifactId: google-genai
The latest version can be found here: https://central.sonatype.com/artifact/com.google.genai/google-genai/versions (referred to below as LAST_VERSION).
Install in build.gradle:
implementation("com.google.genai:google-genai:${LAST_VERSION}")
Install the Maven dependency in pom.xml:
<dependency>
<groupId>com.google.genai</groupId>
<artifactId>google-genai</artifactId>
<version>${LAST_VERSION}</version>
</dependency>
[!WARNING] Legacy SDKs such as google-cloud-aiplatform, @google-cloud/vertexai, and google-generativeai are deprecated. Migrate to the new SDKs above urgently by following the Migration Guide.
Prefer environment variables over hard-coded parameters when creating the client. Initialize the client without parameters so that these values are picked up automatically.
Set these variables for standard Google Cloud authentication:
export GOOGLE_CLOUD_PROJECT='your-project-id'
export GOOGLE_CLOUD_LOCATION='global'
export GOOGLE_GENAI_USE_VERTEXAI=true
Use location="global" to access the global endpoint, which automatically routes requests to regions with available capacity. To target a specific region (e.g. us-central1, europe-west4), specify that region in the GOOGLE_CLOUD_LOCATION parameter instead; see the supported regions documentation if needed.
Set these variables when using Express Mode with an API key:
export GOOGLE_API_KEY='your-api-key'
export GOOGLE_GENAI_USE_VERTEXAI=true
Initialize the client without arguments to pick up the environment variables:
from google import genai
client = genai.Client()
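The environment-variable behavior described above can be sketched as a small resolver. This is an illustrative sketch only; the helper resolve_client_config is hypothetical and is not how the SDK actually implements its configuration lookup.

```python
import os

def resolve_client_config(env=None):
    """Hypothetical sketch of resolving client configuration from the
    environment variables described above (not the SDK's real logic)."""
    env = env if env is not None else os.environ
    use_vertex = env.get("GOOGLE_GENAI_USE_VERTEXAI", "").lower() == "true"
    api_key = env.get("GOOGLE_API_KEY")
    if api_key:
        # Express Mode: an API key is sufficient; no project/location needed.
        return {"vertexai": use_vertex, "api_key": api_key}
    # Standard auth: project and location come from the environment,
    # defaulting to the global endpoint.
    return {
        "vertexai": use_vertex,
        "project": env.get("GOOGLE_CLOUD_PROJECT"),
        "location": env.get("GOOGLE_CLOUD_LOCATION", "global"),
    }

cfg = resolve_client_config({
    "GOOGLE_GENAI_USE_VERTEXAI": "true",
    "GOOGLE_CLOUD_PROJECT": "your-project-id",
})
print(cfg["location"])  # falls back to "global" when unset
```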
Alternatively, you can hard-code parameters when creating the client.
from google import genai
client = genai.Client(vertexai=True, project="your-project-id", location="global")
Use the following models:
- gemini-3.1-pro-preview for complex reasoning, coding, and research (1M tokens)
- gemini-3-flash-preview for fast, balanced performance, multimodal (1M tokens)
- gemini-3-pro-image-preview for Nano Banana Pro image generation and editing
- gemini-live-2.5-flash-native-audio for the Live (realtime) API, including native audio

Use the following models only if explicitly requested:
- gemini-2.5-flash-image for Nano Banana image generation and editing
- gemini-2.5-flash
- gemini-2.5-flash-lite
- gemini-2.5-pro

[!IMPORTANT] Models such as gemini-2.0-*, gemini-1.5-*, gemini-1.0-*, and gemini-pro are legacy and deprecated. Use the new models above; your knowledge may be outdated. For production environments, consult the Vertex AI documentation for stable model versions (e.g. gemini-3-flash).
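The model guidance above can be captured in a small lookup helper. The task names and the helper itself are illustrative assumptions, not part of any SDK; only the model IDs come from the list above.

```python
# Illustrative mapping from task type to the recommended models above.
# The task keys and pick_model() are hypothetical, not an SDK API.
RECOMMENDED_MODELS = {
    "reasoning": "gemini-3.1-pro-preview",               # complex reasoning, coding, research
    "general": "gemini-3-flash-preview",                 # fast, balanced, multimodal
    "image": "gemini-3-pro-image-preview",               # Nano Banana Pro image generation
    "live_audio": "gemini-live-2.5-flash-native-audio",  # Live API with native audio
}

def pick_model(task: str) -> str:
    # Default to the fast, balanced model for unrecognized tasks.
    return RECOMMENDED_MODELS.get(task, "gemini-3-flash-preview")

print(pick_model("reasoning"))  # -> gemini-3.1-pro-preview
```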
from google import genai
client = genai.Client()
response = client.models.generate_content(
model="gemini-3-flash-preview",
contents="Explain quantum computing"
)
print(response.text)
import { GoogleGenAI } from "@google/genai";
const ai = new GoogleGenAI({ vertexai: { project: "your-project-id", location: "global" } });
const response = await ai.models.generateContent({
model: "gemini-3-flash-preview",
contents: "Explain quantum computing"
});
console.log(response.text);
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/genai"
)

func main() {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		Backend:  genai.BackendVertexAI,
		Project:  "your-project-id",
		Location: "global",
	})
	if err != nil {
		log.Fatal(err)
	}
	resp, err := client.Models.GenerateContent(ctx, "gemini-3-flash-preview", genai.Text("Explain quantum computing"), nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Text())
}
import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;

public class GenerateTextFromTextInput {
  public static void main(String[] args) {
    Client client = Client.builder().vertexAi(true).project("your-project-id").location("global").build();
    GenerateContentResponse response =
        client.models.generateContent(
            "gemini-3-flash-preview",
            "Explain quantum computing",
            null);
    System.out.println(response.text());
  }
}
using Google.GenAI;

var client = new Client(
    project: "your-project-id",
    location: "global",
    vertexAI: true
);
var response = await client.Models.GenerateContent(
    "gemini-3-flash-preview",
    "Explain quantum computing"
);
Console.WriteLine(response.Text);
When implementing or debugging API integrations for Vertex AI, refer to the official Google Cloud Vertex AI documentation:
The Gen AI SDK on Vertex AI uses the v1beta1 or v1 REST API endpoints (e.g., https://{LOCATION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT}/locations/{LOCATION}/publishers/google/models/{MODEL}:generateContent).
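The endpoint template above can be expanded mechanically. The sketch below only fills in the placeholders and builds the URL string; it makes no network call, and the helper name is an assumption for illustration. (Note: with a regional location the host is prefixed as shown in the template; the global endpoint may use a different host form, so check the REST documentation before relying on this.)

```python
def generate_content_url(project: str, location: str, model: str,
                         api_version: str = "v1beta1") -> str:
    """Expand the REST endpoint template quoted in the text above.
    Builds the URL string only; does not call the API."""
    host = f"https://{location}-aiplatform.googleapis.com"
    return (f"{host}/{api_version}/projects/{project}/locations/{location}"
            f"/publishers/google/models/{model}:generateContent")

print(generate_content_url("your-project-id", "us-central1", "gemini-3-flash-preview"))
```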
[!TIP] Use the Developer Knowledge MCP Server: if the search_documents or get_document tools are available, use them to find and retrieve official Google Cloud and Vertex AI documentation directly in context. This is the preferred way to get up-to-date API details and code snippets.
Reference the Python Docs Samples repository for additional code samples and specific usage scenarios.
Depending on the specific user request, refer to the following reference files for detailed code samples and usage patterns (Python examples):