Use the Gemini API with OpenAI Fallback in TypeScript

Original link: https://sometechblog.com/posts/try-gemini-api-with-openai-fallback/

This post provides helper functions for safely using Gemini's public API via OpenAI's TS/JS library, including a fallback mechanism for rate-limit problems. First, the models to use (including Gemini's) are defined so that they appear as type-safe autosuggestions. The `getCompletion` function takes an array of two configuration objects, each describing an AI query made with OpenAI's `chat.completions.create`. If the first model fails (for example, due to rate limiting), the function automatically tries the second one, dynamically using the Gemini or OpenAI API key depending on the model. A second helper, `getJSONCompletion`, provides type-safe structured-output parsing for complex data. It uses `openai.beta.chat.completions.parse`, trying the first model and falling back to the second on failure, just like `getCompletion`. Zod is used to define the expected data structure, ensuring the output matches the specified format. Together, these helpers make it easy and reliable to work with both Gemini and OpenAI models, even under Gemini's rate limits.

A Hacker News thread discusses using the Gemini API with an OpenAI fallback in TypeScript, referencing the blog post on sometechblog.com. One commenter recommends the Vercel AI SDK as an alternative, since it abstracts over multiple LLMs (including local models) and uses Zod for type validation, making it easy to switch models. Another commenter criticizes TypeScript's visual aesthetics, comparing it unfavorably to PHP and arguing that the prominent keywords at the start of each line get in the way of scanning the code.

Original Article

If you want to use Gemini’s public API but at the same time have a safe fallback in case you have exhausted the rate limits, you can use the OpenAI TS/JS library and a few helper functions. In my particular case I needed a type-safe solution for a chartmaker app with a fallback, since Gemini’s gemini-2.5-pro-exp-03-25 model is restricted to 20 requests/min.

First, you need to define which models you want to use so that they show up as autosuggestions when you use the helper functions:

type Model = ChatCompletionParseParams['model'] | 'gemini-2.5-pro-exp-03-25' | 'gemini-2.0-flash';
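
For completeness, the snippets here assume imports roughly like the following; the exact module paths can vary between openai-node versions:

import OpenAI from 'openai';
import { zodResponseFormat } from 'openai/helpers/zod';
import type { ChatCompletionMessageParam } from 'openai/resources/chat/completions';
import type {
  ChatCompletionParseParams,
  ParsedChatCompletion,
} from 'openai/resources/beta/chat/completions';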

The helper function requires one argument: an array of two configuration objects for the desired AI queries (in principle you can chain as many as you want, as the generalized sketch after the function shows, or choose other AIs that are compatible with the OpenAI library):

export const getCompletion = async (
  options: [
    Omit<ChatCompletionParseParams, 'model'> & { model: Model },
    Omit<ChatCompletionParseParams, 'model'> & { model: Model },
  ],
) => {
  try {
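    // Route Gemini models to Gemini's OpenAI-compatible endpoint; everything else goes to the standard OpenAI endpoint.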
    const isGemini = options[0].model.includes('gemini');
    const openai = new OpenAI(
      isGemini
        ? {
            apiKey: process.env.GEMINI_API_KEY,
            baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
          }
        : { apiKey: process.env.OPENAI_API_KEY },
    );

    return await openai.chat.completions.create(options[0]);
  } catch (error) {
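    // The first model failed (e.g. rate limit hit): log it, then retry once with the fallback configuration.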
    console.log(`Failed completion for first model (${options[0].model})`, error);

    const isGemini = options[1].model.includes('gemini');
    const openai = new OpenAI(
      isGemini
        ? {
            apiKey: process.env.GEMINI_API_KEY,
            baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
          }
        : { apiKey: process.env.OPENAI_API_KEY },
    );

    return await openai.chat.completions.create(options[1]);
  }
};
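
Since the Gemini-or-OpenAI client selection is repeated in both branches (and again in the structured-output helper below), and since the tuple of two configurations can in principle grow to any length, both can be generalized. Here is a hypothetical sketch; clientFor and getCompletionWithFallbacks are illustrative names rather than part of the original code:

type Options = Omit<ChatCompletionParseParams, 'model'> & { model: Model };

// Pick the right API key and base URL for a given model.
const clientFor = (model: Model) =>
  model.includes('gemini')
    ? new OpenAI({
        apiKey: process.env.GEMINI_API_KEY,
        baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
      })
    : new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export const getCompletionWithFallbacks = async (optionsList: Options[]) => {
  let lastError: unknown;
  for (const options of optionsList) {
    try {
      // Try each configuration in order and return the first success.
      return await clientFor(options.model).chat.completions.create(options);
    } catch (error) {
      console.log(`Failed completion for model (${options.model})`, error);
      lastError = error;
    }
  }
  // All models failed (e.g. every rate limit was exhausted).
  throw lastError;
};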

The helper function can be used in the following way:

const messages = [{ role: 'user', content: 'Tell a short joke.' }] satisfies ChatCompletionMessageParam[];
const completion = await getCompletion([
  { model: 'gemini-2.0-flash', messages },
  { model: 'gpt-3.5-turbo', messages },
]);

console.log(completion);
// {
//   "choices": [
//     {
//       "finish_reason": "stop",
//       "index": 0,
//       "message": {
//         "content": "Why don't scientists trust atoms?\n\nBecause they make up everything!\n",
//         "role": "assistant"
//       }
//     }
//   ],
//   "created": 1743757243,
//   "model": "gemini-2.0-flash",
//   "object": "chat.completion",
//   "usage": {
//     "completion_tokens": 16,
//     "prompt_tokens": 5,
//     "total_tokens": 21
//   }
// }

You can also create a helper function for type-safe structured output:

export const getJSONCompletion = async <T>(
  options: [
    Omit<ChatCompletionParseParams, 'model'> & { model: Model },
    Omit<ChatCompletionParseParams, 'model'> & { model: Model },
  ],
): Promise<ParsedChatCompletion<T> & { _request_id?: string | null | undefined }> => {
  try {
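    // Same client selection as in getCompletion: Gemini models use the Gemini key and OpenAI-compatible base URL.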
    const isGemini = options[0].model.includes('gemini');
    const openai = new OpenAI(
      isGemini
        ? {
            apiKey: process.env.GEMINI_API_KEY,
            baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
          }
        : { apiKey: process.env.OPENAI_API_KEY },
    );

    return await openai.beta.chat.completions.parse({ ...options[0] });
  } catch (error) {
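    // Fall back to the second configuration and retry the structured parse once.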
    console.log(`Failed completion for first model (${options[0].model})`, error);

    const isGemini = options[1].model.includes('gemini');
    const openai = new OpenAI(
      isGemini
        ? {
            apiKey: process.env.GEMINI_API_KEY,
            baseURL: 'https://generativelanguage.googleapis.com/v1beta/openai/',
          }
        : { apiKey: process.env.OPENAI_API_KEY },
    );

    return await openai.beta.chat.completions.parse({ ...options[1] });
  }
};
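
The same loop-based fallback works for the structured variant. A sketch reusing the hypothetical clientFor helper and Options type from the earlier sketch; the cast mirrors the typed contract of getJSONCompletion:

export const getJSONCompletionWithFallbacks = async <T>(
  optionsList: Options[],
): Promise<ParsedChatCompletion<T>> => {
  let lastError: unknown;
  for (const options of optionsList) {
    try {
      return (await clientFor(options.model).beta.chat.completions.parse(
        options,
      )) as ParsedChatCompletion<T>;
    } catch (error) {
      console.log(`Failed JSON completion for model (${options.model})`, error);
      lastError = error;
    }
  }
  throw lastError;
};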

It can be used in the following way:

import z from 'zod';

//... Omitted for brevity

const messages = [{ role: "user", content: "Your instructions..."}] satisfies ChatCompletionMessageParam[];
const format = z.object({ customizations: z.array(z.string()) });
const responseFormat = zodResponseFormat(format, 'chart-customizations');

const completion = await getJSONCompletion<z.infer<typeof format>>(
  [
    { model: 'gemini-2.5-pro-exp-03-25', response_format: responseFormat, messages, temperature: 0 },
    { model: 'o3-mini-2025-01-31', reasoning_effort: 'high', response_format: responseFormat, messages },
  ],
);

const customizationsArr = completion.choices[0].message.parsed?.customizations;
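
Since parsed is typed as nullable, it is worth guarding explicitly instead of optional-chaining past a failure; a minimal sketch (the error message is illustrative):

const message = completion.choices[0].message;
if (!message.parsed) {
  // The model can refuse, or return output that fails schema validation.
  throw new Error(message.refusal ?? 'No parsed structured output returned');
}
const customizations = message.parsed.customizations; // string[]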