What Changed

The OpenAI provider configuration guidance has been updated with more options for customizing API usage. The default OpenAI provider now reads the OPENAI_BASE_URL and OPENAI_WEBSOCKET_BASE_URL environment variables for environment-based endpoint configuration, and the set_default_openai_api function can override the default OpenAI Responses API in favor of the Chat Completions API.
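A minimal sketch of the environment-based endpoint configuration, using the variable names above; the gateway URLs are placeholders, and in practice you would set these in your shell or deployment environment rather than in code:

```python
import os

# The default OpenAI provider reads these variables at startup
# (names from the update); the gateway URLs below are placeholders.
os.environ["OPENAI_BASE_URL"] = "https://gateway.example.com/v1"
os.environ["OPENAI_WEBSOCKET_BASE_URL"] = "wss://gateway.example.com/v1"

print(os.environ["OPENAI_BASE_URL"])
```

Because the provider reads the environment, no code change is needed to repoint traffic at a different endpoint.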

Why This Matters for GEO

This update matters for GEO because provider configuration affects how content is generated and cited in AI responses. By customizing the OpenAI API and provider configuration, content creators can better align their stack with how AI-generated answers are produced: the Chat Completions API suits conversational response patterns, while environment-based endpoint configuration adds security and flexibility, for example by routing API traffic through a gateway you control.

What To Do

  1. Review your OpenAI provider configuration: Check your current setup and move endpoint settings into the OPENAI_BASE_URL and OPENAI_WEBSOCKET_BASE_URL environment variables, customizing the OpenAI API as needed.
  2. Use the set_default_openai_api function: Override the default OpenAI Responses API with the Chat Completions API where a conversational request/response shape fits your integration better.
  3. Configure tracing separately: If your model traffic uses one key or client but tracing should use a different OpenAI key, pass use_for_tracing=False when setting the default key or client, and set the tracing key separately.
  4. Use MultiProvider for prefix-based model routing: If you need to mix different model names in one run, use MultiProvider; set openai_use_responses_websocket=True if you also want the default OpenAI provider to use the Responses API over WebSocket.
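Steps 2 and 3 above can be sketched as follows, assuming the openai-agents Python SDK, whose configuration helpers include set_default_openai_api, set_default_openai_key, and set_tracing_export_api_key; the keys are placeholders:

```python
from agents import (
    set_default_openai_api,
    set_default_openai_key,
    set_tracing_export_api_key,
)

# Step 2: use the Chat Completions API instead of the default Responses API.
set_default_openai_api("chat_completions")

# Step 3: this key is used for model traffic only, not for tracing export.
set_default_openai_key("sk-model-placeholder", use_for_tracing=False)

# Tracing then exports spans with its own, separately configured key.
set_tracing_export_api_key("sk-tracing-placeholder")
```

Keeping the tracing key separate means a gateway-scoped or proxy key can serve model traffic while traces still reach your own OpenAI account.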
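Step 4 might look like the following sketch, assuming MultiProvider is importable from agents.models.multi_provider and accepts the openai_use_responses_websocket flag described in the update; verify both against your SDK version:

```python
from agents import Agent, RunConfig
from agents.models.multi_provider import MultiProvider

# MultiProvider routes by model-name prefix: unprefixed names go to the
# default OpenAI provider, while prefixed names (e.g. "litellm/...") are
# dispatched to the matching provider.
provider = MultiProvider(openai_use_responses_websocket=True)  # flag from the update

# Pass the provider per run; agents in that run can then reference models
# from any configured provider by name.
run_config = RunConfig(model_provider=provider)
agent = Agent(name="Assistant", model="gpt-4o-mini")  # example model name
```

The run-level provider keeps routing decisions out of individual agent definitions, so the same agents work unchanged when you add or swap providers.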

Sources