Netlify AI Gateway

Use AI models without managing keys or billing

2025-12-18

Product Introduction

  1. Definition: Netlify AI Gateway is a managed AI orchestration layer integrated directly into the Netlify platform. It acts as a secure proxy service between your Netlify-hosted applications (Serverless Functions, Edge Functions, etc.) and supported third-party AI providers (OpenAI, Anthropic, Google Gemini).
  2. Core Value Proposition: It eliminates the operational friction of integrating multiple AI models by automating API key management and credential security and by consolidating billing through Netlify credits. This enables developers to rapidly prototype and deploy AI-powered features directly from their Netlify projects without managing external provider accounts or infrastructure.

Main Features

  1. Automatic Environment Variable Injection:
    • How it works: Netlify injects provider-specific environment variables (OPENAI_API_KEY, OPENAI_BASE_URL, ANTHROPIC_API_KEY, ANTHROPIC_BASE_URL, GEMINI_API_KEY, GOOGLE_GEMINI_BASE_URL) into all Netlify compute contexts (Functions, Edge Functions, deploy previews) unless you have set them manually.
    • Technology: Netlify's build and runtime environment sets these values dynamically, so official client libraries (OpenAI JS SDK, Anthropic SDK, Google GenAI SDK) work out of the box with zero configuration; see the sketch after this list.
  2. Unified AI Provider Proxy:
    • How it works: Requests made using the injected environment variables are routed through Netlify's AI Gateway infrastructure. The Gateway forwards the request to the target AI provider (OpenAI, Anthropic, Gemini), handles authentication, and returns the response.
    • Technology: Acts as a reverse proxy, abstracting the direct provider API endpoints. Uses NETLIFY_AI_GATEWAY_BASE_URL and NETLIFY_AI_GATEWAY_KEY internally for explicit routing if needed.
  3. Netlify Credit-Based Billing:
    • How it works: Token usage (input + output) from AI requests is converted into Netlify credits. Billing is consolidated within the user's Netlify account, removing the need for separate provider billing setups.
    • Technology: Netlify's backend systems meter token consumption per request based on provider responses and apply charges against the account's credit balance (Free, Personal, Pro, or Enterprise credits).
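
To make the zero-configuration flow of features 1 and 2 concrete, here is a minimal sketch of a Netlify Function that calls OpenAI through the AI Gateway. It assumes a Functions 2.0-style handler and the openai npm package; the file path, model name, and request shape are illustrative, and all credentials come from the injected OPENAI_API_KEY and OPENAI_BASE_URL variables rather than anything stored in the project.

```typescript
// netlify/functions/summarize.mts (assumed path and handler shape)
import OpenAI from "openai";

// No explicit configuration: the SDK reads OPENAI_API_KEY and OPENAI_BASE_URL
// from the environment, so requests are routed through the AI Gateway.
const client = new OpenAI();

export default async (req: Request): Promise<Response> => {
  const { text } = await req.json();

  const completion = await client.chat.completions.create({
    model: "gpt-4o", // illustrative; check the "Model availability" table for current models
    messages: [
      { role: "system", content: "Summarize the user's text in two sentences." },
      { role: "user", content: text },
    ],
  });

  return Response.json({ summary: completion.choices[0].message.content ?? "" });
};
```

Deploying this function (or running netlify dev after an initial production deploy) sends the request through the Gateway, which handles authentication with the provider and bills token usage against Netlify credits.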

Problems Solved

  1. Pain Point: Operational overhead and security risks associated with manually managing API keys for multiple AI providers across different projects and environments.
  2. Target Audience:
    • Frontend/Full-stack Developers using frameworks like Next.js, Astro, Nuxt, SvelteKit, or TanStack Start deployed on Netlify.
    • DevOps Engineers seeking simplified, secure AI integration within CI/CD pipelines.
    • Product Teams rapidly iterating on AI features without infrastructure management.
  3. Use Cases:
    • Building production-ready chat applications using models like Claude Sonnet or GPT-5 without handling keys.
    • Adding AI-generated content (summaries, translations, product descriptions) to static sites or web apps.
    • Prototyping AI features locally and deploying instantly with identical configuration.

Unique Advantages

  1. Differentiation: Unlike direct provider integration or generic API gateways, Netlify AI Gateway is natively integrated into the deployment platform, offering automatic configuration, unified Netlify billing, and seamless local development via netlify dev or the Vite plugin. Standalone gateways typically require manual key management and lack platform-level billing integration.
  2. Key Innovation: Dynamic, context-aware environment variable injection specifically tailored for Netlify's compute primitives. This deep integration ensures official AI SDKs work immediately within the Netlify ecosystem, providing a zero-configuration developer experience unmatched by standalone proxy services.

Frequently Asked Questions (FAQ)

  1. Does Netlify AI Gateway work during local development?
    Yes, the AI Gateway is fully supported with netlify dev. For Vite-based projects, the Netlify Vite plugin also enables access. Note: A project requires at least one production deploy to activate the Gateway initially.
  2. How secure is Netlify AI Gateway for handling API keys?
    Netlify AI Gateway eliminates the need to store or expose your raw provider API keys in your code or environment. It uses secure, short-lived tokens internally. Netlify does not store prompts or model outputs.
  3. Can I use my own API keys with Netlify AI Gateway?
    Yes. If you set any provider-specific environment variable (e.g., OPENAI_API_KEY) manually in your Netlify project settings, Netlify will not override it or inject the corresponding BASE_URL. The Gateway will only proxy requests for providers where you haven't set your own keys.
  4. What happens if I exceed the AI Gateway rate limits?
    Requests that exceed the Tokens Per Minute (TPM) limit for a specific model on your plan are rate-limited. Implement Netlify's built-in rate limiting on your Functions/Edge Functions to prevent client abuse and avoid hitting account-wide Gateway limits; a retry sketch appears after this FAQ.
  5. Are all OpenAI, Anthropic, and Gemini models supported?
    Netlify AI Gateway supports specific, commonly used models from each provider (e.g., gpt-4o, claude-3-haiku, gemini-1.5-flash). Refer to the official "Model availability" table in the Netlify docs for the current, exhaustive list, as new models are added over time.
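
Netlify's built-in rate limiting is configured as described in the Netlify docs; separately, application code can degrade gracefully when the Gateway itself rate-limits a request. The sketch below assumes such limits surface as HTTP 429 errors, which the OpenAI SDK exposes via the thrown error's status field; the model name, retry budget, and backoff values are illustrative.

```typescript
// netlify/functions/chat.mts (assumed path): retry briefly on gateway rate limits
import OpenAI from "openai";

const client = new OpenAI(); // credentials come from the injected environment variables

async function completeWithRetry(prompt: string, retries = 2): Promise<string> {
  for (let attempt = 0; ; attempt++) {
    try {
      const completion = await client.chat.completions.create({
        model: "gpt-4o", // illustrative; see the "Model availability" table
        messages: [{ role: "user", content: prompt }],
      });
      return completion.choices[0].message.content ?? "";
    } catch (err: any) {
      // Assumes rate-limited requests are reported as HTTP 429 on the error object.
      if (err?.status === 429 && attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
        continue;
      }
      throw err;
    }
  }
}

export default async (req: Request): Promise<Response> => {
  const { prompt } = await req.json();
  try {
    return Response.json({ reply: await completeWithRetry(prompt) });
  } catch {
    return new Response("The model is busy, please try again shortly.", { status: 503 });
  }
};
```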
