
    Featured articles

  • Mar 3

    How Fluid compute works on Vercel

    Fluid compute is Vercel’s next-generation compute model designed to handle modern workloads with real-time scaling, cost efficiency, and minimal overhead. Traditional serverless architectures optimize for fast execution, but struggle with requests that spend significant time waiting on external models or APIs, leading to wasted compute. To address these inefficiencies, Fluid compute dynamically adjusts to traffic demands, reusing existing resources before provisioning new ones.

    At the center of Fluid is the Vercel Functions router, which orchestrates function execution to minimize cold starts, maximize concurrency, and optimize resource usage. It dynamically routes invocations to pre-warmed or active instances, ensuring low-latency execution. By efficiently managing compute allocation, the router prevents unnecessary cold starts and scales capacity only when needed. Let's look at how it intelligently manages function execution.
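The reuse-before-provision idea can be sketched in a few lines. This is an illustrative model only, not the actual Vercel Functions router: prefer an idle warm instance, and cold-start only when no existing capacity can absorb the invocation.

```javascript
// Illustrative sketch only, not the actual Vercel Functions router.
class FunctionRouter {
  constructor(coldStart) {
    this.coldStart = coldStart; // async factory that provisions a new instance
    this.idle = [];             // pre-warmed or recently used instances
  }

  async invoke(request) {
    // Reuse existing resources before provisioning new ones
    const instance = this.idle.pop() ?? (await this.coldStart());
    try {
      return await instance.handle(request);
    } finally {
      this.idle.push(instance); // keep the instance warm for the next request
    }
  }
}
```

With this shape, back-to-back invocations pay the cold-start cost once: the second request finds the first request's instance sitting idle and reuses it.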

    Mariano and Collier
  • Apr 14

    Migrating Grep from Create React App to Next.js

    Grep is extremely fast code search. You can search over a million repositories for specific code snippets, files, or paths. Search results need to appear instantly without loading spinners. Originally built with Create React App (CRA) as a fully client-rendered Single-Page App (SPA), Grep was fast—but with CRA now deprecated, we wanted to update the codebase to make it even faster and easier to maintain going forward. Here's how we migrated Grep to Next.js—keeping the interactivity of a SPA, but with the performance improvements from React Server Components.

    Ethan and Kevin
  • Jun 9

    Building secure AI agents

    An AI agent is a language model with a system prompt and a set of tools. Tools extend the model's capabilities by adding access to APIs, file systems, and external services. But they also create new paths for things to go wrong. The most critical security risk is prompt injection. Similar to SQL injection, it allows attackers to slip commands into what looks like normal input. The difference is that with LLMs, there is no standard way to isolate or escape input. Anything the model sees, including user input, search results, or retrieved documents, can override the system prompt or even trigger tool calls. If you are building an agent, you must design for worst-case scenarios. The model will see everything an attacker can control. And it might do exactly what they want.
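One minimal defensive pattern (sketched here with hypothetical tool names, not a prescribed API) is to authorize tool calls in code rather than trusting the model's judgment: an allowlist for ordinary tools, plus an out-of-band confirmation for destructive ones that injected text can never satisfy.

```javascript
// Hypothetical guard; tool names are illustrative. The key property is
// that authorization happens outside the model: nothing the model reads
// (user input, documents, search results) can widen its own permissions.
const ALLOWED_TOOLS = new Set(["searchDocs", "readFile"]);
const DESTRUCTIVE_TOOLS = new Set(["deleteFile"]);

function authorizeToolCall(call, { userConfirmed = false } = {}) {
  if (DESTRUCTIVE_TOOLS.has(call.name)) {
    // Requires confirmation collected from the user out-of-band; a
    // prompt-injected "the user already approved this" changes nothing.
    return userConfirmed;
  }
  return ALLOWED_TOOLS.has(call.name);
}
```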

    Malte Ubl

    Latest news.

  • Engineering
    Feb 18

    We Ralph Wiggumed WebStreams to make them 10x faster

    When we started profiling Next.js server rendering earlier this year, one thing kept showing up in the flamegraphs: WebStreams. Not the application code running inside them, but the streams themselves. The Promise chains, the per-chunk object allocations, the microtask queue hops. After Theo Browne's server rendering benchmarks highlighted how much compute time goes into framework overhead, we started looking at where that time actually goes. A lot of it was in streams.

    Turns out that WebStreams have an incredibly complete test suite, and that makes them a great candidate for an AI-based re-implementation in a purely test-driven and benchmark-driven fashion. This post is about the performance work we did, what we learned, and how this work is already making its way into Node.js itself through Matteo Collina's upstream PR.

    The problem

    Node.js has two streaming APIs. The older one (stream.Readable, stream.Writable, stream.Transform) has been around for over a decade and is heavily optimized. Data moves through C++ internals. Backpressure is a boolean. Piping is a single function call. The newer one is the WHATWG Streams API: ReadableStream, WritableStream, TransformStream. This is the web standard. It powers fetch() response bodies, CompressionStream, TextDecoderStream, and increasingly, server-side rendering in frameworks like Next.js and React.

    The web standard is the right API to converge on. But on the server, it is slower than it needs to be. To understand why, consider what happens when you call reader.read() on a native WebStream in Node.js, even if data is already sitting in the buffer:

    1. A ReadableStreamDefaultReadRequest object is allocated with three callback slots
    2. The request is enqueued into the stream's internal queue
    3. A new Promise is allocated and returned
    4. Resolution goes through the microtask queue

    That is four allocations and a microtask hop to return data that was already there.
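You can observe the hop directly. Even when a chunk is already enqueued, a native read() never resolves synchronously:

```javascript
// Even with data already buffered, reader.read() resolves through the
// microtask queue; the result is never available synchronously.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("already buffered");
    controller.close();
  },
});

const reader = stream.getReader();
let settled = false;
reader.read().then(() => {
  settled = true;
});
// At this point `settled` is still false: resolution costs a microtask
// hop even though the chunk was sitting in the queue the whole time.
```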
    Now multiply that by every chunk flowing through every transform in a rendering pipeline. Or consider pipeTo(). Each chunk passes through a full Promise chain: read, write, check backpressure, repeat. A {value, done} result object is allocated per read. Error propagation creates additional Promise branches.

    None of this is wrong. These guarantees matter in the browser, where streams cross security boundaries, where cancellation semantics need to be airtight, where you do not control both ends of a pipe. But on the server, when you are piping React Server Components through three transforms at 1KB chunks, the cost adds up. We benchmarked native WebStream pipeThrough at 630 MB/s for 1KB chunks. Node.js pipeline() with the same passthrough transform: ~7,900 MB/s. That is a 12x gap, and the difference is almost entirely Promise and object allocation overhead.

    What we built

    We have been working on a library called fast-webstreams that implements the WHATWG ReadableStream, WritableStream, and TransformStream APIs backed by Node.js streams internally. Same API, same error propagation, same spec compliance. The overhead is removed for the common cases. The core idea is to route operations through different fast paths depending on what you are actually doing.

    When you pipe between fast streams: zero Promises

    This is the biggest win. When you chain pipeThrough and pipeTo between fast streams, the library does not start piping immediately. Instead, it records upstream links: source → transform1 → transform2 → ... When pipeTo() is called at the end of the chain, it walks upstream, collects the underlying Node.js stream objects, and issues a single pipeline() call. One function call. Zero Promises per chunk. Data flows through Node's optimized C++ path. The result: ~6,200 MB/s. That is ~10x faster than native WebStreams and close to raw Node.js pipeline performance.
    If any stream in the chain is not a fast stream (say, a native CompressionStream), the library falls back to either native pipeThrough or a spec-compliant pipeTo implementation.

    When you read chunk by chunk: synchronous resolution

    When you call reader.read(), the library tries nodeReadable.read() synchronously. If data is there, you get Promise.resolve({value, done}). No event loop round-trip. No request object allocation. Only when the buffer is empty does it register a listener and return a pending Promise. The result: ~12,400 MB/s, or 3.7x faster than native.

    The React Flight pattern: where the gap is largest

    This is the one that matters most for Next.js. React Server Components use a specific byte stream pattern: create a ReadableStream with type: 'bytes', capture the controller in start(), enqueue chunks externally as the render produces them. Native WebStreams: ~110 MB/s. fast-webstreams: ~1,600 MB/s. That is 14.6x faster for the exact pattern used in production server rendering.

    The speed comes from LiteReadable, a minimal array-based buffer we wrote to replace Node.js's Readable for byte streams. It uses direct callback dispatch instead of EventEmitter, supports pull-based demand and BYOB readers, and costs about 5 microseconds less per construction. That matters when React Flight creates hundreds of byte streams per request.

    Fetch response bodies: streams you don't construct yourself

    The examples above all start with new ReadableStream(...). But on the server, most streams do not start that way. They start from fetch(). The response body is a native byte stream owned by Node.js's HTTP layer. You cannot swap it out. This is a common pattern in server-side rendering: fetch data from an upstream service, pipe the response through one or more transforms, and forward the result to the client. With native WebStreams, each hop in this chain pays the full Promise-per-chunk cost. Three transforms means roughly 6-9 Promises per chunk.
    At 1KB chunks, that gets you ~260 MB/s. The library handles this through deferred resolution. When patchGlobalWebStreams() is active, Response.prototype.body returns a lightweight fast shell wrapping the native byte stream. Calling pipeThrough() does not start piping immediately. It just records the link. Only when pipeTo() or getReader() is called at the end does the library resolve the full chain: it creates a single bridge from the native reader into Node.js pipeline() for the transform hops, then serves reads from the buffered output synchronously.

    The cost model: one Promise at the native boundary to pull data in. Zero Promises through the transform chain. Sync reads at the output. The result: ~830 MB/s, or 3.2x faster than native for the three-transform fetch pattern. For simple response forwarding without transforms, it is 2.0x faster (850 vs 430 MB/s).

    Benchmarks

    All numbers are throughput in MB/s at 1KB chunks on Node.js v22. Higher is better.

    Core operations

      Operation      Node.js streams   fast     native   fast vs native
      read loop      26,400            12,400   3,300    3.7x
      write loop     26,500            5,500    2,300    2.4x
      pipeThrough    7,900             6,200    630      9.8x
      pipeTo         14,000            2,500    1,400    1.8x
      for-await-of   —                 4,100    3,000    1.4x

    Transform chains

    The Promise-per-chunk overhead compounds with chain depth:

      Depth          fast    native   fast vs native
      3 transforms   2,900   300      9.7x
      8 transforms   1,000   115      8.7x

    Byte streams

      Pattern                          fast    native   fast vs native
      start + enqueue (React Flight)   1,600   110      14.6x
      byte read loop                   1,400   1,400    1.0x
      byte tee                         1,200   750      1.6x

    Response body patterns

      Pattern                fast   native   fast vs native
      Response.text()        900    910      1.0x
      Response forwarding    850    430      2.0x
      fetch → 3 transforms   830    260      3.2x

    Stream construction

    Creating streams is also faster, which matters for short-lived streams:

      Type              fast    native   fast vs native
      ReadableStream    2,100   980      2.1x
      WritableStream    1,300   440      3.0x
      TransformStream   470     220      2.1x

    Spec compliance

    fast-webstreams passes 1,100 out of 1,116 Web Platform Tests.
    Node.js's native implementation passes 1,099. The 16 failures that remain are either shared with native (like the unimplemented type: 'owning' transfer mode) or are architectural differences that do not affect real applications.

    How we are deploying this

    The library can patch the global ReadableStream, WritableStream, and TransformStream constructors. The patch also intercepts Response.prototype.body to wrap native fetch response bodies in fast stream shells, so fetch() → pipeThrough() → pipeTo() chains hit the pipeline fast path automatically.

    At Vercel, we are looking at rolling this out across our fleet. We will do so carefully and incrementally. Streaming primitives sit at the foundation of request handling, response rendering, and compression. We are starting with the patterns where the gap is largest: React Server Component streaming, response body forwarding, and multi-transform chains. We will measure in production before expanding further.

    The right fix is upstream

    A userland library should not be the long-term answer here. The right fix is in Node.js itself. Work is already happening. After a conversation on X, Matteo Collina submitted nodejs/node#61807, "stream: add fast paths for webstreams read and pipeTo." The PR applies two ideas from this project directly to Node.js's native WebStreams:

    • read() fast path: When data is already buffered, return a resolved Promise directly without creating a ReadableStreamDefaultReadRequest object. This is spec-compliant because read() returns a Promise either way, and resolved promises still run callbacks in the microtask queue.
    • pipeTo() batch reads: When data is buffered, batch multiple reads from the controller queue without creating per-chunk request objects. Backpressure is respected by checking desiredSize after each write.

    The PR shows ~17-20% faster buffered reads and ~11% faster pipeTo. These improvements benefit every Node.js user for free. No library to install, no patching, no risk.
    James Snell's Node.js performance issue #134 outlines several additional opportunities: C++-level piping for internally-sourced streams, lazy buffering, eliminating double-buffering in WritableStream adapters. Each of these could close the gap further. We will keep contributing ideas upstream. The goal is not for fast-webstreams to exist forever. The goal is for WebStreams to be fast enough that it does not need to.

    What we learned the hard way

    The spec is smarter than it looks. We tried many shortcuts. Almost every one of them broke a Web Platform Test, and the test was usually right. The ReadableStreamDefaultReadRequest pattern, the Promise-per-read design, the careful error propagation: they exist because cancellation during reads, error identity through locked streams, and thenable interception are real edge cases that real code hits.

    Promise.resolve(obj) always checks for thenables. This is a language-level behavior you cannot avoid. If the object you resolve with has a .then property, the Promise machinery will call it. Some WPT tests deliberately put .then on read results and verify that the stream handles it correctly. We had to be very careful about where {value, done} objects get created in hot paths.

    Node.js pipeline() cannot replace WHATWG pipeTo. We hoped to use pipeline() for all piping. It causes 72 WPT failures. The error propagation, stream locking, and cancellation semantics are fundamentally different. pipeline() is only safe when we control the entire chain, which is why we collect upstream links and only use it for full fast-stream chains.

    Reflect.apply, not .call(). The WPT suite monkey-patches Function.prototype.call and verifies that implementations do not use it to invoke user-provided callbacks. Reflect.apply is the only safe way. This is a real spec requirement.
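The thenable pitfall takes only a few lines to demonstrate:

```javascript
// Promise.resolve() treats any object with a .then property as a
// thenable: the Promise machinery calls .then instead of resolving with
// the object itself. A {value, done} read result created carelessly in
// a hot path can therefore be intercepted by user-controlled data.
const readResult = {
  value: "chunk",
  done: false,
  then(resolve) {
    resolve("hijacked");
  },
};

Promise.resolve(readResult).then((v) => {
  // v is "hijacked", not the read result object
  console.log(v);
});
```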
    We built most of fast-webstreams with AI

    Two things made that viable. The amazing Web Platform Tests gave us 1,116 tests as an immediate, machine-checkable answer to "did we break anything?" And we built a benchmark suite early on so we could measure whether each change actually moved throughput. The development loop was: implement an optimization, run the WPT suite, run benchmarks. When tests broke, we knew which spec invariant we had violated. When benchmarks did not move, we reverted.

    The WHATWG Streams spec is long and dense. The interesting optimization opportunities sit in the gap between what the spec requires and what current implementations do. read() must return a Promise, but nothing says that Promise cannot already be resolved when data is buffered. That kind of observation is straightforward when you can ask an AI to analyze algorithm steps for places where the observable behavior can be preserved with fewer allocations.

    Try it

    fast-webstreams is available on npm as experimental-fast-webstreams. The "experimental" prefix is intentional. We are confident in correctness, but this is an area of active development. If you are building a server-side JavaScript framework or runtime and hitting WebStreams performance limits, we would love to hear from you. And if you are interested in improving WebStreams in Node.js itself, Matteo's PR is a great place to start.

    Malte Ubl
  • Engineering
    Feb 3

    Making agent-friendly pages with content negotiation

    Agents browse the web, but they read differently than humans. They don't need CSS, client-side JavaScript, or images. All of that markup fills up their context window and consumes tokens without adding useful information. What agents need is clean, structured text. That's why we've updated our blog and changelog pages to make markdown accessible to agents while still delivering a full HTML and CSS experience to human readers. This works through content negotiation, an HTTP mechanism where the server returns different formats for the same content based on what the client requests. No duplicate content or separate sites.

    How agents request content

    Agents use the HTTP Accept header to specify what formats they prefer. Claude Code, for example, sends this header when fetching pages: Accept: text/markdown, text/html, */*. By listing text/markdown first, the agent signals that markdown is preferred over HTML when available. Many agents are starting to explicitly prefer markdown this way. Try it out by sending a curl request: curl https://vercel.com/blog/self-driving-infrastructure -H "accept: text/markdown"

    Our middleware examines the Accept header on incoming requests and detects these preferences. When markdown is preferred, it routes the request to a Next.js route handler that converts our Contentful rich-text content into markdown. This transformation preserves the content's structure. Code blocks keep their syntax highlighting markers, headings maintain their hierarchy, and links remain functional. The agent receives the same information as the HTML version, just in a format optimized for token efficiency.

    Performance benefits

    A typical blog post weighs 500KB with all the HTML, CSS, and JavaScript. However, the same content as markdown is only 2KB. That's a 99.6% reduction in payload size. For agents operating under token limits, smaller payloads mean they can consume more content per request and spend their budget on actual information instead of markup. They work faster and hit limits less often.

    We maintain synchronization between HTML and markdown versions using Next.js 16 remote cache and shared slugs. When content updates in Contentful, both versions refresh simultaneously.

    How agents discover available content

    Agents need to discover what's available. We implemented a markdown sitemap that lists all content in a format optimized for agent consumption. The sitemap includes metadata about each piece, including publication dates, content types, and direct links to both HTML and markdown versions. This gives agents a complete map of available information and lets them choose the format that works best for their needs. Want to see this in action? Add .md to the end of this page's URL to get the markdown version.
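The preference check itself is small. Here is a simplified sketch with a hypothetical helper name; it ignores the q-values that RFC 9110 defines for weighting preferences, which a production implementation should honor:

```javascript
// Hypothetical helper, not Vercel's actual middleware. Returns true when
// the client lists text/markdown ahead of text/html in its Accept header.
// Real content negotiation should also weigh RFC 9110 q-values.
function prefersMarkdown(acceptHeader = "") {
  const types = acceptHeader
    .split(",")
    .map((entry) => entry.trim().split(";")[0].toLowerCase());
  const md = types.indexOf("text/markdown");
  const html = types.indexOf("text/html");
  return md !== -1 && (html === -1 || md < html);
}

console.log(prefersMarkdown("text/markdown, text/html, */*")); // true
console.log(prefersMarkdown("text/html,application/xhtml+xml,*/*")); // false
```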

    Zach and Mitul
  • Engineering
    Jan 14

    Introducing: React Best Practices

    We've encapsulated 10+ years of React and Next.js optimization knowledge into react-best-practices, a structured repository optimized for AI agents and LLMs. React performance work is usually, well, reactive. A release goes out, the app feels slower, and the team starts chasing symptoms. That’s expensive, and it’s easy to optimize the wrong thing. We’ve seen the same root causes across production codebases for more than a decade:

    • Async work that accidentally becomes sequential
    • Large client bundles that grow over time
    • Components that re-render more than they need to

    The “why” here is simple: these aren’t micro-optimizations. They show up as waiting ti...

    Shu and Andrew
  • Engineering
    Dec 22

    AI SDK 6

    With over 20 million monthly downloads and adoption by teams ranging from startups to Fortune 500 companies, the AI SDK is the leading TypeScript toolkit for building AI applications. It provides a unified API, allowing you to integrate with any AI provider, and seamlessly integrates with Next.js, React, Svelte, Vue, and Node.js. The AI SDK enables you to build everything from chatbots to complex background agents.

    Gregor, Lars, and 3 others
  • Engineering
    Dec 9

    Inside Workflow DevKit: How framework integrations work

    When we announced the Workflow Development Kit (WDK) at Ship AI just over a month ago, we wanted it to reflect our Open SDK Strategy, allowing developers to build with any framework and deploy to any platform. At launch, WDK supported Next.js and Nitro. Today it works with eight frameworks, including SvelteKit, Astro, Express, and Hono, with TanStack Start and React Router in active development. This post explains the pattern behind those integrations and how they work under the hood.

    Adrian Lam
  • Engineering
    Nov 24

    Workflow Builder: Build your own workflow automation platform

    Today we're open-sourcing Workflow Builder, a complete visual automation platform powered by the Workflow Development Kit (WDK). The project includes a visual editor, execution engine, and infrastructure, giving you what you need to build your own workflow automation tools and agents. Deploy it to Vercel and customize it for your use case.

    What's included in Workflow Builder

    Workflow Builder is a production-ready Next.js application with a fully interactive workflow editor, AI-assisted workflow generation, six prebuilt integration modules, and end-to-end observability.

    Visual workflow editor

    The visual workflow editor lets you build, connect, and execute workflows using drag-and-drop steps. You get real-time validation, undo/redo, auto-save, and persistent state without writing code. Prebuilt integrations include: Resend (emails) ...

    Chris, Hayden, and Adrian
  • Engineering
    Oct 31

    BotID Deep Analysis catches a sophisticated bot network in real-time

    On October 29 at 9:44am, BotID Deep Analysis detected an unusual spike in traffic patterns across one of our customer's projects. Traffic increased by 500% above normal baseline. What made this particularly interesting wasn't just the volume increase. The spike appeared to be coming from legitimate human users. Our team immediately began investigating and reached out to the customer to discuss what appeared to be an influx of bot traffic cleverly disguised as human activity. But before we could even complete that conversation, something remarkable happened: Deep Analysis, powered by Kasada’s machine learning backend, had already identified the threat and adapted to correctly classify it.

    Andrew and Liz
  • Engineering
    Oct 28

    Bun runtime on Vercel Functions

    We now support Bun as a runtime option for Vercel Functions, available in Public Beta. You can choose between Node.js and Bun for your project, configuring runtime behavior based on workload. We're working closely with the Bun team to bring this capability to production. This flexibility allows you to choose what works best for your use case. Use Node.js for maximum compatibility or switch to Bun for compute-intensive applications that benefit from faster execution. Through internal testing, we've found that Bun reduced average latency by 28% in CPU-bound Next.js rendering workloads compared to Node.js. These gains come from Bun's runtime architecture, built in Zig with optimized I/O and scheduling that reduce overhead in JavaScript execution and data handling.

    Tom, Javi, and 3 others
  • Engineering
    Sep 25

    Preventing the stampede: Request collapsing in the Vercel CDN

    When you deploy a Next.js app with Incremental Static Regeneration (ISR), pages get regenerated on-demand after their cache expires. ISR lets you get the performance benefits of static generation while keeping your content fresh. But there's a problem. When many users request the same ISR route at once and the cache is expired, each request can trigger its own function invocation. This is called a "cache stampede." It wastes compute, overloads your backend, and can cause downtime. The Vercel CDN now prevents this with request collapsing. When multiple requests hit the same uncached path, only one request per region invokes a function. The rest wait and get the cached response. Vercel automatically infers cacheability for each request through framework-defined infrastructure, configuring our globally distributed router. No manual configuration needed.
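The core of request collapsing is a map of in-flight work keyed by path. A minimal sketch of the idea (illustrative only, not the Vercel CDN implementation):

```javascript
// Concurrent requests for the same uncached key share one in-flight
// promise: only the first triggers the origin; the rest await its result.
const inFlight = new Map();

function collapsed(key, invokeOrigin) {
  const existing = inFlight.get(key);
  if (existing) return existing; // join the request already in flight

  const promise = invokeOrigin(key).finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```

With this shape, a stampede of concurrent requests for the same expired path produces a single origin invocation, and every caller resolves with the same response.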

    Sachin Raja
  • Engineering
    Sep 19

    How we made global routing faster with Bloom filters

    Recently, we shipped an optimization to our global routing service that reduced its memory usage by 15%, improved time-to-first-byte (TTFB) from the 75th percentile and above by 10%, and significantly improved routing speeds for websites with many static paths. A small number of websites, with hundreds of thousands of static paths, were creating a bottleneck that slowed down our entire routing service. By replacing a slow JSON parsing operation with a Bloom filter, we brought path lookup latency down to nearly zero and improved performance for everyone.
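The data structure itself is simple to sketch. This toy version (illustrative only; production filters tune bit-array size and hash count to the expected set size and target false-positive rate) shows the key property: a "no" answer is definitive, so misses skip the expensive lookup entirely, while rare false positives merely fall through to the slower path.

```javascript
// Toy Bloom filter for path membership checks. mightContain() may return
// a rare false positive, but a false answer is guaranteed correct: the
// path was never added, so the expensive lookup can be skipped.
class BloomFilter {
  constructor(bits = 8192, hashes = 3) {
    this.bits = bits;
    this.hashes = hashes;
    this.bitset = new Uint8Array(Math.ceil(bits / 8));
  }

  // Seeded FNV-1a; adequate for a sketch
  hash(value, seed) {
    let h = (2166136261 ^ seed) >>> 0;
    for (let i = 0; i < value.length; i++) {
      h ^= value.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.bits;
  }

  add(value) {
    for (let s = 0; s < this.hashes; s++) {
      const bit = this.hash(value, s);
      this.bitset[bit >> 3] |= 1 << (bit & 7);
    }
  }

  mightContain(value) {
    for (let s = 0; s < this.hashes; s++) {
      const bit = this.hash(value, s);
      if (!(this.bitset[bit >> 3] & (1 << (bit & 7)))) return false;
    }
    return true;
  }
}
```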

    Matthew and Tim
  • Engineering
    Sep 9

    The second wave of MCP: Building for LLMs, not developers

    When the MCP standard first launched, many teams rushed to ship something. Many servers ended up as thin wrappers around existing APIs with minimal changes. A quick way to say "we support MCP". At the time, this made sense. MCP was new, teams wanted to get something out quickly, and the obvious approach was mirroring existing API structures. Why reinvent when you could repackage?

    But the problem with this approach is that LLMs don’t work like developers. They don’t reuse past code or keep long-term state. Each conversation starts fresh. LLMs have to rediscover which tools exist, how to use them, and in what order. With low-level API wrappers, this leads to repeated orchestration, inconsistent behavior, and wasted effort as LLMs repeatedly solve the same puzzles. MCP works best when tools handle complete user intentions rather than exposing individual API operations. One tool that deploys a project end-to-end works better than four tools that each handle a piece of the deployment pipeline.

    Boris and Andrew
  • Engineering
    Sep 4

    Stress testing Biome's noFloatingPromises lint rule

    Recently we partnered with the Biome team to strengthen their noFloatingPromises lint rule to catch more subtle edge cases. This rule prevents unhandled Promises, which can cause silent errors and unpredictable behavior. Once Biome had an early version ready, they asked if we could help stress test it with some test cases. At Vercel, we know good tests require creativity just as much as attention to detail. To ensure strong coverage, we wanted to stretch the rule to its limits and so we thought it would be fun to turn this into a friendly internal competition. Who could come up with the trickiest examples that would still break the updated lint rule? Part of the fun was learning together, but before we dive into the snippets, let’s revisit what makes a Promise “float”.
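As a refresher, a Promise "floats" when its result is neither awaited nor given a rejection handler, so a failure disappears instead of surfacing at the call site:

```javascript
async function save(ok) {
  if (!ok) throw new Error("write failed");
  return "saved";
}

// Floating: the returned Promise is dropped. If save() ever rejects
// here, nothing catches it, and the error surfaces (if at all) as a
// process-level unhandled rejection far from this call site.
function fireAndForget() {
  save(true);
}

// Handled: awaiting means both the value and any rejection propagate
// to the caller, which is what noFloatingPromises enforces.
async function handled() {
  return await save(true);
}
```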

    Dimitri Mitropoulos
