Remote Chrome for your automation.
Drop in a connect URL and your existing Puppeteer or Playwright code runs on our infrastructure — stealth countermeasures included, billed per session, no browser farm to maintain.
import puppeteer from 'puppeteer';

const browser = await puppeteer.connect({
  browserWSEndpoint: 'wss://browser.datasonar.dev?key=osk_...',
});
const page = await browser.newPage();
await page.goto('https://example.com');
const title = await page.title();
console.log(title);
await browser.disconnect();
Drop-in for Puppeteer and Playwright
Replace your local Chrome with a remote endpoint. Same APIs, same calls, no code changes beyond the connect URL. Your existing automation code runs against our infrastructure with stealth countermeasures applied automatically.
Stealth by default
Every browser session is fingerprint-randomized, headless-flag stripped, and protected against the JavaScript checks that flag normal headless Chrome. The same protections that power our scrape endpoints apply to your direct CDP sessions.
Pay per session, not per server
No idle browser farms. No autoscaling tuning. No memory leaks at 3 a.m. Open a session, do work, close it — each session is billed like any other request to the API.
Works from any language
Anything that speaks Chrome DevTools Protocol — Puppeteer, Playwright, chrome-remote-interface, Selenium 4 CDP mode, custom CDP clients — connects with a single URL.
When to use the browser endpoint
Migrate off self-hosted browser farms
Teams running their own Puppeteer pool on Kubernetes can swap the connect URL and decommission the cluster. Lower ops burden, predictable per-session cost.
Complex multi-step flows
When a single scrape call isn't enough — multi-page checkout flows, OAuth dances, dashboard interactions — drive the browser directly with the framework you already know.
Integration testing
Run your existing Playwright test suite against the staging environment from inside CI without spinning up a local browser. Faster CI runs, fewer flaky tests.
Custom screenshot pipelines
Generate screenshots at any resolution, capture full-page images, record traces — all the standard browser-automation patterns, hosted.
Remote browser questions
Is this the same as Browserless or Browserbase? ▾
What browser engines are supported? ▾
How do I authenticate the connection? ▾
Pass your API key as the key query parameter in the connect URL: wss://browser.datasonar.dev?key=osk_.... The connection is rejected if the key is invalid, revoked, or over its monthly quota.
Are sessions isolated? ▾
Can I use proxies with the browser endpoint? ▾
How is this billed? ▾
What's the difference between the browser endpoint and /v1/scrape? ▾
/v1/scrape is one-shot: send a URL, get cleaned data back. The browser endpoint is for interactive multi-step automation where you need fine-grained control — drive specific actions, wait for specific events, capture screenshots at specific moments. Use scrape for content; use browser for workflows.