Empowering developers and site owners with flexible, on-device AI chat solutions that respect privacy and reduce costs.
ChatDelta.com began as a proof-of-concept by David Christian Liedle (@DavidCanHelp)—a developer-advocate and lifelong tinkerer. The idea was simple but ambitious: one SDK that lets any app tap into multiple AI models (OpenAI, Claude, Gemini, open-weights, you name it) without the vendor lock-in.
While ChatDelta handled the plumbing, David quickly realised many people just wanted a ready-made chat box they could drop onto a site. Enter ChatEmbed.ai: a thin, self-hostable widget built as ChatDelta’s “first customer”.
ChatDelta acts as a broker: your code hits one API surface, and ChatDelta fans the request out to whichever model (or blend of models) best fits your cost, latency, or accuracy constraints. It's designed to run in your own infra or as a managed service. That's the goal, at least: today the v0 API simply returns "Ok" when you visit api.chatdelta.com, with rate limiting enabled.
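The broker idea can be sketched in a few lines. Everything below is an illustrative assumption, not the real ChatDelta SDK: the `Provider` shape, the pricing/latency fields, and the `route`/`chat` names are hypothetical, and the "models" are stubs. The point is the pattern: one call surface for the caller, with provider selection (here, cheapest within a latency budget) hidden behind it.

```typescript
// Hypothetical sketch of ChatDelta-style routing; names and shapes are
// illustrative assumptions, not the actual SDK.
type Provider = {
  name: string;
  costPer1kTokens: number; // assumed pricing metadata (USD)
  p50LatencyMs: number;    // observed median latency
  send: (prompt: string) => Promise<string>;
};

// Pick the cheapest provider that meets the caller's latency budget.
function route(providers: Provider[], maxLatencyMs: number): Provider {
  const eligible = providers.filter(p => p.p50LatencyMs <= maxLatencyMs);
  if (eligible.length === 0) {
    throw new Error("no provider meets the latency budget");
  }
  return eligible.reduce((a, b) =>
    a.costPer1kTokens <= b.costPer1kTokens ? a : b
  );
}

// One API surface for callers, regardless of which model answers.
async function chat(
  providers: Provider[],
  prompt: string,
  maxLatencyMs = 2000
): Promise<string> {
  return route(providers, maxLatencyMs).send(prompt);
}

// Demo with stubs standing in for real model back-ends.
const providers: Provider[] = [
  { name: "fast-expensive", costPer1kTokens: 0.03, p50LatencyMs: 300,
    send: async p => `fast: ${p}` },
  { name: "slow-cheap", costPer1kTokens: 0.002, p50LatencyMs: 4000,
    send: async p => `cheap: ${p}` },
];

chat(providers, "hello", 1000).then(console.log); // → "fast: hello"
```

A real broker would also handle retries, streaming, and blending answers from several models, but the routing decision itself stays this small.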
PicoChat (ChatEmbed) is the lightweight front-end, and the v0 of ChatEmbed.ai. Shipped as a single .html file, it boots an open-source model via WebLLM, so answers happen right in the browser—no server calls, no PII leakage, and zero invoice surprises. PicoChat is the free offering from ChatEmbed.ai. You can embed PicoChat in your own site; https://chatembed.ai/picochat.html serves as a live example of an embedded ChatEmbed widget.
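One simple way to drop PicoChat into a page is an iframe pointing at the hosted file. The helper below is a sketch under assumptions: ChatEmbed.ai doesn't document an embed API here, so the function name, default sizing, and attribute choices are all illustrative.

```typescript
// Build an iframe snippet for embedding PicoChat. The sizing and styling
// defaults are illustrative assumptions, not an official embed spec.
function picoChatEmbed(width = 360, height = 520): string {
  return [
    `<iframe src="https://chatembed.ai/picochat.html"`,
    `  width="${width}" height="${height}"`,
    `  style="border:0" title="PicoChat"></iframe>`,
  ].join("\n");
}

console.log(picoChatEmbed());
```

Because the model runs in the visitor's browser via WebLLM, the iframe needs no API keys or backend configuration—the snippet above is the whole integration.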
ChatEmbed.ai offers three tiers of AI chat widgets, each designed for different use cases and requirements:
PicoChat, our free, privacy-first offering. It runs entirely in the browser using WebLLM, with no server calls. Perfect for demos, personal projects, or privacy-conscious applications.
A premium widget powered by ChatDelta's API. It offers faster responses and more advanced capabilities while keeping the same simple embed experience.
Enterprise-grade solution with custom model routing, analytics, and advanced features. Built for high-traffic applications requiring maximum flexibility.