Product Engineer (Integrations)

ClickHouse

Product

Berlin, Germany · Munich, Germany

EUR 90k-160k / year + Equity

Posted on Apr 24, 2026

Location

Europe; Berlin; London; Munich; Paris; Zurich

Employment Type

Full time

Location Type

Hybrid

Department

Engineering

About Langfuse

Langfuse is the open source LLM engineering platform that helps teams build useful AI applications via tracing, evaluation, and prompt management (mission, product). We are now part of ClickHouse.

We're building the "Datadog" of this category. Model capabilities continue to improve, but building useful applications remains hard, in startups and enterprises alike.

Langfuse is the largest open source solution in this category: trusted by 19 of the Fortune 50, with >2k customers, >26M monthly SDK downloads, and >6M Docker pulls.

We joined ClickHouse in January 2026 because LLM observability is fundamentally a data problem and Langfuse already ran on ClickHouse. Together we can move faster on product while staying true to open source and self-hosting, and join forces on GTM and sales to accelerate revenue.

Previously backed by Y Combinator, Lightspeed, and General Catalyst.

We're a small, engineering-heavy, and experienced team in Berlin and San Francisco. We're also hiring engineers in EU time zones and expect one week per month in our Berlin office (how we work).

Why Integrations Engineering at Langfuse

Your work puts Langfuse into developers’ hands.

Our SDKs are downloaded 26M+ times per month, and for many developers the first thing they touch is an integration: a few lines of code that connect their favorite framework to Langfuse. When that experience is seamless, they're wowed. When it's not, we may have lost them. You'll own that critical first impression across 40+ framework integrations.

You’ll live at the frontier of LLM application development.

The AI framework ecosystem moves fast — new agent frameworks, orchestration libraries, and model providers emerge every week. You’ll be among the first to instrument them, giving you unmatched exposure to how cutting-edge AI applications are built. The developers you serve are some of the most ambitious software engineers in the world, and working closely with them will make you an expert on LLM engineering yourself.

Everything you build is open source and immediately visible.

All Langfuse integrations are MIT-licensed. When you ship a new integration or improve an existing one, thousands of developers benefit the same day — and they’ll tell you about it in GitHub issues, on Twitter, and in our community channels.

What You’ll Do

  • Build and maintain framework integrations. Langfuse integrates with 40+ frameworks and model providers: OpenAI SDK, Vercel AI SDK, LangChain, LlamaIndex, Pydantic AI, OpenAI Agents, CrewAI, Amazon Bedrock AgentCore, LiveKit, and many more. You’ll own these integrations end-to-end — from initial implementation to ongoing maintenance as frameworks evolve. When a new framework gains traction, you’ll be among the first to instrument it.

  • Design new integration patterns for emerging frameworks. When a framework or a new agent orchestrator appears, you’ll evaluate it, design the right instrumentation approach (callback handler, decorator, OTEL auto-instrumentation, or a combination), build the integration, write the docs, and ship it. You’ll develop strong opinions on what makes a great integration experience.

  • Contribute to the core SDKs. While integrations are your primary focus, your work will surface needs in our Python and TypeScript SDKs. You’ll contribute improvements to the core SDK when your integration work demands it — whether that’s a new hook point, better context propagation, or performance optimizations.

  • Write documentation and integration guides. At Langfuse, docs are part of our core product. When you ship a new integration, the guide ships with it. You’ll own the integration docs, quickstart tutorials, cookbooks, and migration paths that help developers get started in minutes.

  • Be a voice in the developer community. You’ll engage with framework communities, respond to integration-related GitHub issues, write blog posts about new integrations, and be present in our Slack/Discord channels. You’ll build relationships with framework maintainers and represent Langfuse in the broader AI developer ecosystem.

What We’re Looking For

  • Passionate about the LLM ecosystem. You’ve built real applications with frameworks like LangChain, Pydantic AI, Vercel AI SDK, LlamaIndex, or similar — or you’re ready to go deep and get your hands dirty with every major framework in the space. You’re excited about this ecosystem, not just familiar with it.

  • Strong in Python and/or TypeScript. You write clean, reliable code. You don’t need to be a systems-level performance expert, but you care about code quality and understand that integrations run inside other people’s production systems.

  • Product-minded engineer. You think about the developer who’s going to use your integration at 11pm trying to ship a feature. You obsess over the getting-started experience: how many lines of code does it take? Is the error message helpful? Does the docs example actually work?

  • Self-directed and motivated. You know how to develop conviction about what to build and how to ship it. You don’t wait for detailed specs — you investigate the framework, talk to users, and propose the right approach.

  • Excited about open source and developer community. You genuinely enjoy talking to developers about their integration challenges, writing clear documentation, and contributing to open source projects.

  • Thrives in a small, accountable team. Your output is visible and matters. You’re comfortable owning outcomes, not just tasks.

CS or quantitative degree preferred, but not required. We care far more about what you’ve built and your hunger to learn.

Bonus Points

  • Experience with OpenTelemetry internals or observability instrumentation

  • Contributions to popular open source projects, SDKs, or developer tools

  • Experience building developer tooling, CLIs, or client libraries

  • Former founder or early startup experience

  • Active presence in AI/ML developer communities (blog posts, talks, open source)

No candidate checks every box. If you think you’d be a good fit for this role, please apply.

Projects You Could Own

  • Build and ship a Langfuse integration for an emerging agent framework (e.g., OpenClaw, new OTEL-based instrumentation)

  • Design the integration pattern for a new category of AI tools (e.g., voice agents via LiveKit/Pipecat)

  • Create comprehensive quickstart cookbooks that get developers from zero to traced in under 5 minutes

  • Work with the OpenTelemetry community to improve GenAI semantic conventions

  • Maintain and upgrade our most popular integrations (OpenAI, LangChain, Vercel AI SDK) as those frameworks ship breaking changes

Process

We can run the full process, from application to offer letter, in under 7 days (hiring process).

Tech Stack

We run a TypeScript monorepo: Next.js on the frontend, Express workers for background jobs, PostgreSQL for transactional data, ClickHouse for tracing at scale, S3 for file storage, and Redis for queues and caching. You should be familiar with a good chunk of this, but we trust you'll pick up the rest quickly (Stack, Architecture).

How we ship

Link to handbook

  • We trust you to take ownership (ownership overview) for your area. You identify what to build, propose solutions (RFCs), and ship them. Everyone here thinks about the user experience and the technical implementation at the same time. Everyone manages their own Linear.

  • You're never alone. Anyone on the team is happy to jump into a whiteboard session with you; 15 minutes of shared discussion can meaningfully improve the output.

  • We run on a maker schedule with lightweight communication. There are two recurring meetings a week: a Monday priorities check-in (15 min) and a Friday demo session (60 min).

  • Code reviews are mentorship. New joiners get all PRs reviewed to learn the codebase, patterns, and how the systems work (onboarding guide).

  • We use AI as much as possible in our workflows to make our users happy. We encourage everyone to experiment with new tooling and AI workflows.

Why Langfuse (now part of ClickHouse)

  • This role puts you at the forefront of the AI revolution, partnering with engineering teams who are building the technology that will define the next decade(s).

  • This is an open-source devtools company. We ship daily, talk to customers constantly, and fight for great DX. Reliability and performance are central requirements.

  • Your work ships under your name. You'll appear on changelog posts for the features you build, and during launch weeks, you'll produce videos to announce what you've shipped to the community. You’ll own the full delivery end to end.

  • We're solving hard engineering problems: figuring out which features actually help users improve AI product performance, building SDKs developers love, visualizing data-rich traces, rendering massive LLM prompts and completions efficiently in the UI, and processing terabytes of data per day through our ingestion pipeline.

  • You'll work closely with the ClickHouse team and learn how they build a world-class infrastructure company. We're in a period of strong growth: Langfuse is growing organically and accelerating through ClickHouse's GTM. (Why we joined ClickHouse)

  • If you wonder what to build next, our users are a Slack message or a GitHub Discussions post away.

  • You’re on a continuous learning journey. The AI space develops at breakneck speed and our customers are at the forefront. We need to be ready to meet them where they are and deliver the tools they need just-in-time.
