The startups we back grow fast.

Our Talent team is constantly connecting passionate doers with the ambitious, impressive, action-oriented teams in our portfolio. Find your fit in the postings below.

If you are interested in an internal role at Primary, check out our Primary jobs page.

Head of Inference Kernels

Etched.ai

San Jose, CA, USA
Posted on Jul 10, 2025

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.

Job Summary

As a core member of the team, you will lead a high-performing group that builds a suite of optimized kernels and highly tuned inference stacks for a variety of state-of-the-art transformer models (e.g., Llama-3, Llama-4, DeepSeek-R1, Qwen-3, Stable Diffusion 3). You will be responsible for growing and managing this team as it pioneers novel model-mapping strategies and co-designs inference-time algorithms (e.g., speculative and parallel decoding, prefill-decode disaggregation).

Key responsibilities

  • Architect Best-in-Class Inference Performance on Sohu: Deliver continuous-batching throughput that exceeds a B200's by ≥10x on priority workloads.

  • Develop Best-in-Class Inference Mega-Kernels: Build complex fused kernels (from basic reordering and fusion to more advanced techniques such as simultaneously computing and transmitting intermediate values across sequential matmuls) that increase chip utilization and reduce inference latency, and validate these optimizations through benchmarking and regression testing in production pipelines.

  • Architect Model Mapping Strategies: Develop system-level optimizations using a mix of techniques such as tensor parallelism and expert parallelism for optimal performance.

  • Hardware-Software Co-design of Inference-Time Algorithmic Innovation: Develop and deploy production-ready inference-time algorithmic improvements (e.g., speculative decoding, prefill-decode disaggregation, KV cache offloading); a toy sketch of speculative decoding follows this list.

  • Build Scalable Team and Roadmap: Grow and retain a team of high-performing inference optimization engineers.

  • Cross-Functional Performance Alignment: Ensure the inference stack and performance goals are aligned with the software infrastructure teams (e.g., runtime and scheduling support), GTM (e.g., latency SLAs, workload targets), and hardware teams (e.g., instruction design, memory bandwidth) for future generations of our hardware.
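
For candidates less familiar with the inference-time algorithms named above, here is a minimal, toy sketch of greedy speculative decoding in Python. The draft_model and target_model functions are hypothetical stand-ins (cheap deterministic rules over integer token IDs, not real networks); only the propose-verify-accept structure carries over to production systems.

    # Toy greedy speculative decoding: a cheap draft model proposes a block
    # of tokens; the expensive target model verifies them (one parallel pass
    # on real hardware; a loop here for clarity). We keep the longest prefix
    # on which the two agree, plus one corrected target token.

    def draft_model(context):
        # Hypothetical cheap model: always predict last token + 1.
        return (context[-1] + 1) % 50

    def target_model(context):
        # Hypothetical target model: same rule, but resets after tokens
        # ending in 4, so the draft is right most (not all) of the time.
        return 0 if context[-1] % 5 == 4 else (context[-1] + 1) % 50

    def speculative_decode(prompt, num_tokens, block_size=4):
        out = list(prompt)
        while len(out) - len(prompt) < num_tokens:
            # 1) Draft proposes block_size tokens autoregressively (cheap).
            ctx, proposal = list(out), []
            for _ in range(block_size):
                proposal.append(draft_model(ctx))
                ctx.append(proposal[-1])
            # 2) Target verifies each proposed position.
            ctx, accepted = list(out), []
            for tok in proposal:
                expected = target_model(ctx)
                if expected == tok:
                    accepted.append(tok)      # agreement: keep draft token
                    ctx.append(tok)
                else:
                    accepted.append(expected) # disagreement: take target's
                    break                     # token, discard rest of block
            out.extend(accepted)
        return out[:len(prompt) + num_tokens]

    if __name__ == "__main__":
        print(speculative_decode([1, 2, 3], num_tokens=12))

In a real system the target model verifies the whole proposed block in a single forward pass, so when the draft is usually right the expensive model emits several tokens per invocation instead of one.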

Representative projects

  • Develop optimized kernels for multi-head latent attention on Sohu

  • Develop strategies to overlap compute and communication in mixture-of-experts layers (a schematic sketch of this overlap follows this list)

  • Organize the team to deliver production-ready forward-pass implementations of new state-of-the-art models within two weeks of their release, and build infrastructure that shrinks this to under one week.
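
The compute/communication-hiding theme above (and in the mega-kernel responsibility earlier) is, at its core, pipelining: start moving the next tile of data while the current tile is being computed on. Below is a deliberately schematic Python sketch of that double-buffering pattern; transfer and compute are stand-ins (sleeps) for DMA/all-to-all transfers and expert matmuls, and a real implementation would use hardware queues or streams rather than threads.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def transfer(chunk_id):
        time.sleep(0.05)               # stand-in for moving a tile on-chip
        return f"data{chunk_id}"

    def compute(data):
        time.sleep(0.08)               # stand-in for the expert matmuls
        return f"out({data})"

    def serial(num_chunks):
        # Baseline: transfer and compute strictly alternate.
        return [compute(transfer(i)) for i in range(num_chunks)]

    def pipelined(num_chunks):
        # Double buffering: fetch chunk i+1 while computing chunk i.
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            pending = io.submit(transfer, 0)          # prefetch chunk 0
            for i in range(num_chunks):
                data = pending.result()
                if i + 1 < num_chunks:
                    pending = io.submit(transfer, i + 1)
                results.append(compute(data))         # overlaps the fetch
        return results

    if __name__ == "__main__":
        for fn in (serial, pipelined):
            start = time.perf_counter()
            fn(8)
            print(fn.__name__, f"{time.perf_counter() - start:.2f}s")

With these toy latencies the serial loop takes about 8 x (0.05 + 0.08) = 1.04s, while the pipelined version takes about 0.05 + 8 x 0.08 = 0.69s: every transfer after the first is hidden behind compute.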

You may be a good fit if you have

  • Experience designing and optimizing deep learning GPU kernels using CUDA and assembly (ASM). You should have low-level programming experience maximizing performance for AI operations, leveraging tools like Composable Kernel (CK), CUTLASS, and Triton for multi-GPU and multi-platform performance.

  • Deep fluency with transformer inference architecture, optimization levers, and full-stack systems (e.g., vLLM, custom runtimes), with a history of delivering tangible performance wins on GPU hardware or custom AI accelerators.

  • A solid understanding of roofline models of compute throughput, memory bandwidth, and interconnect performance (a worked example follows this list).

  • Experience running large-scale AI workloads on heterogeneous compute clusters, optimizing for efficiency and scalability.

  • The ability to scope projects crisply, set aggressive but realistic milestones, and drive technical decision-making across the team, anticipating blockers and shifting resources proactively.
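
To make the roofline bullet concrete, here is a small worked example in Python. The peak-compute and bandwidth figures are assumed, illustrative numbers (not specs for Sohu or any particular GPU); the point is how arithmetic intensity determines whether a kernel is memory-bound or compute-bound.

    # Roofline: attainable throughput is the lesser of peak compute and
    # what memory bandwidth can feed at the kernel's arithmetic intensity.
    PEAK_FLOPS = 1000e12   # 1000 TFLOP/s peak compute (assumed)
    PEAK_BW    = 3.0e12    # 3 TB/s memory bandwidth (assumed)

    def attainable(intensity_flop_per_byte):
        return min(PEAK_FLOPS, PEAK_BW * intensity_flop_per_byte)

    def gemm_intensity(M, N, K, bytes_per_elem=2):
        # An (M,K) x (K,N) GEMM does 2*M*N*K FLOPs and, ignoring cache
        # reuse, moves roughly M*K + K*N + M*N elements through memory.
        flops = 2 * M * N * K
        bytes_moved = bytes_per_elem * (M * K + K * N + M * N)
        return flops / bytes_moved

    for M, N, K in [(8, 8192, 8192),      # small batch: decode-like GEMM
                    (4096, 8192, 8192)]:  # large batch: prefill-like GEMM
        ai = gemm_intensity(M, N, K)
        print(f"M={M}: {ai:.0f} FLOP/B -> {attainable(ai)/1e12:.0f} TFLOP/s")

With these assumed numbers the ridge point sits near 333 FLOP/B: the small-batch GEMM (about 8 FLOP/B, roughly 24 TFLOP/s attainable) is memory-bound, while the large-batch GEMM (about 2,000 FLOP/B) can reach peak compute. Continuous batching, kernel fusion, and KV-cache management are largely about pushing real workloads toward the compute roof.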

Strong candidates may also have

  • Experience with implementation of state-of-the-art reasoning and chain-of-thought models at production scale

  • Experience with implementation of newer AI compute operations on hardware (e.g., flash attention, long-context attention variants and alternatives)

  • Experience analyzing and implementing strategies such as KV-cache offloading for efficient compute resource management

  • Familiarity with linear algebra (e.g., matrix decompositions, alternative bases for vector spaces, matrix rank and its implications)

  • Experience managing lean, high-performing engineering teams and driving execution on timelines with high-quality outcomes

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to San Jose (Santana Row)

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in San Jose (Santana Row), and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.