The startups we back grow fast.

Our Talent team is constantly connecting passionate doers with the ambitious, impressive, action-oriented teams in our portfolio. Find your fit in the postings below.

If you are interested in an internal role at Primary, check out the Primary jobs page.

Head of Inference

Etched

San Jose, CA, USA
USD 200k-300k / year + Equity
Posted on Oct 21, 2025

Location: San Jose

Employment Type: Full time

Location Type: On-site

Department: Architecture

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.

Job Summary

You will play a pivotal role in leading a high-performing team to build highly optimized inference stacks for a variety of state-of-the-art transformer models (e.g., DeepSeek-R1, Qwen-3, GPT-OSS, Llama-3, Llama-4, Stable Diffusion 3). Etched has a top-tier kernel team that has written kernels for these models achieving 90%+ MFU; you will work alongside this team to build the right developer interfaces that let our customers control cache allocation, disaggregated serving, and other optimizations that live above the kernel level.
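To make the "developer interfaces" idea concrete, here is a minimal sketch in Rust of what a customer-facing control surface for cache allocation and disaggregated serving might look like. This is entirely hypothetical: all type and field names below are illustrative assumptions, not Etched's actual API, which is not public.

```rust
// Hypothetical sketch of an inference-stack configuration surface.
// Every name here (CachePolicy, InferenceConfig, etc.) is illustrative,
// not part of any real Etched API.

#[derive(Debug, Clone, PartialEq)]
pub enum CachePolicy {
    /// Evict least-recently-used KV-cache pages under memory pressure.
    Lru,
    /// Pin a session's KV cache so it is never evicted.
    Pinned,
}

#[derive(Debug, Clone)]
pub struct InferenceConfig {
    /// Total bytes reserved for the KV cache.
    pub kv_cache_bytes: usize,
    pub cache_policy: CachePolicy,
    /// Run prefill and decode on separate nodes (disaggregated serving).
    pub disaggregated: bool,
}

impl InferenceConfig {
    /// Conservative defaults: 1 GiB LRU cache, co-located prefill/decode.
    pub fn new() -> Self {
        Self {
            kv_cache_bytes: 1 << 30,
            cache_policy: CachePolicy::Lru,
            disaggregated: false,
        }
    }
    pub fn kv_cache_bytes(mut self, n: usize) -> Self {
        self.kv_cache_bytes = n;
        self
    }
    pub fn cache_policy(mut self, p: CachePolicy) -> Self {
        self.cache_policy = p;
        self
    }
    pub fn disaggregated(mut self, on: bool) -> Self {
        self.disaggregated = on;
        self
    }
}

fn main() {
    // A customer tuning the stack for a large pinned cache with
    // prefill/decode split across nodes.
    let cfg = InferenceConfig::new()
        .kv_cache_bytes(8 << 30)
        .cache_policy(CachePolicy::Pinned)
        .disaggregated(true);
    println!("{:?}", cfg);
}
```

The builder style shown here is one common way such knobs are exposed; the actual interface could equally be a config file or an RPC schema.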

Key responsibilities

  • Scale and enhance Sohu’s runtime, including multi-node inference, intra-node execution, state management, and robust error handling

  • Optimize routing and communication layers using Sohu’s collectives

  • Develop tools for performance profiling and debugging, identifying bottlenecks and correctness issues

  • Support porting state-of-the-art models to our architecture. Help build programming abstractions and testing capabilities to rapidly iterate on model porting

You may be a good fit if you have

  • Strong engineering fundamentals & proficiency in Rust and/or C++.

  • Desire to continue to be a tech lead, as well as the ability to lead a team of engineers.

  • Solid systems knowledge, including Linux internals, accelerator architectures (e.g., GPUs, TPUs), and high-speed interconnects (e.g., NVLink, InfiniBand).

  • Good familiarity with PyTorch and/or JAX.

  • Good familiarity with SOTA model architectures.

  • Experience porting applications to non-standard or accelerator hardware platforms.

Strong candidates may also have

  • Experience developing low-latency, high-performance applications using both kernel-level and user-space networking stacks.

  • Deep understanding of distributed systems concepts, algorithms, and challenges, including consensus protocols, consistency models, and communication patterns.

  • Solid grasp of large language model architectures, particularly Mixture-of-Experts (MoE).

  • Experience analyzing performance traces and logs from distributed systems and ML workloads.

  • Experience building applications with extensive SIMD (Single Instruction, Multiple Data) optimizations for performance-critical paths.

  • Experience designing and implementing CI/CD pipelines for MLOps workflows.

Compensation

  • $200,000 - $300,000 + significant equity package

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.