
Performance Engineer

Etched

Software Engineering
Posted on Nov 11, 2025

Location: San Jose, CA, USA
Employment Type: Full time
Location Type: On-site
Department: Architecture

About Etched

Etched is building AI chips that are hard-coded for individual model architectures. Our first product (Sohu) only supports transformers, but has an order of magnitude more throughput and lower latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents.

Key responsibilities

  • Develop comprehensive performance models and projections for Sohu's transformer-specific architecture across varying workloads and configurations

  • Profile and analyze deep learning workloads on Sohu to identify micro-architectural bottlenecks and optimization opportunities

  • Build analytical and simulation-based models to predict performance under different architectural configurations and design trade-offs (a minimal sketch of this kind of model follows this list)

  • Collaborate with hardware architects to inform micro-architectural decisions based on workload characteristics and performance analysis

  • Drive hardware/software co-optimization by identifying opportunities where architectural features can unlock significant performance improvements

  • Characterize and optimize memory hierarchy performance, interconnect utilization, and compute resource efficiency

  • Develop performance benchmarking frameworks and methodologies specific to transformer inference workloads
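
To make the analytical-modeling bullet concrete: the simplest useful model of this kind compares a workload's operational intensity (FLOPs per byte of memory traffic) against the machine balance of the chip. The sketch below does this for a decode-time weight projection; the d_model, peak-FLOPs, and bandwidth numbers are illustrative placeholders, not Sohu specifications.

```python
# Minimal first-order model: is a decode-time projection memory-bound or compute-bound?
# All hardware numbers here are illustrative placeholders, not Sohu specifications.

def operational_intensity(d_model: int, batch: int, bytes_per_weight: int = 2) -> float:
    """FLOPs per byte for a [batch, d_model] x [d_model, d_model] projection.

    At decode time each weight is read once and reused `batch` times, so
    intensity grows roughly linearly with batch size.
    """
    flops = 2 * batch * d_model * d_model               # one multiply and one add per MAC
    bytes_moved = bytes_per_weight * d_model * d_model  # weight traffic dominates at small batch
    return flops / bytes_moved

def bound(intensity: float, peak_tflops: float, mem_bw_tbs: float) -> str:
    """Compare workload intensity against machine balance (peak FLOPs / bandwidth)."""
    machine_balance = (peak_tflops * 1e12) / (mem_bw_tbs * 1e12)
    return "compute-bound" if intensity >= machine_balance else "memory-bound"

if __name__ == "__main__":
    for batch in (1, 8, 64, 512):
        oi = operational_intensity(d_model=8192, batch=batch)
        print(f"batch={batch:4d}  intensity={oi:7.1f} FLOP/B  ->  "
              f"{bound(oi, peak_tflops=1000.0, mem_bw_tbs=3.0)}")
```

A real model for Sohu would fold in the chip's actual datapaths, on-chip memory, and scheduling, but this is the shape of the reasoning the role starts from.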

Representative projects

  • Build detailed roofline models and performance projections for Sohu across diverse transformer architectures (Llama, Mixtral, etc.)

  • Profile production inference workloads to identify and eliminate micro-architectural bottlenecks

  • Analyze memory bandwidth, compute utilization, and interconnect performance to guide next-generation architecture decisions

  • Develop performance modeling tools that predict chip behavior across different batch sizes, sequence lengths, and model configurations (see the sketch after this list)

  • Characterize the performance impact of architectural features like specialized datapaths, memory hierarchies, and on-chip interconnects

  • Compare Sohu's architectural efficiency against conventional GPU architectures through detailed bottleneck analysis

  • Inform hardware design decisions for future generations (Caelius and beyond) based on workload analysis and performance projections
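
As a concrete illustration of the roofline and batch/sequence-sweep bullets above, the sketch below projects decode throughput for a Llama-70B-like decoder stack under a simple roofline assumption (step time is the larger of compute time and memory time, including KV-cache traffic). The model shapes are public Llama-style numbers; the peak-FLOPs and bandwidth figures are placeholders, not Sohu specifications.

```python
# Roofline-style projection of decode throughput across batch sizes and sequence
# lengths for a Llama-70B-like decoder stack. Accelerator peak FLOPs and memory
# bandwidth below are illustrative placeholders, not Sohu specifications.

from dataclasses import dataclass

@dataclass
class ModelConfig:
    d_model: int = 8192
    d_ff: int = 28672
    n_layers: int = 80
    bytes_per_param: int = 2   # fp16/bf16 weights
    bytes_per_kv: int = 2      # fp16 KV cache (full MHA cache assumed for simplicity)

@dataclass
class ChipConfig:
    peak_flops: float = 1.0e15   # placeholder: 1000 TFLOPs
    mem_bw: float = 3.0e12       # placeholder: 3 TB/s

def decode_step_time(m: ModelConfig, c: ChipConfig, batch: int, seq_len: int) -> float:
    """Estimated seconds per decode step: max of compute time and memory time."""
    # Per-layer weight math: attention projections (~4*d^2) plus gated MLP (~3*d*d_ff).
    params_per_layer = 4 * m.d_model**2 + 3 * m.d_model * m.d_ff
    weight_flops = 2 * batch * params_per_layer
    # Attention score/context math over the KV cache: ~4*seq_len*d_model per token.
    attn_flops = 4 * batch * seq_len * m.d_model
    flops = m.n_layers * (weight_flops + attn_flops)

    weight_bytes = m.n_layers * params_per_layer * m.bytes_per_param          # read once per step
    kv_bytes = m.n_layers * 2 * batch * seq_len * m.d_model * m.bytes_per_kv  # K and V reads
    bytes_moved = weight_bytes + kv_bytes

    return max(flops / c.peak_flops, bytes_moved / c.mem_bw)

if __name__ == "__main__":
    m, c = ModelConfig(), ChipConfig()
    for batch in (1, 16, 128):
        for seq_len in (1024, 8192, 32768):
            t = decode_step_time(m, c, batch, seq_len)
            print(f"batch={batch:4d} seq={seq_len:6d}  "
                  f"{batch / t:10.0f} tok/s  ({t * 1e6:10.1f} us/step)")
```

A production projection would replace the roofline max() with a model of the chip's actual pipeline, memory hierarchy, and interconnect, and validate it against simulation or silicon, but sweeps of exactly this form are how batch-size and sequence-length trade-offs get bounded early.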

You may be a good fit if you have

  • Deep expertise in computer architecture and micro-architecture, particularly for accelerators or domain-specific architectures

  • Strong performance modeling and analysis skills with experience building analytical or simulation-based performance models

  • Experience profiling and optimizing deep learning workloads on hardware accelerators (GPUs, TPUs, ASICs, FPGAs)

  • Strong understanding of hardware/software co-design principles and cross-layer optimization

  • Solid foundation in digital circuit design and how micro-architectural decisions impact performance

  • Experience with reconfigurable or heterogeneous architectures

  • Ability to reason quantitatively about performance bottlenecks across the full stack from circuits to workloads

Strong candidates may also have

  • PhD or equivalent research experience in Computer Architecture or related fields

  • Experience with ASIC, FPGA, or CGRA-based accelerator development

  • Published research in computer architecture, ML systems, or hardware acceleration

  • Deep knowledge of GPU architectures and CUDA programming model

  • Experience with architecture simulators and performance modeling tools (gem5, trace-driven simulators, custom models)

  • Track record of informing architectural decisions through rigorous performance analysis

  • Familiarity with transformer model architectures and inference serving optimizations

Benefits

  • Full medical, dental, and vision packages, with generous premium coverage

  • Housing subsidy of $2,000/month for those living within walking distance of the office

  • Daily lunch and dinner in our office

  • Relocation support for those moving to West San Jose

How we’re different

Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.

We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.