Lyric
About the Company
Why We Built Lyric: Supply chains are more critical and complex than ever. Every day, large enterprises navigate trillions of possible decisions that could impact the bottom line. Powerful algorithms and AI can address these problems, yet most organizations struggle to leverage supply chain AI at scale. Current SCM technologies are either rigid, limited-scope point solutions or custom in-house builds that demand immense expertise and investment.
That is…until now.
Enter Lyric: Lyric is an enterprise AI platform built specifically for supply chains, offering the best of both worlds:
Out-of-the-box AI solutions for optimizing networks, allocating inventory, scheduling routes, planning fulfillment capacity, promising orders, propagating demand, building predictions, analyzing scenarios, and more, plus
A platform-first approach that empowers both business and technical users with end-to-end product composability, leveraging no-code tools, their own code, or even forking our code to build and refine supply chain decision intelligence
With Lyric, enterprises no longer have to choose between flexibility and speed—they get both.
The Mission: We’re building a new era in supply chain with the team best equipped to lead it. With over 20 years at the intersection of supply chain and algorithms, we developed a deep conviction that global supply chains needed something like Lyric. Since our inception in December 2021, that conviction has been validated time and time again.
Today, a growing number of Fortune 500 companies, including Smurfit WestRock, Estée Lauder, Coca-Cola, Nike, and more, are innovating on their own terms with Lyric. We can’t wait to see what our customers, both current and future, are empowered to build with us next. Come build with us!
About the Role
At Lyric, we’re building intelligent, scalable solutions that unlock the power of data and automation in supply chains. Generative AI is a core pillar of our product strategy, enabling richer insights, smarter decisions, and better user experiences.
We are looking for a Platform / Backend Engineer to design and build the GenAI Infrastructure Layer that powers these capabilities. You’ll develop secure, performant backend systems and APIs to orchestrate large language model (LLM) calls, manage context and grounding, and serve real-time or batch GenAI workloads at scale.
You’ll work at the intersection of backend engineering, distributed systems, and ML infrastructure — enabling our product and ML teams to build next-generation AI features on a rock-solid foundation.
What You’ll Do
Design, build, and operate the backend services and platform APIs that power GenAI features in Lyric’s products.
Implement orchestration layers to route, sequence, and manage LLM calls with context grounding and prompt engineering.
Build scalable, cost-efficient GenAI serving infrastructure, including support for multiple model providers and fallback strategies (see the sketch after this list).
Ensure platform resilience, security, observability, and compliance when serving user-facing GenAI workloads.
Provide abstractions, SDKs, and tooling to enable product and ML engineers to experiment and ship GenAI features faster.
Collaborate with ML engineers, product managers, and designers to understand requirements and deliver performant, developer-friendly systems.
Monitor performance, optimize latency and cost, and stay ahead of trends in GenAI and LLMOps.
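To make the multi-provider fallback idea above concrete, here is a minimal, self-contained Python sketch of provider-agnostic routing with ordered fallback. All names here (complete_with_fallback, flaky_primary, steady_fallback) are illustrative assumptions for the example, not Lyric’s actual API; real adapters would wrap provider SDKs such as OpenAI’s or Anthropic’s.

```python
# Sketch: try a ranked list of LLM provider adapters, falling back on failure.
# Every name here is hypothetical; real adapters would wrap provider SDKs.
from dataclasses import dataclass
from typing import Callable, Sequence


class ProviderError(Exception):
    """Raised by an adapter when its upstream model call fails."""


@dataclass
class LLMResponse:
    text: str
    provider: str  # which adapter ultimately served the request


def complete_with_fallback(
    prompt: str,
    providers: Sequence[tuple[str, Callable[[str], str]]],
) -> LLMResponse:
    """Try each (name, call) adapter in order; return the first success."""
    errors: list[str] = []
    for name, call in providers:
        try:
            return LLMResponse(text=call(prompt), provider=name)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")  # record failure, try the next one
    raise RuntimeError("all providers failed: " + "; ".join(errors))


# Stub adapters standing in for real SDK clients:
def flaky_primary(prompt: str) -> str:
    raise ProviderError("rate limited")


def steady_fallback(prompt: str) -> str:
    return f"echo: {prompt}"


resp = complete_with_fallback(
    "hello", [("primary", flaky_primary), ("fallback", steady_fallback)]
)
print(resp.provider, resp.text)  # -> fallback echo: hello
```

In production the adapter list would typically be ordered by cost or latency and wrapped with retries, timeouts, and circuit breakers per provider; the sketch shows only the routing shape.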
What We’re Looking For
3–7 years of backend or platform engineering experience building distributed systems or ML/AI platforms.
Proficiency in at least one backend language (e.g., Python, Go, Java, Scala, or similar) and in designing APIs (REST, gRPC, GraphQL).
Experience building and operating scalable, high-availability backend services in cloud-native environments.
Familiarity with LLM serving and orchestration (e.g., OpenAI APIs, Anthropic, Hugging Face Inference Endpoints, or open-source LLM serving frameworks).
Understanding of prompt engineering, context grounding, and retrieval-augmented generation (RAG) concepts is a plus.
Knowledge of containerization and orchestration (Docker, Kubernetes) and observability best practices.
Ability to thrive in ambiguous, fast-moving environments and work effectively across teams.
Nice to Have
Experience with vector databases (e.g., Pinecone, Weaviate, Milvus, FAISS) and embedding workflows (a toy version is sketched after this list).
Familiarity with LLM fine-tuning, adapters (LoRA, PEFT), or hosting custom models.
Knowledge of data privacy, security, and compliance concerns specific to GenAI workloads.
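For the RAG and embedding items above, here is a toy, dependency-free sketch of retrieval-augmented prompt assembly. Bag-of-words token counts and cosine similarity stand in for a real embedding model and vector database, and grounded_prompt is a hypothetical helper name, not anything from Lyric’s stack.

```python
# Toy RAG grounding: rank documents against a question and splice the top
# matches into the prompt as context. Token counts stand in for embeddings;
# a vector database would replace the linear scan over documents.
import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Crude stand-in for an embedding model: lowercase token counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


def grounded_prompt(question: str, documents: list[str], k: int = 2) -> str:
    """Retrieve the top-k documents and build a context-grounded prompt."""
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(f"- {d}" for d in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"


docs = [
    "Warehouse A holds safety stock for the northeast region.",
    "Route 12 is scheduled for twice-weekly replenishment.",
    "Quarterly demand forecasts are refreshed every Monday.",
]
print(grounded_prompt("Which warehouse covers the northeast region?", docs))
```

A real pipeline swaps embed for a hosted embedding model and the sorted scan for an approximate nearest-neighbor query, but the grounding step, retrieve then splice into the prompt, has the same shape.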