NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by phenomenal technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.
As an AI Storage Platform Architect at NVIDIA, you will be the linchpin between cutting-edge hardware platforms and real-world AI deployments - translating the capabilities of Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink fabric, and Spectrum-X networking into validated, production-ready blueprints. You'll work hand-in-hand with storage ecosystem partners to co-develop reference architectures for the NVIDIA AI Data Platform and beyond, ensuring that every layer of the stack - compute, fabric, memory, and storage - is optimized for modern AI workloads!
What you’ll be doing:
Architect end-to-end reference architectures for disaggregated inference (aligned with NVIDIA Dynamo), large-scale foundation model training, and agentic AI pipelines — co-developed with storage and ecosystem partners.
Design and validate storage-optimized AI infrastructure, including KV Cache tiering strategies, checkpoint acceleration, and high-throughput dataset pipelines that leverage RDMA and NVMe-oF fabrics.
Define system-level architectures spanning Rubin GPUs, Vera CPUs, BlueField DPUs, NVLink interconnects, and Spectrum-X Ethernet to improve efficiency across the full AI lifecycle.
Develop and publish reference architectures, whitepapers, and deployment guides for the NVIDIA AI Data Platform and partner-integrated solutions.
Drive prototyping, benchmarking, and performance validation of AI infrastructure at scale - diagnosing bottlenecks across compute, networking, and storage layers.
Leverage DOCA to architect DPU-offloaded data services including storage acceleration, telemetry, security enforcement, and network virtualization.
Collaborate with RAG and autonomous AI teams to build retrieval-optimized storage architectures, including vector database integration, low-latency object access patterns, and inference-aware caching.
Partner with customers and collaborators in the ecosystem to co-innovate and deliver proofs of concept (POCs) and MVPs that demonstrate end-to-end AI platform performance leadership.
What we need to see:
12+ years of experience architecting datacenter-scale AI, HPC, or storage infrastructure as a Principal Architect, Solutions Architect, Principal Engineer, or equivalent.
Bachelor's degree in Computer Science or a related field (or equivalent experience).
Deep expertise in building AI infrastructure, including disaggregated inference architectures, LLM training pipelines, and autonomous AI system patterns.
Hands-on experience with RDMA (RoCEv2/InfiniBand), high-performance storage protocols (NVMe-oF, GPFS, Lustre, or S3-compatible object storage), and low-latency fabric design.
Strong understanding of KV Cache management strategies, including tiered memory/storage hierarchies for inference optimization.
Familiarity with Retrieval-Augmented Generation (RAG) architectures and the storage, indexing, and retrieval patterns they demand at scale.
Experience with NVIDIA DOCA or equivalent DPU/SmartNIC programming frameworks for offloading data plane and storage services.
Proven foundation in networking: Spectrum-X Ethernet, InfiniBand, NVLink Switch fabrics, congestion control, and datacenter topologies.
Ways to stand out from the crowd:
Proven experience designing reference architectures jointly with storage or infrastructure OEM partners (e.g., NetApp, DDN, VAST, Pure Storage, Dell, or similar).
Hands-on deployment experience with disaggregated inference systems, including prefill/decode separation, KV Cache offload, and request routing.
Deep familiarity with NVIDIA Grace-Hopper, Grace-Blackwell, or upcoming Vera-Rubin platforms and their system-level implications for AI workloads.
NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us. If you're creative and autonomous, we want to hear from you!
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 224,000 USD - 356,500 USD.
You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until March 17, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.