Update: 2026-03-10 (06:58 AM)
Here is the Technical Intelligence Analyst report for 2026-03-10.
Executive Summary
- Competitor Scale-Out: NVIDIA has announced a strategic partnership with, and investment in, Thinking Machines Lab (led by Mira Murati), committing at least 1 gigawatt of deployments on its next-generation Vera Rubin architecture.
- Ecosystem Lock-in: The collaboration includes co-designing training and serving systems specifically for NVIDIA architectures, further solidifying NVIDIA’s grip on next-generation frontier AI development.
🤼‍♂️ Market & Competitors
[2026-03-10] NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership
Source: NVIDIA Blog
Key takeaways relevant to AMD:
- NVIDIA is locking in gigawatt-scale infrastructure deployments for its upcoming Vera Rubin architecture, establishing a large installed base before AMD’s competing next-generation Instinct accelerators reach the market.
- NVIDIA’s direct financial investment and co-design partnership with Thinking Machines Lab creates a highly optimized, vendor-locked ecosystem that AMD will struggle to penetrate for this specific frontier model developer.
Summary:
- NVIDIA and Thinking Machines Lab have formed a multi-year strategic partnership to build out at least one gigawatt of AI infrastructure.
- NVIDIA has made a significant financial investment in the lab to accelerate the development of customizable, collaborative frontier AI models.
- The deployment will utilize NVIDIA’s next-generation Vera Rubin systems, with launch targeted for early 2027.
Details:
- Infrastructure Scale: The partnership commits to deploying at least one gigawatt of next-generation NVIDIA Vera Rubin systems, one of the largest single architectural commitments publicly announced (see the back-of-envelope sketch after this list).
- Hardware Generation: Confirms that the deployment will rely on the “Vera Rubin” platform (NVIDIA’s successor to Blackwell), highlighting aggressive forward-purchasing and data center planning.
- Timeline: Deployment of the Rubin-based systems is officially targeted for early next year (2027).
- System Co-Design: The partnership goes beyond hardware procurement; it includes jointly designing training and serving systems optimized specifically for NVIDIA architectures.
- Workload Focus: The gigawatt cluster will be utilized to support Thinking Machines’ frontier model training, as well as platforms designed to deliver customizable AI at scale.
- Leadership Context: Thinking Machines Lab is co-founded and led by Mira Murati (former OpenAI CTO), positioning the lab as a top-tier competitor in the frontier model space with significant, exclusive backing from NVIDIA.
- Market Strategy: The initiative aims to broaden enterprise, research, and scientific community access to frontier and open models, ensuring these models are deeply optimized for the CUDA/Rubin software and hardware stack.
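
To put the gigawatt figure in context, here is a back-of-envelope sketch. Only the 1 GW commitment comes from the announcement; the per-accelerator power draw is an illustrative assumption (Rubin-generation power specs are not cited in this report), so the resulting count is an order-of-magnitude estimate, not a disclosed figure.

```python
# Back-of-envelope scale estimate for a 1 GW deployment.
# The 1 GW figure is from the announcement; everything else below
# is an illustrative assumption, not a disclosed spec.

total_power_w = 1_000_000_000   # at least 1 gigawatt, per the partnership announcement
watts_per_accel = 2_000         # ASSUMED all-in draw per accelerator (chip + cooling + networking)

accelerators = total_power_w / watts_per_accel
print(f"~{accelerators:,.0f} accelerators at {watts_per_accel} W all-in")
# -> ~500,000 accelerators; halving or doubling the assumed draw
#    shifts the estimate to roughly 1,000,000 or 250,000 respectively.
```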
📈 GitHub Stats
| Category | Repository | Total Stars | 1-Day Δ | 7-Day Δ | 30-Day Δ |
|---|---|---|---|---|---|
| AMD Ecosystem | AMD-AGI/GEAK-agent | 69 | 0 | 0 | +8 |
| AMD Ecosystem | AMD-AGI/Primus | 79 | +1 | +5 | +6 |
| AMD Ecosystem | AMD-AGI/TraceLens | 63 | 0 | +3 | +5 |
| AMD Ecosystem | ROCm/MAD | 31 | 0 | 0 | 0 |
| AMD Ecosystem | ROCm/ROCm | 6,235 | +6 | +23 | +85 |
| Compilers | openxla/xla | 4,059 | +3 | +30 | +88 |
| Compilers | tile-ai/tilelang | 5,348 | +6 | +48 | +253 |
| Compilers | triton-lang/triton | 18,605 | +12 | +71 | +222 |
| Google / JAX | AI-Hypercomputer/JetStream | 415 | 0 | +1 | +10 |
| Google / JAX | AI-Hypercomputer/maxtext | 2,165 | +1 | +9 | +31 |
| Google / JAX | jax-ml/jax | 35,039 | +7 | +52 | +226 |
| HuggingFace | huggingface/transformers | 157,702 | +95 | +411 | +1483 |
| Inference Serving | alibaba/rtp-llm | 1,060 | +2 | +4 | +19 |
| Inference Serving | efeslab/Atom | 335 | -1 | -1 | -1 |
| Inference Serving | llm-d/llm-d | 2,592 | +1 | +35 | +132 |
| Inference Serving | sgl-project/sglang | 24,280 | +17 | +264 | +863 |
| Inference Serving | vllm-project/vllm | 72,723 | +181 | +951 | +2936 |
| Inference Serving | xdit-project/xDiT | 2,565 | +2 | +14 | +38 |
| NVIDIA | NVIDIA/Megatron-LM | 15,580 | +24 | +91 | +422 |
| NVIDIA | NVIDIA/TransformerEngine | 3,193 | +3 | +13 | +44 |
| NVIDIA | NVIDIA/apex | 8,928 | 0 | +2 | +17 |
| Optimization | deepseek-ai/DeepEP | 9,036 | +3 | +22 | +69 |
| Optimization | deepspeedai/DeepSpeed | 41,784 | +13 | +68 | +217 |
| Optimization | facebookresearch/xformers | 10,363 | +1 | +8 | +32 |
| PyTorch & Meta | meta-pytorch/monarch | 987 | 0 | +5 | +27 |
| PyTorch & Meta | meta-pytorch/torchcomms | 347 | 0 | +3 | +17 |
| PyTorch & Meta | meta-pytorch/torchforge | 637 | +2 | +11 | +23 |
| PyTorch & Meta | pytorch/FBGEMM | 1,538 | 0 | +3 | +11 |
| PyTorch & Meta | pytorch/ao | 2,727 | +1 | +15 | +60 |
| PyTorch & Meta | pytorch/audio | 2,835 | 0 | +1 | +13 |
| PyTorch & Meta | pytorch/pytorch | 98,175 | +102 | +282 | +934 |
| PyTorch & Meta | pytorch/torchtitan | 5,121 | +3 | +17 | +76 |
| PyTorch & Meta | pytorch/vision | 17,555 | +5 | +14 | +59 |
| RL & Post-Training | THUDM/slime | 4,661 | +22 | +125 | +952 |
| RL & Post-Training | radixark/miles | 963 | +2 | +27 | +112 |
| RL & Post-Training | volcengine/verl | 19,794 | +40 | +239 | +733 |
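
Methodology note: below is a minimal sketch of how star totals and 1/7/30-day deltas like those above could be collected, assuming daily JSON snapshots of `stargazers_count` taken from the GitHub REST API (`GET /repos/{owner}/{repo}`). The snapshot directory, file layout, and helper names are hypothetical illustrations, not this report’s actual pipeline.

```python
# Minimal sketch: compute 1/7/30-day star deltas from daily GitHub API snapshots.
# Assumes one JSON file per day mapping "owner/repo" -> stargazers_count;
# the file layout and names are hypothetical, not this report's actual pipeline.
import json
import urllib.request
from datetime import date, timedelta
from pathlib import Path

SNAP_DIR = Path("snapshots")  # hypothetical layout: snapshots/2026-03-10.json, etc.

def fetch_stars(repo: str) -> int:
    """Current star count via the public GitHub REST API."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{repo}",
        headers={"Accept": "application/vnd.github+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

def load_snapshot(day: date) -> dict:
    """Load the stored star counts for a given day, or {} if no snapshot exists."""
    path = SNAP_DIR / f"{day.isoformat()}.json"
    return json.loads(path.read_text()) if path.exists() else {}

def deltas(repo: str, today: date) -> tuple:
    """Return (current total, 1-day delta, 7-day delta, 30-day delta)."""
    now = fetch_stars(repo)
    out = [now]
    for days in (1, 7, 30):
        past = load_snapshot(today - timedelta(days=days)).get(repo)
        out.append(now - past if past is not None else None)  # None = no snapshot yet
    return tuple(out)

if __name__ == "__main__":
    today = date(2026, 3, 10)
    for repo in ["ROCm/ROCm", "vllm-project/vllm"]:
        total, d1, d7, d30 = deltas(repo, today)
        print(f"{repo}: {total:,} stars (1d {d1}, 7d {d7}, 30d {d30})")
```

Unauthenticated GitHub API requests are limited to 60 per hour, so polling a list of this size daily would require passing an API token in the `Authorization` header.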