February 13, 2026 · Generated 08:36 AM PT
Technical Intelligence Analyst Report - 2026-02-13
Executive Summary
- EPYC 9005 “Turin” Confidential Computing: New benchmarking analysis evaluates the performance overhead of AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging) on the latest EPYC 9005 processors within Microsoft Azure.
- Security vs. Performance: The report highlights the trade-offs involved in enabling hardware-backed security, generally citing a 2-10% performance cost, with higher impacts on I/O-heavy workloads.
- Software Ecosystem: Testing includes upcoming software stacks, specifically Ubuntu 26.04 development snapshots with GCC 15, indicating readiness for next-generation Linux enterprise deployments.
🔲 AMD Hardware & Products
[2026-02-13] Evaluating The Performance Cost To AMD SEV-SNP On EPYC 9005 VMs
Source: Phoronix
Key takeaways relevant to AMD:
- Validates the viability of EPYC 9005 “Turin” processors for confidential computing in major cloud environments (Azure).
- Provides critical data for enterprise customers weighing the performance penalty of enabling SEV-SNP against the security benefits.
- Demonstrates forward-looking compatibility with upcoming Linux distributions (Ubuntu 26.04) and compiler stacks (GCC 15).
Summary:
- Phoronix conducted a performance analysis of AMD SEV-SNP on EPYC 9005 “Turin” servers using Microsoft Azure.
- The comparison focused on the performance delta between a standard VM and a Confidential VM (CVM) with SEV-SNP enabled.
- Testing utilized both current Ubuntu 24.04 LTS and an early development snapshot of Ubuntu 26.04.
Details:
- Technology Scope: The review focuses on AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging). This feature provides memory encryption, integrity protections, and defenses against malicious hypervisor-based attacks and side-channel exploits.
- Performance Expectations:
- The article notes a typical reported performance overhead of 2% to 10% when enabling SEV-SNP.
- I/O-heavy workloads (e.g., database servers) may experience higher overheads, cited at 10% to 12%.
- Hardware Configuration:
- Platform: Microsoft Azure v7 series VMs.
- Processor: AMD EPYC 9V74 (80 cores).
- Test Instance: a 16 vCPU configuration (8 physical cores with SMT, exposed as 16 threads), 64GB RAM, and 550GB virtual storage.
- New Feature Note: EPYC 9005 supports SEV Trusted I/O for PCIe device protection, though this specific feature was outside the scope of this test.
- Software Environment:
- Baseline: Ubuntu 24.04 LTS running Linux kernel 6.14 and GCC 13.2.
- Forward-Looking: Ubuntu 26.04 development snapshot tested to evaluate the impact of newer kernels and the GCC 15 compiler.
- Methodology: A direct 1:1 comparison was made between a non-confidential instance and a SEV-SNP enabled instance to isolate the performance cost of the security features.
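The 1:1 methodology above reduces to running the same benchmark on both instances and reporting the relative slowdown. A minimal sketch of that calculation (the function name and all numbers are illustrative, not figures from the article):

```python
def sev_snp_overhead(baseline: float, cvm: float, higher_is_better: bool = True) -> float:
    """Percentage cost of enabling SEV-SNP for one benchmark result.

    baseline: score on the standard (non-confidential) VM
    cvm:      score on the SEV-SNP Confidential VM
    For throughput-style metrics (higher is better) the overhead is the
    relative drop; for latency-style metrics (lower is better) it is the
    relative increase.
    """
    if higher_is_better:
        return (baseline - cvm) / baseline * 100.0
    return (cvm - baseline) / baseline * 100.0

# Illustrative only: a compute-bound score dropping 1000 -> 960 is a 4%
# overhead; an I/O-bound latency rising 2.0 ms -> 2.24 ms is a 12% overhead,
# consistent with the 2-10% typical / 10-12% I/O-heavy ranges cited above.
print(round(sev_snp_overhead(1000.0, 960.0), 1))                      # 4.0
print(round(sev_snp_overhead(2.0, 2.24, higher_is_better=False), 1))  # 12.0
```

Aggregating this per-benchmark delta across a suite (geometric mean of ratios, in Phoronix's case) yields the headline overhead figure.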
📈 GitHub Stats
| Category | Repository | Total Stars | 1-Day | 7-Day |
|---|---|---|---|---|
| AMD Ecosystem | AMD-AGI/GEAK-agent | 63 | 0 | +2 |
| AMD Ecosystem | AMD-AGI/Primus | 74 | 0 | +1 |
| AMD Ecosystem | AMD-AGI/TraceLens | 58 | 0 | 0 |
| AMD Ecosystem | ROCm/MAD | 31 | 0 | 0 |
| AMD Ecosystem | ROCm/ROCm | 6,169 | +2 | +25 |
| Compilers | openxla/xla | 3,983 | +2 | +13 |
| Compilers | tile-ai/tilelang | 5,177 | +12 | +109 |
| Compilers | triton-lang/triton | 18,408 | +1 | +45 |
| Google / JAX | AI-Hypercomputer/JetStream | 407 | 0 | +3 |
| Google / JAX | AI-Hypercomputer/maxtext | 2,138 | -2 | +6 |
| Google / JAX | jax-ml/jax | 34,854 | +5 | +50 |
| HuggingFace | huggingface/transformers | 156,438 | +26 | +283 |
| Inference Serving | alibaba/rtp-llm | 1,049 | +1 | +8 |
| Inference Serving | efeslab/Atom | 336 | 0 | 0 |
| Inference Serving | llm-d/llm-d | 2,485 | +3 | +32 |
| Inference Serving | sgl-project/sglang | 23,494 | -43 | +96 |
| Inference Serving | vllm-project/vllm | 70,229 | +78 | +582 |
| Inference Serving | xdit-project/xDiT | 2,539 | +2 | +13 |
| NVIDIA | NVIDIA/Megatron-LM | 15,206 | +3 | +57 |
| NVIDIA | NVIDIA/TransformerEngine | 3,160 | 0 | +18 |
| NVIDIA | NVIDIA/apex | 8,915 | 0 | +4 |
| Optimization | deepseek-ai/DeepEP | 8,976 | -4 | +11 |
| Optimization | deepspeedai/DeepSpeed | 41,613 | +2 | +62 |
| Optimization | facebookresearch/xformers | 10,336 | +1 | +10 |
| PyTorch & Meta | meta-pytorch/monarch | 967 | -1 | +9 |
| PyTorch & Meta | meta-pytorch/torchcomms | 331 | -1 | +3 |
| PyTorch & Meta | meta-pytorch/torchforge | 620 | +4 | +7 |
| PyTorch & Meta | pytorch/FBGEMM | 1,530 | 0 | +4 |
| PyTorch & Meta | pytorch/ao | 2,685 | +6 | +18 |
| PyTorch & Meta | pytorch/audio | 2,826 | -1 | +4 |
| PyTorch & Meta | pytorch/pytorch | 97,382 | +21 | +182 |
| PyTorch & Meta | pytorch/torchtitan | 5,066 | +3 | +25 |
| PyTorch & Meta | pytorch/vision | 17,507 | -4 | +10 |
| RL & Post-Training | THUDM/slime | 4,033 | +118 | +336 |
| RL & Post-Training | radixark/miles | 874 | +1 | +29 |
| RL & Post-Training | volcengine/verl | 19,194 | +17 | +159 |