Technical Intelligence Analyst Report
Date: 2026-01-30

Executive Summary

  • AMD Server Dominance: Fresh benchmarks (Linux 6.18) confirm AMD EPYC 9755 (Zen 5) maintains decisive performance leadership over Intel Xeon 6 (Granite Rapids) across nearly 500 tests.
  • Intel AI Software Push: Intel released LLM-Scaler-vLLM 1.3 for Arc Battlemage, updating the stack to vLLM 0.11.1 and adding support for Qwen3 and DeepSeek-OCR, intensifying consumer AI competition.
  • Intel Mobile Power Adjustments: Intel is disabling GuC Duty Cycle Control (DCC) for Panther Lake's integrated graphics in the Linux Xe driver due to latency regressions, potentially reducing power efficiency on early Panther Lake systems.
  • NVIDIA Open Source Progress: The Rust-based “Nova” driver for NVIDIA GPUs is preparing Turing (RTX 20 series) support for Linux 7.0, marking continued maturation of NVIDIA’s open-source strategy.
  • Open Source AI Pushback: AerynOS (formerly Serpent OS) has officially banned LLM-generated code contributions, citing quality and ethical concerns, signaling a potential shift in open-source governance.

🔲 AMD Hardware & Products

[2026-01-30] AMD EPYC 9755 Delivers Decisive Performance Leadership Over Xeon 6 Granite Rapids With Nearly 500 Benchmarks

Source: Phoronix

Key takeaway relevant to AMD:

  • Competitive Advantage: Confirms Zen 5 (Turin) superiority over Intel’s latest Granite Rapids in high-core-count (128c) scenarios, reinforcing AMD’s value proposition for data center clients.
  • Platform Maturity: The benchmarks validate stable performance on upcoming Linux kernels (6.18 LTS) and compilers (GCC 15.2), ensuring readiness for enterprise Linux deployments (e.g., Ubuntu 26.04).

Summary:

  • A comprehensive benchmark comparison was conducted between dual-socket AMD EPYC 9755 (Turin, Zen 5) and Intel Xeon 6980P (Granite Rapids).
  • Testing was expanded from 200 to nearly 500 benchmarks, run on the latest Linux software stack.
  • Results indicate AMD holds a “decisive performance leadership” position.

Details:

  • Test Environment:
    • OS: Ubuntu 25.10.
    • Kernel: Linux 6.18.1 LTS.
    • Compiler: GCC 15.2.
    • Storage: KIOXIA KCD8XPUG1T92 PCIe Gen 5 NVMe.
  • Hardware Configuration:
    • AMD: Dual EPYC 9755 (128 cores each) using standard DDR5.
    • Intel: Dual Xeon 6980P (128 cores each) using 24 x 64GB DDR5-8800 MRDIMMs (multiplexed-rank DIMMs offering higher bandwidth than standard DDR5).
  • Methodology:
    • Power consumption monitoring was restricted to CPU-only readings due to platform disparities (Reference platform for AMD vs. Gigabyte R284-A92-AAL1 for Intel).
    • Benchmark workloads were made more intensive in anticipation of future architectures (Intel Clearwater Forest / AMD Venice); a sketch of how results from such a large suite are typically summarized follows this list.
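
Per-test wins and losses across a suite this large are conventionally rolled up with a geometric mean of per-test ratios, which is also how Phoronix typically presents its overall summary figure. The snippet below is a minimal, self-contained sketch of that rollup; the test names and scores are placeholders, not results from the article.

```python
from math import prod

# Hypothetical per-test results (higher is better) -- placeholders only,
# NOT figures from the Phoronix article.
# Each entry: test name -> (dual EPYC 9755 score, dual Xeon 6980P score)
results = {
    "test-a": (142.0, 118.0),
    "test-b": (990.0, 815.0),
    "test-c": (61.0, 55.0),
}

# A ratio > 1.0 means the EPYC system was faster on that test.
ratios = [amd / intel for amd, intel in results.values()]

# The geometric mean is the standard way to average ratios across a large,
# heterogeneous suite: it treats a 2x win and a 2x loss symmetrically.
geomean = prod(ratios) ** (1.0 / len(ratios))
print(f"Geometric mean advantage: {geomean:.2f}x across {len(ratios)} tests")
```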

🤼‍♂️ Market & Competitors

[2026-01-30] Intel Releases LLM-Scaler-vLLM 1.3 With New LLM Model Support

Source: Phoronix

Key takeaway relevant to AMD:

  • Competitive Software Velocity: Intel is aggressively updating its Docker-based AI stack for Arc Battlemage, closely tracking upstream vLLM versions (0.11.1). AMD ROCm teams must ensure similar day-zero support for trending models like DeepSeek and Qwen3 to maintain competitiveness in the consumer/edge AI space.
  • Feature Parity: Intel now supports CPU KV cache offload and speculative decoding on consumer GPUs, features highly requested by local LLM users.

Summary:

  • Intel released version 1.3 of its llm-scaler-vllm tool, enabling new large language models on Intel Arc Battlemage graphics.
  • The update upgrades the underlying stack to vLLM 0.11.1 and PyTorch 2.9.

Details:

  • New Model Support:
    • Qwen3-Next-80B (Instruct & Thinking variants).
    • InternVL3.5-30B-A3B.
    • DeepSeek-OCR and PaddleOCR-VL.
    • Seed-OSS-36B-Instruct.
    • OpenAI Whisper-large-v3.
  • Technical Enhancements:
    • KV Cache: Enabled CPU KV cache offload and experimental FP8 KV cache (a configuration sketch follows this list).
    • Decoding: Added speculative decoding support with two additional methods.
    • Quantization: Added sym_int4 support for Qwen3-30B (Tensor Parallel 4/8) and Qwen3-235B (Tensor Parallel 16).
  • Distribution: Available via Docker and GitHub.
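
The features above map onto knobs that already exist in upstream vLLM; the sketch below shows what an equivalent configuration looks like through vLLM's offline Python API. This is illustrative only: Intel's Docker-packaged llm-scaler-vllm may expose these options differently (e.g. via its serving CLI or environment variables), the model ID is just an example, and kv_cache_dtype / swap_space / tensor_parallel_size are upstream vLLM engine arguments rather than settings taken from Intel's release notes.

```python
# Minimal sketch using upstream vLLM's offline Python API (not Intel's
# container entrypoint). All values below are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen3-30B-A3B",   # example model ID, not from the release notes
    tensor_parallel_size=4,        # e.g. 4 GPUs, mirroring the sym_int4 TP4 setup
    kv_cache_dtype="fp8",          # experimental FP8 KV cache (upstream vLLM option)
    swap_space=16,                 # GiB of CPU memory usable for KV cache swapping
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize the Battlemage LLM stack update."], params)
print(outputs[0].outputs[0].text)
```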

[2026-01-30] Intel Xe Linux Driver Updated To Disable GuC Power DCC For Panther Lake

Source: Phoronix

Key takeaway relevant to AMD:

  • Competitor Engineering Challenges: Intel is disabling a key power efficiency feature (DCC) on next-gen “Panther Lake” mobile chips due to latency issues. This may give AMD Ryzen AI mobile chips an efficiency or responsiveness edge in initial benchmarks against Core Ultra Series 3.

Summary:

  • Intel sent patches for the Xe kernel driver (targeting Linux 6.20/7.0) to disable GuC (Graphics Micro-controller) Duty Cycle Control (DCC) for Panther Lake GPUs.
  • The change addresses regressions caused by high latency in the DCC implementation.

Details:

  • Feature Affected: Duty Cycle Control (DCC) adjusts graphics frequency to allow low-power idle states. It is designed to enhance power efficiency.
  • Reason for Disabling: The patch notes state, “On PTL [Panther Lake], the recommendation is to disable DCC… as it may cause some regressions due to added latencies.”
  • Implementation: The Kernel Mode Driver (KMD) forces DCC off so the fix applies even if the user is running older GuC firmware.
  • Impact: Likely results in higher power consumption or fewer low-power idle states on Panther Lake systems running Linux 6.20+, though specific power-penalty figures were not disclosed.

[2026-01-30] Open-Source Nova Driver In Linux 7.0 Continues Preparing For NVIDIA Turing GPU Support

Source: Phoronix

Key takeaway relevant to AMD:

  • Ecosystem Shift: NVIDIA is heavily investing in a Rust-based open-source driver (“Nova”). While currently playing catch-up to AMD’s mature amdgpu stack, this significantly reduces AMD’s historic “open-source friendliness” advantage in the Linux kernel over the long term.

Summary:

  • Rust DRM updates for Linux 7.0 were submitted, focusing on the NVIDIA “Nova” open-source driver and Arm “Tyr” driver.
  • Development is led by engineers from NVIDIA and Red Hat.

Details:

  • Target Hardware: The current focus is bringing up NVIDIA GeForce RTX 20 / GTX 16 series (Turing architecture).
  • New Capabilities in Linux 7.0:
    • Nova Core can now parse Turing-specific firmware headers and sections.
    • Implementation of the Turing Falcon HAL (Hardware Abstraction Layer).
  • Current Status: Turing support is still in preparation and not yet usable by end users, who must continue to rely on the older Nouveau driver or NVIDIA's proprietary stack.
  • Codebase Improvements: Improved handling of unexpected firmware values and cleanup of redundant debug statements in Rust code.

[2026-01-30] AerynOS Establishes Policy Against LLM Contributions, 2026.01 ISO Refresh

Source: Phoronix

Key takeaway relevant to AMD:

  • Community Trend: AerynOS (formerly Serpent OS) serves as a bellwether for open-source project governance. The ban on LLM-generated code highlights growing resistance to AI coding tools in parts of the Linux ecosystem. This is relevant for AMD developers contributing to upstream projects or promoting AMD-based AI coding assistants.

Summary:

  • AerynOS developers released a 2026.01 ISO refresh and established a strict policy banning contributions generated by Large Language Models (LLMs).
  • The project cites ethical concerns, resource usage, quality degradation, and copyright risks as reasons for the ban.

Details:

  • Policy: No “AI” LLM-backed contributions accepted.
  • ISO 2026.01 Updates:
    • Desktop Environments: COSMIC 1.0.3, GNOME 49.3, KDE Plasma 6.5.5.
    • Core Software: Firefox 147.0.2, Fish 4.3.3.
    • Graphics Stack: Mesa 25.3.4 (Relevant for AMD GPU support).
  • Infrastructure: Launched a debuginfod instance for on-demand package debug information.

📈 GitHub Stats

| Category | Repository | Total Stars | 1-Day | 7-Day | 30-Day |
|---|---|---:|---:|---:|---:|
| AMD Ecosystem | AMD-AGI/GEAK-agent | 58 | 0 | +2 | |
| AMD Ecosystem | AMD-AGI/Primus | 71 | +1 | +5 | |
| AMD Ecosystem | AMD-AGI/TraceLens | 56 | 0 | 0 | |
| AMD Ecosystem | ROCm/MAD | 31 | 0 | 0 | |
| AMD Ecosystem | ROCm/ROCm | 6,130 | +3 | +30 | |
| Compilers | openxla/xla | 3,932 | +1 | +14 | |
| Compilers | tile-ai/tilelang | 4,848 | +12 | +53 | |
| Compilers | triton-lang/triton | 18,299 | +15 | +76 | |
| Google / JAX | AI-Hypercomputer/JetStream | 403 | 0 | 0 | |
| Google / JAX | AI-Hypercomputer/maxtext | 2,115 | +1 | +10 | |
| Google / JAX | jax-ml/jax | 34,750 | +13 | +73 | |
| HuggingFace | huggingface/transformers | 155,945 | +48 | +354 | |
| Inference Serving | alibaba/rtp-llm | 1,037 | +2 | +7 | |
| Inference Serving | efeslab/Atom | 335 | 0 | +1 | |
| Inference Serving | llm-d/llm-d | 2,421 | +5 | +28 | |
| Inference Serving | sgl-project/sglang | 22,992 | +35 | +337 | |
| Inference Serving | vllm-project/vllm | 69,059 | +106 | +738 | |
| Inference Serving | xdit-project/xDiT | 2,516 | 0 | +5 | |
| NVIDIA | NVIDIA/Megatron-LM | 15,078 | +10 | +82 | |
| NVIDIA | NVIDIA/TransformerEngine | 3,125 | 0 | +20 | |
| NVIDIA | NVIDIA/apex | 8,907 | +2 | +8 | |
| Optimization | deepseek-ai/DeepEP | 8,942 | +4 | +25 | |
| Optimization | deepspeedai/DeepSpeed | 41,476 | +15 | +106 | |
| Optimization | facebookresearch/xformers | 10,313 | +2 | +22 | |
| PyTorch & Meta | meta-pytorch/monarch | 953 | 0 | 0 | |
| PyTorch & Meta | meta-pytorch/torchcomms | 323 | 0 | +1 | |
| PyTorch & Meta | meta-pytorch/torchforge | 604 | +3 | +4 | |
| PyTorch & Meta | pytorch/FBGEMM | 1,521 | +1 | +2 | |
| PyTorch & Meta | pytorch/ao | 2,652 | +3 | +10 | |
| PyTorch & Meta | pytorch/audio | 2,819 | 0 | +5 | |
| PyTorch & Meta | pytorch/pytorch | 97,063 | +25 | +208 | |
| PyTorch & Meta | pytorch/torchtitan | 5,020 | +2 | +26 | |
| PyTorch & Meta | pytorch/vision | 17,486 | +4 | +18 | |
| RL & Post-Training | THUDM/slime | 3,594 | +17 | +101 | |
| RL & Post-Training | radixark/miles | 802 | +9 | +37 | |
| RL & Post-Training | volcengine/verl | 18,840 | +34 | +202 | |
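
The star totals above are point-in-time snapshots. As an illustration of how such a table can be refreshed, the sketch below pulls current counts from GitHub's public REST API (GET /repos/{owner}/{repo} returns stargazers_count); the repository subset is arbitrary, and the 1/7/30-day deltas would come from diffing against previously stored snapshots, which is only noted in comments here.

```python
# Minimal sketch for refreshing the star-count column via the public GitHub
# REST API. Unauthenticated requests are rate-limited; pass a token via the
# Authorization header for heavier use.
import json
import urllib.request

REPOS = ["ROCm/ROCm", "vllm-project/vllm", "pytorch/pytorch"]  # illustrative subset

def stargazers(repo: str) -> int:
    # GET /repos/{owner}/{repo} returns repository metadata, including
    # the "stargazers_count" field used for the Total Stars column.
    url = f"https://api.github.com/repos/{repo}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["stargazers_count"]

snapshot = {repo: stargazers(repo) for repo in REPOS}
print(json.dumps(snapshot, indent=2))
# 1-day / 7-day / 30-day deltas would be computed by diffing this snapshot
# against ones saved 1, 7, and 30 days earlier.
```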