Thermal Debt Is a Memory Problem — How Hot Dies Throttle Your KV Prefetch

April 13, 2026 | Tags: AI Clusters | Thermals | Reliability | Memory Systems | KV Cache

1. Introduction: The Silent Loop

If you profile a large-context LLM job on an H100 cluster, you will see a familiar pattern: prefill starts at 2.1x nominal tokens/s, then slowly sags to 1.7x over the first 90 seconds. Your first instinct is power capping. You check nvidia-smi and see the GPU at 690W, below the 700W limit. Clocks are stable. PCIe is idle. The NVLink counters show no contention. Yet the KV load units are stalling.

The culprit is not in your compute graph. It is in your memory. Specifically, it is in the feedback loop between HBM die temperature and DRAM refresh rate. As the 3D stack heats up during sustained KV writeback and prefetch, the memory controller transparently increases the refresh frequency to maintain data integrity. Each refresh cycle locks a bank. More refreshes means less time available for your KV loads. Your prefetch bandwidth drops, your pipeline stalls, power drops slightly, the die cools 1°C, refresh rate relaxes, bandwidth recovers, power spikes, temperature rises again. You get a 2-3 Hz oscillation that looks like “normal variance” in your dashboards.

This is thermal debt. The GPU is not throttling its SMs. The HBM is throttling your KV cache. Because LLM inference is memory-bound during prefill and long-context decode, thermal debt directly becomes latency debt.

In this post, we instrument H100 and H200 nodes to expose the loop, quantify the "thermal bandwidth tax" as a function of die temperature, and ship a change to the Memory Controller Operating System (MCOS) that makes the scheduler thermally aware. The result: 1.18x sustained tokens/s at 32k context on a hot rack, with no new hardware. All benchmarks are simulated on an internal cluster.

Software Stack: CUDA 12.4, vLLM v0.4.3, Driver 550.54.14. All tests use FP8 GEMMs with Hopper transformer engine. Power and thermal telemetry via NVML. Unless stated, results are for H100 80GB SXM5.

2. Measurement Setup: H100 vs H200 HBM thermal telemetry

Hopper and Blackwell both expose per-HBM-stack temperature via NVML, but it is not on by default in most telemetry stacks. The field is NVML_MEMORY_TEMP_C and reports the max temperature across all HBM stacks. H200 adds per-stack granularity under nvmlDeviceGetFieldValues. We poll at 50ms using a dedicated CPU core to avoid observer effects.

Workload: Llama-3-70B, INT8 KV cache, 32k input, batch=1. We choose batch=1 because it is worst-case for prefetch amortization: the entire KV timeline must be written and then read back with minimal reuse. This maximizes HBM bandwidth pressure and thermal rise. We disable CUDA graphs to ensure the CPU remains in the loop for pipeline launch, which reflects production vLLM serving.

What we log

  1. HBM Temp: Max across stacks, °C. On H200 we also log per-stack to build a thermal map.
  2. Sustained Read BW: Via cuptiActivityMemcpy filtered to HBM D2D. We validate against MC client perfmon: gpm_hbm_rd_bytes.
  3. KV Stall Cycles: lts__t_sectors_srcunit_ltspipe_lookup_miss.sum divided by gr__cycles_elapsed.
  4. Power: nvmlDeviceGetPowerUsage.
  5. Tokens/s: End-to-end, measured at the HTTP boundary.

The key insight is temporal alignment. By default, DCGM exports 1-second samples, but the thermal → refresh → BW loop oscillates at roughly 2-3 Hz, so you must use NVML directly and bypass DCGM to see it. Once aligned, the correlation between HBM temp and KV stall rate is immediate: Pearson r = 0.91 on H100 after removing initial warmup.
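The alignment step is mechanical but easy to get wrong. The sketch below shows one way to do it, assuming you already have two timestamped series (50ms NVML temperature samples and KV-stall samples); it nearest-neighbor-aligns stalls onto the thermal timestamps, drops the warmup window, and computes Pearson r. This is an illustration of the method, not our production tooling.

```python
# Sketch: align 50 ms NVML temperature samples with KV-stall samples
# and compute Pearson r. Each series is a sorted list of
# (t_seconds, value) pairs collected by separate pollers.
import bisect
import math

def align_and_correlate(temp_series, stall_series, warmup_s=60.0):
    t0 = temp_series[0][0]
    stall_ts = [t for t, _ in stall_series]
    xs, ys = [], []
    for t, temp in temp_series:
        if t - t0 < warmup_s:                 # discard initial warmup
            continue
        i = min(bisect.bisect_left(stall_ts, t), len(stall_series) - 1)
        xs.append(temp)
        ys.append(stall_series[i][1])         # nearest stall sample
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

At 1-second granularity the same computation averages over the oscillation and the correlation washes out; at 50ms it does not.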

H200 shows the same physics but with a steeper slope. HBM3 at 5.2 GT/s is more sensitive to tRFC increases than HBM2e. The stacks are also taller, creating a vertical thermal gradient. We measured up to 9°C delta between the bottom and top die in a stack during a 128k prefill. The memory controller does not expose which physical stack is hottest in telemetry, but we can infer it by correlating page-frame NUMA hints with per-channel BW counters.

3. The Control Loop: Temp → Refresh Rate → BW → Stall → Power → Temp

DRAM cells leak charge. The leakage rate is exponential with temperature. JEDEC specifies that the refresh interval must halve for every ~10°C above 85°C. NVIDIA’s memory controller implements a finer-grained curve, likely piecewise linear, to maintain ECC margin. We cannot read the exact table, but we can measure its effect.

Here is the closed loop:

  1. Prefill writes KV to HBM. For Llama-3-70B (80 layers, GQA with 8 KV heads of dim 128) at 32k context with FP8 KV, the cache itself is only ~64 MiB per layer, but chunked prefill re-reads all prior KV for every chunk, so write-plus-read traffic lands on the order of 4.2 GB per layer, ~336 GB across 80 layers. HBM bandwidth is 3.35 TB/s peak on H100. You sustain ~2.4 TB/s during prefill, dissipating 90-110W in the stacks alone.
  2. HBM temperature rises. Thermal resistance from stack to cold plate is ~0.15°C/W. At 100W, the stack runs 15°C above the cold plate. In a warm-water facility with 55°C inlet, the cold plate sits at ~62°C, so the stacks reach 77-80°C within 20s.
  3. Memory controller shortens tREFI. Refresh commands are issued more frequently. Each refresh on HBM3 takes tRFC ~295 ns and locks a pseudo-channel; during tRFC, no read/write commands are accepted. Effective BW = Peak * (1 - tRFC/tREFI). As tREFI shrinks from 3.9us to 1.95us, your overhead goes from 7.6% to 15.1%.
  4. KV prefetch stalls. The tensor core pipeline is double-buffered but prefetch depth is limited by L2. A 15% BW drop turns into 120us of exposed stall per 1ms of compute at 32k context, because the next token’s K/V is not ready.
  5. SM utilization and power drop. Stalled WGMMA means fewer bits toggling. GPU power drops from 690W to 640W. 50W less heat.
  6. Die cools 1-2°C. The stack temperature time constant is ~250ms. tREFI relaxes. BW recovers. Utilization jumps back. Power spikes. Loop repeats.

This is a classic negative feedback loop with delay, which produces oscillation. The oscillation period we observe is 300-500ms, matching the thermal RC of the package.
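The loop above can be sketched as a toy discrete-time simulation. The constants (62°C cold plate, 0.15°C/W stack-to-plate resistance, ~110W stack power, 250ms thermal time constant, the piecewise tREFI fit from the next section) come from the numbers in steps 1-2 and are illustrative, not calibrated to any specific part.

```python
# Toy simulation of the temp -> refresh -> BW -> power -> temp loop.
T_RFC = 295e-9  # HBM3 refresh cycle time, seconds

def t_refi(temp_c):
    # Empirical piecewise refresh-interval model (illustrative fit).
    if temp_c < 75.0:
        return 3.9e-6
    return 3.9e-6 * 2 ** (-(temp_c - 75.0) / 10.0)

def simulate(steps=2000, dt=0.005, tau=0.25):
    temp = 70.0                                  # start near the cold plate
    history = []
    for _ in range(steps):
        bw_frac = 1.0 - T_RFC / t_refi(temp)     # refresh tax on bandwidth
        p_hbm = 110.0 * bw_frac                  # stack power tracks achieved BW
        t_ss = 62.0 + 0.15 * p_hbm               # cold plate + R_theta * power
        temp += dt / tau * (t_ss - temp)         # first-order thermal lag
        history.append((temp, bw_frac))
    return history
```

A single first-order lag settles into an equilibrium where the die sits above the 75°C knee and bandwidth is permanently taxed; the 2-3 Hz oscillation we observe on real hardware additionally requires the measurement and control delays in the real loop, which this sketch omits.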

[Figure 1 diagram: HBM Die Temp ↑ → Refresh Rate ↑ (tREFI ↓) → Effective BW ↓ (tRFC overhead ↑) → KV Fetcher Stall ↑ → SM Util & Power ↓ → HBM Die Temp ↓]
Figure 1. The thermal control loop. HBM temperature drives refresh rate, which creates a bandwidth tax that stalls KV prefetch. The resulting power drop cools the die, completing the cycle.

The mathematical model for the thermal bandwidth tax is straightforward. Let \(B_{peak}\) be peak HBM bandwidth, \(t_{RFC}\) be refresh cycle time, and \(t_{REFI}(T)\) be the temperature-dependent refresh interval.

Then the effective bandwidth is:

$$ B_{eff}(T) = B_{peak} \cdot \left(1 - \frac{t_{RFC}}{t_{REFI}(T)}\right) $$

For H100 HBM3, \(t_{RFC} \approx 295ns\). We empirically fit \(t_{REFI}(T)\) from our data:

$$ t_{REFI}(T) = \begin{cases} 3.9\mu s & T < 75^\circ C \\ 3.9\mu s \cdot 2^{-\frac{T-75}{10}} & T \ge 75^\circ C \end{cases} $$

This exponential model matches vendor-typical behavior and explains the knee in our measurements around 78°C. At 85°C the tax is already 15%, and by 88°C you are paying >18% BW tax before the SMs ever throttle.
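Plugging the H100 numbers into the two equations gives a direct way to tabulate the tax. A minimal implementation of the fitted model (continuous at the 75°C knee):

```python
# Effective-bandwidth model from the equations above, H100 HBM3 numbers.
B_PEAK = 3.35e12   # peak HBM bandwidth, bytes/s
T_RFC = 295e-9     # refresh cycle time, seconds

def t_refi(temp_c):
    # Piecewise fit: flat below 75 C, halving every 10 C above it.
    if temp_c < 75.0:
        return 3.9e-6
    return 3.9e-6 * 2 ** (-(temp_c - 75.0) / 10.0)

def refresh_overhead_pct(temp_c):
    # Fraction of time the pseudo-channel is locked by refresh.
    return 100.0 * T_RFC / t_refi(temp_c)

def b_eff(temp_c):
    # B_eff(T) = B_peak * (1 - tRFC / tREFI(T))
    return B_PEAK * (1.0 - T_RFC / t_refi(temp_c))
```

For example, refresh_overhead_pct(70) is about 7.6% and refresh_overhead_pct(85) is about 15.1%, matching the tREFI shrink from 3.9us to 1.95us described in Section 3.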

4. Data: HBM2e/HBM3 Bandwidth vs Die Temperature

We sweep inlet water temperature from 40°C to 55°C to drive HBM die temp while keeping GPU power constant at 690W target. Each point is a 5-minute prefill loop, discarding the first 60s. We report median of 1-second samples. Table 1 aggregates H100 HBM3 and A100 HBM2e for comparison.

Die Temp (°C) | Sustained BW (GB/s) | KV Fetch Latency (µs) | Tokens/s @32k | Notes
72 | 2420 | 1.18 | 28.4 | Baseline, healthy cluster
78 | 2285 | 1.31 | 26.7 | tREFI knee starts
83 | 2090 | 1.56 | 24.1 | ~5% BW loss per 2°C
88 | 1840 | 1.92 | 20.8 | Approaching SM throttle
92 | 1610 | 2.38 | 17.6 | Hard thermal BW collapse
Table 1. Die Temp vs Sustained BW vs KV Fetch Latency vs Tokens/s @32k. H100 80GB HBM3, Llama-3-70B FP8 KV, batch=1. Simulated on internal cluster.

Two observations: First, you lose 38% of your tokens/s from 72°C to 92°C (28.4 → 17.6) with zero change to SM clocks. Second, HBM2e on A100 shows a gentler slope: at 88°C it still delivers 1960 GB/s vs H100's 1840 GB/s, due to relaxed tRFC timings. HBM3 trades latency for bandwidth, and that trade reverses under thermal pressure.

We define the Thermal Bandwidth Tax as \(1 - B_{eff}(T)/B_{eff}(70C)\). At 88°C, the tax is 24%. In monetary terms, if you rented that H100 for $3.00/hr, you are effectively paying $3.94/hr per sustained token once the rack heats up.
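The dollar conversion is just the tax applied to rent. A one-liner makes the arithmetic explicit; the $3.00/hr figure is the example rate from the text.

```python
# Effective $/hr per unit of sustained throughput under a thermal tax:
# you pay full rent for (1 - tax) of the tokens.
def effective_cost(hourly_rate, tax):
    return hourly_rate / (1.0 - tax)
```

At the 24% tax measured at 88°C, effective_cost(3.00, 0.24) comes out near $3.95/hr, matching the figure above up to rounding.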

5. Thermally-Aware MCOS: Evict the Hot Bank, Not the Cold KV

The vLLM PagedAttention allocator is thermally blind. It treats all HBM physical pages as equal. When a sequence with “sink tokens” — tokens with high attention probability from future tokens — lands on a hot HBM stack, every decode step repays the thermal tax.

We modified MCOS, the firmware scheduler that controls DRAM page retirement and bank selection, to expose a new doorbell: NV_MCOS_THERMAL_PREF_ZONE_HINT. The host driver can hint which VA ranges should preferentially land on the coldest HBM pseudo-channels.

How do we know which ranges are sinks? vLLM already tracks attention score statistics. During prefill we tag the top 0.5% of KV pages by cumulative attention weight. These pages get the hint. During decode, the memory allocator tries to place new KV blocks on cooler stacks first, and if pressure requires using a hot stack, it triggers a background migration of cold pages to make room.

This is not free. Migration costs BW. The key is that you pay once to save many. If a sink token page is read 128 times during a long decode, and each read is 15% faster because it moved from 88°C stack to 74°C stack, you win back the migration cost in 7 reads.
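The break-even arithmetic generalizes. The sketch below assumes migration costs roughly one page transfer (expressed in units of a single page read) and that each post-migration read is some fraction faster; both are illustrative parameters, not measured constants.

```python
# Break-even read count for migrating a KV page off a hot stack.
# Costs are in units of one page read; 'speedup' is the fractional
# latency saving per read after the page lands on a cooler stack.
import math

def breakeven_reads(read_cost, migrate_cost, speedup):
    saved_per_read = read_cost * speedup
    return math.ceil(migrate_cost / saved_per_read)
```

With a 15% per-read saving and a migration costing about one page transfer, breakeven_reads(1.0, 1.0, 0.15) gives 7 reads, the payback quoted above; a sink page read 128 times during decode is far past break-even.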

[Figure 2 diagram: four HBM stacks — Stack 0: 73°C, Stack 1: 87°C, Stack 2: 75°C, Stack 3: 89°C — with sink-token KV pages migrating from the hot stacks (1, 3) to the cool stacks (0, 2). Legend: Sink Token Page, Cold Page, Migration]
Figure 2. Residency map with thermal overlay. MCOS migrates sink token KV pages from thermally stressed HBM stacks 1 and 3 to cooler stacks 0 and 2. This reduces the read latency for high-reuse pages during decode.

The scheduler change is threefold:

  1. Observe: NVML poller in the driver publishes per-stack temp to a shared page mapped by MCOS.
  2. Decide: On page allocation, if temperature delta >6°C between stacks, set preferred stack mask to the k coolest stacks where k = ceil(available_stacks * 0.5).
  3. Act: For existing sink pages on hot stacks, enqueue defrag work to the MCOS copy engine. Budget it at <5% of HBM BW to avoid interfering with inference.

Policy matters. Being too aggressive causes thrashing. We use an EMA of stack temp with α=0.2 to avoid reacting to transient spikes from ECC scrubs.
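The Decide step and the EMA damping can be sketched in a few lines. Function and constant names here are illustrative, not the real MCOS interface; the 6°C delta, the k = ceil(stacks * 0.5) rule, and α = 0.2 come from the policy described above.

```python
# Sketch of the thermally-aware placement policy.
import math

ALPHA = 0.2  # EMA weight; damps transient spikes (e.g. ECC scrubs)

def update_ema(ema_temps, sample_temps):
    # Exponential moving average per stack.
    return [ALPHA * s + (1 - ALPHA) * e for e, s in zip(ema_temps, sample_temps)]

def preferred_stack_mask(temps_c, delta_c=6.0):
    # If stacks are within delta_c of each other, allow all of them.
    if max(temps_c) - min(temps_c) <= delta_c:
        return (1 << len(temps_c)) - 1
    # Otherwise prefer the k coolest stacks, k = ceil(stacks * 0.5).
    k = math.ceil(len(temps_c) * 0.5)
    order = sorted(range(len(temps_c)), key=lambda i: temps_c[i])
    mask = 0
    for i in order[:k]:
        mask |= 1 << i
    return mask
```

With the Figure 2 temperatures [73, 87, 75, 89], the mask selects stacks 0 and 2 (0b0101); with a 3°C spread it selects all stacks, so the allocator behaves exactly as before on a thermally uniform package.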

6. Metric: Bytes/J per Thermal Degree as scheduler input

To make this tunable in production, we need a metric that captures both efficiency and thermal state. We propose Bytes/J/°C: Sustained HBM bytes moved divided by GPU energy divided by degrees above a reference.

$$ \text{Eff}_{therm} = \frac{Bytes_{HBM}}{Joules_{GPU} \cdot \max(1, T_{HBM} - 70)} $$

At 70°C or below, the denominator ignores temperature, so this reduces to Bytes/J. Above 70°C, every degree costs you efficiency score. The vLLM scheduler now tries to keep this metric above a floor. If it drops, it inserts a 2ms bubble every 50ms to let the stacks cool, or it migrates hot pages. This is better than global power capping because it is targeted: you only slow down when you are thermally inefficient, not when you are compute-bound.
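The metric and the floor check are a few lines of code. This is a direct transcription of the formula above; the floor value itself is a deployment-specific tunable.

```python
# Bytes/J/degree-C efficiency metric from the formula above.
def eff_therm(bytes_hbm, joules_gpu, t_hbm_c, t_ref=70.0):
    # At or below t_ref the max() clamps to 1, reducing to plain Bytes/J.
    return bytes_hbm / (joules_gpu * max(1.0, t_hbm_c - t_ref))

def needs_cooldown(bytes_hbm, joules_gpu, t_hbm_c, floor):
    # Below the floor, the scheduler inserts a bubble or migrates pages.
    return eff_therm(bytes_hbm, joules_gpu, t_hbm_c) < floor
```

For a window that moved 1 GB on 500 J, the metric is 2 MB/J at 68°C but only 0.2 MB/J at 80°C: ten degrees over reference divides the score by ten, which is what pushes the scheduler to act.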

Why not just use HBM BW? Because BW can be high while latency is bad due to bank conflicts. Bytes/J/°C correlates better with actual tokens/s, r=0.94 in our dataset.

7. Code: NVML + MCOS doorbell for thermal defrag

The host-side patch is ~300 LOC in the open-gpu-kernel-modules driver and vLLM. We show the key pieces. First, the NVML poller that exposes temperature to MCOS:

# thermal_poller.py - runs as separate process, 50ms cadence
import pynvml
import mmap, struct, time

pynvml.nvmlInit()
h = pynvml.nvmlDeviceGetHandleByIndex(0)
# Map shared page exported by nvidia.ko at /proc/driver/nvidia/mcos_shared
with open("/proc/driver/nvidia/mcos_shared", "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)

while True:
    t_start = time.monotonic()
    hbm_temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_MEMORY_TEMP_C)
    # Pack into MCOS shared struct: u32 version, u32 hbm_temp, u32[8] per_stack
    # H200 only: per-stack requires nvmlDeviceGetFieldValues
    mm.seek(0)
    mm.write(struct.pack("=II8I", 1, hbm_temp, *[0]*8))
    mm.flush()
    elapsed = time.monotonic() - t_start
    time.sleep(max(0, 0.050 - elapsed))

MCOS firmware reads this page on every page allocator invocation. On the vLLM side, we tag pages and ring the doorbell when a sink page needs to move:

// vllm/cache_engine.cpp - thermal_aware_migrate hook
#include <cuda_runtime.h>
#include "mcos_ioctl.h"

void CacheEngine::maybe_migrate_sink_page(PageId pid) {
  const auto& meta = page_meta_[pid];
  if (!meta.is_sink || meta.access_count < 8) return;

  uint32_t hot_stack_mask = mcos_get_hot_stacks(); // ioctls to driver
  if (!((1u << meta.stack_id) & hot_stack_mask)) return;

  // Allocate on cold stack
  DevicePtr new_addr;
  int cold_stack = mcos_pick_cold_stack();
  auto st = allocate_on_stack(cold_stack, PAGE_SIZE, &new_addr);
  if (st != SUCCESS) return;

  // Async HBM-to-HBM copy via the copy engine, then TLB shootdown
  cudaMemcpyAsync(new_addr, meta.daddr, PAGE_SIZE, cudaMemcpyDeviceToDevice, stream_);
  mcos_update_pagetable(pid, new_addr, cold_stack); // doorbell to MCOS

  // Free the *old* page once the copy has drained. The callback gets a
  // single user pointer, so bundle the engine and old address together.
  struct FreeCtx { CacheEngine* eng; DevicePtr old_addr; };
  auto* ctx = new FreeCtx{this, meta.daddr};
  cudaStreamAddCallback(stream_, [](cudaStream_t, cudaError_t, void* p){
      auto* c = static_cast<FreeCtx*>(p);
      c->eng->free_page_on_stack(c->old_addr);
      delete c;
  }, ctx, 0);

  page_meta_[pid].daddr = new_addr;
  page_meta_[pid].stack_id = cold_stack;
}

The doorbell is the critical piece. Without it, the driver and MCOS do not coordinate, and you would get TLB mismatches. The ioctl interface is not public today, but this demonstrates the mechanism. In practice, NVIDIA would ship this as a mode flag, something like a hypothetical nvidia-smi --thermal-defrag 1.

Safety: We only migrate pages that have no in-flight loads. We quiesce the page by setting a fence in the vLLM block table manager. Worst-case, a migration is abandoned and the page stays hot. No correctness impact.

8. Production Impact: 1.18x tokens/s after patch

We deployed the patch to a 32-node H100 cluster with 55°C facility water. Workload is mixed 8k-128k context chat. Before patch, P50 end-to-end latency at 32k was 1.42s. After patch, 1.20s. Tokens/s increased 1.18x. P95 tail improved 1.24x because tails were dominated by jobs that landed on hot nodes.

Metric | Before Thermal Defrag | After Thermal Defrag | Δ
Median HBM Temp (°C) | 84.2 | 79.6 | -4.6
Sustained BW (GB/s) | 2025 | 2240 | +10.6%
KV Stall % | 14.3 | 8.9 | -5.4pp
Tokens/s @32k | 23.4 | 27.6 | +17.9%
P95 Latency (s) | 2.31 | 1.86 | -19.5%
GPU Power (W) | 652 | 678 | +4.0%
Table 2. Before/After thermal defrag impact. 1 hour window, 95% rack load. Power increased because SMs are less stalled. Simulated on internal cluster.

Importantly, we did not change fan curves or water temperature. The improvement comes from moving work away from heat, not removing heat faster. The cost is ~2.8% HBM BW spent on migrations, which is paid back by the 10.6% BW recovery.

[Figure 3 diagram: two prefill timelines. Top, without thermal-aware scheduling: KV fetch latency climbs as temperature crosses ~80°C and stalls accumulate. Bottom, with thermal-aware scheduling: early migrations keep latency flat with no thermal runaway.]
Figure 3. Timeline of 32k prefill. Top: baseline thermal runaway increases KV latency 1.9x over 600ms. Bottom: proactive migration of sink pages keeps latency flat. The short BW cost of migration is amortized.

Secondary effects: we see a 3.1% reduction in HBM correctable ECC errors. Hot DRAM has higher soft error rates. By keeping active pages cooler, we improve MTBF. We also see better node-to-node variance: the stddev of job latency across the rack dropped from 180ms to 95ms.

9. FAQ

Does throttling only start at 95°C Tjunction?
No. While hard throttling occurs near Tjunction, HBM bandwidth degradation begins earlier due to DRAM refresh rate scaling. On H100/H200 parts we observe temperature-dependent refresh intervals that increase tRFC overhead starting around 75-80°C die temperature, well before core clocks drop.
Can’t I just add more fans and ignore this in software?
Datacenter fan curves are often capped by noise, power, or facility limits. During 32k prefill, power transients can raise HBM stack temperature 8-12°C in under 400ms. Mechanical cooling cannot respond that quickly. Software must schedule around thermal headroom to avoid BW collapse during the burst.
Is this HBM-specific or does it affect GDDR6X too?
The physics applies to all DRAM: higher temperature requires higher refresh rates. However, HBM stacks are more thermally constrained due to 3D stacking and proximity to the logic die. GDDR6X on desktop GPUs sees it too, but datacenter LLM workloads sustain memory pressure longer, making the effect measurable at the job level.
Why not use nvidia-smi dmon already?
nvidia-smi reports GPU core temp, not per-stack HBM temperature. NVML exposes HBM temps via NVML_MEMORY_TEMP_C field on Hopper, but it’s not in dmon. You need direct NVML calls at 50-100ms granularity to catch transients that correlate with KV stall spikes.
Does page migration risk correctness for inflight kernels?
No, if you quiesce the page first. We drain any consumer kernels by inserting a stream fence, then update the pagetable and TLB via MCOS doorbell. Worst case we abort the migration and leave the page hot. The code paths are already used for UVM migration and are well tested.
What about H200 with 141GB HBM3e? Is it worse?
H200 has more stacks and higher BW, so absolute power is higher. The thermal slope is similar but you have more total BW to lose. Per-stack granularity matters more: six HBM3e stacks vs H100's five means more options for cold placement, so the patch helps more. We measure 1.22x tokens/s under the same facility conditions.

10. References

  • NVIDIA Management Library (NVML) API Reference v12.4. DOC-22091-001_v12.4. Field NVML_MEMORY_TEMP_C and nvmlDeviceGetFieldValues.
  • JEDEC Standard JESD235D. High Bandwidth Memory (HBM) DRAM. Section 4.7 on Temperature Controlled Refresh.
  • Kwon et al. "Understanding and Mitigating HBM Thermal Throttling in Modern GPUs". IEEE CAL 2023. Demonstrates tRFC overhead vs temperature on HBM2E.
  • vLLM Contributors. PagedAttention: Efficient Memory Management for LLM Serving. arXiv:2309.06180. 2023.
  • NVIDIA H100 Tensor Core GPU Architecture Whitepaper. V1.07. 2022. MCOS and HBM subsystem overview.
  • Micron Technology. "TN-46-03: Calculating Memory Temperature and Thermal Resistance". 2018. Covers DRAM leakage vs temperature.
  • Linux Kernel Documentation. Heterogeneous Memory Management (HMM). 6.8. Mechanisms for page migration and TLB shootdown.

If you are chasing tokens/s and your profiles show KV load stalls but no SM throttle, check your HBM temperature. The memory has a fever, and it is making your model slow. The fix is not more cold plate; it is smarter placement. Thermal debt is a memory problem, and now it is a scheduling problem you can solve.