Industry — Thermal Management for AI Servers, PCs and EVs

Auras sells the hardware that pulls heat off chips. The arena is a fragmented, customer‑specified, B2B component business, but a single boundary condition — chip power densities exceeding what air can handle — is collapsing it into two regimes (legacy air, growth liquid) and re‑pricing the supply base. Three things matter before the rest of the report: AI server thermal design power (TDP) has jumped from ~300W per accelerator a few years ago to >1,000W today; this is forcing data‑center operators to swap air for liquid cooling; and Auras is one of a small set of Taiwan module makers the hyperscaler ODMs source from to do that swap.

1. Industry in One Page


The whole chain exists because the chips at the top of the rack and the chips inside laptops generate more heat per square millimetre every year, and that heat must go somewhere before performance and reliability collapse. The newcomer's mistake is to read "thermal management" and picture commodity heat sinks; the part that matters today is liquid-cooling content per AI rack, which is rising from near zero a few years ago to tens of thousands of NT$ of cold plates, manifolds, quick disconnects and CDUs per rack as NVIDIA Blackwell and its successors ship.

2. How This Industry Makes Money

Cooling-module makers like Auras are contract manufacturers of custom-designed thermal hardware, not catalogue-product companies. Revenue is recognised per unit shipped against a server, notebook, GPU or EV programme. The pricing unit is a bill-of-materials slot per platform — i.e. one cold-plate spec per GPU SKU, one heat-pipe + vapor-chamber module per notebook chassis — and the dollar content per platform rises with thermal design power (TDP).


Where bargaining power sits. Four power asymmetries shape returns:

| Counterparty | Power direction | Why |
| --- | --- | --- |
| NVIDIA / AMD / Intel | Strong, toward the chip vendor | They publish the TDP and reference designs; the thermal envelope is fixed before any module maker bids. |
| Hyperscaler end-customer | Strong, toward the hyperscaler | They specify PUE targets, qualify the BOM, and can re-source if a vendor misses a ramp. |
| Server ODM (Quanta, Wiwynn, Foxconn) | Balanced | Hands the spec down; can dual-source, but switching mid-platform forfeits qualification work. |
| Auras / module makers | Pricing power on new liquid-cooling content; price-cut pressure on legacy air-cooling content | Per Auras FY2024 risk factors: "pressure on component manufacturers to be required to cut prices is increasing day by day"; offset only by differentiated content. |
| Upstream materials (copper, aluminum, fans) | Some power | Auras keeps ≥3 qualified suppliers per BOM and brings water pumps in-house, an explicit risk-management lever. |

Capital intensity is rising fast. Auras's FY2025 free-cash-flow margin was ‑12.4% (versus +8.6% in FY2023) because tooling and plant for AI-server liquid cooling are being built ahead of the revenue. Asia Vital Components (3017.TW) — the closest direct peer — announced NT$15B of 2026 CAPEX and NT$17B of 2027 CAPEX to lift liquid cold-plate capacity from ~200k to ~1M units per year (Economic Daily News, March 2026). This is a capex-intensive ramp, not a software-margin business.
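As a sanity check on the scale of that ramp, the announced figures imply a rough capex-per-unit number. This is a back-of-envelope sketch that assumes, almost certainly wrongly, that all of the announced capex funds cold-plate capacity alone, so treat it as an upper bound:

```python
# AVC's announced capex vs. its targeted cold-plate capacity expansion,
# using only the figures cited above (Economic Daily News, March 2026).
capex_ntd = (15 + 17) * 1e9                         # NT$15B (2026) + NT$17B (2027)
capacity_now, capacity_target = 200_000, 1_000_000  # cold plates per year
incremental_capacity = capacity_target - capacity_now  # 800k units/year

# Upper bound: the same capex also funds land, buildings and other lines.
capex_per_unit = capex_ntd / incremental_capacity
print(f"Implied capex per unit of annual capacity: NT${capex_per_unit:,.0f}")
# → NT$40,000
```

Even halving that figure for non-cold-plate spending leaves a capital hurdle well beyond what a heat-sink-era entrant could fund from operating cash flow, which is the point of the paragraph above.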

3. Demand, Supply, and the Cycle

Demand has three distinct engines, each with its own duty cycle.


Supply constraints are different from semiconductor cycles. This industry does not face wafer-fab lead times; it faces qualification slots, copper/heat-pipe capacity, and liquid-cooling cold-plate machining throughput. A new entrant cannot drop in a vapor-chamber line in 6 months — it has to win co-design with NVIDIA reference platforms, then with the ODM, then with the hyperscaler. Once qualified, the slot is sticky for the life of the platform (typically 18–36 months).

Where the cycle hits first. Read in this order when looking for a turn:


4. Competitive Structure

Three distinct competitor types meet inside this market, each with different economics.


Concentration read. Listed thermal exposure to AI-server cooling is dominated by a small Taiwan cluster — AVC, Auras, Jentech — alongside the much larger Delta (broad portfolio) and US-listed Vertiv (one tier up). Auras is the smallest of the dedicated thermal-module pure-plays by both revenue and market cap. The third-party "market-share" lists from Mordor, Spherical Insights and SkyQuest typically rank Delta, Vertiv, AVC, Boyd, Honeywell and 3M at the top of the global thermal-management category — Auras does not appear in published top-20s, because the lists count broader industrial thermal categories where Auras has zero share.

Why it stays fragmented. Cooling modules are co-designed with the system. There is no winner-take-most dynamic because every server platform requires custom tooling and qualification. The same logic protects incumbents (they can't be displaced mid-platform) and prevents them from carrying those margins across generations (the next platform is re-bid).

5. Regulation, Technology, and Rules of the Game

This is not a regulated industry in the pharma or banking sense; the rules that bind are energy-efficiency mandates, chip-vendor reference designs, trade policy, and customer ESG mandates.


The single most important rule is the chip vendor's reference design. The moment NVIDIA publishes a >1,000W AI accelerator with a recommended liquid-cooled cold plate, every legacy heat-pipe-only vendor loses access to that BOM. This is why R&D intensity at Auras has climbed from 3.3% of revenue in FY2020 to 5.8% in FY2024 (NT$914M) — staying designed into the next-generation reference platform is the gate.

6. The Metrics Professionals Watch

Most ratios that work for "tech hardware" miss the point here. The set below is what actually distinguishes a good thermal-module business from a commoditised one.


The non-obvious metric is the gap between FCF margin and gross margin. In FY2025 Auras printed a 27.4% gross margin but a ‑12.4% FCF margin — the entire reported P&L profit, and more, is being recycled into tooling for the next AI-server platform. That is the right answer in a ramp; it would be a red flag if AI server demand did not show up to consume the capacity. The operating cash flow / net income ratio also turned negative (‑0.22 in FY2025), which adds a working-capital build to the capex story — a forensic flag the next tab should examine.
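The two ratios discussed above can be read together in a few lines. The inputs are the FY2025 figures quoted in the text; the interpretation comments restate the text's reading, not a generated threshold:

```python
# The FY2025 ratios quoted above, read together.
gross_margin = 0.274       # FY2025 gross margin
fcf_margin = -0.124        # FY2025 free-cash-flow margin
ocf_to_net_income = -0.22  # FY2025 operating cash flow / net income

# The gap is the share of revenue earned at the gross line but consumed
# below it (opex, capex, working capital) before it becomes cash.
gap = gross_margin - fcf_margin
print(f"gross-to-FCF gap: {gap:.1%}")  # → gross-to-FCF gap: 39.8%

# Negative cash conversion is the working-capital flag raised above:
# reported profit is not turning into operating cash at all.
print("cash-conversion red flag:", ocf_to_net_income < 0)  # → True
```

The same two lines, tracked quarterly, tell you whether the capex ramp is being absorbed by revenue or whether the working-capital build is compounding.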

7. Where Auras Technology Co., Ltd. Fits

Auras is a mid-sized Taiwan thermal-module pure-play transitioning from an air-cooling component supplier to a liquid-cooling system provider for AI servers. It is neither the scale player (Delta, AVC) nor the systems integrator (Vertiv); it is the niche specialist that holds qualified BOM slots on NVIDIA-reference AI server platforms and is reinvesting aggressively to expand them.


What this means for the rest of the report. Auras is small enough that any single platform win or loss moves the P&L. It is large enough that it must show up on NVIDIA's reference vendor matrix to stay relevant. The Warren and Quant tabs should test: (a) whether the AI server revenue share is reproducible (i.e. won at the next platform), (b) whether the current capex ramp can be amortised at >25% gross margin, and (c) whether the price-cut pressure called out in the AR risk factors will compress legacy PC/NB margin faster than AI revenue scales.

8. What to Watch First


If five of these seven signals are positive (rising capex, rising TDP, accelerating monthly revenue, falling customer concentration, stable gross margins, AVC slipping ramp, tightening PUE), the industry backdrop is improving and the next tabs should test how much Auras can capture. If three or more turn negative, the FY2018-style commodity-margin downside in Section 3 becomes the more relevant base case.
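The five-of-seven rule above can be written down directly. The signal names below paraphrase the parenthetical in the text, and the boolean inputs are purely illustrative, not a current read on the industry:

```python
def read_backdrop(signals: dict[str, bool]) -> str:
    """Apply the decision rule stated above to the seven watch signals."""
    positives = sum(signals.values())
    if positives >= 5:
        return "improving: test how much Auras can capture"
    # With seven signals, fewer than five positives means three or more
    # negatives, so the two branches of the rule are exhaustive.
    return "downside: FY2018-style commodity-margin base case"

# Illustrative inputs only; the keys paraphrase the text's parenthetical.
example = {
    "rising capex": True,
    "rising TDP": True,
    "accelerating monthly revenue": True,
    "falling customer concentration": False,
    "stable gross margins": True,
    "AVC slipping its ramp": True,
    "tightening PUE mandates": False,
}
print(read_backdrop(example))  # → improving: test how much Auras can capture
```

Writing the rule out makes one property explicit: with exactly seven binary signals there is no "mixed" middle ground, so every monthly refresh of the inputs forces one of the two base cases.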