TrendForce has identified 10 key technology trends that will define the tech industry’s evolution in 2026.
The highlights of these findings are outlined below:
AI Chip Competition Intensifies as Liquid Cooling Gains Widespread Adoption in Data Centers
In 2026, the high demand for AI data center construction—fueled by increased capital spending by major North American CSPs and the rise of sovereign cloud projects worldwide—is anticipated to boost AI server shipments by over 20% year-over-year.
NVIDIA, the leading name in AI today, will face stronger competition ahead. AMD plans to challenge NVIDIA by introducing its MI400 full-rack solution, which mirrors NVIDIA’s GB/VR systems and is aimed at CSP clients. Meanwhile, major North American CSPs are increasing their in-house ASIC development. In China, geopolitical tensions have accelerated the drive for technological self-sufficiency, with companies such as ByteDance, Baidu, Alibaba, Tencent, Huawei, and Cambricon stepping up efforts to create their own AI chips. This is set to intensify global competition.
HBM and Optical Interconnects Address AI’s Bandwidth and Power Bottlenecks
The rapid increase in data volume and memory bandwidth needs, driven by expanding AI workloads from training to inference, is challenging system design by exposing bottlenecks in transmission speed and power efficiency. To address these limitations, HBM and optical interconnect technologies are emerging as critical enablers of next-generation AI architectures.
Current generations of HBM leverage 3D stacking and through-silicon vias (TSVs) to significantly shorten the path between processors and memory, achieving higher bandwidth and efficiency. The upcoming HBM4 generation will introduce greater channel density and wider I/O bandwidth to further support the massive computational demands of AI GPUs and accelerators.
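As a rough illustration of why wider I/O matters, per-stack bandwidth scales with interface width times per-pin data rate. The figures below (a 1,024-bit HBM3E-class interface at an assumed 9.6 Gb/s per pin, and a 2,048-bit HBM4-class interface) are illustrative assumptions for the sketch, not official vendor specifications:

```python
def stack_bandwidth_gbps(interface_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s: width (bits) x per-pin rate (Gb/s) / 8 bits per byte."""
    return interface_width_bits * pin_rate_gbps / 8

# Illustrative figures (assumptions, not JEDEC/vendor numbers):
hbm3e = stack_bandwidth_gbps(1024, 9.6)  # ~1,229 GB/s per stack
hbm4 = stack_bandwidth_gbps(2048, 9.6)   # doubling interface width doubles bandwidth
print(f"HBM3E-class stack: {hbm3e:.0f} GB/s, HBM4-class stack: {hbm4:.0f} GB/s")
```

At a fixed per-pin rate, widening the interface from 1,024 to 2,048 bits doubles per-stack bandwidth, which is why greater channel density and wider I/O are the levers highlighted above.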
However, as model parameters surpass the trillion-scale level and GPU clusters expand exponentially, memory bandwidth once again emerges as a major performance bottleneck. Memory manufacturers are addressing this issue by optimizing HBM stack architectures, innovating in packaging and interface design, and co-designing with logic chips to enhance on-chip bandwidth for AI processors.
While these advances mitigate memory-related bottlenecks, data transmission across chips and modules has become the next critical limitation on system performance. To overcome these limits, co-packaged optics (CPO) and silicon photonics (SiPh) are emerging as strategic focus areas for GPU makers and CSPs.
800G and 1.6T pluggable optical transceivers have already entered mass production, and starting in 2026, even higher-bandwidth SiPh/CPO platforms are expected to be deployed in AI switches. These next-generation optical communication technologies will enable high-bandwidth, low-power data interconnects, improving overall system bandwidth density and energy efficiency to meet the escalating performance demands of AI infrastructure.
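The headline speeds above come from multiplying lane count by per-lane signaling rate; for example, common configurations build an 800G module from 8 x 100G lanes and a 1.6T module from 8 x 200G lanes. A minimal sketch (lane configurations are illustrative; real modules vary by form factor and generation):

```python
def transceiver_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate module throughput = number of lanes x per-lane signaling rate."""
    return lanes * lane_rate_gbps

# Illustrative lane configurations:
print(transceiver_gbps(8, 100))  # 800  -> an 800G module from 8 x 100G lanes
print(transceiver_gbps(8, 200))  # 1600 -> a 1.6T module from 8 x 200G lanes
```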
Overall, the memory industry is rapidly evolving toward bandwidth efficiency as its core competitive advantage. Advances in optical communications—designed to handle data transmission across chips and modules—are emerging as the most effective solution to overcome the limitations of traditional electrical interfaces in long-distance, high-density data transfers. As a result, high-speed transmission technologies are set to become a key pillar of AI infrastructure evolution.
NAND Flash Suppliers Advance AI Storage Solutions to Accelerate Inference and Reduce Costs
AI training and inference tasks demand quick access to massive datasets with unpredictable I/O behavior, leading to a widening performance gap with current storage options. NAND Flash manufacturers are tackling this issue by speeding up the development of tailored solutions, concentrating on two main product types.
The first category includes storage-class memory SSDs, KV cache SSDs, and high-bandwidth flash (HBF), which sit between DRAM and traditional NAND Flash in the memory hierarchy. These options offer extremely low latency and high bandwidth, making them well suited to accelerating real-time AI inference.
The second category includes nearline QLC SSDs, which are rapidly being adopted for warm and cold AI data storage layers such as model checkpoints and dataset archiving. QLC significantly lowers the cost per bit for storing large AI datasets, offering 33% higher per-die storage density than TLC. TrendForce projects that QLC SSDs will make up 30% of the enterprise SSD market by 2026, highlighting their growing role in enhancing storage capacity and cost efficiency in AI infrastructure.
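The 33% density figure follows directly from bits per cell: QLC stores 4 bits per cell versus TLC’s 3, so the same cell count holds a third more data. A quick check of the arithmetic:

```python
def density_gain(bits_per_cell_new: int, bits_per_cell_old: int) -> float:
    """Relative per-die density gain from storing more bits in each cell."""
    return bits_per_cell_new / bits_per_cell_old - 1

# QLC (4 bits/cell) vs. TLC (3 bits/cell):
print(f"{density_gain(4, 3):.0%}")  # 33% -> matches the per-die density figure above
```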
Energy Storage Systems Emerge as the Power Core of AI Data Centers and Are Set for Explosive Growth
As AI data centers develop into large-scale clustered systems, their variable workloads require much more stable power. This shift is turning energy storage systems from mere backup sources into the core energy infrastructure of AI data centers.
Over the next five years, AI data centers are expected to significantly transform energy storage systems. In addition to traditional short-duration UPS backup and power quality stabilization, the share of medium- to long-duration storage systems (2 to 4 hours) will increase sharply to support backup power, energy arbitrage, and grid services simultaneously.
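The duration figures translate directly into required energy capacity: sustaining a site for 2 to 4 hours means storing 2 to 4 times its power draw in energy. A minimal sketch, using a hypothetical 100 MW AI campus (an assumed figure, not from TrendForce):

```python
def storage_capacity_mwh(site_power_mw: float, duration_hours: float) -> float:
    """Energy capacity (MWh) needed to sustain the site for the given duration."""
    return site_power_mw * duration_hours

# Hypothetical 100 MW AI campus:
print(storage_capacity_mwh(100, 2))  # 200.0 MWh at 2 hours
print(storage_capacity_mwh(100, 4))  # 400.0 MWh at 4 hours
```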
Deployment models will also evolve from centralized, data center-level battery energy storage systems to distributed architectures at the rack or cluster level that incorporate modular battery backup units capable of instantaneous response. This shift will improve system resilience and energy efficiency while satisfying the increasingly demanding power stability needs of AI-driven infrastructure.
North America is expected to become the largest global market for AI data center energy storage, led by hyperscale cloud providers. In China, the “Eastern Data, Western Computing” initiative is driving data centers toward renewable energy-rich western regions, where AI data centers paired with energy storage systems will become standard infrastructure for large-scale campuses. Globally, the installed capacity of AI data center energy storage is projected to surge from 15.7 GWh in 2024 to 216.8 GWh by 2030, representing a CAGR of 46.1%.
AI Data Centers Transition to 800V HVDC Architecture, Driving Demand for Third-Generation Semiconductors
Data centers are undergoing a major upgrade in power infrastructure as server rack power ratings climb from kilowatts to megawatts. The industry is quickly adopting 800V HVDC architectures to boost efficiency, enhance reliability, cut down on copper cabling, and support more compact system designs. Third-generation semiconductors such as SiC and GaN play a crucial role in this shift, with numerous semiconductor providers now participating in NVIDIA’s 800V HVDC project.
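The copper savings follow from Ohm’s law: at the same power, raising the distribution voltage cuts current proportionally, and resistive conduction loss falls with the square of the current. A sketch comparing a legacy 54 V rack bus with 800V HVDC at a hypothetical 1 MW rack (illustrative numbers, not a specific vendor design):

```python
def bus_current_a(power_w: float, voltage_v: float) -> float:
    """Current drawn at a given distribution voltage: I = P / V."""
    return power_w / voltage_v

rack_power_w = 1_000_000  # hypothetical 1 MW rack
i_54v = bus_current_a(rack_power_w, 54)    # ~18,519 A
i_800v = bus_current_a(rack_power_w, 800)  # 1,250 A
# Conduction loss scales as I^2 * R, so for the same copper cross-section:
loss_ratio = (i_54v / i_800v) ** 2
print(f"{i_54v:.0f} A vs {i_800v:.0f} A; I^2R loss ratio ~{loss_ratio:.0f}x")
```

The roughly 15x drop in current is what allows thinner busbars and far less copper, and the squared loss ratio is why the efficiency gain is so large.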
SiC is vital in the front-end and mid-stage power conversion within data center architectures, managing the highest voltages and power loads. While SiC devices currently carry lower maximum voltage ratings than traditional silicon, their superior thermal efficiency and switching performance are essential for the development of next-generation solid-state transformers (SSTs).
Meanwhile, GaN, known for its high-frequency and high-efficiency properties, is gaining traction in mid- and end-stage power conversion, where it supports ultra-high power density and fast dynamic response. TrendForce forecasts that SiC and GaN penetration in data center power systems will reach 17% by 2026 and exceed 30% by 2030.
Next-Generation Semiconductor Race: 2nm GAAFET Production and 2.5D/3D Heterogeneous Integration Lead the Next Breakthrough