For data centers, the heat is on.
As the business world’s reliance on AI-driven applications skyrockets, data centers face increasing pressure to process unprecedented amounts of information while shrinking their carbon footprint. Data centers already account for 1% to 2% of the world’s overall power consumption, and experts say their energy needs will only continue to grow: Goldman Sachs predicts that by the end of the decade, data centers will consume about 4% of global power.
One reason is that, to serve the skyrocketing number of AI applications, data centers are packing ever-larger clusters of high-performance GPUs into tighter spaces. That kind of compute power draws a great deal of electricity while generating large quantities of heat, so cooling the servers pushes electricity demand, and operating costs, even higher.
Meanwhile, regulators across the globe have begun closely scrutinizing the environmental impact of data centers, and a growing number of communities have begun to oppose data center construction, arguing that the facilities’ prodigious water and power consumption can lead to higher rates for local utility customers.
The good news is that in this challenging market, a wave of innovative technologies and processes has emerged to help data centers operate more efficiently, cost-effectively, and sustainably.
See also: 3 Reasons Why NVMe-oF is Redefining AI-scale Data Centers in 2025
Push the chips forward
A good starting point for maximizing a data center’s efficiency, performance, and sustainability is the choice of compute platform.
The latest generations of GPUs, including Nvidia’s Blackwell line, are highly sought after by hyperscalers for their parallel-processing capabilities, which are essential for supporting AI applications. Raw performance will no doubt always top the list of requirements when selecting compute platforms, but companies know that in today’s business climate, a balance must be struck.
Compute platforms that don’t hog energy can help cut costs, reduce heat, and shrink carbon footprints. Nvidia, AMD, and others have developed new generations of processors with dramatic gains in energy efficiency, though there’s still plenty of room to improve.
In their evaluations of compute platforms, data center leaders can consider performance-efficiency benchmarks such as SPECpower, along with real-world efficiency measurements and embedded-carbon reports. By factoring energy efficiency into compute purchases, data centers encourage chipmakers to keep sustainability a priority in future designs.
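As a rough illustration of how those factors might be weighed together, here is a minimal scoring sketch. Every platform name, score, weight, and carbon figure below is a hypothetical placeholder, not a real benchmark result.

```python
# Minimal sketch: rank candidate compute platforms by a blended
# performance/efficiency/carbon score. All figures are hypothetical
# placeholders, not real benchmark results or vendor data.

platforms = [
    # name, relative performance, relative perf-per-watt, embodied CO2e (t)
    {"name": "Platform A", "perf": 1.00, "perf_per_watt": 1.00, "embodied_co2e_t": 2.0},
    {"name": "Platform B", "perf": 0.85, "perf_per_watt": 1.30, "embodied_co2e_t": 1.4},
    {"name": "Platform C", "perf": 1.10, "perf_per_watt": 0.90, "embodied_co2e_t": 2.6},
]

# Example weights: raw performance still dominates, but efficiency
# and embedded carbon factor into the decision.
W_PERF, W_EFF, W_CARBON = 0.5, 0.3, 0.2

def score(p):
    # Lower embodied carbon is better, so normalize against the best value.
    carbon_score = min(x["embodied_co2e_t"] for x in platforms) / p["embodied_co2e_t"]
    return W_PERF * p["perf"] + W_EFF * p["perf_per_watt"] + W_CARBON * carbon_score

for p in sorted(platforms, key=score, reverse=True):
    print(f'{p["name"]}: blended score {score(p):.2f}')
```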
See also: How AI Is Forcing an IT Infrastructure Rethink
Keep your cool
In the age of AI, cooling data centers has become more essential, complicated, and costly. Standard air conditioning, which relies on compressor-driven refrigeration to chill and circulate air, remains the most common approach. But the process is inefficient, guzzles energy, and generates substantial greenhouse gases (GHG): according to the International Energy Agency, data centers and transmission networks account for “1% of energy-related GHG emissions.”
Newer techniques include running facilities at higher operating temperatures, free cooling with outside air, and hot-aisle/cold-aisle containment. Both direct-to-chip and immersion liquid cooling have also proven effective.
For AI-focused data centers and the higher heat they generate, liquid cooling is preferred: water and liquid coolants remove heat far more efficiently than air and enable denser server configurations, resulting in a smaller carbon footprint.
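The advantage comes down to basic thermodynamics: the rate of heat removal equals mass flow times specific heat times temperature rise (Q = ṁ·c_p·ΔT), and water carries roughly four times the specific heat of air at nearly a thousand times the density. A back-of-the-envelope sketch, where the 100 kW rack load and 10 K temperature rise are illustrative assumptions:

```python
# Back-of-the-envelope: coolant flow needed to remove a given heat load,
# using Q = m_dot * c_p * delta_T. The rack load and temperature rise
# are illustrative assumptions; fluid properties are standard values.

Q = 100_000.0        # heat load to remove, watts (hypothetical dense AI rack)
DELTA_T = 10.0       # coolant temperature rise, kelvin (assumed)

fluids = {
    # name: (specific heat J/(kg*K), density kg/m^3)
    "air":   (1005.0, 1.2),
    "water": (4186.0, 997.0),
}

for name, (cp, rho) in fluids.items():
    mass_flow = Q / (cp * DELTA_T)   # kg/s needed to carry the heat away
    vol_flow = mass_flow / rho       # m^3/s
    print(f"{name}: {mass_flow:.1f} kg/s = {vol_flow * 1000:.1f} L/s")

# Water moves the same heat in roughly 1/3500th the volumetric flow of
# air, which is why liquid loops can support much denser racks.
```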
Power up
One area of the data center that’s long overdue for upgrading is the battery room.
Servers need a constant source of electricity to prevent costly downtime, and cooling systems alone consume around 40% of a data center’s electricity.
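A common way to express that overhead is Power Usage Effectiveness (PUE), the ratio of total facility power to IT power, where 1.0 is the theoretical ideal. Here is a minimal sketch with an assumed load breakdown consistent with cooling taking 40% of facility power:

```python
# Minimal PUE sketch. The 1 MW IT load and the cooling/overhead shares
# are assumptions for illustration only.

it_load_kw = 1_000.0   # IT equipment (servers, storage, network)
cooling_kw = 800.0     # assumed cooling load
other_kw = 200.0       # assumed lighting, UPS losses, etc.

total_kw = it_load_kw + cooling_kw + other_kw
pue = total_kw / it_load_kw
print(f"Cooling share: {cooling_kw / total_kw:.0%}")  # -> 40%
print(f"PUE: {pue:.2f}")                              # -> 2.00
```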
In the case of an outage at the main power source, uninterruptible power supply (UPS) systems immediately switch over to backup batteries. These batteries carry the facility’s load, typically for the 30 to 50 seconds it takes for generators to start, after which the generators can produce electricity for extended periods, as long as days or weeks.
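Sizing that bridge is straightforward arithmetic: the battery string must carry the full critical load for the switchover window plus a margin for slow or failed generator starts. A sketch with assumed figures:

```python
# Sketch: bridge energy a UPS battery string must deliver while
# generators start. Load, window, and margin are all assumptions.

critical_load_kw = 1_000.0   # assumed critical IT + cooling load
bridge_seconds = 50.0        # upper end of the 30-50 s generator start window
safety_margin = 2.0          # assumed margin for failed starts / retries

energy_kwh = critical_load_kw * (bridge_seconds * safety_margin) / 3600.0
print(f"Required bridge energy: {energy_kwh:.1f} kWh")  # ~27.8 kWh
```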
Data centers have for many years relied on valve-regulated lead-acid (VRLA) batteries for UPS. On performance, efficiency, and sustainability, newer battery chemistries such as nickel-zinc (NiZn) offer numerous advantages over both VRLA and lithium-ion.
Nickel-zinc delivers power from smaller form factors than lead-acid or lithium-ion. At a time when data centers must supply GPUs with more electricity than ever, nickel-zinc batteries can easily handle the added burden while taking up less precious floor space.
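To make the floor-space argument concrete, the sketch below compares how many battery cabinets an assumed UPS block might need under different chemistries. The per-cabinet power ratings are hypothetical placeholders chosen only to reflect the relative power-density ordering described above, not vendor specifications.

```python
# Sketch: battery cabinets needed to back a given UPS load.
# Per-cabinet power ratings are hypothetical placeholders, chosen only
# to illustrate how higher power density shrinks the battery room.

import math

ups_load_kw = 1_500.0   # assumed UPS block to back up

cabinet_power_kw = {
    "VRLA (lead-acid)": 250.0,   # hypothetical rating
    "Lithium-ion": 400.0,        # hypothetical rating
    "Nickel-zinc": 500.0,        # hypothetical rating
}

for chemistry, kw in cabinet_power_kw.items():
    cabinets = math.ceil(ups_load_kw / kw)
    print(f"{chemistry}: {cabinets} cabinets")
```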
These batteries rely on highly available, conflict-free materials and are highly recyclable. Nickel-zinc also has smaller water and energy footprints, emitting roughly 4x fewer greenhouse gases than lead-acid and 6x fewer than lithium-ion.
Quick change
As AI continues to reshape data centers, market competition will intensify, driving up energy and resource costs. The demand for quieter, more energy-efficient, and sustainable facilities will only grow.
Now is the time to assess operations—especially UPS—to modernize, enhance resilience, and stay ahead of the changes to come.