But the study that calculated those estimates also pointed out that AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.
When people move beyond seeing AI as simply a resource drain and come to understand its actual footprint – where the effects come from, how they vary, and what can be done to reduce them – they are far better equipped to make choices that balance innovation with sustainability.
The first is on-site cooling of servers that generate enormous amounts of heat. This often uses evaporative cooling towers – giant misters that spray water over hot pipes or open basins. The evaporation carries away heat, but that water is removed from the local water supply, such as a river, a reservoir or an aquifer. Other cooling systems may use less water but more electricity.
Hydropower also uses up significant amounts of water, which evaporates from reservoirs. Concentrated solar plants, which run more like traditional steam power stations, can be water-intensive if they rely on wet cooling.
Water use shifts dramatically with location. A data center in cool, humid Ireland can often rely on outside air or chillers and run for months with minimal water use. By contrast, a data center in Arizona in July may depend heavily on evaporative cooling. Hot, dry air makes that method highly effective, but it also consumes large volumes of water, since evaporation is the mechanism that removes heat.
Timing matters too. A University of Massachusetts Amherst study found that a data center might use only half as much water in winter as in summer. And at midday during a heat wave, cooling systems work overtime. At night, demand is lower.
Newer approaches offer promising alternatives. For instance, immersion cooling submerges servers in fluids that don’t conduct electricity, such as synthetic oils, reducing water evaporation almost entirely.
And a new design from Microsoft claims to use zero water for cooling, by circulating a special liquid through sealed pipes directly across computer chips. The liquid absorbs heat and then releases it through a closed-loop system without needing any evaporation. The data centers would still use some potable water for restrooms and other staff facilities, but cooling itself would no longer draw from local water supplies.
These solutions are not yet mainstream, however, mainly because of cost, maintenance complexity and the difficulty of converting existing data centers to new systems. Most operators rely on evaporative systems.
You can estimate AI’s water footprint yourself in just three steps, with no advanced math required.
Step 1 – Look for credible research or official disclosures. Independent analyses estimate that a medium-length GPT-5 response, which is about 150 to 200 words of output, or roughly 200 to 300 tokens, uses about 19.3 watt-hours. A response of similar length from GPT-4o uses about 1.75 watt-hours.
Step 2 – Use a practical estimate for the amount of water per unit of electricity, combining the usage for cooling and for power.
Independent researchers and industry reports suggest that a reasonable range today is about 1.3 to 2.0 milliliters per watt-hour. The lower end reflects efficient facilities that use modern cooling and cleaner grids. The higher end represents more typical sites.
Step 3 – Now it’s time to put the pieces together. Take the energy number you found in Step 1 and multiply it by the water factor from Step 2. That gives you the water footprint of a single AI response.
Here’s the one-line formula you’ll need:
Energy per prompt (watt-hours) × Water factor (milliliters per watt-hour) = Water per prompt (in milliliters)
For a medium-length query to GPT-5, that calculation uses the higher-end figures of 19.3 watt-hours and 2 milliliters per watt-hour: 19.3 × 2 ≈ 39 milliliters of water per response.
For a medium-length query to GPT-4o, the calculation is 1.75 watt-hours × 2 milliliters per watt-hour = 3.5 milliliters of water per response.
If you assume the data centers are more efficient, and use 1.3 milliliters per watt-hour, the numbers drop: about 25 milliliters for GPT-5 and 2.3 milliliters for GPT-4o.
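The three-step calculation above can be sketched in a few lines of Python. The energy figures and water factors are the independent estimates cited in Steps 1 and 2, not official disclosures, and the model names are used only as labels:

```python
# Estimated energy per medium-length prompt, in watt-hours
# (independent estimates, not official company figures).
ENERGY_WH = {"GPT-5": 19.3, "GPT-4o": 1.75}

# Water factor in milliliters per watt-hour: the low end reflects
# efficient facilities, the high end more typical sites.
WATER_FACTORS_ML_PER_WH = {"efficient": 1.3, "typical": 2.0}

def water_per_prompt_ml(energy_wh: float, factor_ml_per_wh: float) -> float:
    """Step 3: energy per prompt x water factor = water per prompt (mL)."""
    return energy_wh * factor_ml_per_wh

for model, wh in ENERGY_WH.items():
    for site, factor in WATER_FACTORS_ML_PER_WH.items():
        ml = water_per_prompt_ml(wh, factor)
        print(f"{model} at a {site} site: about {ml:.1f} mL per response")
```

Running this reproduces the figures in the text: roughly 25 to 39 milliliters per GPT-5 response and 2.3 to 3.5 milliliters per GPT-4o response, depending on the water factor you assume.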
A recent Google technical report said a median text prompt to its Gemini system uses just 0.24 watt-hours of electricity and about 0.26 milliliters of water – roughly the volume of five drops. However, the report does not say how long that prompt is, so it can’t be compared directly with GPT water usage.
Those different estimates – ranging from 0.26 milliliters to 39 milliliters – show how much efficiency, the choice of AI model and power-generation infrastructure all matter.
Comparisons can add context
To truly understand how much water these queries use, it can be helpful to compare them to other familiar water uses.
When multiplied by millions, AI queries’ water use adds up. OpenAI reports about 2.5 billion prompts per day. That figure includes queries to its GPT-4o, GPT-4 Turbo, GPT-3.5 and GPT-5 systems, with no public breakdown of how many queries are issued to each particular model.
Using independent estimates and Google’s official reporting gives a sense of the possible range:
All Google Gemini median prompts: about 650,000 liters per day.
All GPT-4o medium prompts: about 8.8 million liters per day.
All GPT-5 medium prompts: about 97.5 million liters per day.
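Scaling the per-prompt figures to daily totals is one more multiplication. The sketch below uses the per-prompt milliliter estimates from earlier and treats all 2.5 billion daily prompts as if they went to a single model – a simplifying assumption, since the real mix of models is not public:

```python
PROMPTS_PER_DAY = 2.5e9  # OpenAI's reported daily prompt volume

# Per-prompt water estimates in milliliters, from the earlier calculations.
ML_PER_PROMPT = {
    "Gemini median prompt (Google's figure)": 0.26,
    "GPT-4o medium prompt": 3.5,
    "GPT-5 medium prompt": 39.0,
}

for label, ml in ML_PER_PROMPT.items():
    liters_per_day = ml * PROMPTS_PER_DAY / 1000  # 1,000 mL per liter
    print(f"{label}: about {liters_per_day:,.0f} liters per day")
```

This reproduces the range in the list above, from roughly 650,000 liters per day at the Gemini figure to roughly 97.5 million liters per day at the GPT-5 estimate.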
For comparison, Americans use about 34 billion liters per day watering residential lawns and gardens. One liter is about one-quarter of a gallon.
Generative AI does use water, but – at least for now – its daily totals are small compared with other common uses such as lawns, showers and laundry.
But its water demand is not fixed. Google’s disclosure shows what is possible when systems are optimized, with specialized chips, efficient cooling and smart workload management. Recycling water and locating data centers in cooler, wetter regions can help, too.
Transparency matters, as well: When companies release their data, the public, policymakers and researchers can see what is achievable and compare providers fairly.
Leo S. Lo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article is republished from The Conversation under a Creative Commons license.