The Hidden Cost of Clever: Can You Really Be Green and Use AI?

Organisations love to show off their green credentials. From switching to energy-efficient lighting to reducing waste in production, there’s been a big shift in corporate behaviour over the past decade. Sustainability targets have moved from the “nice to have” section of annual reports to being front and centre. Executives are proudly pledging carbon neutrality, talking up circular economies, and pushing digital transformation with an eco-conscious lens. But here’s the thing – a quiet contradiction is brewing. And nobody’s really talking about it.

While companies are cutting down plastic, trimming transport emissions, and swapping out printers for paperless workflows, they’re also doing something else: diving headfirst into artificial intelligence. Whether it's machine learning to optimise supply chains, chatbots to handle customer queries, or predictive analytics to cut losses, AI is fast becoming a central part of modern business. And no one’s disputing the benefits. AI can drive efficiency, open new markets, streamline operations, and even support sustainability initiatives. But let’s not pretend it comes for free.

Training large AI models consumes a ridiculous amount of computing power. And where there's computing power, there's energy consumption. Massive cloud servers and data centres are constantly humming away, powered by electricity – and let’s face it, not all of it renewable. When OpenAI trained GPT-3, it reportedly used the equivalent energy of a car driving around the planet a few times. And that was just training. Running these models daily for millions of users? That’s the real elephant in the server room.

So, how do we reconcile these two competing truths? Can a company truly be environmentally savvy while leaning heavily on artificial intelligence? Or is there a growing hypocrisy in claiming to be green while quietly burning through gigawatts of electricity behind the scenes?

We’ve seen carbon offsetting become a band-aid for guilt. Got a few too many emissions? Just plant some trees. But what about AI? Should businesses be required to report on their AI footprint the same way they report Scope 1, 2 and 3 emissions? Should there be a Scope 4 – digital emissions? And more provocatively, should organisations offset their AI usage the same way they offset their flights?

AI isn’t just code. It’s code that comes with a hidden cost. Every time a model runs, it makes a dent. Individually, it might seem small – a chatbot response here, a fraud detection algorithm there – but at scale, across millions of users and use cases, it adds up. And yet no one seems to be tracking it.
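To see how those small dents add up, here's a back-of-envelope sketch in Python. Every figure in it is an illustrative assumption, not a measured value – the per-query energy of a real model and the carbon intensity of a real grid vary enormously – but the arithmetic shows why scale is the whole story:

```python
# Back-of-envelope estimate of inference emissions at scale.
# All three figures below are illustrative assumptions, not measured values.

WH_PER_QUERY = 0.3             # assumed energy per chatbot response, in watt-hours
QUERIES_PER_DAY = 10_000_000   # assumed daily query volume across all users
GRID_KG_CO2_PER_KWH = 0.7      # assumed grid carbon intensity (coal-heavy region)

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000        # Wh -> kWh
daily_kg_co2 = daily_kwh * GRID_KG_CO2_PER_KWH           # kWh -> kg CO2e
annual_tonnes_co2 = daily_kg_co2 * 365 / 1000            # kg -> tonnes, per year

print(f"{daily_kwh:,.0f} kWh/day, roughly {annual_tonnes_co2:,.0f} t CO2e/year")
```

A fraction of a watt-hour per response looks like nothing. Multiply it by ten million responses a day for a year and you're into hundreds of tonnes of CO2e – for one chatbot, at one company.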

What if AI’s rise is quietly reversing all the hard-earned sustainability gains we’ve made elsewhere? What if your ESG strategy is being undercut by your AI strategy? And what if in ten years’ time, regulators begin to ask companies not just how much energy they use, but what kind of energy, and for what digital purpose?

There’s also a strange irony at play. Many businesses are using AI to become more sustainable – identifying waste, optimising delivery routes, predicting energy use, managing resources. But they rarely report the trade-off. The energy required to run these models might rival, or even outweigh, the energy saved. So do the ends justify the means? Or is it just another layer of corporate greenwashing – sustainability theatre dressed in Python code?

AI ethics is a growing field. There are entire frameworks being written around bias, fairness and explainability. But environmental impact rarely gets a mention. It's almost as if the tech is assumed to be clean because it's digital. But digital doesn't mean carbon-free. And data doesn't mean harmless. There's a server somewhere keeping every one of your prompts alive.

Imagine if every time a model was run, a carbon counter ticked up on your screen. Would that change how often we use AI? Would it change how we design models? Or would it just make us feel worse about asking ChatGPT to write a cheeky poem in the style of Banjo Paterson?
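That counter isn't hard to prototype, either. Here's a minimal sketch in pure Python: a wrapper that tallies estimated emissions for every model call it sees. The per-call energy figure and grid intensity are assumptions for illustration, and `ask_model` is a hypothetical stand-in for a real API call:

```python
import functools

ASSUMED_WH_PER_CALL = 0.3       # illustrative energy per model call, in watt-hours
ASSUMED_KG_CO2_PER_KWH = 0.7    # illustrative grid carbon intensity

class CarbonCounter:
    """Keeps a running tally of estimated emissions for wrapped calls."""

    def __init__(self):
        self.calls = 0
        self.grams_co2e = 0.0

    def track(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            self.calls += 1
            # Wh -> kWh -> kg CO2e -> grams
            self.grams_co2e += (ASSUMED_WH_PER_CALL / 1000) * ASSUMED_KG_CO2_PER_KWH * 1000
            return fn(*args, **kwargs)
        return wrapper

counter = CarbonCounter()

@counter.track
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a real model call.
    return f"echo: {prompt}"

ask_model("write a cheeky poem in the style of Banjo Paterson")
print(f"{counter.calls} call(s), ~{counter.grams_co2e:.2f} g CO2e so far")
```

The point isn't precision – it's visibility. Even a crude running total, shown on screen, would change how casually we reach for the model.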

There’s also the matter of where data centres are located. Many are in regions powered by fossil fuels. So even if your business is in Australia and claims to be buying green power, the AI service you're relying on might be operating from a coal-heavy region overseas. Do you still get to pat yourself on the back?

The lack of visibility is part of the problem. Very few businesses can quantify the energy costs tied directly to their AI usage. It’s rarely part of procurement decisions or project justifications. No one asks, “What’s the environmental impact of this model?” when choosing an AI vendor. But maybe they should.

Could we reach a point where AI becomes regulated not just for what it does to people, but for what it does to the planet? Could environmental audits begin to include AI inference costs? Will there be public pressure to build leaner, greener models? Or will we just keep chasing bigger, faster, smarter AI without stopping to ask what it’s doing under the hood?

There’s a provocative future ahead. One where sustainability officers might need to sit next to data scientists. Where every new AI feature gets a carbon label. Where your board questions whether the emissions created by your AI programme are worth the marginal gains it delivers. And one where investors and consumers start demanding answers about what AI is really costing the environment.

Organisations need to reckon with the fact that AI is not invisible. It has a footprint, and it’s growing fast. It’s time we stopped pretending that digital equals clean. It might be time for some uncomfortable conversations.

Is your AI strategy truly green? Or is it just cleverly disguised emissions?

And maybe – just maybe – being future-ready means not just being AI-ready, but being honest about what AI really takes.


