AI AS AN EVOLUTIONARY CATALYST: A CONVERSATION WITH WPP CHIEF AI SCIENTIST, DR. DANIEL HULME
- Miya Knights

- Oct 24
When Dr Daniel Hulme speaks about artificial intelligence (AI), he does not talk about the technology so much as its telos, or its ultimate purpose. "The question," he says, "is not what AI can do, but what it should do to help humanity flourish."
Dr Hulme, Chief AI Officer at global advertising and commerce giant WPP, Chief Executive of enterprise AI firm Satalia, and Founder of Conscium — an R&D company exploring machine consciousness — has spent two decades navigating both the science and philosophy of intelligence.
Hulme's work spans corporate optimisation and existential inquiry, from designing last-mile logistics systems for Tesco that save millions of miles a year to probing how digital agents might one day experience awareness.
These dual vantage points, the practical and the profound, shaped the first episode of The Promethean Podcast: "AI: Evolutionary Catalyst or Unintended Consequence?" featuring Hulme as our inaugural guest.
In a world polarised between AI hype and alarmism, Hulme argues for something both bolder and subtler: using AI not merely to maximise efficiency but to engineer cohesion, steering civilisation toward what he calls a protopia — a system that is "incrementally getting better".
FROM OPTIMISATION TO ABUNDANCE
Hulme's starting point is pragmatic. "Every time there's a new technology," he observes, "we tend to apply it to the wrong problems, then blame the technology when it doesn't work." The recent frenzy around generative AI, he suggests, repeats this pattern. Organisations rush to automate content or cut costs, mistaking speed for strategy.

Instead, Hulme advocates beginning with friction analysis, identifying where energy, time, or human potential is being wasted across a value chain. "AI's real opportunity," he says, "is in making the creation and dissemination of goods — food, healthcare, energy, transport — far more energy-efficient."
His preferred phrase is abundance engineering. By deploying optimisation algorithms that minimise resource waste, firms can simultaneously reduce carbon emissions and democratise access to goods and services.
At Satalia, for instance, Hulme's team built Tesco's last-mile delivery optimiser, cutting travel by roughly 20 million miles a year, which is equivalent to a carbon saving of 50 round trips to the moon. More recently, they addressed Tesco's "middle mile," saving the CO₂ output of three long-haul flights per day.
"Efficiency isn't just about profit," he insists. "It's about freeing energy — literally and metaphorically — for human flourishing."
PURPOSE AS THE PERFORMANCE METRIC
If AI can optimise supply chains, can it also optimise values? Hulme believes so, but only if enterprises redefine what they measure.
"Most companies pick five values — integrity, innovation, collaboration — from the same list of 30," he notes drily. "They're homogenous." Drawing on Harvard Business Review's Consumer Value Pyramid, he suggests that machine learning can help uncover what humans actually value beyond the ability to act "faster, better, cheaper". These values encompass feelings such as connection, belonging, and a sense of purpose.
"AI is already learning what humans value," he says. "Soon, personal agents will be able to choose which brands to buy from based on whether they align with our values and ethics, not just our wallets."
Hulme's prediction introduces a radical inversion: agentic AI isn't just collapsing the traditional marketing funnel and its role in attracting, engaging, and converting customers; it may eliminate it altogether.
In this scenario, a personal AI agent doesn't act as a manipulative marketer but as a rational consumer advocate. If such autonomous agents can verify whether companies live up to their stated missions based on sustainability data, for example, they could reward authentic purpose and expose greenwashing on a large scale.
Hence, he argues, purpose itself becomes a measurable key performance indicator (KPI). "If I were a company," he says, "I'd be shouting clearly about how our products contribute to our vision and how we're moving toward that purpose." AI can map whether customers' "value neurons" — such as safety, adventure, and joy — light up when they think of your brand. The new challenge for marketers is not just attention, but alignment.
HUMAN FLOURISHING AS A SYSTEMS GOAL
Beneath the commercial logic lies a social one. Hulme's definition of "human flourishing" combines eudaimonia, as described in Aristotle's ethical treatises, with cybernetic feedback loops. "The more we use AI as a tool to achieve purpose, not just productivity, the more net-positive organisations become for society."
Hulme calls this a shift from innovative systems to wise ones — those that factor long-term human well-being into their optimisation objectives. Retailers, he says, are already proving grounds for this evolution: the same predictive models that personalise product recommendations can also be used to minimise waste, support ethical sourcing, and forecast carbon footprints.
"Retail is a microcosm of civilisation," he reflects. "If we can get AI to work sustainably there, we can get it to work anywhere."
"The true expression of humanity is not just living for ourselves, but living for others — and AI, used wisely, can help free us to do exactly that."
THE ARCHITECTURE OF COHESION
However, cohesion cannot be entirely coded in software. It requires what Hulme calls socio-technical design: building systems where policy-makers, product teams, and citizens collaborate around shared values.
"Every transformative technology eventually gets regulated," he reminds us. "Aerospace, automotive, pharmaceuticals — they're all heavily governed because lives are at stake. AI will be the same."
He describes today's algorithms as "intoxicated graduates": brilliant but unpredictable. Deploying them responsibly, he insists, demands both explainability and close scrutiny of intent. The framework for ethical deployment he shares is deceptively simple:
Is the intent appropriate? — What is the human purpose behind this model?
Is the algorithm explainable? — Can we audit how it reaches its conclusions?
What happens if it goes very right? — Can over-achievement create collateral harm elsewhere in the system?
The third question is quintessential Hulme: not dystopian paranoia, but systems thinking. "For the first time," he says, "we can build an AI that over-achieves its goal — and that can be just as dangerous as failure."

FROM MICRO-RISK TO MACRO-SINGULARITY
At Conscium, Hulme's latest venture, he is putting this thinking into practice. The company's first product is Agent Verification — a safety layer designed to test and certify AI agents before deployment. "Eighty percent of software development cost is testing," he explains. "Yet we rarely verify AI agents properly before unleashing them." Conscium aims to automate that verification, ensuring AI is safe for humans and for other AIs.
That last phrase reveals a moral horizon few technologists dare acknowledge. Hulme foresees a world where chips modelled on biological neurons, known as neuromorphic hardware, could give rise to machine sentience. "We have a duty of care," he says, "not only to people and animals, but potentially to conscious machines. We must make sure we don't accidentally spawn entities subject to suffering."
To navigate such possibilities, Hulme extends the familiar PEST framework (Political, Economic, Social, Technological) into what he calls STEEPLE, a set of looming "macro singularities" facing humanity:
S – Social: the point at which death is cured.
T – Technological: machines become millions of times smarter than humans.
E – Ethical: the emergence of machine consciousness.
E – Environmental: gaining or losing control of our ecosystem.
P – Political: choosing between a post-truth world and an authenticated one.
L – Legal: the ubiquity of surveillance and data governance.
E – Economic: automating the majority of human labour.
Each represents a boundary condition — a singularity — beyond which prediction falters. "But," Hulme insists, "these futures are still within our gift to steer."
"Imagine being born into a world where everything you need to survive — energy, food, healthcare, education — is abundant and free."
DESIGNING THE PROTOPIA
If utopia is unreachable and dystopia unthinkable, protopia — a world incrementally getting better — is the practical compromise. Hulme envisions AI as the enabling substrate of such a world: reducing scarcity, freeing people from economic constraints, and giving everyone the agency to choose how they contribute to humanity.
"Imagine being born into a world where everything you need to survive — energy, food, healthcare, education — is abundant and free," he says. "Most people wouldn't sit idle; they'd use their time and talents to make the world a bit better."
It's a faith in intrinsic human goodness, but one grounded in systems economics. When machines handle optimisation, humans can refocus on creativity, empathy, and meaning. "The true expression of humanity," Hulme concludes, "is not just living for ourselves, but living for others."

GOVERNANCE FOR GROWTH AND GOOD
Hulme's view of regulation is nuanced. The EU's forthcoming AI Act, he believes, is "perfectly reasonable" in principle — mandating explainability in high-risk use cases — but "the devil is in the detail." Over-regulation could stifle innovation; under-regulation invites harm. The goal is dynamic equilibrium — rules that evolve in tandem with the technology they govern.
He distinguishes between three layers of AI risk:
Micro-risk — operational failures from untested models.
Malicious risk — bad actors weaponising AI for misinformation or bio-pathogens.
Macro-risk — civilisational tipping points captured in the STEEPLE model.
The solution, he says, is collaboration: "Governments must prevent the worst; enterprises must design for the best."
RETAIL LESSONS FOR EVERY SECTOR
Although Hulme's title at WPP anchors him in a creative sector, he developed his AI philosophy within a broader enterprise context. "Our history at Satalia isn't in advertising at all," he laughs. "It's in supply chain optimisation." Those lessons, he argues, are universally transferable.
Retail's fusion of digital and physical complexity — forecasting demand, allocating stock, managing delivery fleets — mirrors the challenges of health, education, and energy systems. The same optimisation models that route vans can route blood supplies or textbooks.
Yet the ultimate innovation, he says, is integration. "The promise of AI is to connect all those optimisations: to build digital twins of entire supply chains, so you can run scenarios end-to-end and see whether you'll truly meet your promises to customers and to the planet."
When applied holistically, AI becomes both a microscope and a telescope, exposing inefficiencies while clarifying purpose.
"Get as much advice as you can from experts, so you make the right decisions."
FROM HYPE TO HUMILITY
As our conversation drew to a close, Hulme offered a final caution. "There's a lot of noise around AI right now," he said. "It's like having a genie. Your first wish should always be for more wishes. My advice: get as much advice as you can from experts, so you make the right decisions."
That humility, essentially a willingness to learn as fast as we automate, may be the most catalytic insight of all. In Hulme's view, intelligence, artificial or human, is not a destination but a dialogue. The systems we build reflect the questions we ask. If we ask only how to make things faster and cheaper, we'll get exactly that.
