AI and the Future of Rewards
What we were told, and what we should question.
Attending Future Reward Europe 2025 was a welcome reset. We’re all navigating rapid change, but this event didn’t just acknowledge that—it tackled it head-on. The standout session for me? The discussion on AI in the rewards space, led by Claude and Martin.
It wasn’t just theoretical. It was confident, provocative, and fast-paced. But as I listened, I found myself both nodding and questioning. Some of it landed as common sense, some as visionary—and some felt like it needed a bit more friction.
Here’s how it unfolded, what resonated, what raised questions for me, and where I land after reflecting on it all.
Speed vs Substance: What Do We Actually Want From AI?
Claude opened with a live poll: Which AI outcome matters most to your reward agenda?
- Pay equity accuracy
- Faster market-data refresh ✅ (the clear winner)
- Personalised rewards
- Cost-to-value optimisation
The room leaned heavily into speed—faster insights, faster updates. And I get it. In HR and rewards, timing matters. Compensation decisions can’t wait for perfect data.
But I couldn’t help wondering: Are we over-prioritising speed at the cost of strategy?
Speed is useful when it moves the right decisions faster. But if the inputs are shaky or the context isn’t clear, you’re just accelerating bad decisions.
I looked at a 2024 McKinsey report on AI in HR, which cautions:
“Organizations must resist the urge to deploy AI where speed and novelty trump clarity and alignment. Otherwise, automation will reinforce outdated decisions faster—not smarter.”
That feels important. Before asking how fast we can go, maybe we should be asking where we’re actually trying to go—and whether AI understands the terrain.
Prompting: The Real Skill Behind the Scenes
Claude then walked us through a framework that really clicked for me. Four levels of AI prompting:
| Prompt Level | Outcome |
| --- | --- |
| Basic | Generic, like Google |
| Refined | Structured, but not tailored |
| Customised | Contextual and relevant |
| Strategic | Action-oriented, aligned to need |
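To make the framework concrete, here's a rough illustration of what each level might look like for the same benchmarking question. The wording and scenario are my own invention, not examples from the session:

```python
# Illustrative only: four prompts about the same reward question,
# one per level of the framework. Company details are invented.
PROMPTS = {
    "basic": "What is a good salary for a data analyst?",
    "refined": (
        "List the typical UK salary range for a mid-level data analyst, "
        "broken down by region, with sources."
    ),
    "customised": (
        "You are advising a 500-person UK fintech. Our mid-level data "
        "analysts sit at the 45th percentile against our survey data. "
        "Summarise how our range compares to the market."
    ),
    "strategic": (
        "You are advising a 500-person UK fintech. Our mid-level data "
        "analysts sit at the 45th percentile and attrition in that group "
        "is 18%. Recommend three range adjustments, with cost estimates "
        "and trade-offs, that we could take to the reward committee."
    ),
}

# Each level adds something: structure, then context, then a decision to make.
for level, prompt in PROMPTS.items():
    print(f"{level.upper()}: {prompt}")
```

Notice that the jump from customised to strategic isn't more words; it's a request for a recommendation with constraints attached.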
This wasn’t new to me, but seeing it laid out made something clear: we’re underestimating the skill it takes to “talk” to AI well. It’s not about asking more questions—it’s about asking better ones.
The session reminded me of Ethan Mollick’s work from Wharton, who argues:
“Prompt engineering is the new form of business literacy. Those who learn to think with AI—not just use it—will outpace those who wait for perfect use cases.”
That echoed throughout the session. Claude positioned tools like ChatGPT (o3), Claude, Gemini, and Copilot not as answer engines, but thinking partners. And it made me pause: how many people in our function are actually thinking of AI this way? How many still treat it like a smarter search engine?
AI Trust, Safety, and Cost: The Case for Going Enterprise
Another strong message came through loud and clear:
Stop using free tools. Pay for enterprise AI.
Claude made the case that the price of real capability—what he called “PhD-level insight”—is absurdly low compared to its value. If you’re trusting AI with anything that affects pay, people, or data, the free-tier risk just isn’t worth it.
At first, I bristled a bit. Isn't this just a vendor pitch? But then I looked at the latest guidance from the World Economic Forum's 2025 Responsible AI report. They highlight enterprise-grade controls, auditability, and data retention policies as non-negotiables in any function touching employee data.
So I’ll admit: the call to treat AI as a serious, licensed business tool—not a toy—was a good reminder.
Where AI Is Headed (and What That Means for Us)
Martin then took the baton with a forward-looking view that was more sci-fi than spreadsheet—but in the best way.
He painted a future where:
- We talk to AI instead of typing
- Dashboards turn into real-time simulations
- AI interacts directly with our screens—no clicking needed
- “What if?” becomes a daily part of reward planning
This wasn’t wild speculation. Much of it is already visible in tools like Microsoft Copilot and Google’s Gemini, and agents like Devin are already navigating screens on their own.
But this raised a bigger question for me: What does this mean for human roles in reward?
Martin framed it as a skills pivot: “critical thinking 2.0.” He argued that the best AI outputs still depend on sharp questions. That, to me, feels exactly right—and also quietly threatening. If your value at work is executing steps someone else designed, AI is coming for your job. But if your value is asking smarter questions, synthesising judgment, and seeing patterns—your role might just get more important.
Reward Simulators and the Documentation Wake-Up Call
Martin also introduced an idea that stuck with me: simulators in reward strategy. “What if?” analysis at scale. It’s common in finance, rare in HR—and practically nonexistent in reward teams.
That could change everything. Imagine testing equity impact, budget models, or reward changes across dozens of personas, instantly.
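To show what "what if?" at scale could look like in its simplest form, here's a minimal sketch. The personas, headcounts, and salaries are entirely invented for illustration; a real simulator would pull these from HRIS data and model far more than a flat uplift:

```python
# A minimal "what if?" reward simulation across personas.
# All figures below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    headcount: int
    median_salary: float

def simulate_uplift(personas, pct_uplift):
    """Return the annual cost per persona of a flat percentage uplift."""
    return {
        p.name: round(p.headcount * p.median_salary * pct_uplift, 2)
        for p in personas
    }

personas = [
    Persona("Engineering, mid-level", 120, 65_000),
    Persona("Sales, senior", 40, 80_000),
    Persona("Operations, entry", 200, 32_000),
]

# "What if we raised everyone 3%?"
costs = simulate_uplift(personas, 0.03)
print(costs)
print("Total annual cost:", sum(costs.values()))
```

Even a toy like this makes the point: once the data is structured, testing a dozen scenarios is a loop, not a week of spreadsheet work.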
But to get there? He offered a four-quadrant roadmap:
1. Gather what you already have
2. Create what doesn’t exist yet
3. Simplify and clarify language
4. Align conflicting sources
That sounds basic. But most organisations struggle with documentation. Our policies, processes, and rationales live in meetings and heads, not systems. And AI can’t help with what it can’t read.
It’s a blunt truth: garbage in, garbage out isn’t just about data quality—it’s about documentation hygiene.
Outside Perspectives: Are We Over-Automating Judgment?
While I found much of what was said compelling, I’ve also been reading voices who urge more caution.
Cade Metz (New York Times tech reporter) warns that as AI tools become more confident, people become more passive—trusting polished output without pressure-testing the assumptions behind it.
In HR and rewards, that’s dangerous. A beautifully written but flawed compensation recommendation is still wrong.
So I come away from the session energised—but also aware of the fine line we’re walking. The risk isn’t AI taking over. It’s us stopping too soon, accepting the first answer, or worse—forgetting to ask if it’s even the right question.
My Takeaway: Use AI to Think, Not Just Respond
This session made one thing clear: AI is here to stay in rewards. But the future won’t be about using AI—it will be about using it well.
That means:
- Learning to prompt like a strategist, not a Googler
- Structuring our data so AI can actually help
- Staying critical of both the tools and ourselves
The people who thrive won’t be the ones who fear AI or blindly adopt it. They’ll be the ones who treat it like a colleague—with curiosity, critique, and confidence.