Pierre-Yves Calloc’h spent 20 years at Pernod Ricard, the company behind some of the world’s biggest alcohol brands, first as a managing director, then as CIO and CDO. When he turned his attention to the company’s first AI projects, he treated governance as a design constraint from day one: something to be solved for rather than negotiated around after the fact. The D-Star program he led was deployed to the sales force in 28 countries, with 85% adoption and $15M in annual ROI in the US alone.
Most enterprise AI programs never get that far. AI adoption committees get hung up on the choice of model or AI coding assistant, or initial exploration gets bogged down in pulling together scattered data. Beyond these obstacles, the first promising pilot often falls at the last hurdle: IT sign-off on production access. Pierre-Yves watched that pattern play out plenty of times before approaching it differently.
He sat down with Retool’s CEO, David Hsu, at Retool Summit London to share more about his experience bringing governed AI initiatives to a global company.
In Pierre-Yves’ time at Pernod Ricard, the company had roughly 5,000 sales reps in the field every day, visiting supermarkets, bars, hotels, and restaurants across 60 countries. For years, their visit schedules ran on fixed frequencies—big accounts weekly, smaller ones every two or three weeks—regardless of what was actually happening at any given outlet.
With D-Star in place, every Monday morning each sales rep received a prioritized list of outlets to visit that week, generated by an AI engine drawing on roughly 40 data inputs: outlet size, sales of Pernod Ricard products and competitor products, local demographics, proximity to points of interest, and more. The engine clustered outlets by expected performance, identified which ones were underperforming relative to their peers, and surfaced a list of prioritized actions: a missing SKU to pitch, a display to check, or a conversation with a store manager about upcoming promotions.
The underlying model isn’t exotic: “It’s just doing clustering based on those 40 dimensions,” Pierre-Yves said. The harder problem was getting it into production across 28 countries, spanning all 50 US states and 12 Indian states, each with its own business rules, without losing momentum.
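That description maps onto a standard peer-group clustering pattern: group similar outlets, then compare each outlet to its cluster’s norm. Here is a minimal sketch of the idea, assuming scikit-learn and a pandas DataFrame of outlet features; the column names, cluster count, and underperformance threshold are illustrative, not details from D-Star.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def flag_underperformers(outlets: pd.DataFrame, feature_cols: list[str],
                         sales_col: str = "weekly_sales",
                         n_clusters: int = 20) -> pd.DataFrame:
    """Cluster outlets on their feature vectors, then flag those whose
    sales fall well below the mean of their cluster peers."""
    # Standardize features so no single dimension dominates the distances
    features = StandardScaler().fit_transform(outlets[feature_cols])
    outlets = outlets.copy()
    outlets["cluster"] = KMeans(n_clusters=n_clusters, n_init=10,
                                random_state=0).fit_predict(features)

    # Compare each outlet's sales to its cluster peers via a z-score
    peers = outlets.groupby("cluster")[sales_col]
    z = (outlets[sales_col] - peers.transform("mean")) / peers.transform("std")
    outlets["underperforming"] = z < -1.0  # illustrative cutoff
    return outlets
```

The interesting production work sits around this core: translating a flagged outlet into a concrete action, like a missing SKU or a display check, is where the 40 input dimensions earn their keep.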
Pierre-Yves’ team didn’t run proofs of concept. They went directly to MVP, but only for projects with an identified P&L impact that justified the investment in data infrastructure and engineering time.
“We only select things that can scale and will get a significant return on investment,” Pierre-Yves explained. The data question that trips up many enterprise AI programs becomes a financial one. “For a lot of projects, if you have a good ROI of a few millions or dozens of millions of euros, you will find the data.”
D-Star went from project initiation to go-live in two pilot countries in seven months. Starting with two pilots accelerated everything that came after: “From the beginning, you know what the countries have in common, and what is specific. That way you can architect it and apply the business rules in the right way,” Pierre-Yves said.
By the same token, if something isn’t working, you can quickly isolate whether the issue is with that country or the app itself. After six or seven country deployments, the team had already encountered most of the edge cases they’d face in the remaining entities.
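One hypothetical way to architect that common-versus-specific split is to keep the scoring pipeline shared and confine each country’s deviations to a small, declarative rules object. The rule names, country codes, and values below are illustrative; none come from the program.

```python
from dataclasses import dataclass, field

@dataclass
class CountryRules:
    min_visit_gap_days: int = 7               # shared default
    currency: str = "EUR"
    extra_filters: list[str] = field(default_factory=list)

# Countries only declare what differs from the shared defaults
COUNTRY_RULES = {
    "US": CountryRules(currency="USD", extra_filters=["state_licensing"]),
    "IN": CountryRules(min_visit_gap_days=14, extra_filters=["state_excise"]),
}

def rules_for(country: str) -> CountryRules:
    # Fall back to the shared defaults for countries with no overrides
    return COUNTRY_RULES.get(country, CountryRules())
```

Isolating the per-country surface this way is also what makes the debugging question above tractable: if a deployment misbehaves, the first place to look is a rules object a few lines long.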
The A/B testing they ran to measure impact also produced an unexpected signal: after three weeks, the control group—the reps not using the tool—went to their union to demand access. “This is the first time I had people actually going to the union to say, ‘I want the tool,’ instead of fighting against it.”
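Measuring that impact is a standard two-sample comparison. As a hedged sketch, assuming per-rep sales figures for the treatment and control groups (scipy and the variable names here are assumptions, not the team’s actual methodology):

```python
from scipy import stats

def measure_uplift(treatment_sales: list[float], control_sales: list[float]):
    """Welch's t-test on per-rep sales; returns relative uplift and p-value."""
    t_mean = sum(treatment_sales) / len(treatment_sales)
    c_mean = sum(control_sales) / len(control_sales)
    # equal_var=False: don't assume the two groups have equal variance
    _, p_value = stats.ttest_ind(treatment_sales, control_sales, equal_var=False)
    return (t_mean - c_mean) / c_mean, p_value
```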
Pierre-Yves worked with IT to get fast approval for D-Star because the platform it was built on came with audit logging, access controls, and a no-production-write guarantee already in place. The risks IT would normally spend months reviewing were addressed before the conversation started.
The typical dynamic makes this harder than it sounds. “You need to get the cloud working, you need to be able to extract data from the systems,” Pierre-Yves explained. “And at the same time, you are breaking some of the rules of IT.” The need to move quickly can raise red flags, which typically translate into long approval cycles.
“The fact that there was that layer of security that guarantees you will never erase the production database gave everyone a lot of confidence,” Pierre-Yves said. “We got the authorization to go super fast and do things quick and semi-dirty in that environment.”
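A guarantee like that is typically enforced below the application layer, where a misbehaving app can’t override it. As a hypothetical illustration (not a description of Retool’s mechanism; psycopg2 and the connection details are assumptions), a read-only session can be enforced at the database connection itself:

```python
import psycopg2

# Connect with a role that has been granted SELECT privileges only
conn = psycopg2.connect(host="prod-db.example.com", dbname="sales",
                        user="analytics_readonly", password="...")
conn.set_session(readonly=True)  # the server rejects writes in this session

with conn.cursor() as cur:
    cur.execute("SELECT outlet_id, weekly_sales FROM outlet_metrics")
    rows = cur.fetchall()
```

Pairing the session-level flag with a SELECT-only role means a stray write fails twice over, regardless of what the application code tries to do.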
The commitment ran both ways: “I promised the IT team we’d fix anything that was built in a hurry before handing over,” Pierre-Yves said, “and we actually did it.” Pierre-Yves’ team could move fast and break things, and IT could trust that they weren’t taking on excess tech debt.
Even with governance resolved and the model producing good recommendations, Pierre-Yves knew the program would fail if the user interface didn’t work for the people using it.
His counterintuitive move was to over-invest in the front end. While a typical enterprise application allocates around 1% of project cost to the user interface, D-Star ran at roughly 2%, bumping adoption from 60% to 85%.
That level of investment only works with the flexibility to iterate quickly on the UI. During a training session in Canada, sales reps flagged that some of the terminology was in the French of France rather than Canadian French, and that they wanted certain parts of the interface reconfigured. “They went for the lunch break, and in the afternoon it was done,” Pierre-Yves recalled.
That kind of responsiveness would be impossible on a standard enterprise release cycle, where UI changes might ship every six months. Retool’s cadence of building and deploying changes in hours rather than quarters meant the team could adapt country by country. When the US needed enough customization to justify a fork, the $15M annual return made the call easy, and the $1–2M merge cost was a manageable follow-up.
Four principles behind Pierre-Yves’ enterprise AI approach
Pierre-Yves ran three other programs of comparable scale at Pernod Ricard—marketing mix modeling, promotion optimization, and pricing—using the same four-part approach:
- Identify P&L impact before writing any code
- Go straight to MVP in two pilot markets
- Build governance in from the start rather than retrofitting it
- Treat the user interface as a first-class investment, not an afterthought
For CTOs and CIOs navigating pressure to move on AI while managing legitimate security and compliance exposure, Pierre-Yves’ experience offers a useful reframe. Instead of adding friction to enterprise AI deployment, a platform with built-in governance removes friction that was already there.