AI, Culture and the Human Edge

Culture Meets Code, New York: a Marketing Society event

Key insights from the recent Culture Meets Code: Human-Centered Values in the Age of AI event in New York, featuring Ekta Chopra, Chief Digital Officer at e.l.f. Beauty, and Sophie Ho, former Head of Brand at Anthropic, moderated by Suresh Raj, Chief Growth Officer at M+C Saatchi.

As AI reshapes every layer of business, the real question for senior marketers is not whether to adopt it, but what to protect as you do. This session brought together two leaders with rare front-row seats to the AI revolution to talk openly about where technology strengthens organisational culture, and where it quietly hollows it out. The conversation ranged from boardroom strategy to parenting anxiety, and was all the better for it.


5 KEY POINTS

Start with intention, not implementation

The most dangerous AI deployments begin with a tool rather than a purpose. Ekta opened with a principle that anchored the whole evening: define your intention before anything else. At e.l.f., that intention is specific and human-centred: make every elf as productive as possible. That clarity makes the harder decisions downstream far easier.

AI is a mirror of your values

Ekta's framing stopped the room. AI reflects what you already believe. If your underlying intent is cost-cutting disguised as transformation, that is what AI will deliver. If your intent is to amplify your people, it will do that instead. The technology does not arrive with values attached. You bring yours to it.

The human skills that matter most are the ones AI cannot replicate

Sophie was direct: the manager who was promoted because they were more competent than their team is facing a new reality. Agents can close that competency gap faster than ever. What remains irreplaceable is empathy, creativity, taste, originality, and the ability to read a room. These are not soft skills in the old dismissive sense. They are the single greatest competitive advantage any organisation can build for the future.

Performative adoption is a real and growing risk

Sophie named something everyone in the room had privately felt: the difference between genuine adoption and what she called AI slop. The panic, the pressure to be seen to act, the polished-looking plans that are actually hollow underneath. Being honest about where you are in the adoption curve matters more than performing readiness you do not have.

Governance is not a technology problem

Ekta made the case clearly. Every leader needs to know three things: which decisions they are comfortable an agent making, which decisions need a human in the loop, and what the escalation path looks like when something goes wrong. That is not an IT question. It is a leadership question, and it sits with every person in the room.

Culture Meets Code: Human-Centered Values in the Age of AI

The evening started with a question about whether AI strengthens or erodes culture, and the answer, refreshingly, was both. Sophie pointed to the genuine productivity gains she had witnessed at Anthropic, watching colleagues in finance and legal build tools they were proud of, doing work that previously drained them. That part energised her. The part that did not was the performative side of adoption: the anxiety-driven scramble to be seen as AI-forward, producing output that looked credible but lacked real thinking behind it.

Ekta brought a different lens. At e.l.f., AI is always considered in relation to the human it is meant to support. The company's revenue per employee already competes with AI-first businesses. The question is not whether to adopt, but how to do it without losing what makes the culture work. For Ekta, that means humans remain in the loop, governance is built in from the start, and every team has a clear agreement about where agents can act autonomously and where they cannot.

The conversation moved to the Williams Racing CTO, whom Sophie had met through her Anthropic work. His instinct when thinking about AI deployment was not to find where it could cut fastest, but to identify where he could build trust first. That story landed because it is the opposite of what most organisations actually do. The pressure to move quickly tends to override the patience needed to move well.

On the question of marketing specifically, both panellists pushed back against the idea that AI makes marketing easier. Sophie described the Anthropic Super Bowl campaign: AI was used to generate a mock-up video for testing and research. When it came to production, everything was human. That is not a rejection of the technology; it is a clear-eyed view of what it is good for and what it is not.

The later part of the evening got honest about what is already happening in the industry. Teams being cut. Strategists being asked to do the work of four people using tools that, in the wrong hands, produce mediocre output at scale. Several people in the room had seen it. Some had lived it. The panellists did not pretend there is an easy answer. Sophie said she thinks some companies will realise the mistake and course-correct. Others may not.

What grounded the conversation throughout was a shared belief that the human element is not a nice addition to an AI strategy. It is the strategy. The organisations that will do this well are the ones building for empathy, creativity, and human judgement as core capabilities, not as afterthoughts once the efficiency gains have been counted.

3 TAKEAWAYS

Know your intention before you touch the tools

Whether you are a team lead or a board member, be specific about what you are trying to achieve and for whom. Intention shapes outcomes. Without it, you are just generating slop faster.

Soft skills are now your sharpest competitive edge

Creativity, taste, empathy, and the ability to connect with people: these are the capabilities that agents cannot replicate and that your organisation should be actively developing, not assuming.

Build your governance agreements early

Before you hand anything over to an agent, get your team aligned on what decisions require human oversight, what can run autonomously, and how you will catch errors. This is leadership work, not IT work.

2 ACTION ITEMS

Run a decision audit with your team

List the decisions your function makes regularly. For each one, decide: human only, human in the loop, or agent-led with oversight. Make those agreements explicit, write them down, and revisit them quarterly.

Reframe how you develop junior talent

The entry-level role is changing, not disappearing. Identify one thing your junior team members are currently doing that AI could handle, and redirect that time towards building the skills that will matter most: judgement, storytelling, relationship-building, and creative thinking.

"AI is like a mirror, and you actually see what you believe in, like your values and so forth."

Ekta Chopra, Chief Technology and AI Officer, e.l.f. Beauty