Brands with humanity, AI and fascist bots
It’s hard to make it halfway through a workshop these days without somebody suggesting a brand needs to ‘have a conversation’ with its consumers, ‘open a dialogue’, or ‘show its heartbeat’. ‘We need to be a brand with humanity!’ is what we say when we mean ‘this brand is boring me to tears!’

This week the people behind the Wendy’s Twitter account showed us what ‘behaving like a brand with humanity’ really means. It means being impulsive, insulting and unrepentantly rude.

Social media has pushed the importance of brand voice into the spotlight. A brand’s language needs to be an accurate reflection of its personality, but it also needs to be understandable for its audience, and appropriate for its medium. It’s a balancing act that many find difficult to pull off; hamstrung by the fear of offending and the desire to be friendly and ‘social’ (that’s what you do on ‘social’ media right?), most brands end up with a tone of voice that’s bland, bland, bland.

But social media is child’s play compared with the challenges of getting this stuff right in an AI world. We’re innovating in natural language processing faster than we can figure out the right combination of tonality, cadence, vocabulary and context that together make up ‘tone of voice’. If we can’t approximate a voice that sounds credibly human, what hope do we have of creating one that sounds authentic to a specific brand? When my ATM tells me that ‘it was a joy’ serving me, I find that unbelievable on every possible level.

Mind you, perhaps an AI voice with an authentically human personality is a far more terrifying alternative. It took just 24 hours for Microsoft’s chatbot Tay to transform into a fascist bot after ‘learning’ bigotry, racism and hatred from real humans.

What a time to be alive.