A quick update to share a good read, written by Lauren McMenemy for The Content Standard. It looks at the relationship between ethics, artificial intelligence, public relations and keeping content on the straight and narrow. Lauren interviewed me for this piece, along with CIPR colleagues Stephen Waddington and Jean Valin, following the report on AI in PR published by CIPR this year. It’s a good read – you’ll find it here – and should certainly provide food for thought for today’s practitioner. The extract below will give you a flavour of our discussions:
It’s those ethical considerations that are of the most concern to New Zealand-based PR expert Catherine Arrow, who was one of Valin’s reviewers for the research. Like Valin, she’s a member of the Global Alliance for Public Relations and Communications Management and is intrinsically involved in looking into the ethical impact of AI on the industry.
“I think one of the things that anybody who’s creating content, information, stories – something that will connect with others – has got to be aware of is that at some point in the very near future they won’t be needed,” Arrow says, adding that her ethical concerns are less about job losses and more about the impact on the stories we tell.
“If it’s a skewed data set and we don’t teach the AI well then it becomes discriminatory, it cuts people out of whatever area of engagement they’re involved with, and that is really detrimental. Microsoft’s Tay is a really tragic example of how a very sensible, innocent, naive AI tool could be corrupted within the space of 24 hours. All of the programming, all of the teaching that’s done is going to be dependent on the ethical stance and moral stance of those who are teaching it to work.”
Arrow digs deeper into AI than that, looking at the Big Brother aspect of some of the AI features we’re currently seeing introduced, such as facial recognition bringing in emotional resonance, and the implications of these tools beyond social engineering. Imagine a world where we don’t look at metrics to decide the best time to post content, but instead can use facial recognition to tell if people are ready to receive the information. She says, “Society is just not emotionally mature enough to make sure this stuff isn’t being manipulated.”
We might not be ready for it yet, but Arrow believes we’ll have to deal with this level of AI in the next 12 to 18 months, and that we need to have some strong ethical discussions about what’s right and wrong – particularly given the furore over the recent Facebook privacy scandal.
“There has to be some really robust discussion about how it’s going to be used, how people’s emotional states are going to be protected,” she says. “It’s not just data protection, but a protection of humanity – to make sure it’s not manipulated to sell another fizzy drink.”