The second future is AI powered

It was fine while artificial intelligence was just driving a few cars around or sorting out your spam folder. It’s been useful having a grammar tool that prompts us to correct errors and typos during a busy day. Picture correction, summaries, transcriptions – just great. But then came the rise of deepfakes, language models were declared ‘too powerful’ to let loose, and the stealthy shift into a brave new world began. And here we are today – positively drowning in artificial intelligence this, artificial intelligence that. Surely it’s not that big a deal? Well, it is. For all of us, in every aspect of our lives. And while the ramifications of generative artificial intelligence will be felt by everyone, my focus today is the second possible future for public relations – and this future is powered by AI.

We’ve been discussing the potential impact of generative artificial intelligence for nearly a decade now – I’ve lost count of the number of webinars, presentations and training sessions I’ve run on the topic since 2013 – but it is only since November last year, when OpenAI released ChatGPT, that practitioners started to really consider the implications of this new technological union – for practice, for organisations and for society. For better and for worse. Despite the explosion of interest, coping with the speed of change and the constant consequences of each new application is hard. This week has seen the release of ChatGPT’s Code Interpreter, which puts the data analysis we were talking about yesterday into the ‘really easy-peasy’ basket. On the back of that comes today’s news that billionaire Elon Musk has launched xAI so – it’s been reported – he can ‘understand the true nature of the universe’.
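To make the ‘easy-peasy’ point concrete, here is a minimal sketch of the kind of routine analysis Code Interpreter will now produce from a one-line prompt. The file name and columns – a hypothetical media-coverage export – are invented for illustration.

```python
# A sketch of the routine data analysis that Code Interpreter automates.
# The CSV file and its columns are hypothetical, for illustration only.
import pandas as pd

# Load a (hypothetical) export of media coverage data.
df = pd.read_csv("media_coverage.csv")  # columns: outlet, date, sentiment, reach

# Summarise coverage by outlet: volume, average sentiment, total reach.
summary = (
    df.groupby("outlet")
      .agg(mentions=("date", "count"),
           avg_sentiment=("sentiment", "mean"),
           total_reach=("reach", "sum"))
      .sort_values("total_reach", ascending=False)
)
print(summary.head(10))
```

A few lines of pandas, generated and run for you – which is precisely why the value has to sit in framing the question, not in typing the code.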

Anyway – I digress. What does all this mean specifically for public relations? For future practice? Essentially, it’s a game changer – but not necessarily a nice one. What we’ve seen since ChatGPT was released into the wild is the greatest shift since the advent of the internet. The main task ahead for practitioners is helping our organisations navigate this shift, minimise division, guard against misinformation, and train and guide policy, people and prompts. Asking the right questions – an existing superpower – becomes more important than ever. Practitioners need to understand what they are dealing with – the difference between AI types and models and their application. There’s a good book by Field Cady – The Data Science Handbook – which I’d recommend, along with his ‘Data Science: The Executive Summary – A Technical Book for Non-Technical Professionals’, which you might find useful – you can find more information here. Interestingly, there is common ground between data professionals and public relations professionals – both disciplines set out to frame the problem by asking the right questions, and it is in this common ground that I believe the AI-powered public relations future lies. There is no going back from this (as long as the electricity stays on) as artificial intelligence is embedded in our lives – and practice – from this point on.

This particular future – as they say – is now. Last night I was interviewed about the impact of AI on public relations and found myself frustrated because the questions were centred on task replacement, and reflected an approach to practice that aligns public relations solely with content generation. If that approach to practice prevails then, quite frankly, the role is redundant. Although everything generated by AI needs to be checked, reviewed and its source declared, there will be organisations out there that will replace their ‘content creators’ without a second thought. These organisations won’t consider the long-term impact on their reputation or the disengagement and splintered relationships that will follow. Generative AI can produce every type of ‘content’ you can think of – probably faster than you can think it – so if the approach to public relations is centred on outputs, on ‘sending stuff out’, there is no future practice to consider. If, on the other hand, the relationship is at the heart of practice, then generative AI becomes a tool that assists with analysis, insights, planning, and crisis and issues management. We frame the problem, ask the right questions, and develop and implement strategies that improve relationship and societal outcomes.

We must remember that you can’t trust generative AI. I’ve nicknamed ChatGPT ‘Dobby’, after the house-elf in the Harry Potter books. Every day Dobby produces inaccurate and fictional information which, when questioned for accuracy, is followed by long and profuse apologies with elaborate backstories on its fictions and hallucinations. Often, because it is working on probability, it appears to be something of a people-pleaser, which we know is not a good operational model. As a society, global or local, we can’t function if we don’t trust each other, and the ways in which those bonds of trust are formed have shifted and changed.
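To see why ‘working on probability’ produces a people-pleaser, here is a toy sketch – the word distribution is invented, not taken from any real model. A language model picks each next word by sampling from a distribution over plausible continuations; whether the result is true never enters the calculation.

```python
# Toy illustration of 'working on probability': the model samples the
# next word from a learned distribution over plausible continuations.
# Truth is never consulted. The distribution below is invented.
import random

next_word_probs = {
    "The company was founded in": {
        "2010": 0.4, "2008": 0.3, "London": 0.2, "secret": 0.1,
    },
}

def sample_next(context: str) -> str:
    """Sample one continuation, weighted by probability alone."""
    dist = next_word_probs[context]
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

# Each run may assert a different 'fact' - fluent, confident, unverified.
for _ in range(3):
    print("The company was founded in", sample_next("The company was founded in"))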

Trust is central to the work we do – building and sustaining the relationships that maintain our licence to operate – and without trust, leadership is virtually impossible. Trust connects us, binds us together and allows us to move forward, while an absence of trust leaves a vacuum filled by fear and suspicion. As people, we do better together than we do apart, but without trust we become less able, less likely to connect. How does generative AI undermine trust? Or affect reputations? On the positive side, we have task speed and a ‘massive brain’ at our disposal but on the negative – well, here’s Dobby’s – sorry, ChatGPT’s – explanation: “If not used properly, the use of ChatGPT can raise concerns about the authenticity and credibility of the information provided by organisations. If an organisation’s use of ChatGPT becomes known, it may raise questions about whether the content is truly original or if it’s just generated by a model. This could lead to mistrust and scepticism among customers and other stakeholders, which can negatively impact the organisation’s reputation.

“Additionally, if organisations use the model to generate inappropriate or offensive content, it could lead to negative publicity and legal trouble. It is important for professionals to understand the capabilities and limitations of large language models like this, and how to use them responsibly.” There you have it – the onus is on us to develop our skills and expertise if we are to remain relevant. But there’s more – much more. It is imperative that we consider the ethics of application, and what that means for our stakeholders, our communities of interest and for society – so roll up your sleeves and get to grips with algorithmic ethics. AI is only as good as the data it’s trained on, and whether that data is accurate or inaccurate, biased or unbiased, well informed or misinformed, that’s what we will be served.
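A toy sketch makes the point about training data – the four-sentence ‘corpus’ below is invented and deliberately skewed. A model trained on biased text can only serve that bias back.

```python
# Toy illustration of 'we get served what the model was trained on':
# a tiny bigram model trained on a deliberately skewed corpus can only
# reproduce the skew. The corpus is invented for illustration.
import random
from collections import defaultdict

corpus = [
    "the ceo he decided",
    "the ceo he announced",
    "the ceo he resigned",
    "the nurse she helped",
]

# Count which word follows which across the training corpus.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        transitions[current].append(following)

# Ask the 'model' what follows each role: it echoes the data's bias.
print(random.choice(transitions["ceo"]))    # always 'he' - the data said so
print(random.choice(transitions["nurse"]))  # always 'she'
```

Scale that up to billions of scraped sentences and the principle is unchanged: the biases in the data become the biases in the answers.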

The greatest danger to our role within organisations is being regarded only as tactical implementers rather than the strategists and relationship builders necessary to maintain the licence to operate. The greatest dangers to society include misinformation, deepfakes, fabrication, fractured social cohesion and the digital divide becoming a chasm. Rolling out language models without ethical consideration is irresponsible, and the potential for great harm to communities, individuals and organisations is significant. The digital environment is now a place of great contradictions where we can – as always – have amazing or terrible things, depending on the choices we make, the values we hold, and the underlying intent. I’ve always said the greatest competency for a practitioner is courage – the courage to speak out, to listen, to advocate, to evaluate, and – pardon the cliché – to speak truth to power. Perhaps we can add to that spotting untruths in power. If that role is neglected or ignored, then we really will be replaced by ChatGPT and its successors faster than Dobby could iron his hands.

As organisations seemingly move into a new golden age of purpose (having forgotten all about it for a few decades) and attempt to align their purpose and values as we enter this new era, perhaps a good starting point would be to clean up their act online and be mindful of good outcomes as they undergo technological transformation: ensuring they use data responsibly and redrawing the digital terms of engagement for benefit, without dark patterns. Enabling trust online isn’t about shiny new tech, artistic artificial intelligence or a great serve from search. It is, as it’s always been, about the way we behave, the choices we make, and a genuine desire to build a fair and equitable society. Enabling trust online and offline is up to us. That’s our space in an AI-powered future and, as we’ve observed, that future is now. Hope to see you tomorrow for the third future – and it’s all about you.