EthicalVoices

How Much Human Agency Do We Require in Public Relations? – Ant Cousins

Joining me on this week's episode is Ant Cousins, the Executive Director of AI Strategy at Cision. I recently read how Cision created a code of ethics for AI development, and I wanted to have him on to talk about it. Ant discusses a number of important issues.

Why don’t we just dive right in? Tell me more about Cision’s AI ethics policy.

First, I'll tell you why we created it. Everyone has been seeing so much news about AI recently; it really has hit the mainstream, and having worked in AI for 10 years, I feel vindicated. Finally, everyone gets it and everyone sees the value, which is super exciting. But with every single article you see covering AI, not just AI in PR but AI in general, there is always that final paragraph: AI is great, but it might just kill us all.

It's been really interesting to see a lot more awareness of and interest in the ethical implications of AI across the whole world. But obviously we're in the PR space, so the reason we focused this document on the application of AI specifically in PR is that we didn't really see much guidance out there for that.

Ethics in PR is a well-known topic, and there are plenty of different guides you can find on that. But specifically, how should we be making use of AI, and especially generative AI, in PR or communications? We didn't see a lot of guidance on that.

The reason we ended up creating this was that gap, but also because there are implications for us as providers if we don't come together and agree on the right kind of ethical and responsible approach to the development of AI in PR. We've got the European Union with the AI Act, and we've got the UK and US both working on regulation. If we don't come together and describe what we think is the right approach to the application of AI, then the risk is that we are not well understood and not well catered for in those incoming regulations.

You mentioned implications, what do you see as some of those implications?

In the EU AI Act, the best example is the description of what they call high-risk AI applications. Emotion recognition is a high-risk application of AI according to the EU AI Act. They've been quite precise in that regulation, going almost down to the individual use-case level of AI applications. But in the PR and communications space, we have been doing emotion recognition for quite a while. We read text and we say, "Hey, we think this text is happy, we think this text is sad, we think this text represents the emotions of a particular author."

We don't think that's a particularly high-risk use of emotion recognition, not as high risk as using it in schools or in the workplace, with the potential privacy implications of that. But the risk for us is that when the US acts, when the UK acts, and when other acts are created around the world, if that understanding isn't there, we can't do something we've been doing for a while.

That’s just one example. What about other examples where our use cases aren’t that well understood, and therefore the regulation prevents us from doing something we already do, or it prevents us from doing something we want to be able to do to give value to our customers?

It sounds similar to when GDPR came in and there were concerns over stakeholder mapping, which is something we’ve always done. And is that gathering too much information? There were some legal cases on it. As I always tell folks, you don’t want to be the legal case, because it tends to be the expensive one for you as a company.

In GDPR, there are rules that govern the use of AI for decision-making about humans. If you've got humans applying for jobs and you're using AI to read the resumes, then you need to be able to give those individuals redress or explainability of the decisions that were taken.

We need to determine the appropriate amount of human agency or decision-making in PR and communications workflows. What is that? I don't think we've defined it. Those are the kinds of questions that are now coming up. We at Cision just want to be part of the discussion and help make sure we come to a conclusion before someone else concludes it for us.

What is the right amount of agency?

That's a really good question. My view is that it comes down to the level of risk and the level of creative requirement. Think of some of the least interesting and rewarding tasks you could ask someone in PR or comms to do: "Hey, here's a press release. Could you just create five or six social media posts over the next two weeks to nibble at this press release and give us some coverage?" That's almost always a super boring task for most PR folks, who are more interested in the press release, the relationships, the journalists, and things like that, and would rather sub it out to marketing to just bash out some copy. If you've got a model you can be relatively sure is trustworthy, and all it's going to do is take some of the text of the press release, reshape it a bit, and bash it out, do you need a human to write that content? Almost certainly not. Do you need a human just to have a look at it and go, "Yeah, that's okay"? I would suggest that is more like the relationship we're going to have between humans and AI going forward.

However, if you have responses to that social media content and you're looking at engaging in conversations with humans, again, do we need a human to create that response? I would suggest probably not, so the human agency is about overseeing that kind of content. But where we've got conversations with humans, do we need to make sure the human understands whether they're engaging with a human or, effectively, a robot? That becomes a place where we maybe don't need a human to control the content, but we do need to label the content. We need to be explainable; we need to be transparent that we have used AI. So that's where it gets really interesting. It's not just the relationship and who does what work; it's how much transparency we need to give to our stakeholders as to who has done that work.

Transparency is where I'm seeing the most discussion right now among my colleagues in other agencies too. Much of it is on the disclosure element, but how do you handle that transparency overall? I really liked the transparency/explainability section of the Cision AI ethics policy, and I wanted to delve into it a little bit deeper.

One of the issues I've seen with AI so far, as I've been playing with all of these tools, is their ability to hallucinate very convincingly. As you're building the tools that so many PR pros rely on, how do you overcome the hallucination that is inherent in AI?

You're absolutely right to point out that real challenge. We're talking now about generative AI, and there are many different use cases for it. I think the greater value of AI is in analysis and in the automation of decision-making, and I'll come onto that in a second, more than in the generation of content. Effectively, using it to generate content is using it to speed up a task that a PR or comms pro can already do. We're just making them a bit quicker at doing that, which is great. But I think the real value, specifically in the large language models that are now available, is in using their understanding of PR and communications principles, using them to make and automate decisions and to give you analysis of content. I think that's the stronger use case.

I think that's where we are leaning. We are making generative tools available, but the protection there, to come back to that human agency point, is that we are not going to recommend anybody take content we help them generate and just throw it out onto the internet. You have to take responsibility for the content, even if we're generating it for you. Ultimately, it's just a time saver; it's not making your decisions for you. That human-in-the-loop aspect is super important with generative AI to protect you. And I would suggest that generating text maybe isn't where we focus a bunch of our features and functionality; we focus more on insights, analysis, and automating workflows, which I think is the most exciting aspect of the LLMs.
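To make that human-in-the-loop point concrete, here is a minimal sketch, in Python, of what a gated publishing workflow could look like. This is purely illustrative and is not Cision's implementation: the generate_social_drafts stub stands in for whatever generative model a real tool would call, and the Draft, human_review, and publish names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    ai_generated: bool = True          # transparency: track the provenance of the copy
    approved_by: str | None = None     # human agency: no named approver, no publishing

def generate_social_drafts(press_release: str, n: int = 5) -> list[Draft]:
    """Stand-in for an LLM call that turns a press release into short posts.

    A real tool would call a generative model here; to keep the sketch
    self-contained we just reuse the first sentence of the release.
    """
    summary = press_release.split(".")[0].strip()
    return [Draft(text=f"{summary} ({i + 1}/{n})") for i in range(n)]

def human_review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    """A person, not the model, decides whether a draft goes out."""
    if approve:
        draft.approved_by = reviewer
    return draft

def publish(draft: Draft) -> str:
    """Refuse unreviewed content, and label AI-generated content on the way out."""
    if draft.approved_by is None:
        raise ValueError("Unreviewed AI-generated content cannot be published.")
    label = " [AI-assisted]" if draft.ai_generated else ""
    return draft.text + label

if __name__ == "__main__":
    release = "Acme Corp launches a recycled-materials product line. Details to follow."
    for draft in generate_social_drafts(release, n=3):
        human_review(draft, reviewer="pr_pro@example.com", approve=True)
        print(publish(draft))
```

The design choice mirrored here is the one Ant describes: the model does the tedious drafting, but publication is gated on a named human approver, and the output carries an explicit AI label for transparency.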

That's frankly what we've been doing for years with Willard's machine learning, neural nets, whatever you want to call it: predictive analysis.

That’s the bit that really excites me, actually. If all we do is use generative AI to do a slightly better version of what we’ve been doing for the last 10 or 15 years with AI, we’ve missed the point. The point is they’ve ingested material that explains how communications works, basic communications principles, methodologies, and best practices. I think if we’re not exploiting that, that’s the real missed opportunity with LLMs.

I’m really looking forward to Code Interpreter with ChatGPT, which is going to really help you dump in large amounts of text and do some really interesting analysis. I think the potential there is powerful.

You're right, I've got the Code Interpreter, and actually it's beyond my level of coding. I was a developer 15 years ago, I haven't written a line of code in ages, and I thought, "Oh, I remember enough, it's fine." So I was experimenting with ChatGPT's Code Interpreter to analyze some content, and its questions about how it should analyze the content were beyond my level of skill at analyzing content. I was like, wow, okay, this is no longer like a junior or an intern; this is pretty capable, to the point where its questions were beyond my ability to answer them.

You mean my Fortran skills from 30 years ago weren’t going to really be enough to cut the mustard?

My COBOL skills from about the same era, no use at all.

What are some of the AI ethics that concern you the most?

I'll tell you the things that don't concern me: the end of the world and the extinction of life as we know it. That chatter you see about AI is all clickbait and headlines and things like that. As soon as you mention AI, almost certainly the first image that comes into someone's head is the silver robot with the red eyes, which is problematic. But that actually doesn't concern me, because I think we can all agree, let's agree right now, never to put AI in charge of nuclear weapons. Let's just rule that one out. If you stick it in your ethics guides, fine. So that isn't what concerns me.

The stuff that concerns me is the misinformation and disinformation, and the potential of bias and discrimination. Put more broadly, the biggest impact I can think of from AI is probably economic, more than anything else, for two reasons. One is the almost uncontrolled release of a tool that can replace whole swathes of people. We need to think about the economic impacts of the capabilities of these tools. I think we’re still figuring it out, and it’s going to take us years.

We’re still dealing with, frankly, the implications of us developing the internet and developing social media. Those things are still changing and evolving in how they interact with us, and we’re still getting changes down the line for that. It’s going to take us years to figure this out, but there will be no doubt implications for whole swathes of disciplines and types of work. My biggest concern is making sure we’re planning ahead for that and giving people a chance to understand the implications, plan for the implications, for retraining, for figuring out what they do.

But specifically to the PR domain, I think the implications are around bias and discrimination, on that analysis point, and around misinformation and disinformation. Not from PR pros, but anybody can now generate misinformation and disinformation, not just with text but with images and videos, making it more effective. That's probably one of the bigger concerns I've got. We saw that with the Pentagon AI image; we saw a real impact on the market because of that. As soon as that happened, a whole bunch more people will have had the bright idea, "Maybe I could generate an image that will cause an impact on the market," and then they can short the market and make money. As soon as people think of ways to make money with something, we get it in swathes.

That's the industrialization of fraud. I am very dystopian when it comes to misinformation, because I am just so paranoid about how fast it's going to go. I teach PR ethics at Boston University, and I tell the students the biggest challenge you're going to face as a PR pro is how you respond when your companies are being attacked by entirely AI-backed misinformation.

It's a double-edged sword. AI is responsible, or probably will be responsible, for a lot of the problems we're going to see coming up to the next election, which is going to be the worst election on record for misinformation, disinformation, and generative AI. We've already seen it in some of the campaigns, with people generating imagery and not even labeling it as AI-generated, just throwing it out there as if it's true.

We’ve got the challenge that AI is going to be contributing to that problem, but it is also one of the only ways we can respond to the problem. Whether you’re a journalist looking to fact check or gain credibility or understanding of the content, or whether you’re a PR pro trying to do the same.

The use of AI is essential for analyzing the exponentially increasing amounts of content, the increasingly diverse range of platforms we've got to monitor, and the increasing demands of audiences for personalization. So whilst it's potentially going to cause a bunch of problems, it's also the only solution we have to stay ahead.

Where do you recommend PR pros start? What’s your recommendation, whether it’s within a Cision product or other areas? What do you think are the first few things they should be doing?

On that defense point, we've already released features in our most recent platform, Cision One, where we released what we call Risk Score. Think about cancel culture: if you look at the last 20-odd years of cancellation events, adverse consumer reactions, or boycotts, take the most significant events, and categorize them by the most likely cause, it's racism, sexism, and politicization, whether you made a comment too far to the right or too far to the left. It comes down to ESG as the broad context and the categorization of that content. So we looked at that and built models that can detect racism, sexism, fake news, controversy, and all those other kinds of things.

So I think we are getting ahead of this by ensuring that PR pros and comms pros can benchmark, identify, and get early warning of that kind of content. That's out in Cision One right now. I think it's probably still slightly ahead of the curve, but we would rather our users and customers be armed with that kind of capability and not need it than need it and not have it.

But the very easiest way to get an understanding of what this kind of technology is capable of is to go and experiment. I think the thing that really made ChatGPT blow up was how easy it was to engage with. You no longer have to be a technical expert; you're literally just having a conversation. If you recall when Google came about, it took a while to really understand how Google Search worked, and Google Search evolved. You learned how to do a good search on Google.

It's the same with ChatGPT: you have to use it a few times to understand that you're not doing a Google search, you're not just saying answer this question or generate this text. You are literally having a conversation with someone. The more you treat it that way, the better the responses get and the better your prompts become. Experiment. Play. Literally ask ChatGPT, "Hey, how can I get started?"

Tell it, "I'm a public relations professional; what prompts should I use?" It's already there. So I think the best thing to do is to go there, experiment, and play, and you really will see the power very quickly.

Talking about the ability to determine if you're going too far politically, one way or the other, do you have sliders in there to let folks set their acceptable level of risk? For example, we've seen studies showing that Gen Z expects corporate social advocacy. So what would be considered too far in some areas may, if you're targeting Gen Z, not be far enough.

You're absolutely right. Right now, we have a custom score we developed, our own proprietary calculation that generates that score. But different brands have different thresholds of risk. If you're Disney, you have a very low threshold of risk in any category. If you're, I don't know, a cigar manufacturer, maybe you don't care so much about some of those individual areas of risk (I'm labeling cigar manufacturers as callous there, which is probably unfair). But there are spectrums of risk threshold. So yeah, absolutely, that's where we're heading.
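As a rough illustration of that "sliders" idea, here is a small sketch of per-brand risk thresholds compared against category scores. The category names, the 0-100 scale, and the threshold values are invented for the example and do not reflect how Cision's proprietary Risk Score is actually calculated.

```python
# Hypothetical category scores for one piece of content, on an invented 0-100 scale.
content_scores = {"racism": 12, "sexism": 8, "politicization": 55, "fake_news": 20}

# Per-brand "sliders": the maximum score each brand is willing to tolerate per category.
brand_thresholds = {
    "family_entertainment_brand": {"racism": 5, "sexism": 5, "politicization": 10, "fake_news": 10},
    "gen_z_advocacy_brand":       {"racism": 5, "sexism": 5, "politicization": 70, "fake_news": 15},
}

def flag_risks(scores: dict[str, int], thresholds: dict[str, int]) -> list[str]:
    """Return the categories where this content exceeds the brand's tolerance."""
    return [cat for cat, score in scores.items() if score > thresholds.get(cat, 0)]

for brand, limits in brand_thresholds.items():
    flagged = flag_risks(content_scores, limits)
    print(brand, "->", flagged if flagged else "within tolerance")
```

The same content trips the politicization slider for one brand and passes for the other, which is the point Mark raises: "too far" for one audience may be "not far enough" for a brand courting Gen Z.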

What is the best ethics advice you ever received?

There are probably two.

The first piece of advice I got was really simple. It was my first day in the Ministry of Defence Press Office, around 2006 or 2007, the height of the Afghan and Iraq wars. It was an extremely busy press office; we were front-page news almost every day for something or other, good or bad. On day one, at nine o'clock in the morning, in a meeting with the director of news, he said, "I've only got one piece of advice for you, and you'd better stick to it: never lie and never say no comment." And I thought, "Wow, that leaves us in a pretty tough position sometimes. I'm not allowed to lie about this and I'm not allowed to say no comment, which means I have to answer the question and I have to tell the truth."

Now, it's interesting that he used the negative framing, "Do not lie," as opposed to, "Always tell the truth," because the truth is subjective, but lies are a little more obviously picked up. I think that led to my second piece of advice. I can't tell you who gave it to me, but he's a senior politician in the UK, and he explained that the way he approaches his own persona, how he acts, and what he says is that once you get caught in a lie, if it's exposed and you're caught outright for everyone to see, you can never go back to the level of trust you had before that lie. You've got a new paradigm, a new level, and it's lower than it was before.

Every single time that happens, you hit a new low, to the point where you cannot ever get back up again. The way he manages his persona, which as a politician is really difficult, because the truth is subjective and he wants to paint, in some cases, the most positive picture of something that he can, is to always ask himself: is this too close to the line, to the point where, after this, I will never be able to get that level of trust back again? I thought that was really interesting, and it's something I've taken to heart as well.

Listen to the full interview, with bonus content, here.

 

Mark McClennan, APR, Fellow PRSA
Mark W. McClennan, APR, Fellow PRSA, is the general manager of C+C's Boston office. C+C is a communications agency all about the good and purpose-driven brands. He has more than 20 years of tech and fintech agency experience, served as the 2016 National Chair of PRSA, drove the creation of the PRSA Ethics App and is the host of EthicalVoices.com
