APPLE ranks this podcast among the "Top 1% of podcasts worldwide"
Aug. 12, 2024

Combatting AI Deepfakes: Strategies for Public Relations Professionals

What do you think of this podcast? Please give it a review! Thank you!

Can AI deepfakes really cause global economic damage exceeding $1 trillion? Join host Peter Woolfolk on the Public Relations Review Podcast as he tackles this critical question with guest Rebecca Emery, APR, CEO of SeacoastAI.com and a leading expert in the field. Rebecca uncovers the alarming rise of AI-generated misinformation, shedding light on how bad actors are exploiting the accessibility of AI technology to spread falsehoods with devastating impacts. We also explore the double-edged sword of AI applications through examples like Ukraine's AI avatar Victoria Shi, which demonstrates both the positive potential and inherent risks of AI-driven communication.

In this episode, we arm you with practical steps to detect and combat digital deception. Rebecca dives into tools such as Google Lens, Hive Moderation, and TinEye, empowering communicators to identify image and video alterations effectively. You'll also learn about the SIFT method—Stop, Investigate the source, Find better coverage, and Trace claims to their original context—which journalists use to verify information. We emphasize the broader role of educating the public on AI's ethical use, equipping you with the knowledge to respond to misinformation and deepfakes adeptly.

Finally, we underscore the necessity of education in recognizing and addressing digital deception. Rebecca shares frameworks like the ABC method—Actors, Behavior, and Content—to help identify threats, illustrated by real-world incidents like the Royal Family's altered photograph. We stress the importance of building robust resources for reporting and removing harmful content while staying informed about the evolving threats in AI technology. Don't miss Rebecca's expert insights on AI's transformative yet challenging role in public relations, and learn how to stay ahead in this rapidly evolving landscape.

We proudly announce this podcast is now available on Amazon ALEXA. Simply say: "ALEXA, play Public Relations Review Podcast" to hear the latest episode. To see a list of ALL our episodes, go to our podcast website: www.publicrelationsreviewpodcast.com, or go to Apple Podcasts and search "Public Relations Review Podcast." Thank you for listening. Please subscribe and leave a review.

The Growth Gear
Explore business growth and success strategies with Tim Jordan on 'The Growth Gear.'

Listen on: Apple Podcasts   Spotify

Chapters

01:10 - AI and Misinformation in Public Relations

12:49 - Detecting and Combatting Digital Deception

19:10 - [Ad] The Growth Gear

19:50 - (Cont.) Detecting and Combatting Digital Deception

20:53 - Recognizing Digital Deception and Misinformation

29:48 - AI's Impact on Public Relations

Transcript

WEBVTT

00:00:03.725 --> 00:00:04.426
Welcome.

00:00:04.426 --> 00:00:18.530
This is the Public Relations Review Podcast, a program to discuss the many facets of public relations with seasoned professionals, educators, authors and others.

00:00:18.530 --> 00:00:22.225
Now here is your host, Peter Woolfolk.

00:00:23.429 --> 00:00:29.725
Welcome to the Public Relations Review Podcast and to our listeners all across America and around the world.

00:00:29.725 --> 00:00:43.807
This podcast is ranked by Apple as one of the top 1% of podcasts worldwide, so I say thank you to all of our guests and listeners for making this possible, and please leave your review for the podcast.

00:00:43.807 --> 00:00:45.110
We certainly like to hear from you.

00:00:45.110 --> 00:00:54.551
Now, a question: as most of you know by now, artificial intelligence is front and center in many cases, including visual presentations.

00:00:54.551 --> 00:00:56.283
Some are exceedingly realistic.

00:00:56.283 --> 00:01:02.345
Unfortunately, many will use fake avatars and spread misinformation. So what do

00:01:02.365 --> 00:01:05.394
you do when you find fake AI presentations?

00:01:05.394 --> 00:01:08.143
Well, my guest today can help answer that question.

00:01:08.143 --> 00:01:16.152
She is Rebecca Emery, APR, and currently serves on the board of the Maine Public Relations Council as chair of the Professional Development Committee.

00:01:16.152 --> 00:01:26.935
Now, in November 2022, when ChatGPT was released, it caused most of us to take immediate notice and try to understand how this new technology could be helpful.

00:01:26.935 --> 00:01:35.165
Well, Rebecca's fellow APRs, agency heads and PR colleagues kept asking her what is this new technology?

00:01:35.165 --> 00:01:37.186
Is it safe to use in my role?

00:01:37.186 --> 00:01:38.605
Is it plagiarism?

00:01:38.605 --> 00:01:53.802
Well, it was at that point that Rebecca made a career pivot and founded Seacoast AI in early 2023 to provide AI awareness, training and advisory services to the Seacoast area of New Hampshire and Southern Maine.

00:01:53.802 --> 00:02:02.802
Now her tri-state consultancy helps individuals, executives and small to medium sized businesses embrace and harness AI technology's powerful business acceleration.

00:02:04.885 --> 00:02:09.750
So let me now welcome Rebecca Emery to the podcast.

00:02:09.750 --> 00:02:11.171
Thank you for joining me today.

00:02:12.132 --> 00:02:14.354
Thank you for having me today, Peter, I appreciate it.

00:02:15.215 --> 00:02:24.030
Well, let's just ask the first question: Just how pervasive is fake artificial intelligence, and how can we begin to recognize and combat misinformation?

00:02:24.670 --> 00:02:25.051
You bet.

00:02:25.051 --> 00:02:30.693
So we're seeing that AI is essentially everywhere.

00:02:30.693 --> 00:02:33.389
It's on our cell phones, it's in our computers.

00:02:33.389 --> 00:02:39.680
We have the ability to talk to the smartest supercomputer in the world in natural language, and why wouldn't we?

00:02:39.680 --> 00:02:43.411
But the bad actors also have access to this technology.

00:02:44.060 --> 00:02:57.932
I've seen estimates, Peter, that the global cost of everything from misinformation and disinformation to phishing scams could exceed $1 trillion by the end of this year.

00:02:58.939 --> 00:03:17.967
But when it comes to AI deepfakes and misinformation and disinformation, we saw about a half a million deepfakes were shared globally last year and that number is expected to reach about 8 million by the end of this year and double every six months thereafter.

00:03:18.288 --> 00:03:22.014
Now you ask yourself how did we get here?

00:03:22.014 --> 00:03:50.068
And the interesting thing is this: the AI technologies are becoming more and more sophisticated, to the point where it's very difficult to recognize whether something is real or fake. And if a bad actor wanted to weaponize a deepfake, they could reach about 100,000 social users for about $0.07.

00:03:50.068 --> 00:03:58.510
But the cost to try dozens of these AI tools, these image generators and voice cloning technologies those are free.

00:03:58.510 --> 00:04:04.292
So there's a real problem with the dollars and cents here, in the sense that all of these technologies.

00:04:07.424 --> 00:04:31.350
Many of them are available in free, basic, limited capabilities, and it only costs a few cents to weaponize what's made with them, and so we're going to continue to see this grow over the coming years, because, unfortunately, as much as AI has lots of good capabilities and positive benefits, the bad actors also have access to it.

00:04:32.797 --> 00:04:46.230
You know, just for contextual purposes, this conversation between you and I started a while back when you showed me a video of I believe it was someone in Ukraine speaking for the government.

00:04:46.250 --> 00:04:46.512
Correct.

00:04:46.654 --> 00:04:47.721
It was a real person.

00:04:47.721 --> 00:04:55.773
Tell me a bit about that, and then we can go from that into more real and fake AI presentations.

00:04:56.274 --> 00:04:56.855
You bet, Peter.

00:04:56.855 --> 00:05:24.427
So I think what you and I spoke about a while ago was that the Ukraine government, basically the Ukraine Ministry of Foreign Affairs, created a new AI avatar called Victoria Shi S-H-I, and Shi was based on a known personality, very relatable, I think, to the local people there, but understand that AI can translate into multiple languages in real time.

00:05:24.427 --> 00:05:28.125
And here's a country that is in a wartime situation.

00:05:28.125 --> 00:05:38.439
They can't exactly call a press conference and say we're going to bring all of our top ministry officials and consular officials together and we're all going to be in this location at this time.

00:05:38.439 --> 00:05:41.026
I mean, that's just not practical in a wartime setting.

00:05:41.026 --> 00:05:43.478
So they created an AI avatar.

00:05:43.478 --> 00:05:54.687
Her name is Victoria Shi, and the idea is that the consulate will be able to put out messages with operational and verified consular information.

00:05:54.687 --> 00:06:04.954
They'll be able to do this on the public, on the ministry's official website and social media channels, and they can also broadcast information on emergency situations and other news.

00:06:11.975 --> 00:06:29.218
And on one hand, we think about AI and there's a contrast here, right, because we would expect our government to give us authentic information from a human being, but in this case by using an AI avatar and by using a QR code that allows you to go back to the official statement, they're able to provide very timely updates.

00:06:29.218 --> 00:06:38.247
It doesn't take the officials away from their official business, but using an AI avatar also ensures that there are no missteps or on-camera slip-ups.

00:06:38.247 --> 00:06:47.509
Right when the cameras are rolling, we often choose words that we go back later and think, gosh, I wish I hadn't have, you know, said it that way.

00:06:47.509 --> 00:06:53.127
But using an AI avatar allows the ministry to put out an official public statement.

00:06:53.127 --> 00:06:54.762
The avatar can deliver it.

00:06:54.762 --> 00:06:59.300
The avatar can deliver it in multiple languages and from anywhere at any time.

00:06:59.961 --> 00:07:05.276
So you know, I think there's some really positive benefits to something like that.

00:07:05.276 --> 00:07:10.038
However, it also sets them up for possible manipulation.

00:07:10.038 --> 00:07:18.000
It would be very easy, perhaps, to clone that avatar and put out what sounds like seemingly official statements.

00:07:18.000 --> 00:07:22.007
So we haven't quite seen all of this play out yet.

00:07:22.007 --> 00:07:36.971
There's a city in Japan that did a similar thing, where they cloned their mayor so that this clone could then deliver timely messages in multiple languages.

00:07:36.971 --> 00:07:46.797
So I think we're starting to see AI come into the communications role in terms of its ability to deliver messages, timely messages, messages in different languages.

00:07:46.797 --> 00:07:58.343
But I think the jury's still out in terms of whether this is going to be successful for them or not, but it's certainly a bold move forward.

00:07:59.055 --> 00:08:29.648
Well, you know, I think it's a good idea to let people know that it also has other uses, because I would imagine some commercial firms can bring AI up to answer some basic questions, or maybe a wide range of questions based on whatever inventory or programs or services or products they offer, so that it at least lets the caller see that someone is responding, or can respond, to them in a reasonably intelligent fashion.

00:08:30.295 --> 00:08:52.750
Correct, and we know that AI needs a lot of data, and companies have a lot of data, and so for them to be able to extend that into a more natural, relatable form, in the form of an avatar that can speak their language, it's definitely the step forward that we're seeing in terms of AI use for communication.

00:08:58.517 --> 00:09:03.174
Well, you know, it certainly has some benefits, because one of the nice things that I like about it is that an individual, an actual person, can become an avatar with the technology.

00:09:03.174 --> 00:09:16.264
You know, I've done it myself here on some of the software we have. I load my photograph up there and it does whatever it needs to do, and then I just add some words to it and I can see myself speaking.

00:09:16.264 --> 00:09:22.207
It might not be the correct voice, but the fact is that I am saying what I wrote.

00:09:22.207 --> 00:09:36.719
So companies can perhaps do the very same thing and it adds a bit more, I guess, friendly outreach to our customers, rather than just maybe typing something into a computer and having a few words come back to you.

00:09:37.301 --> 00:09:37.642
It does.

00:09:37.642 --> 00:09:58.264
It adds a level of, I think, dimensionality and it may be to the point where you could choose to see an avatar that you most relate to, so you might be able to choose from multiple avatars, meaning you, the viewer, and then have that avatar provide you with those messages or that information in that relatable format.

00:09:58.264 --> 00:10:01.037
I mean some of the avatars they're still a little clunky.

00:10:01.037 --> 00:10:03.981
The mouth movements are still like that.

00:10:03.981 --> 00:10:14.975
Even Victoria Shi, her shoulder and head were a little robotic, and of course, part of that is the transparency around using an AI avatar.

00:10:15.495 --> 00:10:22.950
But as the technology gets more sophisticated, I believe these avatars will become more and more realistic.

00:10:22.950 --> 00:10:26.424
We see them a lot with onboarding at new companies.

00:10:30.458 --> 00:10:33.586
So new employees might get an onboarding video that has an avatar in the corner that walks them through it.

00:10:33.586 --> 00:10:42.337
And we see that with presenters, where you're used to sort of looking for that person in small format as they're speaking.

00:10:42.337 --> 00:10:57.868
But the use of avatars for that, and being able to choose different languages simultaneously and say, well, I speak this language, so I'd like to receive that message, to be able to do that in real time is pretty powerful. And again, AI needs all of that data.

00:10:57.868 --> 00:11:18.205
So it's a logical extension for AI in terms of being able to present a company's data in that more relatable format. Let's get back to those shifty, underhanded people now, the bad actors.

00:11:18.245 --> 00:11:27.028
They're there and I guess we need to maybe help people understand how they can identify some bad actors or at least begin to ask some questions.

00:11:27.028 --> 00:11:28.802
Is that information accurate?

00:11:28.802 --> 00:11:31.403
Yes, I saw it on the TV or on my computer.

00:11:31.403 --> 00:11:32.681
It came from an avatar.

00:11:32.681 --> 00:11:36.065
How can I go about determining if it's accurate or not?

00:11:36.065 --> 00:11:40.985
Let's help them understand the processes that they need to go through to get that done.

00:11:41.687 --> 00:11:42.508
Absolutely, Peter.

00:11:42.508 --> 00:12:06.469
I think it's important for communicators to first understand that when they see an image or a video or hear audio and they're not quite certain that it's real, we want to be able to evaluate and analyze and assess that threat, and so first we want to use tools to investigate: Is this a deepfake?

00:12:06.469 --> 00:12:07.711
Is it a cheap fake?

00:12:07.711 --> 00:12:12.081
Is this coming from humans and trolls?

00:12:12.081 --> 00:12:15.626
Or is this coming from digital bots?

00:12:15.626 --> 00:12:18.941
Is this disinformation or misinformation?

00:12:18.941 --> 00:12:28.648
All of those have different distinctions and, as a result, they will inform what we do and how we respond and whether we publicly respond or not.

00:12:28.648 --> 00:12:40.826
But from a first step, if you see something that appears to be digital deception of some format, the very first thing you can do is simply do a reverse lookup on Google.

00:12:40.826 --> 00:12:52.874
It's a powerful tool, and you can simply go into the Google search bar and click on that little icon, it looks like a little camera, I believe, and it's called Google Lens, so you can put an image in there.

00:12:52.874 --> 00:13:01.056
You could put a social media post, anything, and it will start to look up whether indeed that is digitally altered.
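
For those who want to script this reverse-lookup step instead of clicking through the Google Lens interface, here is a minimal sketch. Google Lens itself is an app feature with no public API; Cloud Vision's web detection is the closest programmatic analog, and the sketch assumes Google Cloud credentials plus the google-cloud-vision client library.

    # Sketch: reverse image lookup via Google Cloud Vision web detection.
    # Assumes GOOGLE_APPLICATION_CREDENTIALS is set and
    # `pip install google-cloud-vision` has been run.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    with open("suspect.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    web = client.web_detection(image=image).web_detection

    # Pages that embed the same or a visually similar image,
    # useful for seeing where a suspect picture has circulated.
    for page in web.pages_with_matching_images:
        print("seen on:", page.url)

    # Exact pixel-level matches found on the open web, if any.
    for match in web.full_matching_images:
        print("full match:", match.url)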

00:13:01.216 --> 00:13:03.000
There are more sophisticated tools.

00:13:03.000 --> 00:13:07.346
I like Hive Moderation, that's one of them, H-I-V-E.

00:13:07.346 --> 00:13:18.538
I think that's really an excellent tool for both image and then multimodal, so audio and video, and it uses multiple models to detect.
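
Hive also exposes its detectors over a REST API, so this check can be automated. The sketch below is an outline only: the endpoint path, auth header, and response shape are assumptions based on Hive's public documentation, so verify them against the current docs before relying on this.

    # Sketch: asking Hive's AI-generated-media detector about an image.
    # Endpoint and response layout are assumptions; check Hive's docs.
    import requests

    HIVE_API_KEY = "YOUR_HIVE_API_KEY"  # issued per project by Hive

    with open("suspect.jpg", "rb") as f:
        resp = requests.post(
            "https://api.thehive.ai/api/v2/task/sync",
            headers={"Authorization": f"Token {HIVE_API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    resp.raise_for_status()

    # Surface any class label that suggests synthetic or altered media.
    for segment in resp.json()["status"][0]["response"]["output"]:
        for cls in segment["classes"]:
            if "ai_generated" in cls["class"] or "deepfake" in cls["class"]:
                print(cls["class"], round(cls["score"], 3))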

00:13:18.538 --> 00:13:30.222
And then, of course, if we're talking about content, there are ways you can look in Snopes and RumorGuard and Reddit and other locations to see whether this is a known scam.

00:13:30.222 --> 00:13:33.561
But if you're really unsure, go to AI.

00:13:33.561 --> 00:13:36.912
You can ask ChatGPT to analyze an image and it will.

00:13:36.912 --> 00:13:48.846
It will give you a pretty good idea as to whether something has the digital footprint, if you will, that might signal that it was altered in some way.

00:13:49.634 --> 00:13:50.759
You know, hold on a second.

00:13:50.759 --> 00:13:51.462
That was interesting.

00:13:51.462 --> 00:13:54.919
You said that ChatGPT can help. Explain that a little bit more.

00:13:54.919 --> 00:13:57.464
First time I'm hearing something like that.

00:13:58.335 --> 00:14:17.509
Well, I mean, AI can recognize things, AI signatures. So, yeah, ChatGPT is a very powerful model and it's multimodal, so you can upload handwritten notes and it will immediately put them into logical form.

00:14:17.509 --> 00:14:24.629
You can upload an image and ask it for a caption or to describe the image and give you a Midjourney prompt.

00:14:24.629 --> 00:14:33.586
But we can also put in an image, for example, a social media post where a person maybe wasn't at the event they're being depicted at.

00:14:33.586 --> 00:14:40.067
You could put that into ChatGPT and say, does this appear to be digitally altered?

00:14:40.067 --> 00:14:47.046
And it will at least analyze it for you and give you an idea as to whether it thinks it is.

00:14:47.046 --> 00:14:53.640
I think Hive Moderation is a better tool for that because it uses multiple models and cross-checks them.

00:14:53.640 --> 00:15:02.166
And then there are lots of other tools that can check for plagiarism, like Grammarly and QuillBot and so forth.

00:15:03.777 --> 00:15:13.288
But, yes, you can start by just going to any one of the AI chatbots, putting an image in and saying, hey, does this appear to have been altered at all?
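
As a concrete sketch of that "ask the chatbot" step, the snippet below sends an image to a multimodal model through OpenAI's Python SDK. The model name and prompt are illustrative, and the verdict should be treated as one signal among several, not proof.

    # Sketch: asking a multimodal model whether an image looks altered.
    # Assumes `pip install openai` and OPENAI_API_KEY in the environment;
    # the model name is illustrative and may change over time.
    import base64
    from openai import OpenAI

    client = OpenAI()

    with open("suspect.jpg", "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    prompt = (
        "Does this image appear digitally altered or AI-generated? "
        "Note any artifacts around edges, hands, teeth, and hair."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)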

00:15:14.230 --> 00:15:17.759
And some of these will actually go back and show you the history.

00:15:17.759 --> 00:15:29.389
So if you go to a tool called TinEye, which is T-I-N and then E-Y-E, it will actually show you all of the impressions of that image throughout history.

00:15:29.389 --> 00:15:44.926
So if it's something that is going viral in the moment and there are, let's say, 100 impressions of it out there, it will show you each one as it finds them, and then you can go back and trace back to the very first one.
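
TinEye's history view is also reachable from code through its commercial API. The sketch below is assumption-heavy: the endpoint, the x-api-key header, and the JSON field names follow TinEye's published documentation as we understand it, and sorting backlinks by crawl date is one way to approximate tracing back to the very first appearance.

    # Sketch: tracing an image's earliest crawled appearance via TinEye.
    # Endpoint, header, and field names are assumptions; confirm them
    # against TinEye's current API docs (a paid API key is required).
    import requests

    TINEYE_API_KEY = "YOUR_TINEYE_API_KEY"

    with open("suspect.jpg", "rb") as f:
        resp = requests.post(
            "https://api.tineye.com/rest/search/",
            headers={"x-api-key": TINEYE_API_KEY},
            files={"image_upload": f},
            timeout=60,
        )
    resp.raise_for_status()

    # Flatten every backlink, then sort by crawl date so the oldest
    # sighting (the likely origin) comes first.
    backlinks = [
        link
        for match in resp.json()["results"]["matches"]
        for link in match["backlinks"]
    ]
    for link in sorted(backlinks, key=lambda b: b["crawl_date"])[:10]:
        print(link["crawl_date"], link["url"])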

00:15:44.926 --> 00:16:01.785
And this is what helps us be sort of AI detectives: you want to try and find the origin of misinformation or disinformation and understand, is it being weaponized against you or the brand you represent, or is it part of a larger cause?

00:16:01.785 --> 00:16:15.395
And then you want to find out whether this is going back to humans or whether this is going back to a bot farm, where multiple, maybe a hundred, cell phones are all being controlled by one master computer.

00:16:15.657 --> 00:16:36.938
So it helps to go back, I think, and trace and then verify when you suspect digital deception. Well, let me just say right now, that is a spectacular amount of information that you've provided to our audience: you know, how to detect the fakes, if you will, or the bad players in terms of AI presentations.

00:16:37.279 --> 00:16:40.446
I hadn't heard this before and I was.

00:16:40.446 --> 00:16:41.587
I'm taking notes on this.

00:16:41.587 --> 00:16:58.052
I think it's huge information that our listeners can certainly benefit from, and let me say this now: I'm certainly going to be promoting this episode with some of my AI videos that I do produce for these things, to let folks know just how important this is.

00:16:58.072 --> 00:16:59.254
Absolutely.

00:16:59.254 --> 00:17:03.046
And you know there's a method that's taught in universities and used by journalists.

00:17:03.046 --> 00:17:06.804
It's called the SIFT method.

00:17:07.546 --> 00:17:16.778
Stop, investigate the source, find better coverage and trace claims to their original context.

00:17:16.778 --> 00:17:28.548
So if you suspect that there is some kind of altered article out there or whatever, you can really just take a minute and try and investigate that source and trace it back to its original context, especially if it's a cheapfake or an altered image.

00:17:28.548 --> 00:17:35.236
So the context and the image may be real, but part of it was altered.

00:17:35.236 --> 00:17:45.624
At least, if you can go back and find the original, real reference, then you can put them side by side and say this is AI-manipulated or this has been digitally altered.
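
SIFT is a human verification habit rather than an algorithm, but it can help to keep the four prompts in a reusable form. A tiny illustrative sketch, with wording paraphrased from the discussion above:

    # Sketch: the four SIFT prompts as a reusable interactive checklist.
    SIFT_STEPS = [
        ("Stop", "Pause before sharing. Is the content playing on urgency or outrage?"),
        ("Investigate the source", "Who first published this, and what is their track record?"),
        ("Find better coverage", "Do reputable outlets or fact-checkers report the same claim?"),
        ("Trace to original context", "Can you locate the first, unedited version to compare side by side?"),
    ]

    def run_sift() -> list[tuple[str, str]]:
        """Prompt the user through each SIFT step and collect their notes."""
        return [(name, input(f"[{name}] {question}\n> ")) for name, question in SIFT_STEPS]

    if __name__ == "__main__":
        for step, note in run_sift():
            print(f"{step}: {note}")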

00:17:47.036 --> 00:18:03.146
Now, Rebecca, I know you do quite a bit of speaking, but in terms of the topic, do you find, I guess the best question is, are most of your topics about the fake AIs, or just where do they land for your speaking engagements?

00:18:04.315 --> 00:18:10.641
No, most of the time I'm educating and providing awareness and demystifying AI.

00:18:10.721 --> 00:18:22.740
What is it, what is it not, and what can it do for us and what should we never do with it, and what are those best practices and guidelines for responsible and ethical use.

00:18:23.295 --> 00:18:32.008
But as part of that, being a public relations career professional myself, I talk to a lot of marketing communications groups.

00:18:32.008 --> 00:19:00.042
I talk to public relations groups, and part of what we talk about is the reputation management piece, which is the rise of deepfakes that are being weaponized against corporations for money and weaponized against people for money and to disparage them, perhaps out in the public, and so this topic comes into that discussion.

00:19:00.042 --> 00:19:03.288
But very often I'm talking about the fun stuff: how do we use it?

00:19:03.288 --> 00:19:07.816
How do we use it for content creation and ideation and SEO and so forth?

00:19:07.816 --> 00:19:16.921
But it's also very important to me to make sure I educate others on how to start spotting misinformation and disinformation.

00:19:16.921 --> 00:19:18.785
You have to practice.

00:19:18.785 --> 00:19:40.170
You have to start to understand how to recognize the signs when we suspect digital deception is at play, and I have some frameworks around that, so I do teach whole groups, whole teams about AI, deep fakes and reputation management.

00:19:41.036 --> 00:19:42.281
Well, let me say that right now.

00:19:42.281 --> 00:19:44.303
Hey, Rebecca hasn't just mentioned how to detect things.

00:19:44.303 --> 00:20:04.884
Let me say to our listeners right now that Rebecca has developed a free, I want to repeat, free, 20-page digital guide that includes links to AI tools, best practices and tips for spotting misinformation, and it's available free on her website at Seacoast, that's S-E-A,

00:20:04.884 --> 00:20:22.988
SeacoastAI.com. Wanted to make sure we got that in, because I was fascinated as you talked about how we can go about, or others can go about, detecting fake AI presentations, because that's going to be huge today and in the foreseeable future.

00:20:23.808 --> 00:20:36.483
Absolutely, and in fact, I'm building out a resource guide around this, because if you suspect digital deception and you have to make a public statement about it, how do you analyze the threat?

00:20:36.483 --> 00:20:41.415
And you can use AI to really make that process go much quicker as well.

00:20:41.415 --> 00:20:48.990
AI is excellent at analyzing situations and giving you guidance on next steps and so forth.

00:20:48.990 --> 00:21:00.640
I was training a group of amazing communicators this week and I showed them how to use Claude to create an artifact, and it was a decision tree based on a certain scenario:

00:21:00.640 --> 00:21:04.807
At what point should we consider making a public statement about this?

00:21:05.307 --> 00:21:13.142
And it steps us through sort of a yes, no, if it's going to cause us harm, then perhaps, yes, we should make a public statement, and so forth.

00:21:13.142 --> 00:21:17.665
And then I asked Claude AI to turn it into a little quizlet.

00:21:17.665 --> 00:21:39.950
And all of a sudden, now you have this little snippet of code that looks like a quiz and can step you through all the different scenarios until you get to the answer you need, which is: is it time to put out a public statement, or should we just create a holding statement and continue to research and investigate what is happening?
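
For readers who want to reproduce that exercise, here is a minimal sketch using Anthropic's Python SDK. The model name, scenario, and prompt wording are illustrative, not the exact ones used in the training session described above.

    # Sketch: asking Claude to draft a yes/no decision tree for a
    # suspected-deepfake scenario. Assumes `pip install anthropic` and
    # ANTHROPIC_API_KEY in the environment; model name is illustrative.
    import anthropic

    client = anthropic.Anthropic()

    scenario = (
        "A video is circulating that appears to show our CEO making "
        "remarks she never made. Reach is unknown; press have not called."
    )

    msg = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Act as a PR crisis advisor. From this scenario, build a "
                "yes/no decision tree that ends in one of: issue a public "
                "statement, prepare a holding statement, or monitor and "
                "investigate.\n\n" + scenario
            ),
        }],
    )
    print(msg.content[0].text)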

00:21:40.654 --> 00:21:44.967
Well, it seems to me that you're doing a lot, a lot of work for a lot of people to help them.

00:21:44.967 --> 00:21:54.730
One, become more efficient in the use of AI, but also, just as important, detecting the unsavory players that are out there.

00:21:56.778 --> 00:21:58.304
We have to improve our skills.

00:21:58.304 --> 00:22:09.789
And we saw a little bit of this, peter, earlier in the year, when the royal family was trying to provide Kate with some personal space.

00:22:09.789 --> 00:22:41.403
She was dealing with a personal matter and the world, you know, influencers and followers and interested people, were asking for an update, and unfortunately, the royals made a bit of a public relations mistake when they put out an altered photograph. And although the photograph in parts was real, that was the family, they were together, there were also parts of that photograph that were clearly altered, and that's what we consider misinformation.

00:22:41.403 --> 00:22:45.605
Right, the part of it is real, but somehow it's been altered.

00:22:45.605 --> 00:22:54.875
And I think that caused the world then to get into a bigger panic, because they felt like they were being deceived.

00:22:55.434 --> 00:23:07.845
And so when you look at breaking something like that down, I use a simple framework, an ABC framework, which is actors, behavior and content, and when I say content, I even mean context and contours.

00:23:07.845 --> 00:23:18.166
So you first want to look at what's happening with the person and the behavior, and you want to think about: does this seem far-fetched for this person?

00:23:18.166 --> 00:23:22.005
You know, is Tom Hanks really trying to promote a dental plan?

00:23:22.005 --> 00:23:26.563
No, Is Taylor Swift going to be giving away 3,000 sets of Le Creus?

00:23:26.563 --> 00:23:26.864
Was a.

00:23:26.864 --> 00:23:29.148
All you have to do is sign up and give your credit card.

00:23:29.148 --> 00:23:32.825
No, if she's gonna do any kinda promotion, she can do it.

00:23:32.825 --> 00:23:35.198
You know?

00:23:35.198 --> 00:23:40.586
It can be a big, coordinated promotion and it will be spectacular and fabulous.

00:23:40.686 --> 00:23:51.864
And that was not, and you could clearly see that the images were altered, the the deep fake was altered, but unfortunately people were scammed by it and gave their information.

00:23:51.864 --> 00:23:57.280
So when you're looking at what could be a scam, you have to think about the actors and the behavior.

00:23:57.280 --> 00:23:58.261
Does this make sense?

00:23:58.261 --> 00:24:01.865
Is there some kind of opportunity or intent to exploit or harm others?

00:24:01.865 --> 00:24:05.968
A big indicator is the sense of urgency.

00:24:05.968 --> 00:24:09.049
Call now, put in your credit card now.

00:24:09.049 --> 00:24:09.931
We need money now.

00:24:09.931 --> 00:24:18.576
Or someone might call and try and do a ransom play and you need to upload a Walmart card now.

00:24:18.576 --> 00:24:24.925
Right, that sense of urgency should be a big red flag for all of us.

00:24:24.945 --> 00:24:29.603
And then, when we look at the content, you want to think about not only the content, but the context and the contours.

00:24:29.603 --> 00:24:39.346
Very often AI, the avatar technology, the voice swapping and so forth they don't quite get all the edge details right.

00:24:39.346 --> 00:24:50.469
So I always encourage people to look around the contours of the image itself, around the edges, but also around the edges of people's faces: their nose, their mouth, their teeth, their hair, their hands.

00:24:50.469 --> 00:24:59.390
Those are areas that AI still, at times, struggles with and those can be very helpful indicators that something is amiss.

00:24:59.390 --> 00:25:02.555
So, ABC: actors, behavior and content.

00:25:03.059 --> 00:25:06.711
Well, Rebecca, you've provided some exceptional information for our listeners here today.

00:25:06.711 --> 00:25:10.049
Is there anything you think we have missed?

00:25:11.761 --> 00:25:28.135
I think that there is a lot to this, but I encourage communications professionals and public relations professionals: if you worry about possible weaponized AI deepfakes or misinformation or disinformation, it helps to start educating yourself.

00:25:28.135 --> 00:25:30.323
What are the differences between those things?

00:25:30.323 --> 00:25:34.299
What are the differences between deepfakes and cheapfakes, and trolls and bots?

00:25:34.299 --> 00:25:37.885
Because all of that matters. And how do you assess the impact?

00:25:37.885 --> 00:25:42.012
And then you better have your links ready.

00:25:42.012 --> 00:25:47.723
So, for example, where do we go to report and remove content?

00:25:47.723 --> 00:25:49.469
Right, there's Google takedown.

00:25:49.469 --> 00:25:50.932
There's Bing, there's Facebook.

00:25:50.932 --> 00:25:53.246
All the different platforms have takedown pages.

00:25:53.246 --> 00:25:55.771
Build a list of those and have those ready.
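
One way to "have those links ready" is a small, version-controlled lookup that the team maintains together. The sketch below uses placeholder URLs only; substitute each platform's current reporting or takedown page.

    # Sketch: a team-maintained directory of takedown/reporting pages.
    # Every URL below is a placeholder, not a real takedown link;
    # replace each with the platform's current page before relying on it.
    TAKEDOWN_PAGES: dict[str, str] = {
        "google": "https://example.com/google-takedown-placeholder",
        "bing": "https://example.com/bing-takedown-placeholder",
        "facebook": "https://example.com/facebook-takedown-placeholder",
    }

    def takedown_link(platform: str) -> str:
        """Return the reporting URL for a platform, with a helpful error."""
        key = platform.lower()
        if key not in TAKEDOWN_PAGES:
            raise KeyError(f"No takedown page on file for {platform!r}")
        return TAKEDOWN_PAGES[key]

    print(takedown_link("Google"))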

00:25:56.579 --> 00:26:02.532
We need to remember that we might have to refer to law enforcement if we're at a school and this deals with minors.

00:26:02.532 --> 00:26:09.366
Right, we want to make sure we have a good monitoring tool in place, whether that's something like, you know, Sprout Social or whatever.

00:26:09.366 --> 00:26:10.611
You want to make sure.

00:26:10.611 --> 00:26:18.201
Then you can continue to monitor, and then you have to really think about your response and what that might look like.

00:26:18.201 --> 00:26:39.498
So there's a lot to it and I enjoy educating others and really breaking it down, because, if you think about it, all those platforms they reward engagement and in some cases you do want to respond, but in other cases you don't want to fuel that engagement because it's only going to make the misinformation or disinformation go even further in those platforms.

00:26:39.577 --> 00:26:43.880
There's a lot to it. Well, let me say that you have provided an awful lot of information.

00:26:43.880 --> 00:26:48.431
Now, can our listeners get in touch with you by way of your website?

00:26:49.373 --> 00:26:52.327
Yes, there's a contact form right on my website.

00:26:52.327 --> 00:26:54.048
It's SeacoastAI.com.

00:26:54.409 --> 00:27:06.930
Okay, let me repeat that again, and write this down, listeners, because I think there's a lot of information here that you might want to get back to. Her website again is Seacoast, that's S-E-A: SeacoastAI.com.

00:27:06.930 --> 00:27:23.528
My guest today has been Rebecca Emery, APR, with Seacoast AI, who has provided some exceptional information on AI, legitimate and illegitimate, today that I think we can all benefit from.

00:27:25.162 --> 00:27:30.825
So, Rebecca, I am so happy we've had a chance to have this conversation and I'm sure all listeners are going to benefit from it.

00:27:31.467 --> 00:27:32.049
Me too, Peter.

00:27:32.049 --> 00:27:51.752
I really appreciate it, and I just encourage everyone to just increase your awareness about AI, be on the lookout for digital deception and know what to do when that starts to rear its ugly head, because, unfortunately, AI deepfakes are on the rise.

00:27:51.752 --> 00:28:00.548
AI technology is here to stay, and so this is part of the new reality for us as communicators and PR professionals.

00:28:02.231 --> 00:28:02.980
Well, good, thank you.

00:28:02.980 --> 00:28:17.047
So very much for taking time to be on the Public Relations Review Podcast. And you know, based on what we've talked about today, I think we might have to find another way to get you on this show about some other related topic in the near future, if you'll be willing to come on and share it with us.

00:28:18.621 --> 00:28:20.126
Happy to do that, Peter, anytime.

00:28:20.126 --> 00:28:45.555
It's always a pleasure to talk with you and, uh, I think that there's so much that's happening on this, and AI is evolving so rapidly before our eyes, that this is going to be an ongoing discussion point for sure. Great. Well, as I said, my guest today has been Rebecca Emery, APR, who heads Seacoast AI, and you can get to her at SeacoastAI.com.

00:28:46.579 --> 00:28:49.630
Rebecca, thank you again so much, and to my listeners, thank you.

00:28:49.630 --> 00:28:54.722
We certainly would like to get a review from you and, of course, share this episode with all of your friends.

00:28:54.722 --> 00:29:05.738
And be sure to tune in to the next edition of the Public Relations Review Podcast.

00:29:06.039 --> 00:29:14.167
This podcast is produced by Communication Strategies, an award-winning public relations and public affairs firm headquartered in Nashville, Tennessee.

00:29:14.167 --> 00:29:15.400
Thank you.