APPLE ranks this podcast among the "Top 1% of podcasts worldwide"
May 6, 2024

PR Professional's Guide to Ethical AI Usage and Client Trust


Embarking on an ethical odyssey, host Peter Woolfolk engages with PRSA board members Michelle Egan and Dr. Cayce Myers to examine the delicate interplay between AI advancements and moral directives in public relations. Prepare to navigate the terrain of AI tools, such as ChatGPT and AI avatars, with a compass set firmly on maintaining transparency and verifying AI-generated content. Their dialogue ventures into the realm of responsibility as we tackle the thorny issues of AI-driven misinformation and the frameworks necessary to empower PR professionals to wield these powerful tools without losing their ethical bearings.

Venturing further, we confront the hidden crevices of AI privacy and security concerns, discussing the nuances of navigating client confidentiality against the backdrop of fierce industry competition. Through our exploration, we illuminate the varying degrees of openness in AI platforms and the potential hazards they present, from the exposure of personal details to the safeguarding of proprietary data. We share resources from prsa.org and advocate for hands-on experience with AI to fortify one's understanding of its capabilities—all while thanking our insightful guests, Michelle Egan and Cayce Myers, for their enlightening contributions to this pivotal discussion.

We proudly announce this podcast is now available on Amazon ALEXA.  Simply say: "ALEXA, play Public Relations Review Podcast" to hear the latest episode.  To see a list of ALL our episodes, go to our podcast website: www.publicrelationsreviewpodcast.com, or go to Apple Podcasts and search "Public Relations Review Podcast."  Thank you for listening.  Please subscribe and leave a review.


Support the Show.

Chapters

01:10 - Ethical AI in Public Relations

05:46 - Ethical Considerations of AI Use

21:26 - AI Privacy and Security Concerns

Transcript

WEBVTT

00:00:03.725 --> 00:00:04.426
Welcome.

00:00:04.426 --> 00:00:18.530
This is the Public Relations Review Podcast, a program to discuss the many facets of public relations with seasoned professionals, educators, authors and others.

00:00:18.530 --> 00:00:22.207
Now here is your host, Peter Woolfolk.

00:00:23.170 --> 00:00:28.826
Welcome to the Public Relations Review Podcast and to our listeners all across America and around the world.

00:00:28.826 --> 00:00:39.646
This podcast is now ranked by Apple as being among the top 1% of podcasts worldwide, so thank you to all of our guests and listeners for making this happen.

00:00:39.646 --> 00:00:45.750
Now, a question: artificial intelligence has now become an integral component of public relations.

00:00:45.750 --> 00:00:50.350
There's text-to-speech, ChatGPT, Gemini, avatars and video.

00:00:50.350 --> 00:00:51.332
Those are just a few.

00:00:51.332 --> 00:00:56.392
Are there rules and regulations we should be aware of when using this technology?

00:00:56.392 --> 00:01:01.131
Well, my guests today say, under some circumstances, that answer is yes.

00:01:01.131 --> 00:01:15.254
First there's Michelle Egan, APR and Fellow PRSA, and she joins us from Anchorage, Alaska, and there's Dr. Cayce Myers, APR and JD, and he joins us from Virginia Tech in Blacksburg, Virginia.

00:01:15.254 --> 00:01:17.045
Welcome to the podcast.

00:01:17.828 --> 00:01:19.444
Thank you, so excited to be here.

00:01:19.444 --> 00:01:21.400
Okay, yes, thank you Peter.

00:01:21.781 --> 00:01:27.552
Well, let me just say this now that I certainly use artificial intelligence and certain products here in my podcast.

00:01:27.552 --> 00:01:39.052
It produces my transcripts for me, it writes blogs, it even gives me five titles for the podcast that I can choose to use or change or whatever.

00:01:39.052 --> 00:01:48.742
And I've also used some text-to-speech on several episodes.

00:01:48.742 --> 00:01:49.283
And, yes, I do.

00:01:49.283 --> 00:01:51.165
I use the AI avatar videos to promote each individual episode.

00:01:51.165 --> 00:01:56.900
So, basically, what are the concerns that PRSA has with AI in public relations?

00:01:58.043 --> 00:02:00.846
Well, I'm glad you had us on to talk about this issue.

00:02:00.846 --> 00:02:19.633
Peter. Cayce and I are both members of the board of PRSA, and last year I was chair and Cayce was the liaison to our Board of Ethics and Professional Standards, and that group, BEPS we call it, worked on some guidance for professionals who are using AI.

00:02:19.633 --> 00:02:44.497
We're excited about it, we love the creativity of it and we want to make sure that people are paying attention to the ethical guidance that comes from our code of ethics and that really touches on things like disclosure about the use of AI, doing your own research, making sure that you avoid any conflicts of interest, that you're verifying what you're using.

00:02:44.497 --> 00:02:51.731
So all those things come into a guidance document that Cayce worked on and we released last year.

00:02:51.731 --> 00:02:54.885
That's available to everyone, member or not.

00:02:54.885 --> 00:03:05.813
So when you're using AI, experimenting with it, employing it in your business, you have some guidance about what to look out for and how to use it in the most effective way.

00:03:06.560 --> 00:03:18.957
One of the things that I noticed last year when I was speaking as chair of PRSA is, early in the year, we would ask, you know, how many of you are using ChatGPT or a similar tool?

00:03:19.058 --> 00:03:47.639
And you'd just get, like, two or three real reluctant hands go up in the audience, and then by the end of the year you'd get quite a few. And people would say to me, it kind of feels like cheating, right? That's what they would say about using a tool like ChatGPT or any of the others that we talk about, and giving people a framework helps them to use the tools and then not feel so much like maybe it's inappropriate.

00:03:48.320 --> 00:04:03.402
The other thing that I think is really important that people could use as a resource is a recent publication that PRSA did on mis- and disinformation, and that is such a huge issue in our society.

00:04:03.402 --> 00:04:29.815
In fact, the 2024 World Economic Forum Global Risks Report puts mis- and disinformation at the very top of global risks, above climate change, polarization, all the other things that you typically think of, and that's because, when it's fueled by AI, the propensity for mis- and disinformation to proliferate at scale is really great.

00:04:29.815 --> 00:04:38.990
So we've got a couple things out there from PRSA, I think that are interesting to members and non-members and also help kind of provide some guidance.

00:04:39.500 --> 00:04:46.494
But I'm going to let Cayce talk a little bit more about this, because he's got some real deep expertise in this area.

00:04:47.019 --> 00:04:53.540
Let me just say this because, in terms of disclosure, I guess maybe we might need some help there.

00:04:53.540 --> 00:04:56.187
What sort of disclosure are we talking about?

00:04:56.187 --> 00:04:59.293
So, for instance, I use ChatGPT.

00:04:59.293 --> 00:05:21.932
For instance, I might say something like: give me a brief outline for identifying A, B, C and D, whatever those elements are, and it will provide me that. I will then take a look at it and make any adjustments and modifications that I need to help me be more precise about what it is I want to say.

00:05:21.932 --> 00:05:28.428
So sometimes I use it as a guideline rather than me trying to think of everything, and sometimes it gives me some ideas I hadn't thought about.

00:05:28.428 --> 00:05:32.550
So I guess we might need some help on what disclosure is.

00:05:32.550 --> 00:05:34.185
I mean disinformation.

00:05:34.185 --> 00:05:35.329
I certainly understand that.

00:05:35.329 --> 00:05:39.632
So that's one thing I would have some questions about.

00:05:39.632 --> 00:05:44.651
How does one look at the use of it in that fashion?

00:05:48.379 --> 00:05:58.180
There's not a law that's going to mandate a certain level of disclosure.

00:05:58.180 --> 00:06:12.384
Now, that may come and that may be something that you may see in a kind of work product, particularly around visuals, where there is maybe watermarking that comes on visual AI content.

00:06:12.870 --> 00:06:23.081
There are some disclosures that are mandated in the use of AI for communications, for political advertising.

00:06:23.081 --> 00:06:30.264
We've seen that kind of concern about disinformation in the 2024 presidential campaign year.

00:06:30.264 --> 00:06:32.677
Globally there's a concern around that.

00:06:32.677 --> 00:06:35.980
So the disclosure there is mandated.

00:06:35.980 --> 00:06:41.242
But in the day-to-day operations of public relations that is going to be an individualized decision.

00:06:41.370 --> 00:06:46.002
Now there are people that will say you should disclose because that's transparent.

00:06:46.550 --> 00:06:56.360
There's transparency in disclosure, and the insidious nature of disinformation is that people can't tell what's real and what's fake, and you have to be honest with your audiences.

00:06:56.360 --> 00:06:59.944
But there are others who will say AI is a tool.

00:06:59.944 --> 00:07:01.387
I use it as a tool.

00:07:01.387 --> 00:07:06.512
I don't use it as a substitute for my own work product.

00:07:06.512 --> 00:07:09.942
I use it as a tool to enhance my work product, to help me complete tasks.

00:07:09.942 --> 00:07:18.492
So, just like what you were mentioning menial tasks like creating a check sheet, brainstorming, et cetera would you necessarily disclose that?

00:07:18.492 --> 00:07:26.112
That's been part of your process and I think that's a very individualized decision that has to be made by practitioners.

00:07:26.112 --> 00:07:33.475
But it is something that, increasingly, is the number one question that I get: what do we do with AI and disclosure?

00:07:33.475 --> 00:07:45.444
And I think that ultimately, what's going to happen is that we as an industry, as a public relations industry, have a lot of power right now because we don't have a legal mandate on disclosure in most circumstances.

00:07:45.829 --> 00:07:53.204
We have to make that decision for ourselves as professionals, and I think that there's a lot of things to weigh in that decision.

00:07:54.577 --> 00:07:55.189
Well, it's interesting.

00:07:59.098 --> 00:08:00.843
Cayce, I think you've probably heard this as well.

00:08:00.843 --> 00:08:18.199
I've talked to people who work in public relations agencies, and one of the things they're doing is putting in their contracts a general disclosure that we may occasionally use AI in working on a product, and that's one way people are addressing it.

00:08:18.199 --> 00:08:42.825
But Cayce's right: as with any other ethics issue, there's a lot of personal latitude, and what you described again is making your work better, or giving you the space to use your brain for more powerful things than creating a checklist or doing a small amount of research, and so it's a little bit different than a misinformation-type campaign.

00:08:43.571 --> 00:08:48.710
Well, you know, one of the other things that Cayce just mentioned is the little mark that goes on the avatars.

00:08:49.191 --> 00:09:01.320
I use the free version and that comes with it, so you know it's only something about a minute, a minute and a half, to say you know, here's what we're going to be covering in our next episode, that sort of thing.

00:09:01.320 --> 00:09:05.779
And the little trademark, or whatever it is, is down in the lower corner.

00:09:05.779 --> 00:09:06.822
That does not come up.

00:09:06.822 --> 00:09:16.923
So that's really just advertising the fact that we're going to have this podcast episode and this is what we're going to be talking about, that sort of thing.

00:09:16.923 --> 00:09:29.840
And I can certainly see, and as a matter of fact it has been shown on television, where some very prominent people have been the subject of misinformation, because it did look like it was them, but they were saying words that they never uttered.

00:09:29.840 --> 00:09:40.202
So I can see where that can cause and will cause a huge, massive amount of problems, because that is misinformation at the highest levels that we do not need to have.

00:09:41.071 --> 00:09:43.317
Let me just jump in here real quick about that.

00:09:43.317 --> 00:09:49.852
The issue really is that we as PR practitioners want to be ethical.

00:09:49.852 --> 00:09:51.035
We want to do the right thing.

00:09:51.035 --> 00:09:58.666
The disinformation that's out there, those people will never disclose, if they can get away with it, because they're bad actors.

00:09:58.666 --> 00:10:02.495
They want to produce content that is meant to deceive.

00:10:02.495 --> 00:10:04.780
You take, for instance, voice cloning.

00:10:04.780 --> 00:10:10.080
There are thousands of scams that use voice cloning to get people to send other people money.

00:10:10.080 --> 00:10:11.972
It sounds like your daughter's calling you.

00:10:11.972 --> 00:10:14.960
She's been kidnapped, you need to pay a ransom or something.

00:10:14.960 --> 00:10:18.855
It sounds like her because they only need a few seconds of audio to voice clone.

00:10:18.855 --> 00:10:22.322
And so those folks aren't going to disclose because they're bad actors.

00:10:22.743 --> 00:10:26.336
Now, we in the PR industry aren't in the disinformation business.

00:10:26.336 --> 00:10:33.778
We're in the transparent communication business and we want to uphold our professional ethics at the highest level.

00:10:33.778 --> 00:10:39.096
But it does beg a question of what level of disclosure is required.

00:10:39.096 --> 00:10:47.821
So, for instance, let's say, a lot of folks they're using AI to edit, they're using AI to kind of do what normally Photoshop would maybe do for a picture.

00:10:47.821 --> 00:10:52.000
Does that need to be disclosed to the public?

00:10:52.000 --> 00:11:02.442
You could give a general disclosure, but then again, we don't disclose a lot of things that we use, like, for instance, if you use Grammarly or Spellcheck, that's not disclosed.

00:11:02.442 --> 00:11:08.423
If you use a template that's already preexisting in Microsoft, you don't necessarily disclose that.

00:11:08.423 --> 00:11:15.844
So there's a counterpoint to it of, like, well, how small does the use of AI have to be before you don't disclose?

00:11:15.844 --> 00:11:18.558
And I think that's something that's going to be very individualized.

00:11:23.370 --> 00:11:24.914
I think that the industry doesn't have an answer for that quite yet.

00:11:24.914 --> 00:11:26.399
Would the response to that, or an answer to that, have to be?

00:11:26.399 --> 00:11:34.477
You know, how much does it impact someone else making a decision if they know that you did it, as compared to AI having done it?

00:11:34.477 --> 00:11:49.504
If you have to make a decision on whether or not to accept what they are producing for you, I would think it would be almost imperative to let you know whether they used or did not use AI.

00:11:50.792 --> 00:11:52.859
I think that's a great point.

00:11:52.859 --> 00:11:57.904
I think that's 100% correct. And you know, just to give you an example with these deep fakes.

00:11:57.904 --> 00:12:00.919
It used to be you'd hear the saying seeing is believing.

00:12:00.919 --> 00:12:06.136
Well, now you can't believe what you see, right, you've got to individually check it.

00:12:06.136 --> 00:12:28.024
So I think, at the point that you're creating different realities for other people and informing their perception of the world, that goes beyond the use of AI for just sort of functionary tasks; you're using AI to actually create dialogue within society, and that may have a huge resonance.

00:12:28.024 --> 00:12:29.032
So absolutely.

00:12:29.533 --> 00:12:36.354
And our guidance suggests that you are responsible for the information that you disseminate.

00:12:36.354 --> 00:12:46.102
So you're responsible to validate that it's accurate, to make sure that the sources are checked, that those sources are disclosed wherever possible.

00:12:46.102 --> 00:12:56.662
So you know, it really is part of the guidance as well to say, at the end of the day, use these tools and you're still responsible for the information that you're sharing.

00:12:57.854 --> 00:13:22.357
I guess my question from that is this: is it more imperative to be forthright about whether you did or did not use AI if it is being used to help someone make a decision, particularly if they're paying you for that? If someone is making a decision based on what you produce using some form of artificial intelligence, it should be imperative that it be revealed that AI was used in the development process.

00:13:22.357 --> 00:13:26.394
Is that close to what it is that we're trying to get done here?

00:13:27.570 --> 00:13:30.783
I think that's an interesting way to frame it.

00:13:32.307 --> 00:13:40.326
You are getting at the point of what is the tool being used for, what is the information being used for?

00:13:40.326 --> 00:13:43.913
So I think that would be a good guideline, Cayce.

00:13:47.163 --> 00:14:00.697
I think what you're getting at there is whether or not the tool is used in a way that is going to have massive impact on the person receiving the content.

00:14:00.697 --> 00:14:09.408
And if you're going to receive content and the tool is used in a way that's going to have impact and shape their opinion, then you should disclose that.

00:14:09.408 --> 00:14:31.830
I would go a step further and say that also, when you are using AI to process information, you have to be very careful that you're not overly reliant on that AI, because AI is a tool, right, just like spellcheck is a tool, just like the Internet is a tool.

00:14:31.830 --> 00:14:34.288
They have a lot of things that they get right.

00:14:34.288 --> 00:14:40.509
They have a lot of things they get wrong, and so that gets to the kind of larger question.

00:14:40.528 --> 00:14:41.594
A lot of PR practitioners will ask me out on the road.

00:14:41.594 --> 00:14:44.202
They'll say, well, you know, is this going to take our job?

00:14:44.202 --> 00:14:51.788
Well, if you're operating your job in a way where AI can just do it, then maybe you're very replaceable, right?

00:14:51.788 --> 00:15:13.610
But if you're operating your job where you know you bring a lot of talent, a lot of insight, a lot of knowledge, you're able to strategize, and AI is just part of your toolkit, then I don't think that AI can replace that person, because that person's got value added by what they know and what they can do, because an AI brain and a human brain work totally differently.

00:15:17.159 --> 00:15:28.808
And so we bring value as an industry when we bring ourselves into that conversation and ourselves into our work product to ensure that it's going to be something that really is honest, transparent, forthright and also effective.

00:15:30.143 --> 00:15:32.654
You know what I was just at a financial services.

00:15:32.654 --> 00:15:33.639
Go ahead.

00:15:34.000 --> 00:15:35.989
I was just at a financial services conference.

00:15:35.989 --> 00:15:41.832
Yeah, at this conference I was just at, one of the speakers was, of course, speaking about AI.

00:15:41.832 --> 00:16:02.903
She was a former Google decision architect and she used this great metaphor imagine you have a thousand-page book and you've read the book and so you understand the storyline, all the research that's in it, all the context of what's been written, and then you are provided with a one page summary.

00:16:02.903 --> 00:16:06.352
So the AI is creating a one page summary.

00:16:06.352 --> 00:16:13.691
That one page summary cannot capture all of the context, all of the background, all of the information that's in there.

00:16:13.691 --> 00:16:14.412
Can it be helpful?

00:16:14.412 --> 00:16:15.153
Absolutely.

00:16:15.153 --> 00:16:29.471
But the PR practitioner, or whoever the responsible user of AI is, has consumed the book right and can convey the context and all of the information that goes into the one-page summary.

00:16:29.471 --> 00:16:33.029
So I think that's a powerful way to think about our role.

00:16:34.240 --> 00:16:56.380
You know, the other thing I think about as I listen to that is that, you know, we have experience in a lot of different things and sometimes, as we're putting together a project or trying to resolve some issues, our experience kicks in to say, hey, you know, based on my experience, I think we should do A, B, C and D.

00:16:57.143 --> 00:17:05.646
Well, a lot of times I would say perhaps AI does not have the same experience and it can't make that sort of contribution, you know, based on what information has been put into it to respond with.

00:17:05.646 --> 00:17:15.532
So I think what you just said earlier about having a thousand page book and a one-page description helps answer that question.

00:17:18.385 --> 00:17:25.112
What I tell a lot of folks when we're talking about AI is that AI is based on an algorithm with data points that are entered.

00:17:25.112 --> 00:17:32.205
AI is only going to be limited in its response based on those data points and on its algorithm.

00:17:32.205 --> 00:17:36.386
That's why it's so important to have a good AI platform, because it can be very biased.

00:17:36.386 --> 00:17:45.387
But as a human being, we have experiences, we have thoughts, we have identity, we have engagement with other people.

00:17:45.387 --> 00:17:50.667
We may go get a cup of coffee, we may go get drinks after work, we may go chat with somebody in the hallway.

00:17:50.667 --> 00:17:58.208
We've worked for, however, many years in our business and that provides our foundation for our decision-making.

00:17:58.208 --> 00:17:59.891
We also have a gut check.

00:17:59.891 --> 00:18:01.453
AI doesn't have that.

00:18:01.453 --> 00:18:04.886
It only has the algorithm and the data which it is going to crawl.

00:18:04.886 --> 00:18:07.212
So you know our intuition.

00:18:07.212 --> 00:18:19.690
That is how a lot of decisions are made and subsequently, studies show that intuition and experience and being able to make decisions like that very fast is typically the way to make a right decision.

00:18:19.900 --> 00:18:22.067
So, you can't discount the human being.

00:18:22.067 --> 00:18:24.930
That always gets me with folks saying, oh, AI will take over.

00:18:24.930 --> 00:18:29.750
You can't discount the human because the human being brings so much more to the table.

00:18:30.780 --> 00:18:38.128
Well, and that's the very point that I was trying to make, you know: based on, as I call it, experience, what has worked and what might not work.

00:18:38.128 --> 00:18:43.328
Because I've actually been through it, so I know the answer to that particular question.

00:18:44.180 --> 00:18:58.145
Yeah, I was just going to ask Cayce to share what some of the top issues are that he hears about, because he's out speaking and in a university and very much engaged, you know, with lots of folks who have questions about AI.

00:18:58.446 --> 00:19:02.682
I know there are other issues besides the disclosure.

00:19:03.925 --> 00:19:10.943
One of them, I'll say from my perspective in my professional practice, is safeguarding confidences.

00:19:10.943 --> 00:19:15.530
So I cannot I work for an oil and gas company.

00:19:15.530 --> 00:19:41.848
I cannot put my company's information into a tool and ask it to summarize something for me or, you know, create new information unless I am absolutely sure that that information is not going to be shared, you know, beyond the company, and I don't have anything that provides me that assurance now, and so I'm not able to use AI.

00:19:41.848 --> 00:19:58.432
If you think about ChatGPT or one of the other tools, like Microsoft's Copilot, I have to have the assurance that the information I'm sharing is not going to create a threat for the company in terms of, like, a cybersecurity threat or an operational threat.

00:19:58.432 --> 00:20:18.289
So I know that's one big issue that I face that is addressed in our PRSA guidance: just to make sure you understand that you have a responsibility, whether it's your employer or your client, to protect their information, and so, you know, you have to be aware of where you're putting the information and what's being done with it.

00:20:19.471 --> 00:20:24.666
So I tell people this you know you think about AI privacy like a door.

00:20:24.666 --> 00:20:32.469
You know the door can be shut and it can be in various stages of open right and it can be fully open.

00:20:32.469 --> 00:20:42.204
And so you have to understand the platform, particularly in generative AI when you're inputting data, if that data is going to be absorbed into the platform itself.

00:20:42.204 --> 00:20:53.142
So, for instance, let's say that I was going to create some sort of payroll structure using AI and I put everybody's social security number on my AI query and had them organize it.

00:20:53.142 --> 00:20:56.844
You know that could absorb those numbers, absorb that identifying information.

00:20:56.884 --> 00:21:15.171
That's one of the reasons why a lot of hospitals and a lot of medical PR people are more reluctant, I think, to use AI, because of HIPAA concerns, and so a lot of folks have turned toward these proprietary platforms, because they want the security of their information not being taken and of not having privacy violations.

00:21:15.171 --> 00:21:29.407
There's also a competitive aspect to it, whereas if I put in a query and it gets me an output based on my query, that output could be available to a competitor if they put in a similar query, and so therefore, I kind of lose a competitive edge.

00:21:29.407 --> 00:21:31.939
So privacy is a big thing within AI.

00:21:31.939 --> 00:21:36.790
You have to be deliberative, and I think that we right now are not in that space yet.

00:21:36.790 --> 00:21:42.515
We know it's important but we haven't talked about that open versus closed system AI.

00:21:42.515 --> 00:21:59.094
I think, enough in the industry to really understand what that means in terms of just safeguarding our clients, safeguarding the people we're communicating with and their information, and certainly the proprietary information of a company if you're in-house. Very important.

00:22:00.942 --> 00:22:02.848
Well, this has been a very interesting conversation.

00:22:02.848 --> 00:22:05.789
Is there anything that we've actually missed in this discussion?

00:22:06.839 --> 00:22:10.310
It's been a good overview of the issues.

00:22:10.310 --> 00:22:15.134
I would definitely encourage people to go to prsa.org.

00:22:15.134 --> 00:22:19.031
We have some very thought-provoking information there about AI.

00:22:19.031 --> 00:22:31.991
There's a flash page with lots of content and access to webinars and whatnot, and to this guidance, and much of it is available to anyone, so happy to be a resource on this.

00:22:31.991 --> 00:22:36.471
You know, have the organization be a resource on AI and on this information.

00:22:36.471 --> 00:22:38.827
Well, I'm actually very happy that you said that.

00:22:39.480 --> 00:22:44.972
That's one of the things I did want, to have you let us know about the information and materials that are available.

00:22:44.972 --> 00:22:46.906
I'm sorry, Cayce, pick up before you go.

00:22:47.359 --> 00:22:53.328
I was just going to say to your listeners that AI can be daunting and there's a lot that is going to change.

00:22:53.569 --> 00:22:57.375
You know, if we came back a year from now and had this conversation, it would be a different conversation.

00:22:57.375 --> 00:23:17.234
I mean, if we came back a month from now, it may be a different conversation because it's rapidly evolving technology, but for those in PR that are looking to get into this conversation and want to use AI, what they need to do is they need to just try it out low stakes, get on a free platform, just see what its capabilities are, and it'll give you a better sense.

00:23:17.234 --> 00:23:22.099
It's like learning how to drive a car from reading a book versus getting in there and putting the keys in the ignition.

00:23:22.099 --> 00:23:24.547
You put the keys in the ignition and you get it on the road.

00:23:24.547 --> 00:23:25.509
You'll learn a lot more.

00:23:25.509 --> 00:23:28.148
So I think there's a lot of positives for AI.

00:23:28.148 --> 00:23:41.310
There's a lot of positives for the PR industry with it, and I think that we ultimately can do a lot more meaningful and better work with it, and so I welcome it as an opportunity for us.

00:23:42.040 --> 00:23:49.049
Well, let me say that I thank both of you for having been guests on our show today, because you've really given me a lot to think about.

00:23:49.049 --> 00:23:55.890
There are some things that I had not thought about when it comes to the use of AI, and perhaps the same might be for our audience.

00:23:55.890 --> 00:24:06.227
So I want to say thank you to both of you, Michelle Egan up there in Anchorage, Alaska, and Cayce Myers in Blacksburg, Virginia, for being guests on our show today.

00:24:06.227 --> 00:24:10.506
Any closing remarks that maybe you forgot and you'd like to make now?

00:24:11.402 --> 00:24:12.799
Oh, I just want to say thank you, Peter.

00:24:12.799 --> 00:24:18.186
This has been fantastic and I'm really glad to be able to engage with your listeners in this way.

00:24:18.688 --> 00:24:19.449
Yes, thank you, Peter.

00:24:19.449 --> 00:24:23.065
Happy to talk and really enjoyed our conversation.

00:24:23.326 --> 00:24:30.132
Well, let me say thank you, because I think that you have brought some information that perhaps a lot of our listeners might not have been aware of.

00:24:30.132 --> 00:24:32.243
It has enlightened them quite a bit.

00:24:32.243 --> 00:24:33.888
I certainly learned a bit myself.

00:24:33.888 --> 00:24:41.962
So I want to say, as I said once before, thank you so much to Michelle and Cayce for being our guests today, and to my listeners.

00:24:41.962 --> 00:24:43.267
Certainly, if you've enjoyed the show.

00:24:43.267 --> 00:24:48.067
We'd like to get a review from you and, of course, let me say that we have a brand new spiffy newsletter.

00:24:48.067 --> 00:24:54.490
You can get directly to it at www.publicrelationsreviewpodcast.com.

00:24:54.490 --> 00:25:01.928
And also always let your friends know that you were listening and please join us for the next edition of the Public Relations Review Podcast.

00:25:06.342 --> 00:25:16.662
This podcast is produced by Communication Strategies, an award-winning public relations and public affairs firm headquartered in Nashville, Tennessee.

00:25:16.662 --> 00:25:18.588
Thank you for joining us.