
AI and the Election
Clip: Season 7 Episode 7 | 8m 9s | Video has Closed Captions
AI has a presence this election season. We look at the positives and negatives this technology has on the presidential race.
Nevada Week is a local public television program presented by Vegas PBS

Artificial intelligence and its role in the race for President...
Specifically, what impact deepfakes, or fake images and videos, are having on voters... For that, we spoke with Tech Ethicist and AI Ethics Consultant, Reid Blackman, who we first met last year at the artificial intelligence conference, Ai4.
When we had you on the show around this same time last year, we asked you to explain to our viewers what deepfakes are.
Fast forward a year, how would you characterize the level of knowledge that the public now has about deepfakes?
(Reid Blackman) They probably don't use the term "deepfake" a lot, but they're more familiar than they were a year ago about the presence of fake videos.
The extent to which they can identify when the video is fake, that's a separate question.
-Well, let's take up that question, because we asked you that a year ago.
How able are Americans at this moment to distinguish between what's real and what's fake?
You said, Not really.
-I don't think that they are any better at it.
In their defense, the videos are getting better except when the content is really outlandish.
I do think that there's some degree of growing skepticism, such that people need to-- they've come to now ask, Hey, is this real or AI?
So I think that kind of question, Is this real or AI, that's probably increased in frequency among the general population.
But for the most part, I don't think people have a good grip on it.
-Talk about some of the examples that have stood out to you in this last year where AI has been used to give out false information.
-I don't think that there's been a sort of, you know, let's say Cambridge Analytica-style deepfake situation.
The problem is not, I don't think, that there's going to be this deepfake that goes viral, has this massive influence, and changes the election.
I don't think that's the way it works.
It's more the aggregate of disinformation.
It's the constant feed of fake news, disinformation, and misinformation that I think will cause the change.
So, you know, I don't think that we should be looking out for the avalanche that just destroys everything.
I think it's more erosion that takes place over days, months, years, as opposed to the big, traumatic event.
-So this continued exposure to false information?
-Yeah.
It's eroding the information environment.
It's making the environment such that, increasingly, we can't trust what we see, know whether it's true, or know where it's coming from.
That's, that's the larger problem, as opposed to, you know, again, the big video.
I don't think most people will see some video of, let's say, Kamala Harris doing something inappropriate or saying something inappropriate or something along those lines, and say, Well, that's it.
I'm not gonna vote for her now.
I don't think that's the real problem.
It's more a constant drip of fake news that gives a feeling or an aura of, Something's not right about this person.
That's gonna lead to more influence than the traumatic, Oh, my God, look at this fake video.
-Will you also explain what role emotion plays in all of this.
-Yeah, so there's this growing thought that what we need to do is watermark the deepfakes so that people know they're looking at something fake.
I don't think that's-- that's not a silver bullet.
And one reason is because what we're talking about here is emotional reactions to the videos.
The videos don't need to get you to believe something new.
I don't have to believe that Kamala Harris said that thing or something along those lines.
If I feel unease now, or if I just feel, Yeah, I don't think that's true, but I see where it's coming from, then it's not hitting me at the belief level.
It's hitting me at the emotion level, and a lot of our decisions are emotionally driven.
Do I want to have a beer with that guy?
That was famous around George Bush's time.
This is a guy you want to have a beer with, and that's why I'm going to vote for him.
So it might not be the case that the deepfakes influence our beliefs directly.
It might be they cause us to have a certain emotion, especially emotions of, say, disgust, contempt, or just discomfort.
And that drives the formation of beliefs or behavior down the road.
-So even if someone looks at it, knows that it's not real but has an emotional reaction to it, there could be consequences?
-Yeah.
We already know that fiction can generate large emotional responses.
You watch fictional movies all the time, you read fictional books, and you have big reactions: you cry, you laugh at all those things.
And the same thing is true if we watch deepfakes.
We know it's fake, just like we know the movie is fake, that the person didn't really die, but we have that emotional reaction.
And then there's this complicated psychological issue about whether we can sufficiently isolate those emotional reactions from our more rational reactions and reflections when thinking about, say, who we should vote for.
And there's good reason to think that we can't isolate them in that way.
We can't just put those emotions to the side and be rational; even despite our efforts, they might affect how we deliberate and how we think.
-Remind me of the AI example that we talked about last year.
It involved the Pope, and it was an image of him wearing-- -Yeah.
It was a Balenciaga, a white puffer coat, yeah.
-Okay.
Well, fast forward, and just about a week or two ago, we have the Kamala Harris audio that was put over some video of her.
Will you explain what that looked like and what it sounded like?
How realistic do you think it was?
And this was something that was retweeted by Elon Musk, who did not acknowledge that it was a parody until, I believe, a couple days later.
-Well, one question is whether it was a parody or whether it was really meant to do some undermining.
So there's a question about what's the intent with which the video was created.
But the video was Kamala saying, I believe it was-- I saw clips of it.
I couldn't even find the original video.
-Right, because that's an ethical question, too.
As news organizations, do we replay what has been fabricated?
-Right.
I looked at several news organizations and how they covered it, and they all, excuse me, didn't know.
None of them showed the whole video.
They showed small clips.
They put giant letters over it that said, "This is AI generated."
So anyway, it was, it was Kamala's voice saying things like, I'm a DEI hire.
You know, Biden is senile.
I'm not prepared, but hire me.
You know, Vote for me anyway.
And it was that with-- the image was not her speaking that, if I recall.
From what I've seen, it was images, you know, her on the campaign trail.
-It was a voiceover.
-Exactly.
It was a voiceover.
-AI generated.
-Yeah.
So, you know, that's what happened.
It's hard to say if anyone believed it.
If you looked at some comments that the news organizations covered, some people said, Is this real?
Is this AI?
So again, this is speaking to people's increased ability to ask that question, to think to ask that question, to have the wherewithal to ask that question.
But you know, again, I don't think that that video is going to change the course of the election by itself.
It's just-- it's not enough.
-Okay.
What about on the opposite end, where you have former President Donald Trump accusing Kamala Harris of using AI to increase the appearance of an audience at one of her rallies?
-Yeah.
I mean, it's patently absurd.
I don't know how else to describe it, because there's all sorts of-- -Is it, though?
There's plenty of people who would say no.
-It's not that it's literally impossible to do.
That can't be the case.
It's that there's so many methods of verifying that the crowd was what it was.
So you have single news organizations with multiple lenses, right, multiple angles of the crowd.
Then you have competing organizations also with very similar news shots.
So you would have to think that there's a massive conspiracy among all these news organizations that were present at the events and that they fooled everyone or that they-- either they had everyone in on it, or they didn't tell anyone and they just released the video and nobody spoke up about it.
So it would require a tremendous amount of orchestration to pull it off across CNN, the New York Times, the Washington Post, MSNBC, even FOX.
You'd have to get all of them to say, Oh, yeah, we're going to, we're going to show an altered feed.
-We asked Nevada Secretary of State Francisco Aguilar how concerned he is about AI in this year's elections.
And while he said deepfakes are concerning, he said he's more worried about chatbots.
You'll hear from him on that with further insight from Reid Blackman next week on Nevada Week.