AI and the Election Part 2
Clip: Season 7 Episode 8 | 9m 35s | Video has Closed Captions
We continue our conversation with Reid Blackman on the various ways AI technology can impact elections.
And we move now to artificial intelligence and its impact on elections.
Last week, we talked with AI Ethics Consultant Reid Blackman about deepfakes, those digitally altered videos and images.
We first met him last year at the artificial intelligence conference Ai4.
And for Part 2 of our discussion, we're talking about chatbots.
That's the type of AI that Nevada's Secretary of State, Francisco Aguilar, says he's most concerned about.
You'll hear from him first, followed by Blackman.
(Francisco Aguilar) This is the first election where artificial intelligence will have a presence.
I had the great opportunity to participate in a test with several of the chatbots.
I was able to ask them questions related to Nevada elections, and it was really concerning.
I'm not concerned-- yes, I'm concerned about deepfakes and the information we actually see in real time, but we have an ability to respond to those situations and to deal with them.
It's the information being given by the chatbots and AI that we don't know is being given.
For example, we know that young people are more likely to adopt that technology than others.
And if they're a first-time voter, they may ask, When do I need to register to vote?
And the information the chatbot gave was "Three weeks in advance of the election," which is wrong.
In Nevada, we have same-day voter registration, which means we just turned away a potential young voter from participating.
That's a big issue, because we don't know it's occurring, we don't know how often it's occurring, and the youth vote could be a strong determinator in who the President of the United States is, because our margins are so slim.
And if a certain number of people are using this as information, they're not going to participate.
-Okay.
So can you simply explain what a chatbot is and then your reaction to what he has experienced?
(Reid Blackman) Yeah, sure.
A chatbot is-- I mean, most of our viewers probably have interacted with a chatbot in some way.
They've been online in some little pop-up box that says, Can I help you?
And it's not a person, it's a computer program.
It's a piece of software.
You say, Yeah, I want to return my item.
Or, you know, How long do I-- how long is the refund policy?
You interact with it, talk to it like a person.
-A lot of government institutions are utilizing these.
-Right.
Now, importantly, there are, broadly speaking, two different ways of doing a chatbot.
There's what's called the old school way and the new school way.
The new school way is using something like a ChatGPT.
So this is a large language model.
There's ChatGPT.
There's another company called Anthropic that has Claude.
Google has-- Do they call it Bard?
No, I think now they call it Gemini.
So there's all these large language models.
This is the most cutting-edge large language model chatbot technology.
That stuff-- so that's in one bucket.
One thing that's important about that bucket is that it's well known for generating what gets called hallucinations, or what some people call confabulations.
In short, it just outputs false information.
So if it doesn't know the answer, it'll just make it up.
Some people call it-- it's a BS machine.
-Oh, my!
-It doesn't just say, Oh, here's the answer.
It does so confidently in many cases.
So that's large language models, and they can be used as chatbots.
The danger is that they can output false information.
So-called old school chatbots, what I'm calling old school chatbots, they don't have that kind of flexibility.
You, like, preprogram it with the questions and answers, and so it can't go off the rails, because it's just on very specific rails.
If you say, you know, When is Election Day?
If it's not programmed to say, It's on this day, then it won't say it.
If it is programmed to say, It's on this day, it'll say that day.
So it's very clear-cut.
It can say this; it cannot say that.
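[For readers who want the distinction in concrete terms, here is a minimal Python sketch of the "old school" chatbot Blackman describes: every answer is preprogrammed, and anything outside the script gets a fallback instead of a made-up reply. The questions and answers below are hypothetical placeholders, not official election information.]

```python
# A minimal sketch of a preprogrammed ("old school") chatbot.
# Every answer is written in advance by a human, so the bot can only
# repeat vetted text or admit it doesn't know; it never invents
# ("hallucinates") election information. Answers here are hypothetical.

CANNED_ANSWERS = {
    "when is election day": "Election Day is Tuesday, November 5.",
    "when do i need to register to vote": "Nevada offers same-day voter registration.",
}

def old_school_chatbot(user_question: str) -> str:
    key = user_question.lower().strip(" ?!.")
    # Only preprogrammed answers are returned; everything else gets a fallback.
    return CANNED_ANSWERS.get(
        key, "I don't know. Please check your state's official election website."
    )

print(old_school_chatbot("When is Election Day?"))   # preprogrammed answer
print(old_school_chatbot("Who should I vote for?"))  # falls back, can't go off the rails
```

[A large language model chatbot, by contrast, generates its reply on the fly, which is exactly the flexibility that lets it output false information.]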
If you're using a large language model of that first-bucket variety and there's crucial information--When's Election Day?
When do I have to register by?
Whatever it is--that's dangerous.
That seems quite dangerous, because it can hallucinate.
It can make up false information.
And then you have people misguided, not through any fault of their own, and then either they don't wind up voting, or they vote for people they didn't want to vote for because they falsely believed what that person's beliefs or policy positions were.
If we're talking about an old school chatbot, that's fine.
I don't see any reason why you would use a large language model for this kind of case.
If it's just purely information about, you know, things like When is Election Day and Where can I vote, things like that, there's no reason to use this large language model stuff.
-Well, I think the idea is that young people, for example, are going to say, Hey, ChatGPT, tell me when the election day is, or When can I register to vote?
You're saying that's not that common?
-Well, I don't think we have any reason to think that Google has been unseated.
If it were the case that young people were flocking to, say, a ChatGPT, not using Google Search and using ChatGPT instead, we'd see a lot more headlines about it.
Google would be very worked up about it.
And they are worked up about Search to some extent, but it's more of a fear.
It's a proactive thing, as opposed to something we're seeing right now, the youth flocking to large language model chatbots to do their search instead of, say, Google or Bing, for whoever uses that.
So I don't see it as a problem yet.
It's a scale issue.
Look, are there-- might there be some people, young or old, who go to ChatGPT to get information that turns out to be false?
Yeah, that's-- I'd be surprised if that weren't the case.
But what's the scale here?
If we're talking about, you know, a dozen people, no big deal.
If we're talking about 100,000 people, big deal.
I don't think we have a sense of what the scale is of people using large language models to ferret out information of high importance like, How do I vote?
But if we do see that, yeah, that's a real worry.
-And similarly, you can get on Google and find false information.
-Exactly, right.
Plenty of people will Google stuff, and they'll find fake news websites.
They'll find just bad advice, and they'll follow that, and that's a bad idea too.
-Earlier you-- Oh, I'm sorry.
-No.
So then just the question is, is the rate at which people are getting false information from a ChatGPT relevant to the election cycle or election requirements?
Is it so outpacing the false information that people get, say, from a standard Google search?
And I haven't seen any research to indicate that there's a big delta there.
-Earlier, you talked about watermarks as a potential regulation.
How would that even come to be?
Would that be federal legislation?
I mean, would there be consequences if it was found?
Who would regulate this in the first place?
-Nobody really knows.
There's not-- so there's two things.
One is, who's going to regulate it?
And the short answer is, could be federal, probably unlikely, although they're trying to push for it.
And there could be state legislatures that do it.
So that's plausible that they could pass such a thing.
But then there's a technical problem.
Can we actually create these watermarks in a way that's reliable?
Can we create watermarks that don't get hacked?
And we don't have that yet.
Researchers are working on this sort of thing.
There's also debate, Should we be watermarking fake content, or should we be watermarking the true content?
Maybe what we do is, instead of trying to figure out what all the fake stuff is and watermarking that, maybe we have like, a stamp of approval, a stamp of verification for things that come from PBS Nevada or the New York Times or CNN or whatever it is.
So let's try to watermark the stuff that we have verified the source of, as opposed to watermarking the stuff that we can't verify the source of.
So there's also this sort of strategic debate about which way should we go.
And that is, in part, a technical question.
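[As an illustration of the "stamp of verification" idea, here is a minimal Python sketch, assuming a trusted publisher holds a secret key and uses an HMAC to stamp its content. The key and the clip text are hypothetical, and real provenance systems would typically use public-key signatures or C2PA-style metadata rather than a shared secret.]

```python
import hmac
import hashlib

# Hypothetical secret held by the trusted publisher (illustrative only).
PUBLISHER_KEY = b"hypothetical-secret-held-by-the-newsroom"

def stamp(content: bytes) -> str:
    """Produce a verification stamp for content from the trusted source."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, claimed_stamp: str) -> bool:
    """Check that the content matches the stamp the publisher issued."""
    return hmac.compare_digest(stamp(content), claimed_stamp)

clip = b"Interview clip as published by the newsroom"
s = stamp(clip)
print(verify(clip, s))                 # True: content is as published
print(verify(clip + b" edited", s))    # False: content was altered
```

[The design choice Blackman raises maps onto this sketch: rather than trying to detect and mark every fake, the verified source marks what it actually published, and anything without a valid stamp is treated as unverified.]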
-Where would you start in regulating AI?
-Oh, man, where would I start regulating?
So the way that I like to think about it is there are some things that are illegal and some things that are unethical.
And a lot of stuff is already illegal.
So discriminating against people of color, for instance, is illegal.
There are questions about how to take antidiscrimination law and apply it to cases in which AI may discriminate against people, as an example.
But at least there's a strong foundation: Here's the foundation of antidiscrimination law.
Here's how it should or does apply to AI now.
Some of that stuff is-- and then you have some things that are unethical but not illegal.
You have some things that are unethical but we don't want to regulate against.
I know this is sort of a long answer, but some things are unethical that you don't want to regulate against.
So for instance, it's unethical to cheat on your spouse, let's say, right, but we don't want it to be illegal.
That would be too much government power to put people in jail or impose fines on people for cheating on their spouses, though it is unethical.
So when I hear the question, something like, How do you want to regulate AI?
The thing that comes to mind is, Okay, so what are the things that are already not illegal?
Notice that fraud is illegal already, right?
So if you're conducting fraud or discrimination by AI, that's likely going to be illegal as it is.
So what are the things that are unethical with AI that are not currently illegal that should be legal?
-Should be legal or illegal?
-Sorry, you're right, should be illegal.
What are things that should be illegal that are not already?
-Okay.
-It can't just be, Well, this is unethical, so it should be illegal.
That's not the case.
And so that's a really hard question to answer, and it's not clear.
My general stance towards regulation probably has evolved since the last time we spoke a year ago, which is that I think we're jumping a little bit too quickly into most regulation.
I'd like to see more, Let's take the law that we have now and see how it applies, like antidiscrimination law.
There is speculation by various parties about what the capabilities of AI are, what people are going to do with it, and we don't have a ton of evidence about what are the actually bad things that are going down.
We have fear, but not a lot of good evidence of, Here's all the bad stuff that's going on right now that's having really bad impacts.
And in the presence of that fear and in the presence of that ignorance, that's a bad place to start creating wise regulations from.
-Reid Blackman, maybe we'll have you back next year and see how much has changed.
Thank you for joining Nevada Week.
-Thanks for having me.