
AI goes mainstream with ChatGPT in Bing; experts warn of pitfalls

Artificial intelligence is making its way into our everyday lives. On Tuesday, Microsoft announced that ChatGPT is now an option on its Bing search engine.

Meanwhile, through another AI platform, “deepfake” videos with fake people peddling fake content are already making their way onto social media feeds. Experts in the AI field in Seattle are monitoring the rapid rise of AI and urging people to watch out for misinformation.

“This is something that’s going to continue to crop up. And we have to be aware of it as a society,” said Jevin West, a co-founder of the Center for an Informed Public at the University of Washington.

Most University of Washington students KIRO 7 spoke with were already familiar with the AI bot ChatGPT, but it just went even more mainstream with Microsoft’s move.

ChatGPT is a conversational bot that uses data from the internet to answer questions in a human voice. It can write essays, build meal plans, or even create travel itineraries.

Microsoft’s CEO predicts it will reshape the internet as we know it.

“Every computer interaction is going to be mediated with an agent helping you,” said Satya Nadella, Microsoft CEO. “The race starts today in terms of what you can expect.”

Given his role, West is highly attuned to the potential dangers of AI.

“As an educator and researcher in the technology world, it is pretty exciting,” West said. “I would just say, as information consumers, we have to be aware of some of the negative implications of this new technology.”

He points out that ChatGPT can’t currently tell what’s real or what’s fake online.

“The data that it’s trained on right now is human-generated,” West said. “And there’s a lot of false stuff out there.”

In another example of artificial intelligence generating false content, The New York Times reported this week on new AI-generated videos from a pro-Chinese propaganda network. The videos were first identified and reported by Graphika, a major social media analytics firm.

The two videos, which look like news clips, are fake. One reports on a meeting between the U.S. and Chinese heads of state that never happened.

KIRO 7 showed some UW students a clip.

“It’s ridiculous how good the voices have gotten. I think the faces are still getting there. Yeah, it’s crazy,” said Greg Sather, a UW sophomore.

But West points out that someone glancing at a phone, not analyzing the video for signs of authenticity, could easily be fooled.

“In fact, we’ve done some research where we’ve looked at how they can be more believable in these smaller screens,” West said. “It scrolls on by, and whatever was said, we might just believe because it sounded and looked like a real news anchor.”

In 2020, KIRO 7’s Deedee Sun reported on the danger of deepfake videos, speaking with West.

“Isn’t it amazing?” West said. “We were talking about what it could be. And now here it is in the real world, sometimes doing good things, but also pushing disinformation.”

As recently as three years ago, it took months to generate a deepfake video.

Now, fake videos featuring highly realistic avatars can be generated in minutes on websites like Synthesia. You can select an avatar and have it say essentially whatever you want, though the website does screen the content.

“I would say this is even faster than I thought for how easy and cheap it is now,” West said. It costs $30 a month for people who want to make videos for personal use. “It’s scary but also very interesting.”

OpenAI, which runs ChatGPT, says it’s developing technology to detect and mitigate the spread of false information.

There’s a waitlist right now for the new Bing search engine. Microsoft says it’ll be available to millions in the coming weeks.