Editorial: Federal law shouldn’t shield AI chatbots from liability

Y Combinator President Sam Altman speaks during a Fireside Chat at TechCrunch Disrupt SF at Pier 48 in San Francisco on Sept. 19, 2017. (Dan Honda/Bay Area News Group)
The fast-erupting world of artificial intelligence poses a threat to jobs, political stability, world peace and public health, and even, as leading AI figures warned last week, the existence of humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” 350 AI scientists and other notable figures cautioned in an open letter from the Center for AI Safety, a nonprofit organization.

This Pandora's Box cannot be closed. The challenge now is how to reap the benefits of AI while containing its threats and misuse. One tentacle of the creature we have let loose is disinformation, which could damage reputations, provide deadly medical advice or be used to alter political outcomes.

The concept of digital disinformation is not new. We’ve seen it proliferate for decades now on the internet and, more recently, on social media platforms such as Facebook and Twitter. And we’ve seen the owners of those platforms disavow responsibility, hiding behind a federal law that shields them from responsibility for the falsehoods posted on their platforms.

We must not let them expand that shield to AI. Applications such as ChatGPT should not be protected from liability by Section 230 of the Communications Decency Act. Simply put, if an AI product creates content, then the company that hosts the platform should be held responsible for that information.

Without that liability, without that responsibility, AI will become a tool for dangerous disinformation that could threaten our health and the foundation of our country.

Section 230 was passed by Congress in 1996, two years before Google was founded and eight years before Facebook started. It’s outdated but remains on the books unaltered. The law says that online content publishers cannot be held liable for “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”

Make no mistake, Section 230 is rightfully credited with paving the way for the free and open internet millions around the world use today. But the law isn’t without faults. Social media platforms routinely abuse it for economic gain, hiding behind the measure’s provision to escape responsibility for reining in disinformation.

But generative AI, such as OpenAI's ChatGPT or Google's Bard, is a different beast. These applications are clearly content creators, not merely platforms for conveying others' ideas. They can compile research from a wide range of sources and then write material in much the same way as humans do — only at superhuman speed, and without the threat of writer's block.

Tech firms want Section 230 to extend to generative AI, arguing that users should bear responsibility for the results because they write the prompts the chatbots use to create content. That's a cop-out, designed to rake in billions while avoiding risk. It ignores that the chatbots are not passive platforms but content creators.

Tech firms also argue that the failure to shield companies from being sued for the content their products create would stifle innovation. But when it comes to the transformational potential of artificial intelligence, getting it right is every bit as important as getting it fast. Trust in Big Tech and its products is at an all-time low. Imagine the damage to the tech industry if artificial intelligence proves to be more harmful than beneficial.

We’ve seen the internet and social media wreak havoc on our society under Section 230. The threat from AI is exponentially larger. It’s critical that we get the regulations right this time. And the sooner we bring AI abuses under control, the better.


Originally published at Mercury News & East Bay Times Editorial Boards