Policymakers Need To Deal With Real AI Challenges

Focusing on the misty risk of extinction from AI is a dangerous distraction.

Last week, a Silicon Valley-funded group called the Center for AI Safety released a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was signed by many industry leaders and respected AI researchers and received extensive coverage in the press.

The broader reaction to the statement was harsh. University of Oxford Professor Sandra Wachter said it was just a publicity stunt. Some of the more sensible signatories, such as security expert Bruce Schneier, quickly expressed signer’s remorse. “I actually don’t think that AI poses a risk to human extinction,” Schneier said.

Others thought the statement was really a fundraising gimmick. Duke University professor of sociology Kieran Healy posted a mock paraphrase: “my friends and I are going to require an absolute truckload of grant money to mitigate the literal species-level existential threats associated with this thing we claim to be making.”

Marietje Schaake, a former EU parliamentarian now with Stanford University’s Cyber Policy Center, suggested that the subtext of the statement was that policymakers should deal with existential risk, while business executives set the actual rules for AI use. While AI might be novel, she said, this implicit argument that AI industry leaders “are best placed to regulate the very technologies they produce” was nothing more than talking points recycled from their previous use in the social media and cryptocurrency controversies.

My take is that apocalyptic warnings of AI achieving consciousness and independent agency are a distraction from the real AI challenges, which require regulators to step up enforcement of current law and policymakers to consider revisions to deal with legal gaps.

Fraud accomplished using AI is illegal, as Federal Trade Commission Chair Lina Khan has said. The agency has already warned about the use of AI to impersonate people to commit video and phone scams. China has noticed the same thing and is cracking down on AI-driven scams. It is not clear there are new legal issues here, but enormous enforcement efforts would be required to control this coming flood of AI fraud. Fully funding the agency’s budget request of $590 million would be a far more productive use of public money than a study of the existential risks of AI.

Everyone is properly worried about AI-created misinformation, such as the recent ‘Fake Putin’ announcement that Russia was under attack. Labeling might go a long way toward reducing these risks. The commercial from the Republican National Committee that followed Biden’s announcement of his presidential candidacy used AI to generate images of what might happen under a new Biden presidency, but it was labeled as such, which reduced the misinformation risk.

Transparency for AI is low-hanging policy fruit that policymakers should grab, as some seem to be doing. This week European Commission vice president Vera Jourova urged tech companies to label content generated by artificial intelligence. In the U.S., Rep. Ritchie Torres (D-N.Y.) will soon introduce legislation that would require services like ChatGPT to disclose that their output was “generated by artificial intelligence.”

Copyright and AI is another challenge. Some clarity is needed about compensating copyright owners for the use of their material for AI training. In February, Getty sued Stability AI in the U.S., saying the AI company had copied 12 million Getty images without permission to train its Stable Diffusion AI image-generation software. Last week Getty asked a London court to block Stability AI in the U.K. because the AI company had violated Getty’s copyright in training its system.

These cases will be hashed out in court. But there’s a decent argument that copyright holders don’t need to be compensated at all, either because of fair use or because only unprotected facts and ideas are extracted for AI training. Moreover, the European Union’s 2019 Copyright in the Digital Single Market directive contains an exception that allows text and data mining of online copyrighted material unless the copyright owner opts out by using technological protections such as a header blocker to prevent scanning. This could cover AI training data.

The current draft of the European Union’s Artificial Intelligence Act requires disclosure of the copyrighted works used in training AI systems. This appears to be intended to enable copyright owners to exercise their right to opt out of text and data mining for AI training. But it might also be a step toward something further. It might lead to a compulsory license regime that would prevent copyright holders from blocking AI training but provide them with some compensation for the use of their intellectual property. Sorting out these copyright issues will require focused attention from policymakers.

The EU’s AI Act also requires risky AI systems to undergo a certification process aimed at ensuring that risks have been adequately assessed and reasonable mitigation measures adopted. The current draft treats foundation models like ChatGPT as risky systems subject to certification, a potential burden that apparently prompted OpenAI head Sam Altman to say he would withdraw ChatGPT from Europe if he couldn’t comply with it. He has since walked that threat back, saying he has no plans to leave.

Yet he has a point. Policymakers looking for a concrete issue to chew on should ask themselves how a general-purpose AI system such as ChatGPT could be certified as “safe” when many of the risks will emerge only as the system is applied in practice.

These are only some of the key AI issues that should concern policymakers. Others include the employment impacts of ever-more capable AI systems, privacy challenges when training data includes personal information, the tendency toward concentration created by the enormous costs and network effects of training AI software, the application of Section 230 liability rules, and the enforcement of laws against bias in lending, housing and employment when AI drives eligibility assessments.

Policymakers need not and should not wander off into misty realms where autonomous AI programs spin out of control and threaten human survival. They have plenty of work to do confronting the many challenges of AI in the real world.

Source: https://www.forbes.com/sites/washingtonbytes/2023/06/06/policymakers-need-to-deal-with-real-ai-challenges/