Massively Brewing "Trust Recession" Aims To Erode Responsible AI, Says AI Ethics And AI Law

I’m sure you are familiar with the old saying that a rising tide lifts all boats. There is the other side of that coin, perhaps not as well known, namely that a receding tide sinks all ships.

Bottom-line, sometimes the tide determines whether you are going up or going down.

The tide is going to do what it does. You might not have any particular say in the matter. If your boat is docked or anchored in the tide, you are at the whim of the tide. The key is to realize that the tide exists, along with anticipating which way it is heading. With a bit of luck, you can ride out the tide and remain unscathed.

Let’s consider how all of this seafaring talk about boats and tides relates to Artificial Intelligence (AI).

First, I’d like to introduce to you the increasingly popular catchphrase of Responsible AI. The general notion is that we want AI that abides by proper and desirable human values. Some refer to this as Responsible AI. Others similarly discuss Accountable AI, Trustworthy AI, and AI Alignment, all of which touch upon the same cornerstone principle. For my discussion of these important issues, see the link here and the link here, just to name a few in my ongoing and extensive coverage of AI Ethics and AI Law in my Forbes column.

A crucial ingredient of the AI alignment conundrum involves a semblance of trust. Can we trust that AI will be safe and sound? Can we trust that those devising AI will seek to do so in a responsible and proper manner? Can we have trust in those that field AI and are engaged in operating and maintaining it?

That’s a whole lot of trust.

There is an ongoing effort by AI Ethics and AI Law to bolster a sense of trust in AI. The belief is that by establishing suitable “soft laws” that are prescribed as a set of guidelines or Ethical AI precepts, we might have a fighting chance of getting AI developers and AI operators to abide by ethically sound practices. In addition, if we craft and enact sufficiently attentive laws and regulations overseeing or governing AI, considered “hard laws” due to being placed onto the official legal books, there is a strong possibility of guiding AI onto a straight and legally permissible path.

If people don’t trust AI, they won’t be able to garner the benefits that good AI imbues. I’ll be momentarily pointing out that there is AI For Good and regrettably there is also AI For Bad. Bad AI can impact humankind in a myriad of adverse ways. There is AI that acts in discriminatory fashions and exhibits undue biases. There is AI that can directly or indirectly harm people. And so on.

So, we’ve got AI For Good that we ardently want to be devised and put into use. Meanwhile, there is AI For Bad that we want to curtail and try to prevent. AI For Bad tends to undercut trust in AI. AI For Good usually increases trust in AI. An arduous struggle ensues between the mounting increases in trust that are continually being whittled away by the atrocious undermining of trust.

Upward goes AI trust, which subsequently gets batted down. Then, lowered levels of AI trust get stepped upward once again. Back and forth, the levels of AI trust seesaw. It is almost enough to make you get dizzy. Your stomach churns, akin to a semblance of seasickness like being in a boat that is rocking and bobbing in the ocean.

While that battle is taking place, you might assert that there is another macroscopic factor that exerts an even greater force on the scales of trust. There is something much bigger at play. The bobbing up and down of AI trust is at the whim of a sea monster of a tide. Yes, AI trust is like a boat floating in a realm that is ultimately more pronounced than the battles and skirmishes taking place between AI For Good and AI For Bad.

What in the world am I referring to, you might be asking quizzically?

I’m alluding to the massive “trust recession” that our society is currently enduring.

Allow me to elucidate.

There is plenty of talk in the media today about recessions.

In an economic meaning, a recession is considered a contraction of the economy usually associated with a decline in economic activity. We normally witness or experience a recession by such economic conditions as drops in real income, a decline in the GDP (Gross Domestic Product), weakening employment and layoffs, decreases in industrial production, and the like. I’m not going to go into an extended discussion about economic recessions, for which there is much debate about what constitutes a bona fide recession versus claimed or contended ones (you can find plenty of talking heads that heatedly debate that topic).

The notion of a “recession” has widened to include other aspects of society, going beyond just the economic focus. You can refer to any slowing down of one thing or another as perhaps getting mired in a recession. It is a handy word with a multitude of applications.

Get ready for one usage that you might not have yet especially heard of.

A trust recession.

That’s right, we can speak about a phenomenon known as a trust recession.

The gist is that society at large can be experiencing a slowdown or decrease in trust. You’ve undoubtedly sensed this. If you use any kind of social media, it certainly appears as though trust in our major institutions such as our governments or major entities has precipitously fallen. Things sure feel that way.

You are not alone in having felt that spine-chilling tinge of a societal-wide drop in trust.

An article in The Atlantic entitled “The End Of Trust” last year postulated these key findings about where our society is heading:

  • “We may be in the midst of a trust recession”
  • “Trust spiral, once begun, is hard to reverse”
  • “Its decline is vaguely felt before it’s plainly seen”

A trust recession kind of sneaks up upon us all. Inch by inch, trust weakens. Efforts to build trust are made harder and harder to pull off. Skepticism reigns supreme. We doubt that trust should be given. We don’t even believe that trust can be particularly earned (in a sense, trust is a ghost, a falsehood, it is unable to be made concrete and reliable).

The thing is, we need trust in our society.

Per that same article: “Trust. Without it, Adam Smith’s invisible hand stays in its pocket; Keynes’s ‘animal spirits’ are muted. ‘Virtually every commercial transaction has within itself an element of trust,’ the Nobel Prize-winning economist Kenneth Arrow wrote in 1972” (as cited from “The End Of Trust”, The Atlantic, November 24, 2021, Jerry Useem).

Research suggests that there is a nearly direct tie between economic performance and the element of trust in society. This is perhaps a controversial claim, though it does seem to intuitively hold water. Consider this noteworthy contention: “The economists Paul Zak and Stephen Knack found, in a study published in 1998, that a 15 percent bump in a nation’s belief that ‘most people can be trusted’ adds a full percentage point to economic growth each year” (ibid).

Take a moment and reflect upon your own views about trust.

Do you today have a greater level of trust or a lessened level of trust in each of these respective realms:

  • Trust in government
  • Trust in businesses
  • Trust in leaders
  • Trust in nations
  • Trust in brands
  • Trust in the news media
  • Trust in individuals

If you can truly say that your trust is higher than it once was for all of those facets, a gob-smacking tip of the hat to you (you are living in a world of unabashed bliss). By and large, I dare say that most of the planet would express the opposite, namely that their trust in those hallowed iconic elements has gone down.

Markedly so.

The data seem to support the claim that trust has eroded in our society. Pick any of the aforementioned realms. In terms of our belief in governmental capacities: “Trust in government dropped sharply from its peak in 1964, according to the Pew Research Center, and, with a few exceptions, has been sputtering ever since” (ibid).

You might be tempted to argue that trust in individuals shouldn’t be on the list. Surely, we still trust each other. It is only those big bad institutions that we no longer have trust in. Person to person, our trust has got to be the same as it has always been.

Sorry to tell you this: “Data on trust between individual Americans are harder to come by; surveys have asked questions about so-called interpersonal trust less consistently, according to Pew. But, by one estimate, the percentage of Americans who believed ‘most people could be trusted’ hovered around 45 percent as late as the mid-’80s; it is now 30 percent” (ibid).

Brutal, but true.

A recent interview with experts on trust led to this exposition:

· “The bad news is that if trust is this precious natural resource, it’s endangered. So, in 1972, about half of Americans agreed that most people can be trusted. But by 2018, that had fallen to about 30%. We trust institutions far less than we did 50 years ago. For instance, in 1970, 80% of Americans trusted the medical system. Now it’s 38%. TV news in the 1970s was 46%. Now it’s 11%. Congress, 42% to 7%. We are living through a massive trust recession and that is hurting us in a number of ways that probably most people are totally unaware of” (interview by Jonathan Chang and Meghna Chakrabarti, “Essential Trust: The Brain Science Of Trust”, WBUR, November 29, 2022, and quoted remarks of Jamil Zaki, Associate Professor of Psychology at Stanford University and Director of the Stanford Social Neuroscience Lab).

Various national and global studies of trust have identified barometers associated with societal levels of trust. Even a cursory glance at the results showcases that trust is falling, falling, falling.

The otherwise average version of a typical trust recession has been “upgraded” to being labeled as a massive trust recession. We don’t just have a run-of-the-mill trust recession; we instead have an all-out massive trust recession. Big time. Getting bigger and bigger. It permeates all manner of our existence. And this massive trust recession touches every corner of what we do and how our lives play out.

Including Artificial Intelligence.

I would guess that you saw that coming. I had earlier mentioned that a battle of AI For Good versus AI For Bad is taking place. Most of those in the AI Ethics and AI Law arena are daily dealing with the ups and downs of those ongoing battles. We want Responsible AI to win. Surprisingly, many of those in the throes of these pitched battles are not cognizant of the impacts of the massive trust recession that, in a sense, overwhelms whatever is happening on the AI trust battlefields.

The tide is the massive trust recession. The battling trust about AI is a boat that is going up and down of its own accord, and regrettably going downward overall due to the tide receding. As society at large is infected by the massive trust recession, so is the trust in AI.

I don’t want this to seem defeatist.

The fight for trust in AI has to continue. All I’m trying to emphasize is that as those battles persist, keep in mind that trust as a whole is draining out of society. There is less and less buffering or bolstering of trust to be had. The leftover meager scraps of trust are going to make it increasingly harder to win the AI For Good trust ambitions.

That darned tide is taking all ships down, including the trust in AI.

Take a moment to noodle on these three intriguing questions:

  • What can we do about the pervasive societal massive trust recession when it comes to AI?
  • Is AI doomed to rock-bottom basement-level trust, no matter what AI Ethics or AI Law does?
  • Should those in the AI field toss in the towel on AI trust altogether?

I’m glad you asked.

Before getting deeply into the topic, I’d like to first lay some essential foundations about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of AI Ethics And Also AI Law

The latest era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humankind. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and that makes computational choices imbued with undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I’d like to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know whether sentient AI will even be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously and spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on is the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
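To make that pattern-matching workflow a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical and of my own invention (the loan-approval framing, the feature columns, the toy numbers); it merely illustrates the flow described above of fitting a model to historical decisions and then applying the found patterns to new cases.

```python
# Minimal sketch of the ML/DL workflow described above (hypothetical example).
# A model is fit to historical decision data and then reused on new data,
# which is how past patterns (including any buried biases) carry forward.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [income_in_thousands, years_employed]
# and the past human decisions (1 = approved, 0 = denied).
historical_features = [[55, 4], [32, 1], [78, 10], [41, 2], [90, 12], [28, 1]]
historical_decisions = [1, 0, 1, 0, 1, 0]

# "Find mathematical patterns" in the old data.
model = LogisticRegression()
model.fit(historical_features, historical_decisions)

# "Apply those patterns" when new data arrives.
new_applicants = [[60, 5], [30, 1]]
print(model.predict(new_applicants))  # decisions rendered from historical patterns
```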

I think you can guess where this is heading. If humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is taking place either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will still be biases embedded within the pattern-matching models of the ML/DL.
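As one small illustration of the kind of bias testing just mentioned, here is a hedged sketch of a simple approval-rate comparison across two groups. The group labels and outcomes are invented placeholders; a genuine bias audit would involve far richer data and many more fairness measures than this single check.

```python
# Hypothetical check for one simple bias signal: do approval rates differ
# markedly between two groups in the model's outputs? (Illustrative only.)
from collections import defaultdict

# (group, model_decision) pairs; 1 = approved, 0 = denied. Invented data.
outcomes = [("group_a", 1), ("group_a", 1), ("group_a", 0),
            ("group_b", 0), ("group_b", 0), ("group_b", 1)]

totals, approvals = defaultdict(int), defaultdict(int)
for group, decision in outcomes:
    totals[group] += 1
    approvals[group] += decision

rates = {g: approvals[g] / totals[g] for g in totals}
gap = abs(rates["group_a"] - rates["group_b"])
print(rates, f"approval-rate gap: {gap:.2f}")  # a large gap warrants scrutiny
```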

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws covering AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously explored closely:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, which is the official title of the U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the U.S. Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Human Alternatives, Consideration, and Fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of exploring the irksome matter of the ongoing massive trust recession and its impact on AI levels of trust.

Getting A Bigger Boat To Build Up Trust In AI

Let’s revisit my previously posed questions on this topic:

  • What can we do about the pervasive societal massive trust recession when it comes to AI?
  • Is AI doomed to rock-bottom basement-level trust, no matter what AI Ethics or AI Law does?
  • Should those in the AI field toss in the towel on AI trust altogether?

I’m going to take the optimistic route and argue that we can do something about this.

I would also vehemently say that we should not toss in the towel. The key instead is to work even harder, plus smarter, toward dealing with the trust in AI question. The part about being smarter entails realizing that we are in a massive trust recession and soberly taking that macroscopic looming factor into mindful account. Yes, for everything that we do during the fervent efforts to adopt and support AI Ethics and AI Law, be watchful of and adjust according to the falling tide of trust all told.

Before I go further into the optimistic or smiley face choice, I suppose it is only fair to offer the contrasting viewpoint. Okay, here you go. We cannot do anything about the massive trust recession. No point in trying to tilt at windmills, as they say. Thus, just keep fighting the fight, and whatever happens with the tide, so be it.

In that sad face scenario, you could suggest it is a shrugging of the shoulders and a capitulation that the tide is the tide. Someday, hopefully, the massive trust recession will weaken and become merely a normal form of a trust recession. Then, with a bit of luck, the trust recession will whimper out and trust will have returned. We might even end up with a booming sense of trust. A trust boom, as it were.

I’ll categorize your choices into the following five options:

1) The Unaware. These are those advocates in the AI Ethics and AI Law arena that don’t know there is a massive trust recession. They don’t even know that they don’t know.

2) The Know But Don’t Care. These are those advocates in AI Ethics and AI Law that know about the massive trust recession but shake it off. Ride it out, and do nothing else new.

3) The Know And Cope With It. These are those advocates in AI Ethics and AI Law that know about the massive trust recession and have opted to cope with it. They adjust their messaging; they adjust their approach. At times, this includes blending the trust recession into their strategies and efforts about furthering trust in AI and seeking the elusive Responsible AI.

4) The Know And Inadvertently Make Things Worse. These are those advocates in AI Ethics and AI Law that know about the massive trust recession, plus they have opted to do something about it, yet they end up shooting themselves in the foot. By reacting improperly to the societal trend, they mistakenly worsen Responsible AI and drop trust in AI to even lower depths.

5) Other (to be explained momentarily)

Which of those five options are you in?

I purposely gave the fifth option for those of you who either don’t like any of the other four or who genuinely believe there are other possibilities, such that none of the ones listed adequately characterizes your position.

You don’t have to be shoehorned into any of the choices. I merely proffer the selections for the purpose of generating thoughtful discussion on the meritorious topic. We need to be talking about the massive trust recession, I believe. Not much in-depth analysis has yet occurred in the particulars of Responsible AI and Trustworthy AI endeavors as it relates to the societal massive trust recession.

Time to open those floodgates (alright, that’s maybe over-the-top on these puns and wordplay).

If you are wondering what a fifth option might consist of, here’s one that you might find of interest.

AI exceptionalism.

There is a contingent in the AI field that believes AI is an exception to the normal rules of things. These AI exceptionalism proponents assert that you cannot routinely apply other societal shenanigans to AI. AI isn’t impacted because it is a grandiose exception.

In that somewhat dogmatic viewpoint, my analogy of a tide and AI trust as a boat that is bobbing up and down would be tossed out the window as an analogous consideration. AI trust is bigger than the tide. No matter what happens in the massive trust recession, AI trust is going to go wherever it goes. If the tide goes up, AI trust might go up or might go down. If the tide goes down, AI trust might go up or might go down. Irrespective of the tide, AI trust has its own fate, its own destiny, its own path.

I’ve got another twist for you.

Some might contend that AI is going to materially impact the massive trust recession.

You see, the rest of this discussion has gotten things backward, supposedly. It isn’t that the massive trust recession is going to impact AI trust, instead, the reverse is true. Depending upon what we do about AI, the trust recession is potentially going to deepen or recover. AI trust will determine the fate of the tide. I guess you could assert that AI is so powerful as a potential force that it is akin to the sun, the moon, and the earth in determining how the tide is going to go.

If we get the AI trust aspects figured out, and if people trust in AI, maybe this will turn around the trust recession. People will shift their trust in all other respects of their lives. They will begin to increase their trust in government, businesses, leaders, and so on, all because of having ubiquitous trustworthy AI.

Farfetched?

Maybe not.

Without getting you into a gloomy mood, do realize that the opposite perspective about AI trust could also emerge. In that use case, we all fall into an utter lack of trust in AI. We become so distrustful that the distrust spills over into our already massive trust recession. In turn, this makes the massive trust recession become the super gigantic mega-massive trust recession, many times worse than we could ever imagine.

Dovetail this idea into the bandied-around notion of AI as an existential risk. If AI starts to seem as though the existential risk is coming to fruition, namely that AI is going to take over humankind and either enslave us or wipe us all out, you would certainly seem to have a solid argument for the massive trust recession taking a pretty dour downward spiral.

I get it.

Anyway, let’s hope for the happier side of things, shall we?

Conclusion

Now that you know about the massive trust recession, what can you do regarding AI trust?

First, for those of you steeped in the AI Ethics and AI Law realm, make sure to calibrate your Responsible AI and Trustworthy AI pursuits via the societal context associated with being in a trust recession. Be careful not to feel dejected that your own efforts to boost trust in AI are seemingly hampered or less than fully successful relative to what you expected to occur. It could be that your efforts are at least helping, meanwhile, unbeknownst to you, the trust drainpipe is rub-a-dub usurping your valiant activity in a silent and sadly detrimental way. Do not despair. It could be that if the trust recession wasn’t underway, you would have seen tremendous advances and extraordinarily laudable results.

Second, we need to do more analyses on how to measure the trust recession and likewise how to measure the ups and downs of trust in AI. Without having reliable and well-accepted metrics, across the board, we are blindly floating in an ocean where we don’t know how many fathoms we have lost or gained.
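As a rough illustration of what such measurement might look like, consider the sketch below. The yearly figures are invented placeholders (loosely in the spirit of the “most people can be trusted” percentages cited earlier), and flagging consecutive year-over-year declines as a “trust recession” is my own simplification rather than any established metric.

```python
# Toy trust-index tracker: flag a "trust recession" when the index declines
# for several consecutive periods. Figures are illustrative placeholders only.
def in_trust_recession(index_by_year, consecutive_drops=2):
    years = sorted(index_by_year)
    drops = 0
    for prev, curr in zip(years, years[1:]):
        drops = drops + 1 if index_by_year[curr] < index_by_year[prev] else 0
        if drops >= consecutive_drops:
            return True
    return False

# Hypothetical "trust in AI" survey percentages by year.
ai_trust_index = {2019: 48.0, 2020: 44.5, 2021: 41.0, 2022: 39.5}
print(in_trust_recession(ai_trust_index))  # True under these made-up numbers
```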

Third, consider ways to convey that trust in AI is being shaped by the massive trust recession. Few know of this. AI insiders ought to be doing some deep thinking on the topic. The public at large should also be brought up to speed. There are two messages to be conveyed. One is that there is a massive trust recession. Second, trust in AI is subject to the vagaries of the trust recession, and we have to explicitly take that into account.

As a final remark, for now, I imagine that you know the famous joke about the fish in a fishbowl.

Here’s how it goes.

Two fish are swimming back and forth in a fishbowl. Around and around, they go. Finally, one of the fish turns to the other one and says it is getting tired of being in the water. The other fish contemplates this comment. A few ponderous moments later, the mindful fish inquisitively replies, what in the heck is water?

It’s a bit of an old joke.

The emphasis is supposed to be that whatever surrounds you might not be readily recognizable. You become accustomed to it. It is just there. You do not notice it because it is everywhere and unremarkable as to its presence (I’ll mention as an aside that some cynics don’t like the joke since they insist that real fish do know they are indeed in water, and realize “cognitively” as such, including being able to leap out of the water into the air, etc.).

As a convenient fish tale or parable, we can use this handy dandy allegory to point out that we might not realize that we are in a massive trust recession. It is all around us, and we viscerally feel it, but we don’t consciously realize that it is here.

Time to take off the blinders.

Take a deep breath and breathe in the fact that our massive trust recession exists. In turn, for those of you mightily striving day after day to foster Responsible AI and garner trust in AI, keep your eyes wide open as to how the trust recession is intervening in your valiant efforts.

As Shakespeare famously stated: “We must take the current when it serves, or lose our ventures.”

Source: https://www.forbes.com/sites/lanceeliot/2022/12/04/massively-brewing-trust-recession-aims-to-erode-responsible-ai-says-ai-ethics-and-ai-law/