AI Ethics And AI Law Asking Hard Questions About That New Pledge By Dancing Robot Makers Saying They Will Avert AI Weaponization

You might perchance have seen in the news last week, or noticed on social media, the announced pledge by some robot makers about their professed aims to avoid the AI weaponization of general-purpose robots. I’ll be walking you through the details in a moment, so don’t worry if you haven’t caught wind of the matter.

The reaction to this proclamation has been swift and, perhaps as usual in our polarized society, both laudatory and at times mockingly critical or downright nastily skeptical.

It is a tale of two worlds.

In one world, some say that this is exactly what we need responsible AI robot developers to declare.

Thank goodness for being on the right side of an issue that will gradually be getting more visible and more worrisome. Those cute dancing robots are troubling because it is pretty easy to rejigger them to carry weapons and be used in the worst of ways (you can check this out yourself by going to social media, where there are plenty of videos showcasing dancing robots armed with machine guns and other armaments).

The other side of this coin says that the so-called pledge is nothing more than a marketing or public relations ploy (as a side note, is anybody familiar with the difference between a pledge and a donation?). Anyway, the doubters assert that this is unbridled virtue signaling in the context of dancing robots. You see, bemoaning the fact that general-purpose robots can be weaponized is certainly a worthwhile and earnestly sought consideration, though merely claiming that a maker won’t do so is likely a hollow promise, some insist.

All in all, the entire matter brings up quite a hefty set of AI Ethics and AI Law considerations. We will meticulously unpack the topic and see how this is a double-whammy of an ethical and legal AI morass. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

I will also be referring throughout this discussion to my prior analyses of the dangers of AI weaponization, such as my in-depth assessment at the link here. You might want to take a look at that discourse for additional behind-the-scenes details.

The Open Letter That Opens A Can Of Worms

Let’s begin this analysis by doing a careful step-by-step exploration of the Open Letter that was recently published by six relatively well-known advanced robot makers, namely Boston Dynamics, Clearpath Robotics, ANYbotics, Agility Robotics, Open Robotics, and Unitree. By and large, I am guessing that you have seen mainly the Boston Dynamics robots, such as the ones that prance around on all fours. They look as though they are dog-like and we relish seeing them scampering around.

As I’ve previously and repeatedly forewarned, the use of such “dancing” robots as a means of convincing the general public that these robots are cutesy and adorable is sadly misleading and veers into the abundant pitfalls of anthropomorphizing them. We begin to think of these hardened pieces of metal and plastic as though they are the equivalent of a cuddly loyal dog. Our willingness to accept these robots is predicated on a false sense of safety and assurance. Sure, you’ve got to make a buck and the odds of doing so are enhanced by parading around dancing robots, but this regrettably omits or seemingly hides the real fact that these robots are robots and that the AI controlling the robots can be devised wrongfully or go awry.

Consider these ramifications of AI (excerpted from my article on AI weaponization, found at the link here):

  • AI might encounter an error that causes it to go astray
  • AI might be overwhelmed and lock up unresponsively
  • AI might contain developer bugs that cause erratic behavior
  • AI might be corrupted with an implanted evildoer virus
  • AI might be taken over by cyberhackers in real time
  • AI might be considered unpredictable due to complexities
  • AI might computationally make the “wrong” decision (relatively)

Those points pertain to AI of the type that is genuinely devised at the get-go to do the right thing.

On top of those considerations, you have to include AI systems crafted from inception to do bad things. You can have AI that is made for beneficial purposes, often referred to as AI For Good. You can also have AI that is intentionally made for bad purposes, known as AI For Bad. Furthermore, you can have AI For Good that is corrupted or rejiggered into becoming AI For Bad.

By the way, none of this has anything to do with AI becoming sentient, which I mention because some keep exclaiming that today’s AI is either sentient or on the verge of being sentient. Not so. I take apart those myths in my analysis at the link here.

Let’s make sure then that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know whether sentient AI will even be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of a computational cognitive supernova (usually referred to as the singularity; see my coverage at the link here).

The type of AI that I am focusing on is the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. Upon finding such patterns, if so found, the AI system will use those patterns when encountering new data. When new data is presented, the patterns based on the “old” or historical data are applied to render a current decision.
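To make that pattern-matching notion concrete, here is a minimal sketch of the assemble-train-apply loop in Python. Everything in it is an illustrative assumption on my part: the hypothetical file past_loan_decisions.csv, the column names, and the choice of a scikit-learn logistic regression merely stand in for whatever data and model a real effort would use.

```python
# A minimal sketch of ML as computational pattern matching (illustrative only).
# The CSV file and column names are hypothetical, invented for this example.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: assemble data about a decision-making task (historical decisions).
history = pd.read_csv("past_loan_decisions.csv")  # hypothetical dataset
features = history[["income", "debt", "years_employed"]]
decisions = history["approved"]  # 1 = approved, 0 = denied

# Step 2: feed the data into a model that hunts for mathematical patterns.
train_X, test_X, train_y, test_y = train_test_split(
    features, decisions, test_size=0.2, random_state=0
)
model = LogisticRegression().fit(train_X, train_y)

# Step 3: apply the patterns found in the "old" data to render a current
# decision about never-before-seen data.
new_applicant = pd.DataFrame(
    [{"income": 52000, "debt": 9000, "years_employed": 3}]
)
print(model.predict(new_applicant))  # decision derived purely from historical patterns
```

Note that the model never “understands” a loan; it only reuses whatever regularities happened to be present in the historical decisions.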

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. The Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
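As a hedged illustration of what “testing for buried biases” can look like in practice, here is a rudimentary disparate-impact style probe that continues the hypothetical loan sketch above. The group column and the four-fifths (80%) threshold are assumptions for illustration only; passing such a screen does not prove a model is unbiased, which is exactly the difficulty just described.

```python
# A rudimentary bias probe for a trained model (illustrative, not exhaustive).
# Continues the hypothetical loan example; "group" is an assumed column that
# labels a demographic subgroup in the historical data.
import pandas as pd

def approval_rates_by_group(model, data: pd.DataFrame, group_col: str) -> pd.Series:
    """Compute the model's approval rate within each subgroup of the data."""
    feats = data[["income", "debt", "years_employed"]]
    preds = pd.Series(model.predict(feats), index=data.index)
    return preds.groupby(data[group_col]).mean()

# Example usage with the model and data from the earlier sketch:
# rates = approval_rates_by_group(model, history, "group")
# A common (crude) screen: flag if any subgroup's approval rate falls below
# 80% of the best-off subgroup's rate.
# if (rates / rates.max() < 0.8).any():
#     print("Potential disparate impact:", rates.round(3).to_dict())
```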

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern the various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier emphasized, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Now that I’ve laid a helpful foundation for getting into the Open Letter, we are ready to dive in.

The official subject title of the Open Letter is this:

  • "An Open Letter to the Robotics Industry and our Communities, General Purpose Robots Should Not Be Weaponized” (as per posted online).

So far, so good.

The title almost seems like ice cream and apple pie. How could anyone dispute this as an earnest call to avoid AI robot weaponization?

Read on to see.

First, as fodder for consideration, here’s the official opening paragraph of the Open Letter:

  • “We are some of the world’s leading companies dedicated to introducing new generations of advanced mobile robotics to society. These new generations of robots are more accessible, easier to operate, more autonomous, affordable, and adaptable than previous generations, and capable of navigating into locations previously inaccessible to automated or remotely-controlled technologies. We believe that advanced mobile robots will provide great benefit to society as co-workers in industry and companions in our homes” (as per posted online).

The sunny side to the advent of these types of robots is that we can anticipate a lot of great benefits to emerge. No doubt about it. You might have a robot in your home that can do those Jetson-like activities such as cleaning your house, washing your dishes, and other chores around the household. We will have advanced robots for use in factories and manufacturing facilities. Robots can potentially crawl or maneuver into tight spaces, such as when a building collapses and human lives are at stake. And so on.

As an aside, you might find of interest my recent critical coverage of the Tesla AI Day, at which some kind-of walking robots were portrayed by Elon Musk as the future for Tesla and society, see the link here.

Back to the matter at hand. When seriously discussing dancing robots or walking robots, we need to mindfully take into account tradeoffs or the total ROI (Return on Investment) of this use of AI. We should not allow ourselves to become overly enamored by benefits when there are also costs to be considered.

A shiny new toy can have rather sharp edges.

All of this spurs an important but somewhat silent point that part of the reason that the AI weaponization issue arises now is due to AI advancement toward autonomous activity. We have usually expected that weapons are generally human operated. A human makes the decision whether to fire or engage the weapon. We can presumably hold that human accountable for their actions.

AI that is devised to work autonomously or that can be tricked into doing so would seemingly remove the human from the loop. The AI is then algorithmically making computational decisions that can end up killing or harming humans. Besides the obvious concerns about lack of control over the AI, you also have the qualms that we might have an arduous time pinning responsibility as to the actions of the AI. We don’t have a human that is our obvious instigator.

I realize that some believe that we ought to simply and directly hold the AI responsible for its actions, as though AI has attained sentience or otherwise been granted legal personhood (see my coverage on the debates over AI garnering legal personhood at the link here). That isn’t going to work for now. We are going to have to trace the AI to the humans that either devised it or fielded it. They will undoubtedly try to legally dodge responsibility by contending that the AI went beyond what they had envisioned. This is a growing contention that we need to deal with (see my AI Law writings for insights on the contentious issues involved).

The United Nations (UN) via the Convention on Certain Conventional Weapons (CCW) in Geneva has established eleven non-binding Guiding Principles on Lethal Autonomous Weapons, as per the official report posted online (encompassing references to pertinent International Humanitarian Law or IHL provisos), including:

(a) International humanitarian law continues to apply fully to all weapons systems, including the potential development and use of lethal autonomous weapons systems;

(b) Human responsibility for decisions on the use of weapons systems must be retained since accountability cannot be transferred to machines. This should be considered across the entire life cycle of the weapons system;

(c) Human-machine interaction, which may take various forms and be implemented at various stages of the life cycle of a weapon, should ensure that the potential use of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems is in compliance with applicable international law, in particular IHL. In determining the quality and extent of human-machine interaction, a range of factors should be considered, including the operational context and the characteristics and capabilities of the weapons system as a whole;

(d) Accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with applicable international law, including through the operation of such systems within a responsible chain of human command and control;

(e) In accordance with States’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, a determination must be made whether its employment would, in some or all circumstances, be prohibited by international law;

(f) When developing or acquiring new weapons systems based on emerging technologies in the area of lethal autonomous weapons systems, physical security, appropriate non-physical safeguards (including cybersecurity against hacking or data spoofing), the risk of acquisition by terrorist groups and the risk of proliferation should be considered;

(g) Risk assessments and mitigation measures should be part of the design, development, testing and deployment cycle of emerging technologies in any weapons systems;

(h) Consideration should be given to the use of emerging technologies in the area of lethal autonomous weapons systems in upholding compliance with IHL and other applicable international legal obligations;

(i) In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized;

(j) Discussions and any potential policy measures taken within the context of the CCW should not hamper progress in or access to peaceful uses of intelligent autonomous technologies;

(k) The CCW offers an appropriate framework for dealing with the issue of emerging technologies in the area of lethal autonomous weapons systems within the context of the objectives and purposes of the Convention, which seeks to strike a balance between military necessity and humanitarian considerations.

These and various other laws of war and laws of armed conflict, or IHL (International Humanitarian Law), serve as a vital and ever-promising guide for considering what we might try to do about the advent of autonomous systems that are weaponized, whether by keystone design or by after-the-fact methods.

Some say we should outrightly ban those AI autonomous systems that are weaponizable. That’s right, the world should put its foot down and stridently demand that AI autonomous systems shall never be weaponized. A total ban is to be imposed. End of story. Full stop, period.

Well, we can sincerely wish that a ban on lethal weaponized autonomous systems would be strictly and obediently observed. The problem is that a lot of wiggle room is bound to slyly be found within any of the sincerest of bans. As they say, rules are meant to be broken. You can bet that where things are loosey-goosey, riffraff will ferret out gaps and try to wink-wink their way around the rules.

Here are some of the potential loopholes worthy of consideration:

  • Claims Of Non-Lethality. Make non-lethal autonomous weapons systems (seemingly okay since it is outside of the ban boundary), which you can then on a dime shift into becoming lethal (you’ll only be beyond the ban at the last minute).
  • Claims Of Autonomous Systems Only. Uphold the ban by not making lethal-focused autonomous systems; meanwhile, make as much progress on devising everyday autonomous systems that aren’t (yet) weaponized but that you can on a dime retrofit into being weaponized.
  • Claims of Not Integrated As One. Craft autonomous systems that are not at all weaponized, and when the time comes, piggyback weaponization such that you can attempt to vehemently argue that they are two separate elements and therefore contend that they do not fall within the rubric of an all-in-one autonomous weapon system or its cousin.
  • Claims That It Is Not Autonomous. Make a weapon system that does not seem to be of autonomous capacities. Leave room in this presumably non-autonomous system for the dropping in of AI-based autonomy. When needed, plug in the autonomy and you are ready to roll (until then, seemingly you were not violating the ban).
  • Other

There are plenty of other expressed difficulties with trying to outright ban lethal autonomous weapons systems. I’ll cover a few more of them.

Some pundits argue that a ban is not especially useful and that instead there should be regulations. The idea is that these contraptions would be allowed but stridently policed. A litany of lawful uses would be laid out, along with lawful ways of targeting, lawful types of capabilities, lawful proportionality, and the like.

In their view, an outright ban is like putting your head in the sand and pretending that the elephant in the room doesn’t exist. This contention though gets the blood boiling of those that counter with the argument that by instituting a ban you can dramatically reduce the temptation to pursue these kinds of systems. Sure, some will flout the ban, but at least hopefully most will not. You can then focus your attention on the flouters and not have to splinter your attention toward everyone.

Round and round these debates go.

Another oft-noted concern is that even if the good abide by the ban, the bad will not. This puts the good in a lousy posture. The bad will have these kinds of weaponized autonomous systems and the good won’t. Once it is revealed that the bad have them, it will be too late for the good to catch up. In short, the only astute thing to do is to prepare to fight fire with fire.

There is also the classic deterrence contention. If the good opt to make weaponized autonomous systems, this can be used to deter the bad from seeking to get into a tussle. Either the good will be better armed and thusly dissuade the bad, or the good will be ready when the bad perhaps unveils that they have surreptitiously been devising those systems all along.

The counter to those counters is that by devising weaponized autonomous systems you are fueling an arms race. The other side will seek to have the same. Even if they are technologically unable to create such systems anew, they will now be able to steal the plans of the “good” ones, reverse-engineer the high-tech guts, or mimic whatever they seem to see as a tried-and-true way to get the job done.

Aha, some retort, all of this might lead to a reduction in conflicts by a semblance of mutual deterrence. If side A knows that side B has those lethal autonomous weapons systems, and side B knows that side A has them, they might sit tight and not come to blows. This has the distinct aura of mutually assured destruction (MAD).

And so on.

Looking Closely At The Second Paragraph

We have already covered a lot of ground herein and only so far considered the first or opening paragraph of the Open Letter (there are four paragraphs in total).

Time to take a look at the second paragraph, here you go:

  • “As with any new technology offering new capabilities, the emergence of advanced mobile robots offers the possibility of misuse. Untrustworthy people could use them to invade civil rights or to threaten, harm, or intimidate others. One area of particular concern is weaponization. We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues. Weaponized applications of these newly-capable robots will also harm public trust in the technology in ways that damage the tremendous benefits they will bring to society. For these reasons, we do not support the weaponization of our advanced-mobility general-purpose robots. For those of us who have spoken on this issue in the past, and those engaging for the first time, we now feel renewed urgency in light of the increasing public concern in recent months caused by a small number of people who have visibly publicized their makeshift efforts to weaponize commercially available robots” (as per posted online).

Upon reading that second paragraph, I hope you can see how my earlier discourse herein on AI weaponization comes to the fore.

Let’s examine a few additional points.

One qualm about a particular wording aspect, which has gotten the dander up of some, is that the narrative seems to emphasize that “untrustworthy people” could misuse these AI robots. Yes, indeed, it could be bad people or evildoers that bring about dastardly acts that “misuse” AI robots.

At the same time, as pointed out toward the start of this discussion, we need to also make clear that the AI itself could go awry, possibly due to embedded bugs or errors and other such complications. The expressed concern is that by emphasizing only the chances of untrustworthy people, the letter seems to ignore other adverse possibilities. Though most AI companies and vendors are loath to admit it, there is a plethora of AI systems issues that can undercut the safety and reliability of autonomous systems. For my coverage of AI safety and the need for rigorous and provable safeguards, see the link here, for example.

Another notable point that has come up amongst those that have examined the Open Letter entails the included assertion that weaponized uses could end up undercutting the public trust associated with AI robots.

On the one hand, this is a valid assertion. If AI robots are used to do evil bidding, you can bet that the public will get quite steamed. When the public gets steamed, you can bet that lawmakers will jump into the fray and seek to enact laws that clamp down on AI robots and AI robotic makers. This in turn could cripple the AI robotics industry if the laws are all-encompassing and shut down efforts involving AI robotic benefits. In a sense, the baby could get thrown out with the bathwater (an old expression, probably deserving to be retired).

The obvious question brought up too is whether this assertion about averting a reduction in public trust for AI robots is a somewhat self-serving credo or whether it is for the good of us all (can it be both?).

You decide.

We now come to the especially meaty part of the Open Letter:

  • “We pledge that we will not weaponize our advanced-mobility general-purpose robots or the software we develop that enables advanced robotics and we will not support others to do so. When possible, we will carefully review our customers’ intended applications to avoid potential weaponization. We also pledge to explore the development of technological features that could mitigate or reduce these risks. To be clear, we are not taking issue with existing technologies that nations and their government agencies use to defend themselves and uphold their laws” (as per posted online).

We can unpack this.

Sit down and prepare yourself accordingly.

Are you ready for some fiery polarization?

On the favorable side, some are vocally heralding that these AI robot makers would make such a pledge. It seems that these robot makers will thankfully seek to not weaponize their “advanced-mobility general-purpose” robots. In addition, the Open Letter says that they will not support others that do so.

Critics wonder whether there is some clever wordsmithing going on.

For example, where does “advanced-mobility” start and end? If a robot maker is devising a simple-mobility AI robot rather than an advanced one (which is an undefined piece of techie jargon), does that get excluded from the scope of what will not be weaponized? Thus, apparently, it is okay to weaponize simple-mobility AI robots, as long as they aren’t so-called advanced.

The same goes for the phrasing of general-purpose robots. If an AI robot is devised specifically for weaponization and therefore is not, shall we say, a general-purpose robot, does that become a viable exclusion from the scope?

You might quibble with these quibbles and fervently argue that this is just an Open Letter and not a fifty-page legal document that spells out every nook and cranny.

This brings us to the seemingly more macro-level qualm expressed by some. In essence, what does a “pledge” denote?

Some ask, where’s the beef?

A company that makes a pledge like this is seemingly doing so without any true stake in the game. If the top brass of any firm that signs up for this pledge decides to no longer honor the pledge, what happens to that firm? Will the executives get summarily canned? Will the company close down and profusely apologize for having violated the pledge? And so on.

As far as can be inferred, there is no particular penalty or penalization for any violation of the pledge.

You might argue that there is a possibility of reputational damage. A pledging firm might be dinged in the marketplace for having made a pledge that it no longer observed. Of course, this also assumes that people will remember that the pledge was made. It also assumes that the violation of the pledge will be somehow detected (it distinctly seems unlikely a firm will tell all if it does so). The pledge violator would have to be called out and yet such an issue might become mere noise in the ongoing tsunami of news about AI robotics makers.

Consider another angle that has come up.

A pledging firm gets bought up by some larger firm. The larger firm opts to start turning the advanced-mobility general-purpose robots into AI weaponized versions.

Is this a violation of the pledge?

The larger firm might insist that it is not a violation since they (the larger firm) never made the pledge. Meanwhile, the innocuous AI robots that the smaller firm has put together and devised, doing so with seemingly the most altruistic of intentions, get nearly overnight revamped into being weaponized.

Kind of undermines the pledge, though you might say that the smaller firm didn’t know that this would someday happen. They were earnest in their desire. It was out of their control as to what the larger buying firm opted to do.

Some also ask whether there is any legal liability in this.

A pledging firm decides a few months from now that it is not going to honor the pledge. They have had a change of heart. Can the firm be sued for having abandoned the pledge that it made? Who would sue? What would be the basis for the lawsuit? A slew of legal issues arise. As they say, you can pretty much sue just about anybody, but whether you will prevail is a different matter altogether.

Think of this another way. A pledging firm gets an opportunity to make a really big deal to sell a whole bunch of its advanced-mobility general-purpose robots to a massive company that is willing to pay through the nose to get the robots. It is one of those once-in-a-lifetime zillion-dollar purchase deals.

What should the AI robotics company do?

If the AI robotics pledging firm is publicly traded, they would almost certainly aim to make the sale (the same could be said of a privately held firm, though not quite so). Imagine that the pledging firm is worried that the buyer might try to weaponize the robots, though let’s say there isn’t any such discussion on the table. It is just rumored that the buyer might do so.

Accordingly, the pledging firm puts into their licensing that the robots aren’t to be weaponized. The buyer balks at this language and steps away from the purchase.

How much profit did the pledging AI robotics firm just walk away from?

Is there a point at which the in-hand profit outweighs the inclusion of the licensing restriction requirement (or, perhaps, legally wording the restriction to allow for wiggle room while still making the deal happen)? I think that you can see the quandary involved. Tons of such scenarios are easily conjured up. The question is whether this pledge is going to have teeth. If so, what kind of teeth?

In short, as mentioned at the start of this discussion, some are amped up that this type of pledge is being made, while others are taking a dimmer view of whether the pledge will hold water.

Let’s move along.

Getting A Pledge Going

The fourth and final paragraph of the Open Letter says this:

  • “We understand that our commitment alone is not enough to fully address these risks, and therefore we call on policymakers to work with us to promote safe use of these robots and to prohibit their misuse. We also call on every organization, developer, researcher, and user in the robotics community to make similar pledges not to build, authorize, support, or enable the attachment of weaponry to such robots. We are convinced that the benefits for humanity of these technologies strongly outweigh the risk of misuse, and we are excited about a bright future in which humans and robots work side by side to tackle some of the world’s challenges” (as per posted online).

This last portion of the Open Letter has several additional elements that have raised ire.

Calling upon policymakers can be well-advised or ill-advised, some assert. You might get policymakers that aren’t versed in these matters and that then do the classic rush-to-judgment, crafting laws and regulations that undercut the progress on AI robots. Per the point made earlier, perhaps the innovation that is pushing forward on AI robotic advances will get disrupted or stomped on.

Better be sure that you know what you are asking for, the critics say.

Of course, the counter-argument is that the narrative clearly states that policymakers should be working with AI robotics firms to figure out how to presumably sensibly make such laws and regulations. The counter to the counter-argument is that the policymakers might be seen as beholden to the AI robotics makers if they cater to their whims. The counter to the counter of the counter-argument is that it is naturally a necessity to work with those that know the technology, or else the outcome is going to potentially be out of kilter. Etc.

On a perhaps quibbling basis, some have had heartburn over the line that calls upon everyone to make similar pledges as to not attaching weaponry to advanced-mobility general-purpose robots. The keyword there is the word attach. If someone is making an AI robot that incorporates or seamlessly embeds weaponry, that seems to get around the wording of attaching something. You can see it now, someone vehemently arguing that the weapon is not attached, it is completely part and parcel of the AI robot. Get over it, they exclaim, we aren’t within the scope of that pledge, even though you might otherwise have assumed that they were.

This brings up another complaint about the lack of stickiness of the pledge.

Can a firm or anyone at all that opts to make this pledge declare themselves unpledged at any time that they wish to do so and for whatever reason they so desire?

Apparently so.

There is a lot of bandying around about making pledges and what traction they imbue.

Conclusion

Yikes, you might say, these companies that are trying to do the right thing are getting drubbed for trying to do the right thing.

What has come of our world?

Anyone that makes such a pledge ought to be given the benefit of the doubt, you might passionately maintain. They are stepping out into the public sphere to make a bold and vital contribution. If we start besmirching them for doing so, it will assuredly make matters worse. No one will want to make such a pledge. Firms and others won’t even try. They will hide away and not forewarn society about what those darling dancing robots can be perilously turned into.

Skeptics proclaim that the way to get society to wise up entails other actions, such as dropping the fanciful act of showcasing the frolicking dancing AI robots. Or at least make it a more balanced act. For example, rather than solely mimicking beloved pet-loyal dogs, illustrate how the dancing robots can be more akin to wild unleashed angry wolves that can tear humans into shreds with nary a hesitation.

That will get more attention than pledges, they implore.

Pledges can indubitably be quite a conundrum.

As Mahatma Gandhi eloquently stated: “No matter how explicit the pledge, people will turn and twist the text to suit their own purpose.”

Perhaps to conclude herein on an uplifting note, Thomas Jefferson said this about pledges: “We mutually pledge to each other our lives, our fortunes, and our sacred honor.”

When it comes to AI robots, their autonomy, their weaponization, and the like, we are all ultimately going to be in this together. Our mutual pledge needs at least to be that we will keep these matters at the forefront, we will strive to find ways to cope with these advances, and somehow find our way toward securing our honor, our fortunes, and our lives.

Can we pledge to that?

I hope so.

Source: https://www.forbes.com/sites/lanceeliot/2022/10/09/ai-ethics-and-ai-law-asking-hard-questions-about-that-new-pledge-by-dancing-robot-makers-saying-they-will-avert-ai-weaponization/