As AI systems increasingly make decisions, human beings need recourse. Policymakers must act quickly to enshrine our rights
By Valerie Hudson
Congress is finally holding hearings on how to regulate artificial intelligence, just as the founders of OpenAI have called for the equivalent of an International Atomic Energy Agency to vet AI efforts for potential harm.
The government will be playing catch-up for some time to come, not only as AI progresses technically but also as it begins to display unanticipated behavior, such as attempting to emotionally manipulate human beings, as New York Times columnist Kevin Roose discovered to his chagrin. The EU, the U.K. and China are much farther down this policymaking road than the United States.
While there are many areas of regulation to be addressed, one of the most pressing is decision-making by AI, described by the acronym AIDM. This spans decisions about consumer loans and government benefits, medical diagnoses, and, already on the horizon, legal guilt and punishment. Some AIDM merely assists human decision-makers; in other cases, the AIDM system is itself the decision-maker. Not only is there documented evidence that biases in training corpora affect AI decisions, but AI has been shown to be flat-out wrong in many troubling cases and has even asserted wrongdoing by completely innocent individuals, such as Jim Buckley, who was falsely identified as the perpetrator of a 1992 bombing.
The very first step in regulating AI decision-making is to establish fundamental principles. I suggest three. If human beings hold the foundational rights to know when they are interacting with an AIDM system, to appeal any decision made by such a system, and to litigate harm resulting from such a decision, then effective governance safeguarding human rights can be constructed.
The right to know
Every human has the right to know when they are engaging with an AI system. Beyond simple notification that they are encountering such a system, individuals should have unfettered access to a standardized identification label listing contact information for the party bearing fiduciary responsibility for the system's performance.
https://www.deseret.com/2023/5/29/23738971/ai-decisions-aidm-human-rights-chatgpt