White House AI Bill of Rights Blueprint is Well-Done, but Backwards
Human design, particularly by experts and community leaders, needs to come first
With Miroslaw Manicki
In October 2022, the U.S. White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.
The work is well thought out. It incorporates concepts and priorities that are critical to us all.
It presents a flow of concepts from safety to algorithmic equity, data privacy, notice and explanation, and human alternatives, including the option to opt out of an automated system in favor of interpersonal support. Each of these makes sense on its own. The blueprint does not make for dependable solutions, however, because it presents its principles in the reverse order of how they should be applied.
W. Edwards Deming famously observed that many systems are built in ways that serve to create errors. In large part this is expected, as systems are typically not designed and implemented with the full range of issues and process characteristics of each case in mind. In Deming’s terms, most systems “burn the toast”, leaving an expensive and time-consuming follow-up effort to “scrape the toast”.
This is the problem with the “AI Bill of Rights” as presented. It contains many substantive ideas, but they are arranged in a way that first “burns the toast”, necessitating the removal of burned portions before the desired result can be enjoyed. In short, the five elements of the program are presented in the reverse order of implementation.
By not firmly establishing the human and community requirements of the program at the outset, the blueprint allows problems to fester and grow, leading to increasing disorder and the stresses that accompany it. By not binding system features to the key, detailed requirements of communities and other human institutions, it makes those problems very difficult to resolve later.
We have all experienced unsatisfying interactions with customer support representatives, on the telephone or in online chats. Such representatives often have limited capacity to resolve issues, particularly when those issues have already been “burned in” by the systems themselves.
This can be overcome by, in essence, flipping the blueprint over: firmly placing humans first, in their capacities as community members, experts, and stakeholders, to investigate, to collaborate, and to digitize detailed guidance for system users so that things are done effectively in the first place.
There are actually two kinds of communities here, as Ferdinand Tönnies outlined well. First are the literal communities, which he called Gemeinschaften: the cultures and societies that make up the heart of civilization. Their purpose is to care for and protect their members in all ways. These are typically the users of the systems in question, and they would constitute the validators of the system in the second step of the revised AI Bill of Rights.
The second kind, which he called Gesellschaften, are societies with a purpose. These are needed principally to provide valid content: networks of experts, authorities, interest groups, commercial enterprises, activists, governments, and other formal and informal groups and institutions. The AI Bill of Rights could well be refined by incorporating this distinction.
When the order is reversed, desired outcomes can be solidified and confirmed through comprehensive human design and human feedback that eliminates “back door” corruption, wields untrammeled computational power, and makes safe and effective systems more possible, even likely.
Much is being said about AI — its strengths and weaknesses, its contributions and risks. This includes commentary by us.
Our main point is that such efforts are premature. They give up too soon on human capacity: on our ability to resolve problems, particularly among ourselves. Civilization itself is founded on that capacity. If there is a failing, it lies in technologists’ failure to tap into the primary psyche and capacity of human beings.
We have also written about this.
This is where we should be directing our attention.