Anticipating a Pervasive AI Casino

The rhapsodic reveries about artificial intelligence always include dire warnings

Kenneth Tingey
7 min read · May 1, 2024

With Miroslaw Manicki

What are the odds? Do the 8,106,744,361 people on Earth at this moment need the crazy odds brought on by machine-based reasoning?

First, let’s ask important underlying questions. Did humans stop thinking somewhere along the way?

No. Humans still think. Many humans are specifically trained to think, and they do a very good job of it.

A team of medical research scientists works on a new-generation disease cure, using microscopes, test tubes, and data technology. Adobe Stock

Do prevalent computer systems do a very good job of reflecting this knowledge?

No. They do a very poor job of conveying knowledge where and when it is needed. They do a very poor job of representing knowledge in the first place: most documentation of knowledge takes place on computers, but in static forms that cannot be used directly when and where they are needed.

Do leaders in information technology work hard to resolve this problem?

No. Few are concerned at all about the enormous costs to society and to the economy of systems’ inability to reflect knowledge in useful and dependable ways. They focus instead on the sketchy model described by Shoshana Zuboff (2019), who coined the term “surveillance capitalism” for their main line of business.

As Professor Zuboff says, surveillance capitalism is not something that we would want to be subjected to. We would not ask for it.

Enter AI on top of that. This is not a coincidence. In spite of the ubiquity of Silicon Valley products and systems, the high-tech sector has a very big problem: a lack of new products. Its firms are wholly addicted to the rewards of conquest. If they do not land this new one, they are in big trouble (Moore, 2023).

The point is to purloin data and then use it to market to users and to manipulate them. This is the core function of both surveillance capitalism and AI.

I know whereof I speak. In 2003, I was invited to a DARPA (US Defense Advanced Research Projects Agency) conference on “rapid knowledge formation”. I had applied to the project for funding earlier but was turned down because, as the project manager informed me, I referred to “knowledge-based processes” in my proposal. My point was that it would be useful to provide tools to experts so that they could design knowledge prototypes, which artificial intelligence designers could then use to improve their systems. My “knowledge-based process” approach was apparently too ‘human-centered’ for him. That was in 1999.

At the 2003 conference, participants voiced concern in the end-of-project meeting that they had not been able to enlist enough experts to cooperate with them. They wanted the experts to give them all the facts (which they called axioms) pertaining to their fields. The AI people would then use their tools to chain the facts together to “think”.
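
For concreteness, here is a toy sketch of that fact-chaining idea: classic forward chaining over expert-supplied facts and if-then rules. The facts and rules below are invented for illustration; none of them come from the DARPA project.

```python
# Toy forward chaining: start from expert-supplied facts ("axioms") and
# apply if-then rules until no new conclusion can be derived.
# All facts and rules are hypothetical illustrations.

facts = {"fever", "cough"}  # expert-supplied facts ("axioms")

# Each rule: (set of premises, conclusion added when all premises hold).
rules = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "shortness_of_breath"}, "refer_to_specialist"),
]

changed = True
while changed:  # repeat until a full pass derives nothing new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)
# {'fever', 'cough', 'respiratory_infection'}
# 'refer_to_specialist' is never derived, because no expert supplied
# the 'shortness_of_breath' fact -- the whole scheme stalls without them.
```

The sketch also shows why the project needed the experts so badly: with no one volunteering axioms, the chaining machinery has nothing to chain.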

Image from Rapid Knowledge Formation Project. DARPA, 2003

It takes little imagination to understand that experts did not find this particularly interesting, or even a good idea. The logic was much the same as in the recent Hollywood strikes, where actors and other creative artists fought against their work and images being scanned and then used in future productions in lieu of the actors, without any royalties. As it turns out, subject-matter experts of all kinds are smart by definition.

I was able to make my case in an hour-long presentation to the AI people in 2003. They were not in the least interested in anything that would empower experts. My guess is that they worried that once experts were able to design knowledge-based systems, they would just go ahead and use them. If this had happened back in 2003, or at another time when the generative taxonomy model was available, we would be in much better shape now.

They asked one trivial question at the end of the DARPA session, one that a presenter had apparently thought would be difficult for me. When I readily answered it, they stonewalled me for the rest of the session.

Nothing has changed on that score. They still ignore experts in this regard, other than the ones who report AI outcomes favorable to them, as outlined earlier. We continue to insist: go for 100% by empowering the experts in ways that readily extend their knowledge to others.

We have written about this before (https://medium.com/@ken-tingey/ignorance-of-ignorance-and-meeting-the-needs-of-the-people-8465ab8745f9).

The AI proposition is weird. Remember, people still think. Thinking has been enhanced by technology, but a common process environment for extending knowledge is lacking. The failure lies at the feet of the technologists, who now want to reach for the sky, at least in their own way.

The AI people want to take the work product of people at all levels and convert it into a production system that is decidedly not human. They theorize that the human knowledge-production sector can be replaced, with scant knowledge of, or concern for, how science and other knowledge-based enterprises are conducted. These are largely people who skipped college altogether, or at least dodged the “humanistic” parts of their educations, and who are fully committed to showing the rest of us how smart they are by trumping human thinking with their machines. They look at knowledge simplistically, as if it amounted to the rules of a game or a series of equations.

Remember the Cheshire Cat of Alice in Wonderland fame. Numbers are precise, but they do not in themselves convey meaning. Meaning is a dense, semantically complex arrangement of symbols, media, and images, bound up with humanity. The cat’s unsettling smile, lingering after the cat itself has vanished, represents numbers alone, independent of all that.

AI outputs were documented recently in Poland with regard to the diagnosis of autism (Janik, 2024). The report was that AI systems produced better diagnostic results than specialists. Here are the AI results:

As you may know, and as we discuss below, AI tools can produce crazy outcomes, and dangerous ones. In this case, the question is one of autism diagnoses. Faulty diagnoses will probably not cause World War III, as is feared of AI more generally, but they can have devastating results for the subjects in question and for their families and communities.

Here are the specialist results:

Not as good. Much of this can be laid at the feet of insufficient informational support, in terms of both data and guidance.

Here are the underlying results according to the experts. These were used as the baseline against which the other results were determined.

Well, the experts know the answers. Why not provide a system that extends their knowledge to others, without the aberrations brought on by the surveillance capitalists and the guesswork representative of AI?
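
To make the baseline comparison concrete, here is a minimal sketch of how raters might be scored against expert judgments. The labels are hypothetical placeholders, not the data from the Polish study (Janik, 2024).

```python
# Minimal sketch of scoring raters against an expert baseline, in the
# spirit of the comparison described above. All labels are hypothetical
# placeholders; they are NOT the data from Janik (2024).

expert_baseline = [1, 0, 1, 1, 0, 1, 0, 0]  # expert diagnoses (1 = autism)
ai_output       = [1, 0, 1, 1, 0, 1, 1, 0]  # hypothetical AI diagnoses
specialists     = [1, 1, 1, 0, 0, 1, 0, 0]  # hypothetical specialist calls

def agreement(ratings, baseline):
    """Fraction of cases on which a rater matches the expert baseline."""
    return sum(r == b for r, b in zip(ratings, baseline)) / len(baseline)

print(f"AI vs. baseline:          {agreement(ai_output, expert_baseline):.0%}")
print(f"Specialists vs. baseline: {agreement(specialists, expert_baseline):.0%}")
# With these invented labels: AI 88%, specialists 75% -- the shape of the
# reported result, though the real study's figures are not reproduced here.
```

The point of the sketch is the structure of the claim, not the numbers: every such comparison presupposes that the experts’ answers are the standard against which the machines are judged.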

What happens to society in their brave new world? They are not sure. They do often offer the opinion that their products might destroy civilization. There is that (Dowd, 2023).

Following is another chart representing a more realistic set of outcomes. The numbers are representative, but in line with what AI proponents are telling us and what early reported use of AI tools suggests. There can be some good surprises, and possible answers that give rise to interesting perspectives, but there are also wrong answers, crazy off-the-wall answers, and outcomes that are truly dangerous.

Once again, how is it that pervasive AI is worth the risk, given that people continue to generate perfectly good thoughts, and that much good knowledge is being purloined and misused via surveillance capitalism?

My partners and I are prophets in the wilderness in this regard. Since the early 1990s, when I learned about knowledge-based processes as described here, I have had dozens of opportunities to spread the word about direct expert empowerment. It was central to my doctoral research. The phenomenon opens the door to much else, as we have outlined in the 2020 Program for Global Health (https://2020globalhealth.com).

There are those who would say that perfection is not possible. We would recommend that they test the theory at a symphonic music concert, particularly one given by professionals. Count the errors and let us know. It is not likely that you will notice any in the course of the millions of notes you will hear. If symphonies are not to your taste, go listen to something else; the same will be true of professionals in every music genre.

Music performance artists. Adobe Stock

AI proponents — and their cousins, the surveillance capitalism crowd — do not intend to improve human communication. They couldn’t care less — as shown in their behavior — about the kinds of outcomes that people need. They do not intend to leverage the obvious capabilities of computers in calculation, communication, and sorting through classifications. They just want to keep the deals rolling in.

Notes:

Dowd, M. 2023, December 2. Sugarcoating the apocalypse. New York Times. https://www.nytimes.com/2023/12/02/opinion/ai-sam-altman-openai.html.

Janik, J. 2024, April 17. AI is better at diagnosing the autism spectrum than specialists. Warsaw: PAP/Mira Suchodolska.

Moore, E. 2023. Venture capital dry powder has nowhere to go: As the start-up downturn continues, it has become difficult to persuade investors to part with their money. Financial Times. https://on.ft.com/4bl2919l.

Zuboff, S. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: PublicAffairs.
