Real Knowledge, Hard-earned, by Real People
The artificial intelligence gambit is a ruse
With Miroslaw Manicki
Does someone think that people aren’t very good at thinking? It is true that there are semantic and integration issues. The vaunted collective action problem comes to mind. Improved computerization is an aspect of this.
We discuss fluidity, the effective integration of knowledge-based and authoritative processes. This is the key to improved governance. Behind both questions are many people, organized and individually. These constitute social phenomena, decidedly not the exclusive realm of computers. People need to work through questions of interest and importance to them. This involves the understanding, manipulation, and evaluation of both numbers and symbols — math and language.
As to knowledge, the question is a matter of social networks. In some cases, these can extend throughout the world. As to authority, it is organizations that define and oversee the processes of importance to them. These underscore the legitimacy of organizations, both public and private.
How does artificial intelligence fit into this? It doesn’t.
It isn’t that there aren’t useful software programs, or that many of them don’t make good use of data. Often, they can do things more rapidly than you could on your own. We all benefit from these.
When technologists refer to artificial intelligence, however, they are not talking about these but about something entirely different. They are referring to creative thought, even judgment and discernment. They refer to decision-making done solely by computers, decision-making that is better handled by and among people. Think of the major social media companies and all of their braggadocio about AI — Facebook and Google, for example. How are they at resolving the world’s problems?
Not good.
I had some interactions with a major DARPA artificial intelligence program about twenty years ago, for about five years’ duration. I had access to a technology that could be used by experts and authorities to lay out complex processes based on their knowledge and responsibilities, without any conventional computer knowledge or skill. They certainly did not need to learn or use programming languages to do this.
That led me to take an interest in the DARPA program in 1999, which was titled “Rapid Knowledge Formation”. I tried to get funded into the program. The program manager would not do that, but he did relent and allowed me to listen in and come to conferences on my own dime. His rationale for not funding my project was specious. He said that what I did was not knowledge-based. I pointed out that it was difficult to comprehend how processes designed by experts were not knowledge-based. Maybe that is why he allowed me to tag along.
I readily came to understand the nature of their challenge. They wanted to harvest what they called axioms from experts. Where they referred to axioms, a lay person might say “factoids”. The axioms they wanted were truisms like “daughters are younger than their mothers” or “cows’ milk is white”. They had concluded, prior to the program in question, that they could gather axioms from experts well enough; they just needed to do it ten times as fast as they had in the past.
So, if you were a cardiologist — preferably a leading one by some measure — according to that program you would give it all of the axioms of cardiology that you could think of. They weren’t interested in the flow of ideas and axioms in practice. The computers would fill in the blanks as long as they were fed with facts. The idea was that once they got all of the ideas from a cardiologist, for example, they would feed them into a tree structure they called an inference engine to start to come up with cardiologist-like answers.
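The architecture being described — a base of expert-supplied axioms, plus rules that an engine applies until nothing new can be derived — can be sketched in a few lines. This is a toy illustration of forward-chaining inference generally, not the DARPA program’s actual tooling; the fact and rule names here are mine.

```python
# Toy forward-chaining "inference engine": a base of axioms (facts) and
# if-then rules. Rules fire repeatedly until no new facts appear.
# Illustrative only; not the RKF program's actual software.

facts = {
    ("age_less", "daughter", "mother"),   # "daughters are younger than their mothers"
    ("color", "cows_milk", "white"),      # "cows' milk is white"
}

# Each rule maps the current fact base to a list of candidate new facts.
rules = [
    # Hypothetical rule: if X is younger than Y, then Y is older than X.
    lambda f: [("age_greater", b, a) for (p, a, b) in f if p == "age_less"],
]

def infer(facts, rules):
    """Apply every rule until a fixed point: no rule adds a new fact."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

derived = infer(facts, rules)
# The engine now also "knows" that mothers are older than their daughters.
```

The gap the essay points at is visible even here: the engine can only rearrange the axioms it was fed; every ounce of actual cardiology would still have to come from the cardiologist.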
Ideally, they would be able to have a standard tree for all subject areas, but this wasn’t working. They had set up seven different trees for different purposes. They had to tweak their tools in this manner, giving the computers something of a boost — not from the experts, but from the computer engineers.
There were two major near-term problems with this, a social one and a technical one. The social one comes as no surprise. Leading cardiologists do not find it particularly enlightening to disclose all of the facts of their practice in the course of a day or a week or a month. After a point, they clue in on what is happening and decide they have better things to do with their time — like engaging in cardiology. In the axiom disclosure business, they aren’t really learning anything. It is boring. They more or less clue in on the fact that they are participating in a process that could ultimately convert some kind of robot into an actual cardiologist.
As it turns out, even the most cooperative and giving of experts are, well, smart. They quit.
That was and has always been the goal of the artificial intelligence crowd, dating back to the famous Dartmouth Conference in 1956 and before. Alan Turing hinted at the prospects. It was of the “gee whiz, look what we might be able to do” category rather than the “the world needs this because people can’t think any more” category.
So, in “…proceed[ing] on the basis of [a] conjecture”, they set the world on a ‘gee whiz’ journey without even a notion that such a thing was needful or desirable. Certainly, no literature review of the question was in the offing.
In the last meeting of the DARPA artificial intelligence activity that I participated in, in 2003, they let me make a presentation. In that presentation, I offered the idea that if you were to give the tools to experts to design full models that worked as prototypes, you would be more successful at getting them to cooperate. The point was, gathering up scraps of axioms was getting harder and harder as the experts were wising up.
They responded by asking me a dumb question: “How could matrix data be represented in trees?” It didn’t matter to them that I swatted it away like a lazy fly. They had long before set themselves on the path to disarm society with automated thinking.
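The question was dumb because it has a routine answer: a matrix is already a shallow tree — a root whose children are rows, whose children are cells. A minimal sketch, with names of my own choosing rather than anything from the DARPA program:

```python
# A 2-D matrix represented as a shallow tree: root -> rows -> cell values.
# The node/children vocabulary here is illustrative, not any standard schema.

def matrix_to_tree(matrix):
    """Wrap a matrix (list of rows) in a nested tree structure."""
    return {
        "node": "matrix",
        "children": [
            {"node": f"row{i}", "children": list(row)}
            for i, row in enumerate(matrix)
        ],
    }

def tree_to_matrix(tree):
    """Recover the original matrix; the round trip is lossless."""
    return [row["children"] for row in tree["children"]]

m = [[1, 2], [3, 4]]
tree = matrix_to_tree(m)
assert tree_to_matrix(tree) == m  # nothing was lost in either direction
```

The point is not that this is clever; it is that the objection was trivial, which is why it could be swatted away.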
A man who died in 1950, six years before their vaunted conference, would have been well worth their citing. In fact, they would have been well-advised to heed that man’s call for something that would require computing capacity of some kind. The man was Alfred Korzybski, who had written two highly original books, Manhood of Humanity and Science and Sanity.
Korzybski indicated that some form of cybernetics was needed in order to bring together the two realms of knowledge in what would be a corollary to the elusive unified field theory in physics: A way to bring symbols and mathematical constructs together in a functional way. This, as Korzybski indicated, would advance the prospects of mankind. He said that this would have to wait for “workers of the future”.
This is the needed goal: To leverage technology such that humans can learn to support the needs of the people by learning how to bring knowledge and authority together in cooperative, rational ways. This can be done. This is what fluidity, immersion, and dual control are all about.