Experts Tend to Not Be Stupid

AI amounts to piecing together facts from brain-picking sessions

Kenneth Tingey
15 min read · Apr 21, 2023

With Miroslaw Manicki

The AI fact train

In 1999, the artificial intelligence community had a recruitment problem. They felt that their products could string together queries based on facts gleaned from experts. By daisy-chaining such queries, they could simulate processes and reach outcomes that bore some relation to the facts they had identified.

Axioms can take many forms of representation. Key among these are generic facts, but there are many underlying issues with respect to evidence for any given condition. As seen below, there are varying factors lending credence to purported facts. Often, people think of facts as static and independent of one another.

Facts and similar concepts. Kbuntu/Adobe Stock

The problem was in harvesting such facts. To do so, the artificial intelligence proponents needed to sit experts down and have them divulge ‘facts’ that they ‘knew’. Such facts were known as axioms. They could similarly be called factoids or truisms.

An expert, for example, might know that mothers are older than their daughters. They might know that snow is cold. They might know the shape or color of a bodily organ.

The artificial intelligence community’s self-imposed mission was to put more and more facts into their systems, building a database large enough that strings of facts could lead to reasonable conclusions. For example, you could identify a woman. Then you could see another woman. Based on available data, you could determine that the older woman might be the mother of the other, as sketched below.
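
To make the daisy-chaining idea concrete, here is a minimal sketch of the kind of fact-chaining the AI community had in mind. The facts, names, and the maybe_mother function are hypothetical illustrations, not anyone’s actual system; real efforts of the era used far larger axiom bases and more elaborate logic engines.

```python
# Minimal sketch of chaining stored 'facts' (axioms) into a conclusion.
# All facts, rules, and names here are hypothetical illustrations.

facts = {
    ("age", "Ann"): 62,
    ("age", "Beth"): 34,
    ("same_household", ("Ann", "Beth")): True,
}

def maybe_mother(older, younger, facts):
    """Guess a mother-daughter link by chaining simple axioms.

    Axiom 1: mothers are older than their daughters.
    Axiom 2 (supporting evidence): sharing a household adds credence.
    The result is only a guess; nothing here captures context.
    """
    age_older = facts.get(("age", older))
    age_younger = facts.get(("age", younger))
    if age_older is None or age_younger is None:
        return "unknown"                       # not armed to answer
    if age_older <= age_younger:
        return "no"                            # violates axiom 1
    if facts.get(("same_household", (older, younger))):
        return "plausible (shared household)"  # axiom 2 adds support
    return "possible (age alone)"              # weak conclusion

print(maybe_mother("Ann", "Beth", facts))   # plausible (shared household)
print(maybe_mother("Beth", "Ann", facts))   # no
```

Even this toy version exposes the weakness: the conclusion is only as good as the handful of axioms chosen, and nothing in the code knows when a question falls outside them.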

OK, perhaps certain facts are fully transferable and permanent. How do you know which ‘facts’ those are? The computer experts should not be the ones to fill the breach, but they obviously do, behind the scenes. Experts would be the needed arbiters, but they would have to be on the scene to make such judgments. There are many potential pitfalls in identifying effective avenues for satisfaction in particular cases.

If we reflect upon our languages, we find that at best they must be considered only as maps. A word is not the object it represents; and languages exhibit also this peculiar self-reflexiveness, that we can analyze languages by linguistic means. This self-reflexiveness of languages introduces serious complexities, which can only be solved by the theory of multiordinality [or evaluation in multiple domains or dimensions]… The disregard of these complexities is tragically disastrous in daily life and science (Korzybski, 58).

Complexity in matters of reality — whether neurological, health-related, social, political, or economic — can be seen in the following word cloud, centered on various aspects of context.

Context word cloud based on The Six Kinds of Context by Lee McGaan: https://department.monm.edu/cata/saved_files/Handouts/CONTEXTS.FSC.html. Created with https://www.wordclouds.com/

According to McGaan (2003), there are six main categories of context in play, at least with regard to communication: physical, inner, symbolic, relational, situational, and cultural context. These can be further understood as outlined below, with a brief illustrative sketch following the list:

Physical context: Includes the material objects surrounding the communication event and any other features of the natural world that influence communication. (e.g. furniture and how it is arranged, size of the room, colors, temperature, time of day, etc.)
Inner context: Includes all feelings, thoughts, sensations, and emotions going on inside of the source or receiver which may influence how they act or interpret events. (e.g. hungry, sleepy, angry, happy, impatient, nauseous, etc.)
Symbolic context: Includes all messages (primarily words) which occur before or after a communication event and which influence source or receiver in their actions or understandings of the event. (e.g. previous discussions (words we’ve said) in this class influence how you understand this handout.)
Relational context: The relationship between the sender and the receiver(s) of a message. (e.g. father-son, student-teacher, expert-layman, friend-friend, etc.)
Situational context: What the people who are communicating think of as (label) the event they are involved in — what we call the act we are engaged in. (e.g. having class, being on a date, studying, playing a game, helping a friend with a problem, etc.)
Cultural context: The rules and patterns of communication that are given by (learned from) our culture and which differ from other cultures. (e.g. American, Japanese, British, etc.) Some people have suggested that within the U.S. there are sub-cultures. (e.g. Hispanic, Southern, rural-Midwest, urban gang, etc.) (McGaan).
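
For illustration only, here is a minimal sketch of what it would take merely to represent McGaan’s six context dimensions explicitly, before any reasoning happens. The class name and field values are hypothetical and not part of McGaan’s framework; the point is how much situational information even a simple ‘fact’ silently depends on.

```python
from dataclasses import dataclass

@dataclass
class CommunicationContext:
    """McGaan's six context dimensions as a bare-bones record.

    Populating these fields is the easy part; judging how each one
    bends the meaning of a 'fact' is where expertise lives.
    """
    physical: str     # room, objects, temperature, time of day
    inner: str        # feelings and sensations of source/receiver
    symbolic: str     # messages that preceded or follow the event
    relational: str   # relationship between sender and receiver
    situational: str  # what participants label the event as
    cultural: str     # learned rules and patterns of communication

# Hypothetical example: the same sentence lands differently here
# than it would under a different context record.
seminar = CommunicationContext(
    physical="small seminar room, late afternoon",
    inner="receiver is tired and impatient",
    symbolic="follows a week of email disagreements",
    relational="expert to layperson",
    situational="a graded class discussion",
    cultural="US academic norms",
)
print(seminar)
```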

Another example, broader than communication, can be seen in the definition of living-systems levels and basic physics, outlined below. Living systems, as described by Miller (1978), involve complex process interactions at each level, including matter-energy transformations and complex information-processing transformations embedded in the organisms themselves. As to physics, Newtonian mechanics are prevalent at macro levels, with underlying thermodynamic and quantum-mechanical interactions at smaller scales (Al-Khalili and McFadden, 2014).

Living systems levels (Miller, 1978) and Al-Khalili and McFadden (2014)

This is beyond nuance. Given the speed and directionality of computing, what are the chances that machines are going to pick up on all of these? Our point is that these factors need to be carefully studied by the appropriate experts, as seen below, leveraging their knowledge individually and in established knowledge networks. Technology can and should be central to such efforts, serving rather than dictating in meeting the needs and wants of people.

Model for comprehensive centering on context in matters of nature and society.

Extending from the factors above, including mathematical ranges and values, there are thousands of situations in which the machines will trip up. They only need to be wrong at one step along the way.
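
A back-of-the-envelope calculation shows why being wrong at a single step is enough. Assuming, purely for illustration, that each link in a chain of inferences is independently right 99 percent of the time, the chance that the whole chain comes out right decays quickly with its length:

```python
# Hypothetical per-step reliability; real error rates, and dependence
# between steps, would only make the picture worse.
per_step = 0.99

for steps in (5, 20, 50, 100):
    chain_ok = per_step ** steps
    print(f"{steps:>3} steps: {chain_ok:.0%} chance the whole chain is right")

# Roughly: 5 steps 95%, 20 steps 82%, 50 steps 61%, 100 steps 37%.
```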

The bugaboo of context

Very few facts are independent of context. The string of guesses above could be correct, but it could also be very wrong. Perhaps there are other factors, such as whether the two women lived in the same residence. That would help. How would you know whether you had a preponderance of such information? That could involve still other facts that you would want to factor in.

The answer, as determined by artificial intelligence leaders, was in gathering millions of facts. If they continued on and on, they would have enough facts in the system that problems would essentially resolve themselves.

So, in 1999, they established a program, called Rapid Knowledge Formation, to learn to glean facts from experts at increased rates. I was allowed to follow along and participate in some activities of the program, which was funded by the Defense Advanced Research Projects Agency (DARPA). I could only do so at my own expense, which I did.

Peeling facts from their progenitors

DARPA declared success in its Rapid Knowledge Formation effort, although the pronouncement needed to be weighed against the program’s obvious failure to come to grips with experts. In many cases, the experts who started out soon quit. It was a trend.

After the professional compliments associated with the invitation, the experts would settle in to help. Then came the request: “Give us all of your facts”. OK. “How”, an expert might ask, “should we go about it?”

Extraction of knowledge from one person to another. VectorMine/Adobe Stock

There wasn’t really an easy answer, or an accurate one. Experts do not think in such detached, context-free ways. Expertise is highly situational; there is a logical flow in useful knowledge that is both valuable and interesting to understand. One word might have many meanings, some of which have little to do with the others. Experts have their own concepts and meanings that are indecipherable to outsiders; many facts are embedded in mathematical and logical formulas that are beyond general understanding.

Albert Einstein, for example, had to work through a new kind of mathematics after his breakthrough publications of 1905 before he could understand, and then write out, the implications of his next great work in 1915 (Einstein et al., 2015).

How is a machine going to know it isn’t armed to answer a particular question? How would a human handle such a judgment? For one thing, the decision will not take a nanosecond for a person, or for a group of qualified people, as it would for a computer. Once humans have worked out such an answer, along with its appropriate context or contexts, the path could be computerized so that it happens quickly when it is needed.
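
As a sketch of that “work it out once, then computerize the path” idea, the snippet below stores a single expert-validated answer behind an explicit context check and declines to answer when the recorded context does not apply. The question, context tags, and answer are hypothetical placeholders, not a real protocol.

```python
# Sketch: an expert-validated answer path that runs instantly when its
# recorded context matches, and refuses to answer otherwise.
# The keys and the answer are hypothetical placeholders.

VALIDATED_PATHS = {
    # (question, required context tags) -> answer worked out and vetted by experts
    ("which_procedure", frozenset({"routine_case", "standard_equipment"})): "procedure A",
}

def answer(question, context_tags):
    """Return a vetted answer only when the recorded context applies."""
    for (known_question, required), vetted in VALIDATED_PATHS.items():
        if known_question == question and required <= set(context_tags):
            return vetted
    return "not armed to answer; refer back to the experts"

print(answer("which_procedure", {"routine_case", "standard_equipment"}))
print(answer("which_procedure", {"unusual_case"}))
```

The design point is the refusal branch: the computerized path runs instantly where it has been vetted and hands everything else back to people.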

As noted earlier, sometimes feelings are facts; sometimes attitudes are facts; sometimes neurological conditions determine facts that would otherwise present themselves differently but for human eccentricities.

In the DARPA program, experts were called in and paid to list facts, one at a time, as in a dictionary or a glossary. This can be done in an interview format. There are also systems tools, such as the Unified Modeling Language (UML), which involves thirteen kinds of diagrams and takes a great deal of effort to understand.

There are many problems with such a request. Experts, as seen below, simply do not work that way. They have their own models and methods. As to UML, it is a real game-stopper. Using music as an example, learning to play a musical instrument is complex and challenging, requiring years of effort in most cases. Understanding and working with musical notation, by contrast, is possible with even a little guidance, and it is transferable to all instruments and voices.

In the case of UML, it takes years to master, and it is inherently complex. If it took years to learn the music model, there would be a lot fewer piccolo players and organists. If experts had to first understand UML, there would be far less expertise in the world.

Experienced experts in consultation with younger people. Pressmaster/Adobe Stock

True experts relentlessly focus on finding answers. According to one rule of thumb, they do so for twenty years or more before they have truly deep knowledge (Leonard and Swap, 2005). Such people would certainly not commit their time, at least not for long, to the kind of assignment that adds nothing to their knowledge. As was the case in the DARPA projects, experts soon came to understand that their expertise was wanted not so it could be put to good use, but so that parts of it could be partitioned off: just the ‘facts’. This is clearly questionable and more than a little demeaning. What expert would condone such an effort? The proposition itself is an affront to knowledge and its effective application.

Furthermore, there is a direct threat in the request, and not a subtle one. The point of the project is to purloin the knowledge in question and ultimately to replace the expert in practice. What would the future professional environment be once artificially animated facts ‘took flight’? What would be the effect on methodology? Would methodology even be involved in the ongoing knowledge-creation process? Is methodology, the key to establishing any form of knowledge, somehow embedded within AI models? It doesn’t seem to be, and it is difficult to see how it could be, as methodology is diffuse and complex, subject to symbolic and mathematical intricacies of its own.

It seems that in AI, simple inference is involved, which ignores the very underpinnings of knowledge creation in a complex, nuanced, changing world.

Proponents of AI now say that they do not need experts. The failure to fully recruit experts into projects like the DARPA Rapid Knowledge Formation program likely forced that choice. The leadership of the DARPA project allowed me to make a formal presentation at one of their meetings, in which I suggested that they provide tools to experts for creating prototype knowledge-based applications. My point was that, with such prototypes in hand, the AI people could cherry-pick what they wanted.

I got no response to this. In the course of the presentation to about forty of them, I faced blank stares and silence.

Perhaps they thought that the experts would simply use what they had created, which was exactly what a famous accounting paper had suggested in 1969 (Sorter). Would the need for knowledge-based functionality thus be fulfilled? Is that something they feared rather than celebrated?

If true, that calls their motives into question.

Later, there was one response to my DARPA presentation. One computer scientist, a young man who had been quite friendly to me the night before, asked what they might have thought of as a trick question. Since I was talking about trees as a solution, how would I deal with a matrix, or table? My response was that you list the rows, then the columns, and then specify how the contents of each cell are determined. Easy.

That was met with demonstrable silence.
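
The rows-then-columns-then-cells answer is easy to make concrete. In the hypothetical sketch below, a small table is folded into a tree: one level for the rows, one for the columns, and a rule at each leaf stating how the cell’s contents are determined. The table contents and rules are invented for illustration.

```python
# Sketch of the 'list the rows, then the columns, then say how each cell
# is determined' decomposition of a table into a tree. Contents hypothetical.

rows = ["product A", "product B"]
columns = ["units sold", "revenue"]

def cell_rule(row, column):
    """State how each cell's contents are determined (here, as a description)."""
    if column == "units sold":
        return f"count of {row} shipments in the period"
    if column == "revenue":
        return f"units sold of {row} multiplied by its unit price"
    return "undefined"

# Fold the matrix into a tree: row -> column -> rule for the cell.
tree = {row: {col: cell_rule(row, col) for col in columns} for row in rows}

for row, branches in tree.items():
    print(row)
    for col, rule in branches.items():
        print(f"  {col}: {rule}")
```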

The AI people say that they can get the facts they need out of document searches, from information that is publicly available, typically on the Internet. Even if such a project were possible, it would still be suspect. As outlined by the Pew Research Center (Anderson and Rainie, 2017):

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.

Pew queried “technologists, scholars, practitioners, strategic thinkers and others, asking them to react to this framing of the issue”. Of 1,116 respondents, 51% believed that the information environment would not improve, while 49% thought it would. All acknowledged the problem. Such tepid belief in solutions underscores the difficulty of using Internet content as a basis for reliable decision-making, whether by machines or by people.

How, then, do experts carry out their work? This is a widely studied phenomenon, and there has been considerable research into how experts in supportive environments work together. Among the findings:

Experts excel mainly in their own domains.
Experts perceive large meaningful patterns in their domains.
Experts are fast; they are faster than novices at performing the skills of their domain, and they quickly solve problems with little error.
Experts have superior short-term and long-term memory.
Experts see and represent a problem in their domain at a deeper (more principled) level than novices; novices tend to represent a problem at a superficial level.
Experts spend a great deal of time analyzing a problem qualitatively.
Experts have strong self-monitoring skills (Chi, Glaser, and Farr, 1988, xv-xx).

The important questions are: “How do we leverage and extend knowledge? How do we make use of organized knowledge by means of technology? How do we position the organizations and social networks that sponsor such experts so that they can incentivize, empower, and encourage those experts to do this work?”

We have written about this. Three times. The fact is that knowledge generation is a complex task requiring input from various kinds of scientists, or knowledge workers generally. There are specialists who do narrow, primary, and detailed work on specific phenomena. Generalists then serve a comparative and integrative function, evaluating the nature and context of specific work in nature or society. In an active scientific or knowledge-related environment, there are also journeyman scientists who take steps to study a new field or one that is potentially underserved.

Specialists are often the white-coat, laboratory people. They are the primary science or knowledge workers.

Then there are generalists. They have the difficult task of evaluating the work of specialists and how the pieces relate to each other. There probably aren’t enough generalists in science fulfilling such a role.

Finally, there are journeyman scientists. These are knowledge workers who move into a new or underserved field as specialists or generalists. With time, some of them become established specialists or generalists in the fields in question. In some cases, there are scientists who do well in such roles and relish them.

The lack of support for knowledge and informed decision-making

One problem is that although there has been much commentary in favor of knowledge management, knowledge-based approaches have generally been subordinated to short-term financial objectives and managerial expedience.

Effective Behavior in Organizations by Cohen, Fink, Gadon, and Willits (1976, 1988, and 2001) is a standard organizational behavior text. I studied the 1976 edition in my MBA program in the late 1970s under Stephen Covey, author of the famous The 7 Habits of Highly Effective People. To keep up with the field, I later obtained copies of the 1988 and 2001 editions.

There is an interesting trend across those editions regarding knowledge. As can be seen in the 1976 version, the description of primary management activity is highly sympathetic to knowledge, treating respect for knowledge as a primary factor in the manufacturing and operations of organizations: the manager as scientist.

The 1988 version steps down from this, considering managers merely as involved actors. This is problematic, and certainly passive. What of knowledge under such conditions? What of the legitimate exercise of leadership? A further problem appears in the 2001 edition. Reflecting substantially deteriorated conditions within organizations, the mandate sank to essentially learning to adapt in order to protect one’s career.

Evolution of managerial responsibilities from Effective Behavior in Organizations by Cohen, Fink, Gadon, and Willits, 1976, 1988, and 2001.

This shift comes during a time of instability, but also of opportunity. According to documented trends, jobs have been available during the period in question, but careers have become more an individual creation than an employer’s.

It has been more difficult for older workers in the private sector (Farber, 2008). In the parlance of the employment marketplace, skills have taken precedence over knowledge and deep experience. Skills are considered in light of digital and social media capabilities, which have pervasive implications but have nothing to do with subject-matter expertise apart from that subject. COVID has deepened such trends, allowing for loose, digital integration, with less interpersonal coupling and more attention to work output than to employment, with its associated social and economic safeguards and commitments (Ogunwale, 2022).

Of further concern in the private sector are the effects of extreme finance, in which value of all kinds is peeled away in order to generate short-term cash. As a result, organizations can be seen as “water bugging” their way through operations, depending on the momentum created by predecessors who are no longer there while merely skimming the surface with shallow understanding. Such systems allow operations to continue even though the people are “punching above their weight”.

What happens in times of trouble or fundamental change? This is a difficult question for an organization armed with what amounts to digital clerks. Machines are not going to make up the gap.

References

Al-Khalili, J., McFadden, J. J. 2014. Life on the edge: The coming of age of quantum biology. New York: Broadway Books.

Anderson, J., and Rainie, L. 2017, October 19. The future of truth and misinformation online. Pew Research Center. Available: https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/.

Chi, M. T. H., Glaser, R., and Farr, M. J. 1988. The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum Associates, Publishers.

Cohen, A. R., Gadon, H., Fink, S. L., and Willits, R. D. 1976. Effective behavior in organizations: Learning from the interplay of cases, concepts, and student experiences, 1st ed. Homewood, IL: Richard D. Irwin, Inc.

Cohen, A. R., Fink, S. L., Gadon, H., and Willits, R. D. 1988. Effective behavior in organizations: Learning from the interplay of cases, concepts, and student experiences, 4th ed. Homewood, IL: Irwin.

Cohen, A. R., Fink, S. L., Gadon, H., and Willits, R. D. 2001. Effective behavior in organizations: Cases, concepts and student experiences, 7th ed. Boston, MA: McGraw-Hill Irwin.

Drucker, P. F. 1973. Management: Tasks, responsibilities, practices. New York: HarperBusiness.

Einstein, A., Gutfreund, H. (Comment.), Renn, J. (Comment.). 2015. Relativity: The special and the general theory, 100th anniversary edition. Princeton, NJ: Princeton University Press.

Farber, H. S. 2008. Job loss and the decline in job security in the United States. CEPS Working Paper №171. NBER Conference on Research on Income and Wealth, “Labor in the New Economy,” held November 16–17, 2007 in Bethesda, MD.

Gorman, S. E., and Gorman, J. M. 2019. Denying to the grave: Why we ignore the facts that will save us. Oxford, UK: Oxford University Press.

Heil, G., Bennis, W., and Stephens, D. C. 2000. Douglas McGregor, revisited: Managing the human side of the enterprise. New York: John Wiley & Sons, Inc.

Hormio, S. 2018. Culpable ignorance in a collective setting. Acta Philosophica Fennica, 94, 7–34.

Kaletsky, A. 2010. Capitalism 4.0: The birth of a new economy in the aftermath of crisis. New York: PublicAffairs.

Kornai, J. 1997. Struggle and hope: Essays on stabilization and reform in a post-socialist economy. Cheltenham, UK: Edward Elgar Publishing Limited.

Korzybski, A. 1933/1995. Science and sanity: An introduction to non-aristotelian systems and general semantics. Lancaster, PA: The International Non-aristotelian Library Publishing Company.

Leonard, D., and Swap, W. 2005. Deep smarts: How to cultivate and transfer enduring business wisdom. Boston, MA: Harvard Business School Press.

McGregor, D., and Cutcher-Gershenfeld, J. 2006. The human side of enterprise, Annotated ed. New York: McGraw-Hill.

Miller, J. G. 1978. Living systems. New York: McGraw-Hill.

Ogunwale, S. 2022, September 9. Five key trends shaping the new world of work. World Economic Forum. https://www.weforum.org/agenda/2022/09/five-trends-endure-world-of-work/

Sachs, J. D. 2008. Common wealth: Economics for a crowded planet. New York: The Penguin Press.

Sorter, G. 1969, January. An “events” approach to basic accounting theory. Accounting Review, 11–19.

Tirole, J., and Rendall, S. (Tran.). 2017. Economics for the common good. Princeton, NJ: Princeton University Press.

Richter, W. 2017, October 16. Asset-stripping by private equity firms is booming. https://wolfstreet.com/2017/10/16/whats-booming-asset-stripping-by-private-equity-firms/.

Written by Kenneth Tingey

Proponent of improved governance. Evangelist for fluidity, the process-based integration of knowledge and authority. Big-time believer that we can do better.
