The ‘AI Letter’ in a Long Line of Such Warnings

Artificial intelligence is indeed artificial, but it is in no way intelligent

Kenneth Tingey
6 min read · Apr 14, 2023

With Miroslaw Manicki

There is the now-famous letter by AI and systems geniuses asking for a slowdown in the introduction of certain kinds of artificial intelligence tools.

The original letter became famous with about 1,100 signatories. Now there are over 25,000 signatures, including mine. The feelings are not universal, however. There is now a letter complaining about the letter. Its signatories include still more famous people, and its message is a kind of spooky "Pandora's Box" take on the situation: it is dangerous to try to hold back machine intelligence. Under such thinking, the machines must take over, and it must happen now.

https://www.cnbc.com/2023/04/06/bill-gates-ai-developers-push-back-against-musk-wozniak-open-letter.html

This back-and-forth over "thinking machines" happens time after time. Ever since the 1956 Dartmouth conference on artificial intelligence, there have been announcements of an impending 'singularity' when machines would take over thinking for us, as we are presumably so bad at it.

It works for them; they are all autistic. This is an observation and not a diagnosis.

Artificial intelligence is their fever dream; they want it to happen. This is how they get revenge for not going to the junior prom. This is how they get to be the cool guys in their high school reunions like the smart, rich, dapper guy in the movie “Peggy Sue Gets Married” that Peggy Sue comes to admire.

This is the cycle: They get worked up about some function that they automate. They vastly overstate both its capacity and its implications. They announce that there is more of the same to come. When that does not occur, they cast their forecast out further, but with larger claims (Kurzweil, 2005). Reference to “strong AI” in their literature seems to provide some level of comfort to AI insiders, but only cold comfort to others:

Eliezer Yudkowsky has extensively analyzed paradigms, architectures, and ethical rules that may help assure that once strong AI has the means of accessing and modifying its own design it remains friendly to biological humanity and supportive of its values. Given that self-improving strong AI cannot be recalled, Yudkowsky points out that we need to “get it right the first time,” and that its initial design must have “zero nonrecoverable errors.”

Inherently there will be no absolute protection against strong AI. Although the argument is subtle, I believe that maintaining an open free-market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values. As I have pointed out, strong AI is emerging from many diverse efforts and will be deeply integrated into our civilization's infrastructure. Indeed, it will be intimately embedded in our bodies and brains. As such, it will reflect our values because it will be us. Attempts to control these technologies via secretive government programs, along with inevitable underground development, would only foster an unstable environment in which the dangerous applications would be likely to become dominant (Kurzweil, 2005, p. 420).

Under baseball standards, after three strikes a batter is out. We count twenty-nine 'strikes' in the passage above, in the form of loaded words and phrases. Jointly and severally, these represent alarming prospects.

Another matter of concern is that the material was deemed worthy of publication at all. Perhaps under future conditions, in which their 'strong AI' had presented itself, the authors could claim that they had warned us. By their terms, there would be no backing down. Even governments would be considered 'secretive' and thus illegitimate. All would be subject to the rage of the machines.

This is all chilling, to say the least. Why don’t they just quit?

Artificial intelligence enthusiasts have a problem with actual knowledge. There is no mystery to this. They typically have academic credentials, but blithely cast aside the importance of others' knowledge. Twenty years ago I worked with a DARPA project intended to support 'rapid knowledge formation' by querying experts for facts (referred to by the project as axioms) that could be fed into AI programs for decision-making, apart from other aspects of the experts' knowledge.
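The 'facts as axioms' approach described above can be sketched in miniature: expert knowledge captured as if-then rules and applied by simple forward chaining. The rule engine and the sample rules below are illustrative assumptions, not the DARPA project's actual system.

```python
# Minimal sketch of expert facts treated as axioms: each rule is a
# (premises, conclusion) pair, applied by naive forward chaining.
# The clinical-style rule names are hypothetical examples.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical axioms an expert might supply.
rules = [
    (("fever", "cough"), "suspect_infection"),
    (("suspect_infection", "positive_test"), "confirmed_infection"),
]

derived = forward_chain({"fever", "cough", "positive_test"}, rules)
print(sorted(derived))
```

The point of the essay's complaint is visible even in this toy: the experts supply the axioms, but the program, not the expert, organizes and applies them.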

Experts engaged in collaboration. MandriaPix/Adobe Stock

I suggested they at least arm experts to take their hard-earned facts and organize them usefully, but the AI people were not in the least interested. My point was that if they allowed the experts to do this, they would be more cooperative and supportive of the process of gleaning facts from them — which was admittedly a problem. As it turns out, experts tend to be smart, especially when it comes to their life’s work.

I sensed that the AI people were concerned that once armed with such capacity, the experts would create systems that reflected their knowledge and never look back. Such a concern, if true, would be difficult to reconcile with the declared commitment of AI proponents to the advancement of civilization generally. Their kind of civilization, clearly, would at least partially be determined by machines. More than that, machines by their description would have the final say. How could that in any way be a good thing?

Many leading IT entrepreneurs behind the personal computing and social media revolutions famously figured out technology by themselves, thus avoiding the need for formal educations. Interestingly, in some cases they have taken to lecturing the rest of us on how education should work, even though they never really got one themselves.

We have written on this subject.

The question presents itself: Why is this happening now? Why is AI being pushed onto society so hard in recent months? Has there been a substantive breakthrough in the challenge of artificially mapping and surpassing the capacity of the human mind? Perhaps, but crazy results from ChatGPT sessions call this into question.

There have been setbacks in tech for all to see, and these may be the more salient reason for the AI onslaught. Silicon Valley deal flow is moribund, and the same is true of the IPO sector generally. Possibly related, the failure and weakness of tech-centered banks is unprecedented. Famously, Credit Suisse has been one of the most aggressive dealmakers for decades. Venture capitalists are spreading far and wide for solutions. Some may need money; others need deals. Either way, what we see is instability (Kinder and Hamilton).

Fundraising by venture capital firms hit a nine-year low at the end of 2022, according to research firm Preqin.

VCs are sitting on a record $300bn of “dry powder” — money raised that has not yet been deployed. But many are struggling to find lucrative investments in start-ups and will be unable to raise a new venture fund.

Cash that VCs put into start-ups has plunged more than 50 per cent over the past 12 months, according to data provider Crunchbase (Kinder and Hamilton).

Our sense is that these financial pressures are more real than anything coming out of the AI laboratories. If the AI ventures are so great, why don't the venture capitalists just invest in them?

Our point is that human cognition has never been better. Technology can be used to reflect our collective knowledge once we have appropriately organized it and presented it in digital forms. Let’s make use of the speed and breadth that can come from electronic systems.

References

Kinder, T., and Hamilton, G. 2023, April 11. Silicon Valley VCs tour Middle East in hunt for funding. Financial Times. Available: https://www.ft.com/content/567ca518-b138-4273-bfe6-0712ef31e01d.

Kurzweil, R. 2005. The singularity is near. New York: Penguin Books.

Waldrop, M. M. 2001. J. C. R. Licklider and the revolution that made computing personal. New York: Penguin Books.
