The Great Global Computer Outage Is a Warning We Ignore at Our Peril

ARTIFICIAL INTELLIGENCE-AI, 5 Aug 2024

Image by Markus Spiske.

2 Aug 2024 – July 18, 2024, will go down in the history books as an event that shook the world in a unique way. It gave the mass of humanity a pointed wake-up call about the inherent fragility of the technological systems we’ve created and the societal complexities they’ve engendered. Critical services at hospitals, airports, banks, and government facilities around the world suddenly became unavailable. We can only imagine what it must have been like to be undergoing treatment for a serious or life-threatening illness in an emergency room at the time.

So, what are we to make of this event and how can we rationally get our collective arms around its meaning and significance? As a journalist who specializes in writing about the impacts of technology on politics and culture, I would like to share a few initial thoughts.

For some of us who have worked in the tech field for many years, such an event was entirely predictable. This is simply because of three factors: 1) the inherent fragility of computer code, 2) the always-present possibility of human error, and 3) the fact that when you build interconnected systems, a vulnerability in one part of the system can easily spread like a contagion to other parts. We see this kind of vulnerability in play daily in terms of a constant outpouring of news stories about hacking, identity theft, and security breaches involving all sorts of companies and institutions. However, none of these isolated events had sufficient scale to engender greater public awareness and alarm until The Great Global Computer Outage of July 18.

Inherent Fragility is Always Present

As impressive as our new digital technologies are, our technocrats and policymakers often seem to lose sight of an important reality: these now massively deployed systems are also quite fragile in the larger scheme of things. Computers and the communications systems that support them—so-called virtual systems—concentrate huge amounts of informational power and control, wielding it like an Archimedean lever to manage the physical world. A cynic could plausibly argue that we’re now building our civilizational infrastructures on a foundation of sand.

At the recently held Aspen Security Forum, Anne Neuberger—a senior White House cybersecurity expert—noted, “We need to really think about our digital resilience not just in the systems we run but in the globally connected security systems, the risks of consolidation, how we deal with that consolidation and how we ensure that if an incident does occur it can be contained and we can recover quickly.” With all due respect, Ms. Neuberger was simply stating the obvious and not digging deep enough.

The problem runs much deeper. Our government, like those of other advanced Western nations, is now running on two separate but equal tracks: technology and governance. The technology track is overseen by Big Tech entities with little accountability or oversight relative to the normative functions of government. In other words, they’re more or less given a free hand to operate according to the dictates of the free-market economy.

Further, consider this thought experiment: Given AI’s now critical role in shaping key aspects of our lives, and given its very real and fully acknowledged downsides and risks, why was it not even discussed in the presidential debate? The answer is simple: These issues are often left to unelected technocrats or corporate power brokers to contend with. But here’s the catch: Most technocrats don’t have the policy expertise needed to guide critical decision-making at a societal level, while our politicians (and yes, sadly, most of our presidential candidates) don’t have the necessary technology expertise.

Scope, Scale, and Wisdom

Shifting to a more holistic perspective, humanity’s ability to continue to build these kinds of systems runs into the limitations of our conceptual ability to embrace their vastness and complexity. So, the question becomes: Is there a limit in the natural order of things to the amount of technological complexity that’s sustainable? If so, it seems reasonable to assume that this limit is determined by the ability of human intelligence to encompass and manage that complexity.

To put it more simply: At what point in pushing the envelope of technology advancement do we get in over our heads and to what degree is a kind of Promethean hubris involved?

As someone who has written extensively about the dangers of AI, I would argue that we’re now at a tipping point at which it’s worth asking whether we can even control what we’ve created and whether the “harmful side effects” of seemingly constant chaos are now militating against our quality of life. Further, we can only speculate as to whether the CrowdStrike event was somehow associated with some still poorly understood or unrecognized form of AI hacking or error. The bottom line is: If we cannot control the effects of our own technological inventions, then in what sense can those creations be said to serve human interests and needs in this already overly complex global environment?

Finally, the advent of under-the-radar hyper-technologies such as nanotechnology and genetic engineering also needs to be considered in this context. These are technologies that can only be understood in the conceptual realm and not in any concrete or more immediate way because (I would argue) their primary and secondary effects on society, culture, and politics can no longer be successfully envisioned. Decisively moving into these realms, therefore, is like ad hoc experimentation with nature itself. But as many environmentalists have pointed out, “Nature bats last.” Runaway technological advancement is now fueled by corporate imperatives and a “growth at any cost” mentality that allows little time for reflection. New and seemingly exciting prospects for advanced hyper-technology may dazzle us, but if in the process they also blind us, how can we guide the progress of technology with wisdom?

_________________________________________________

Tom Valovic is a journalist and the author of Digital Mythologies (Rutgers University Press), a series of essays that explored emerging social and political issues raised by the advent of the Internet. He has served as a consultant to the former Congressional Office of Technology Assessment. Tom has written about the effects of technology on society for a variety of publications including Columbia University’s Media Studies Journal, the Boston Globe, and the San Francisco Examiner, among others.

Go to Original – counterpunch.org


