Technostress, Misalignment, and “Artificial Intelligence”

Image produced by Akritasa for Wikipedia, used under a Creative Commons license. From: Neural Networks (Machine Learning)

Lately I’ve been mulling over the misleading nature of the term “artificial intelligence.” Taken without context, it unintentionally connotes an intelligence equal to human intelligence, only artificial. It is not. It is absolutely not. But I believe that much of the recent generative AI hype is stirred by companies taking advantage of this gap in the public’s understanding. The tension caused by this mismatch of expectations is the topic I would like to explore today.

When I was in Library School, I was captivated by a blog post written by Jessamyn West called Technostress and Jerks in the Library. The post accompanied a chapter she had written on the subject of Technostress in the book Information Tomorrow. In the post she summarizes her description of Technostress as follows:

…technology stresses us out when we get stuck between other people’s expectations of what we need to do with technology and what we are actually able to do with it, for whatever reason.

Although many of the examples she cites are specific to a library setting, Technostress can be found in any setting where expectations of information technology can be misaligned:

  • Designers and IT people being expected to build 2.0 tools without any clear sense of WHY they’re building them
  • Managers getting snippy with staff for explaining technology in a way that is over their head, and both people being unclear whose responsibility it is to clear up the lack of knowledge
  • Vendors rolling out new features without fixing core functionality issues in their software
  • Updates, from anyone, that break things

Anyone who has been reading Factor blog posts and LinkedIn articles for a while will sense a theme here. These are alignment issues! She goes on to add a point that distinguishes Technostress from general misalignment:

  • Everyone needing to recognize that in order to improve a lot of the technology we deal with, we may have to admit that some of it is lacking

This last point, in my mind, is what distinguishes Technostress from the general stresses that come from misalignment between teams. Technostress arises when two or more teams are misaligned on the capabilities and limitations of a given technology.

The reasons for Technostress come not from the technology itself but from someone’s failure to communicate what the technology is capable of.

I cannot help but think of Technostress when I consider where “AI” sits on the Gartner Hype Cycle.

The Gartner Hype Cycle figure from Wikipedia, with some helpful markup showing we have passed the Peak of Inflated Expectations and are entering the Trough of Disillusionment. Credit: original work by Jeremy Kemp, remixed under a Creative Commons license by Sherrard Glaittli

It’s generally assumed we are somewhere around the Trough of Disillusionment, with some of us possibly beginning to enter the Slope of Enlightenment. Regardless, the bubble of inflated expectations has popped, and expectations are now bottoming out. Why is that? I venture that the Peak of Inflated Expectations and the subsequent Trough of Disillusionment are the result of West’s observed individual Technostress scenarios working themselves out in the aggregate. This has happened before, but I’m going to talk about it in the specific context of Generative AI and related technologies.

Let’s take our previously mentioned definition and apply it to AI: Technostress arises when two or more teams are misaligned on the capabilities and limitations of a given technology. Who are the teams? Any teams, within the same organization or not, whose shared goals depend on the given technology. These can be IT, Designers, and Management, but the misalignment can also occur between vendor and client, or between a vendor and the general public.

How do teams get misaligned on technical capabilities? It depends on the teams and their context. There are the perils of interdepartmental friction, opposing professional perspectives, or just a breakdown of communication between teams. Technological misalignment can also occur because of marketing hype.

With the release of ChatGPT, powered first by GPT-3.5 and later by GPT-4, there was an explosion of press about the promise of Generative AI. Would it replace all our jobs? Would it become sentient and destroy humanity out of contempt? AI CEOs validated these fears and humbly called for regulation in the process. At the same time, they would cite their plans for Artificial General Intelligence, despite the fact that the roadmap from here to there is wildly theoretical. This served as a distraction from the real limitations of the real products. Let’s walk through a standard generative AI implementation from a Technostress perspective.

A leader sees a generative AI demonstration at a conference. They don’t understand all the technicalities, but the demonstration, built on a model trained on publicly available data, looked amazing. The chatbot could conceivably even replace their legacy tech stack, or at least the database and the front-end UI. And since everyone else is seeing this too, it needs to be implemented ASAP. A vague but firm directive is issued: make our product “AI” somehow. Design and IT teams struggle to turn this directive into concrete technical requirements and desired outcomes. In testing, users struggle when the new AI version of the product is less useful than the legacy product. The product releases anyway and is pilloried in the press.

How did all this misalignment occur? I would argue the misalignment occurred when generative AI was publicly tied to artificial intelligence. Generative AI has nothing, and I repeat, nothing to do with Artificial General Intelligence. But saying “generative AI is a stepping stone to Artificial General Intelligence” imbues generative AI technology with potential powers the technology does not have.

If we are to emerge from the Trough of Disillusionment, we, collectively, need to align our understanding of the term Artificial Intelligence itself. I appreciate the distinction that Simon Willison made in his keynote address at PyCon US this year:

“I don’t think of them [ChatGPT, Claude, Gemini] as Artificial Intelligence. Partly because what does that word even mean these days?… It’s kind of a distraction. So I think of these things as Imitation Intelligence. It turns out if you imitate what looks like intelligence closely enough, you can do really useful and interesting things. But it’s crucial to remember that these things, no matter how convincing they are when you interact with them, they are not planning and solving puzzles. They’re not intelligent entities. They’re just doing an imitation of that.”

Generative AI and related machine learning technologies aren’t going anywhere, but our understanding of their inherent value propositions will keep evolving. We need to understand which models work in which specific use cases. The frameworks to make them Trustworthy, Consistent, Robust, and Responsible are just now being written. With so much up in the air, how can we align our technological expectations and reduce our Technostress? We can start by aligning on the same term, Imitation Intelligence, and the promise and limits that the term implies.

Sherrard Glaittli
Information Architect at Factor