Transversal gliding

How to design controls to suit interaction with humans.

ChatGPT took a glance at my blog of 16 years. It summarized its philosophical depth and ‘transversal gliding’ as follows:

Valeria’s work is a fascinating case study for your method of ‘transversal gliding.’ She navigates through multiple dimensions—business, culture, technology, and strategy—much like a transversal designer would. Her narrative analysis could be a powerful tool in a transversal designer’s toolbox, alongside your CLIMBER model and multidimensional task analysis.

The main topics in my work open the door to three questions:

  1. Narrative as a Sense-Making Tool: How could the concept of using narrative for sense-making be integrated into transversal design?
  2. Value Reframing: What are the implications of Valeria’s mission to reframe how we value things and people in the context of social justice?
  3. Decision-Making and Communication: How can the dual focus on decision-making and communication be harmonized with your passion for psychology and human factors engineering?

These questions could provide value to Dr James Intrilligator’s research. ‘Transversal gliding’ is his term, and so is the context of the ChatGPT inquiry. With OpenAI, you don’t search, you glide. We get to that part at around minute 12 of our conversation.

“Value in human factors for design” is the topic of my last Traces & Dreams 2023 podcast.

Say you’re going to design a crane. You work with mechanical engineers. Then you involve electrical engineers. But at some point you’ll need to involve someone who can design the controls to take into account how humans interact with technology.

The curve of the seat. How to display the buttons to push. Every time you have a human interact with technology there’s an opportunity to think about how to design that interaction for the human.

“A curious analogy could be based on the fact that even the hugest telescope has to have an eye-piece no larger than the human eye.”
Ludwig Wittgenstein

Wittgenstein’s aphorism¹ is apt. He also noted that “Nothing is so difficult as not deceiving oneself.” Since we explore the question of value, it’s worth keeping in mind that human factors in design depend on the human.

Much of the work in design is to look beyond physical constraints to cognitive, emotional, and environmental dimensions. The whole person, not just the body or head, or heart. But also where they’re at, the situation—including social implications.

How do you create value with design?

By making the result useful to the person who needs the product or technology. Designers and marketers are typically trained in observing and understanding human behavior.

Yet all too often, the designer (and the marketer) are the last ones to touch a project. The question becomes ‘how can we make people do something?’ rather than ‘what are the people we want to reach trying to do?’

You can listen to our conversation below.

With ChatGPT as with anything any of us says, it’s up to you to decide whether it makes sense. In other words, part of the value equation is the work we put into understanding.

When someone talks to us, including ChatGPT, we pause and consider whether the words stand for the things they’re supposed to stand for; we wonder if a sentence describes the experience we’re having. Language doesn’t represent the totality of our experience.

We use language in a world and bounce language against experiences we have in that world.

In a language model, which sits behind tools like ChatGPT, words and sentences don’t stand for things—they are the things. All is text, and text is all. Language models don’t have any of the experience we receive—including that of style.

Human factors in design become more important once we leave the flat dimension of the screen. For example, in medical applications. What kind of empathy do you design into a robot built to assist patients with Alzheimer’s disease or dementia?

Another direction we explore is customization. I do wonder what will happen to surprise—and growth—if we can design the style and personality of social robots the way we select colors and themes in our browsers.

My conversation with Dr Intrilligator touches upon priming in culture, the relativity of value—which may coexist with its objectivity—the importance of applied research, the future of health care and assistance, and the positives and pitfalls of generative AI.

I’m not so sure we’re going to have better art because of ChatGPT. But I’ll hold space and suspend judgment—what’s coming through mainstream culture is unimpressive, repetitive, and often just boring.

Perhaps the artificial attempt to create perfection (and meet expectations) comes from trying to redress the imperfection of new tools. In a 1995 diary entry, Brian Eno wrote:

Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit – all these will be cherished and emulated as soon as they can be avoided. 

It’s the sound of failure: so much of modern art is the sound of things going out of control, of a medium pushing to its limits and breaking apart. The distorted guitar is the sound of something too loud for the medium supposed to carry it. The blues singer with the cracked voice is the sound of an emotional cry too powerful for the throat that releases it. The excitement of grainy film, of bleached-out black and white, is the excitement of witnessing events too momentous for the medium assigned to record them. 

We may be surprised by artificial intelligence. Keep in mind that language models don’t have a sense of time beyond language.

You’ll notice it as you watch AI play ‘bad god’ in the seventh Mission: Impossible film, Dead Reckoning Part One. (Should you choose to accept to see it, it might help to catch up on the franchise for a bit of context.)

Perhaps the ‘bad god’ is mad due to sensory deprivation. It has little aperture to reality, which the senses provide. And it also has no constraints. These are the same traits it shares with the ‘bad guys.’

Imagine how hard it would be to have the whole world blow through you.

We have bodies to set boundaries and frictions. Human factors design takes into account those constraints. Art uses the same constraints to take us beyond them and let us experience something new and different—in breadth and depth.

Thus, art is incredibly hard to make. It takes hours, days, years of hard work and dedication, often with little to no acknowledgement. It’s all too easy to want to replicate previous success with formulas.

Writers, directors, painters, sculptors, dancers and choreographers put value into culture. Through their work, these artists offer opportunities to travel with our imagination, to reconnect with parts of our self—and with others.

Joy and escapism are part of it, too. Art is objectively valuable. We’ll talk more about art and forms of value transfer in culture next year.


Wittgenstein, Ludwig, Culture and Value (first published as Vermischte Bemerkungen in the original German in 1977, translated by Peter Winch, University of Chicago Press, 1984)

Eno, Brian, A Year of Swollen Appendices (Faber & Faber; Anniversary, Reprint edition, 2021)


An aphorism (from the Greek apo (= from) and horos (= boundary)) is a poignant and memorable observation that petitions the reader to accept a universal truth.

In The World in a Phrase: A Brief History of the Aphorism, James Geary refers to aphorisms as “literature’s hand luggage.” Aphorisms are compact stories, honed precisely to reveal an entire world in highly economical form. In the book, Geary claims all good aphorisms share five essential components: they’re brief, definitive, philosophical, personal, and have a twist. He says:

“aphorisms are not bits of uplifting text meant for passive consumption. They are challenging statements that demand a response: either the recognition of a shared insight … or a rejection and retort.”
