Issue 4/2014 - Net section


Haptic Feedback

Interview with David Parisi on the role of “touch” in digital environments

Christian Höller


David Parisi numbers among those media studies scholars who examine how particular media function through the prism of the sensory dispositive that these media activate and often also reformat. Within the history of this dispositive, the sense of touch in particular, the crucial foundation of virtually all media use, has so far suffered a strange dearth of theorisation. Parisi seeks to remedy this through a cross-disciplinary discourse on the haptic, in conjunction with a notion of the “media realm” conceptualised from a broad range of perspectives.

Christian Höller: Our prevalent media cultures seem to have been driven, for the most part and for most of their history, by visual and auditory perception. While the dominant interfaces to those prevailing perceptual modes are navigated by touch (as, for example, with the ever more popular “touchscreens”), the sense of touch itself seems to have a particularly difficult standing in the historical development of media technologies. What do you consider the primary reasons for this?

David Parisi: It’s crucial to recognize from the outset—and your question drives precisely at this point—that when we speak of touch’s role in media history, we are operating at the level of narrative; we are referring to touch’s discursive status and positioning, rather than its actual importance in the experience of a given medium.
Actively touching and manipulating media containers—holding a book, turning a page, twisting the tuning dial on a radio, folding a newspaper, pressing the buttons on a television remote control, moving a computer mouse, or pinching one’s fingers together on a touchscreen—provides an experiential frame for the reception of the audiovisual data stored in the container. Further, each of these interactions is underpinned by a set of design decisions that impact and configure the user’s experience with the particular medium in question—the binding of a book, the thickness of the paper the words are printed on, the resistance provided by the tuning dial, the size and shape of the remote control’s buttons, the gestures intelligible to the touchscreen. The question becomes how much emphasis we place on this tangible, haptic materiality when narrating the significant dimensions of mediatic experience.
At this moment in media history, I would suggest that we are experiencing something of an upheaval in the material design and configuration of container technologies. Such an upheaval serves as a McLuhanesque “anti-environment,” calling attention to the existence of an environment that previously operated beneath the level of conscious reflection. In McLuhan’s formulation—his famous “fish don’t know water exists until beached” metaphor—we only become aware of an environment upon its disruption. So once buttons and knobs and pages and bindings are each subsumed by the touchscreen, we begin to attend to their previously taken-for-granted significance.
Specific to haptic interfaces, there’s a long-standing assumption, informed by what Jacques Derrida described as a “haptocentric intuitionalism,” that “touching resists virtualization”—in this formulation, touch possesses a set of innate qualities that make it resistant to the modes of inscription available to the media of seeing and hearing. But the technical lineage of haptic human-computer interfaces, as Derrida pointed out in a very brief treatment of the subject, suggests that this haptocentric intuitionalism is simply misguided, informed by ideology rather than empirical observation—both the “algorithms of immediate contact” (haptics software) and the machines responsible for enacting these algorithms (haptics hardware) have already been engineered and deployed. Considered from this perspective, touch’s virtualization exists as a fact that media scholars confront with reluctance and suspicion.

Höller: The development of human-computer interaction seems to be characterized by the dominance of the graphical user interface. Does that alleged primacy of visuality and optics obscure the fact that other, perhaps equally important bodily or material processes are involved in that kind of interfacing?

Parisi: The graphical user interface (GUI) provided a useful means of mapping computational space, one that made the abstract world of the computer more readily accessible to the human sensorium. This mode of interfacing imposed onto the nascent technology a set of conventional orderings (or biases) imported from prior media forms. But even the earliest GUIs implicated the body—and the hand in particular—in the image. Ivan Sutherland’s Sketchpad and Douglas Engelbart’s computer mouse, for example, each made the body’s movements legible to the computer through a process of motion capture and coding. These input technologies are underpinned by a set of ergonomic considerations, and corresponding norms of bodily comportment, as users attempt to adjust themselves to the materiality of the interface. So as much as the GUI might seem to be a relationship of pure opticality, it is accompanied by a shifting set of bodily techniques for apprehending and manipulating the image—an ‘ergonomic unconscious’ that pushes human bodies into all sorts of odd contortions, all aimed at comfortably and efficiently interacting with the GUI. Watch people distractedly walking down the street as they attempt to read the small screens held in their palms, or shifting about to find the most comfortable position for reading their laptops—each of these bodily habits is an attempt to accommodate screens to bodies and bodies to screens, and each is elided by the framing of the computer as dominated by the relationship between eyes and screens.

Höller: In your work, you trace the development of “bodily interfaces” (with respect to particular media technologies) largely in the realm of computer gaming. What major historical steps have punctuated that development in terms of involving more and more sensory realms? Is gaming at the forefront of making tactile qualities operative in the service of ever better media simulation?

Parisi: Since the non-digital arcade machines of the Industrial Era, those in what can be loosely understood as the game industry have experimented with different modes of bodily interaction between player and machine, with each interface invoking the body in different ways. Industrial-era boxing simulators, the electric shock games popular in early twentieth-century arcades, and pinball machines; digital-era driving and flight simulators; rhythm-game interfaces like the dance mat for Dance Dance Revolution and the instrument-shaped controllers for Guitar Hero and Rock Band; motion-capture interfaces such as Microsoft’s Kinect and Nintendo’s Wii Remote; the forthcoming Oculus Rift and Morpheus VR headsets—each machine invokes the player’s body in a unique way, requiring it to assimilate to the materiality of the game interface so that bodily movements can be productively read.
Specific to the question of touch, and “making tactile qualities operative,” as you eloquently phrase it: certainly, the game industry can be credited with pushing new touch feedback technologies to market. But for each success on this front, there are dozens of haptic feedback game technologies that fail, at least from a commercial standpoint. Gamers, in spite of fetishizing, celebrating, and constantly demanding novelty, actually prove quite conservative in their preferences for interface technologies—in the face of many (often outlandish!) proposed alternatives, the keyboard-and-mouse combo has been the dominant control scheme for PC gaming throughout much of its history. In console gaming, notwithstanding the successes of the Wii Remote and Kinect, the situation is much the same: the physical layout of the console controller and the mechanism it uses to generate rumble (or ‘force’) feedback have remained relatively unchanged for over fifteen years.

Höller: In the historical course of the increasing computerization and virtualization of the senses, the technological simulation of touch appears particularly intractable and elusive. Why has this task proved so difficult overall? And what sorts of “haptic interfaces” would you consider the most successful steps along that difficult path so far?

Parisi: Part of the problem with the technological simulation of touch concerns its multiplicity: from the field’s earliest days, interface designers concerned with Computer Haptics (understood as analogous to the field of Computer Graphics) recognized that touch is not really just one sense, but a useful shorthand for a range of related senses, each with its own associated neurophysiological processes. Sensations of contact, pressure, weight, temperature, vibration, texture, and movement are all gathered together under the umbrella designation ‘haptic’. Designing a haptic interface, then, involves a process of selecting which subcomponents of the haptic system will be stimulated by the device. Some devices are capable of simulating weight but not temperature, while others can simulate texture but not weight. Different systems thus speak to different combinations of these subcomponents.
Adding to this technical problem is the practical question of convincing consumers to purchase the rather expensive devices required to produce robust forms of haptic feedback. Even these expensive devices, due to the aforementioned selective reproduction of touch’s subcomponents, don’t quite deliver on the ‘high-fidelity touch’ promised by their promoters—while the devices provide tactile cues that hint at the materiality of onscreen objects, they often fail to trick their users into believing in the physical presence of absent objects.
In terms of successes: perhaps the most long-standing paradigm in haptic interfacing employs a point-contact-based model of interaction and feedback. For example, the Geomagic PHANToM (Personal HAptic iNTerface Mechanism), designed by Thomas Massie in the mid-1990s at the Massachusetts Institute of Technology, uses a single point of contact between its user’s hand and a virtual environment to provide sensations of weight, pressure, and contact. Single-point devices like the PHANToM have been deployed primarily in professional and specialist contexts (for example, medical simulation, computer-assisted design, and scientific visualization), where they’ve proven quite effective. But the Novint Falcon—a point-contact haptic interface released in 2007 and marketed primarily to college-age videogamers—failed in the consumer marketplace, suggesting that gamers haven’t embraced the aesthetic or utilitarian benefits offered by these devices [cf. David Parisi, “Reach In and Feel Something: On the Strategic Reconstruction of Touch in Virtual Space”, in: Animation, Vol. 9, No. 2, July 2014, pp. 228–244].
However, given the recent proliferation of touchscreen interfaces, the most potentially impactful application of haptics technology involves the use of complex vibrational cues, sent to the fingertips via a set of tiny motors in the touchscreen device, to simulate the texture of onscreen objects and produce realistic so-called “haptic effects.” These sorts of vibrating cues have been employed successfully in videogames (the rumble motors in Sony’s DualShock controller, for example), but touchscreens expand the range of potential applications far beyond gaming. Haptic effects systems—such as Immersion Corporation’s TouchSense—that use targeted bursts of vibration of varying intensity and duration to simulate onscreen events are already widely used in smartphones and tablets. As advances in the precision of actuator motors and haptic effects software continue, engineers expect substantial growth in the accuracy of feedback provided by vibrating touchscreens. And, correspondingly, industry analysts predict a sharp spike in demand for the components required to manufacture touch-feedback-enabled screens—the market research firm Lux anticipates that by 2025, haptics technology will be a $13.8 billion market, up from $842 million in 2012. If their forecasts prove accurate, you’ll soon be able to feel the texture of an orange—or of a piece of fabric, or of a loved one’s skin—as you rub your finger across its image on the screen.
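
To give a concrete sense of the kind of vibrational cue described above, the following minimal sketch shows how an application might trigger a short waveform of timed, amplitude-varied pulses on a current Android device through the platform's standard Vibrator API (assuming API level 26 or higher and the VIBRATE permission). It is written in Kotlin; it is not Immersion's TouchSense system, and the function name, timings, and amplitudes are invented purely for illustration.

    import android.content.Context
    import android.os.VibrationEffect
    import android.os.Vibrator

    // Hypothetical helper: plays a brief "texture-like" pattern of vibration pulses.
    // The timings and amplitudes are illustrative only, not values from any shipping product.
    fun playTexturePulse(context: Context) {
        val vibrator = context.getSystemService(Vibrator::class.java) ?: return
        // Alternating pauses and pulses, in milliseconds.
        val timings = longArrayOf(0, 15, 30, 15, 30, 25)
        // Per-segment amplitude from 0 (off) to 255 (strongest); honoured only on
        // hardware that supports amplitude control.
        val amplitudes = intArrayOf(0, 120, 0, 180, 0, 255)
        // The final argument of -1 means the waveform plays once and does not repeat.
        vibrator.vibrate(VibrationEffect.createWaveform(timings, amplitudes, -1))
    }

The waveform abstraction, a list of durations paired with a list of intensities, is one simple way such effects can be composed: richer textures are approximated by sequencing many short, precisely timed pulses.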

David Parisi teaches at the College of Charleston, South Carolina. In June 2014, he was a guest speaker at the conference Texture Matters: The Optical and Haptical in Media in Vienna (organised under the aegis of the eponymous project, which is funded by the Wissenschaftsfonds FWF). Preparatory work is currently underway on the conference proceedings, which will also contain Parisi’s paper “A Technics of Media Touch”; http://texturematters.univie.ac.at/.

Part II of this interview will be published in springerin 1/2015.