by Oliver Wrede
published in “formdiskurs – Zeitschrift für Theorie und Design”, January 1997
[ available in German ]
The personal computer has resurrected an ancient art: the art of remembering. While desktop interfaces have enabled the average human to actually use the computer, they fail miserably to support future knowledge work, which will come to resemble more and more the skills of mediaeval scholars.
Columbus’ contemporaries prophesied that he would fall off the edge of the earth when he reached the end of the sea. The first railway passengers had to face the superstitious belief that they would leave their souls behind if they traveled too swiftly. In the 17th century, it was thought that the heads of those who traveled in their minds through history would overflow with pictures. Today, there are many who maintain that virtual reality leads to the alienation of our own imaginary worlds because it reduces interaction with a machine to a perceptual level, thus compensating for the users’ own lack of imagination; after all, for many people creativity peaks in early childhood.
At first glance, this proposition appears naive, although such concerns are not inevitably only the product of conservative or Luddite attitudes. They can also go hand in hand with a progressive view which believes that to date there is no self-reflective, emancipatory way of approaching the new media – and not only at the design level, but also and in particular with respect to the ways in which such media are used. This may be one reason why topics such as remembering and memory have been the focus of a series of recent articles. For the order of the day is to find new conditions for ways of using the world of multimedia, which is becoming increasingly free of technical restraints thanks to digitalization and networking. As a special method for tapping into the memory, mnemonics (or the art of remembering) plays a unique role in this context, specifically because it is in principle independent of ongoing technological developments.
Progressive de-materialization is rendering information and communications direct, omnipresent, and simultaneous; they no longer exercise a representative function for what is authentic and continuous, but are rather potentially real, especially in terms of their impact. The drive to make everything electronic is turning the existing pool of information into a veritable flood, although electronically published information accounts for only a small share of the total as yet. The information catastrophe forecast by critics of this development is already in progress. The general public would appear to view the flood as just as informative as it is misinforming. The most effective life jacket in the tidal wave of information is often a waterproof strategy of ignorance, closely followed by intelligent agents trained to retrieve and apportion information.
Steve Jobs explains in an interview that electronic information will no longer consist exclusively of signs representing language, but rather of objects which can reciprocally influence each other. Jim White of General Magic predicts that messages will become computer programs, with the matrix consequently mutating into a universal machine, with every pocket calculator capable of locking into its vast possibilities. Like the stuff messages are made of, information diffuses through the networks and is collected by agents designed for that purpose; all they require is confirmation of payment in order to make the encoded material available or to perform an encoded service. This cycle does not exclude established purveyors of information (radio, publishing companies, libraries); instead, the stability of such institutions can be integrated, put to use, and developed further.
In this respect, there are three decisive areas for interface design; development in these areas is determined primarily by economic conditions:
1. methods for acquiring/creating knowledge; teaching media;
2. artificial intelligence and paradigms for interaction;
3. forms of interpersonal cooperation.
Each of these areas is subject to continual change. Thus, the conditions for design are also changing constantly. All these areas will overlap to an increasing degree, at some point perhaps merging to create an overarching problem. There will always be a central aspect of the feedback control system between human and machine which can at best be approached by means of models. Such models are being discussed in the cognitive sciences, fields which view the computer as a malleable material that provides ideal terrain for the constructive acquisition of knowledge, with direct consequences for communicative action.
The élan of the information media could be channeled into revitalizing education. Here, electronic teaching media are both an opportunity and an objective necessity, for their customized teaching programs initially promise more efficient learning. At the same time, teachers in present educational institutions are forced to reconstruct increasingly disintegrating forms of cooperation and socialization using ever fewer resources. Thus, electronic teaching media cannot be expected to make up for a lack of pedagogical ability. We can expect there to be a shift in the standards used to evaluate success in learning. Individuals will have to absorb and assess more information than previously in order to arrive at relevant insights.
The density of information to be coped with varies in quality. In multimedia, text can be transferred easily and in large quantities thanks to its digital character; at the same time, there are increasing possibilities for manipulating and constructing images by breaking them down into pixels that the human eye can barely perceive. It then becomes the burden of the user to handle the conflict between the relativity of atomized alphabetical expressions and symbolic graphic constructions. The user has to make certain that s/he does not lose his/her way in an ambivalent information mass; however, finding one’s way is increasingly difficult in the face of expanding possibilities. The limits set on information design are not determined by perceptibility, but rather by how much information can be transformed into knowledge. While we have learned to “outsource” areas of memory with the help of technical storage devices, the memory now has to find and remember this information, and create or reconstruct a greater number of judgments by referentially linking the various bits.
The first known records of mnemonics date back to Classical Antiquity. There, the creation of a memory system was at the heart of the art of remembering as a method – here, the mnemonic images are placed at imaginary locations (loci or topoi). In Classical rhetoric (Cicero, De Oratore), such imaginary locations – e.g., the rooms of a house – made it easier to speak without notes, as the orator could wander mnemonically through them, calling up images to monitor the progress of the speech. However, other imaginary topologies such as landscapes, paintings, parts of the body, stories, and the like could also serve as memory systems. For a long time, mnemonics was considered a secret art, which for scholars represented a key to wisdom, as there was in those days no external means for storing knowledge. After mnemonics had been saved from oblivion by medieval authors, in 1610 Johann Heinrich Alsted used their works as the basis for creating his Systema Mnemonica, an encyclopedic compilation on mnemonics. Alsted’s urge to systematize everything contradicted his own view, namely that remembered images influenced the soul and were consequently of fundamental importance for psychology. A number of dissertations on the origins of didactic theory quote Alsted’s student Johann Amos Comenius, who took a decisive step forward by examining the works of enlightened pedagogues whose criticism of mnemonics was motivated by a fear of the power of images. In his Bohemian Didactics, Comenius reworked mnemonics to create a representational symbolic order. He proposed teaching human anatomy using a model which consisted of a labeled leather reproduction of the human body. In this way, he ensured that the material which was to be remembered and the memory system used for the purpose of remembering matched, with the latter in turn being given symbolic references in the form of labels.
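The loci method described above amounts, in the abstract, to an ordered mapping from imaginary locations to vivid images, walked in a fixed sequence. A minimal sketch, purely illustrative and with all names invented for the example:

```python
# Illustrative sketch of the classical loci method: an ordered sequence of
# imaginary locations (loci), each holding a mnemonic image that stands for
# one point of a speech. All names here are invented for the example.

loci = ["entrance hall", "staircase", "library", "kitchen"]

# The orator attaches one vivid image per locus.
palace = {
    "entrance hall": "a ship (the voyage of Columbus)",
    "staircase": "a steam engine (the first railways)",
    "library": "an overflowing head (17th-century fears)",
    "kitchen": "a headset (virtual reality)",
}

def recall(loci, palace):
    """Wander through the rooms in order, recalling the image stored at each."""
    return [palace[locus] for locus in loci]

for locus, image in zip(loci, recall(loci, palace)):
    print(f"{locus}: {image}")
```

The point of the method lies in the fixed traversal order: the topology, not the individual image, guarantees that nothing is skipped.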
In the end, mnemonics was crushed by the triumphant forward march of literary storage space – made possible by the invention of the printing press – and was considered to be frivolous and obscure. It was rediscovered in part thanks to the electronic mass media, which have once again caused non-literary media to account for a greater share in how we experience the world and have rekindled the conflict between the sensory and topological imagination, on the one hand, and logical, symbolic representationalism, on the other; a dispute which has lain dormant since the 17th century.
As regards the future design of graphic interfaces, a knowledge of mnemonics is useful in particular with respect to “cognitive tools”, which fulfill their intended purpose only when the user brings his own knowledge and intelligence to bear. The point is not to reduce a discussion on information design to an instrumental consideration of one kind of mnemonics; however, a great many theories have developed based on mnemonics. The concept of hypermedia makes it possible to organize contents topologically, while VR can be used to reproduce the locations of memory visually.
The desktop as topos
But you do not have to go quite so far looking for mnemonic constructions in the computer: the first widespread VR application of this kind on a computer was a highly abstract and relatively flat (although in no way two-dimensional) replica of an office with a desk. It is not a pure metaphor, as has been suggested by the use of the word “desktop”, which has meanwhile become a buzzword. The degree of explicitness – created by metonymic and iconographic elements – goes beyond the merely metaphorical, which tends to cloak the unknown in the familiar. Even if the desktop metaphor still fulfills its function today – after all, the majority of computers are used in an office – its primary advantage lies in facilitating use. This is true in particular because the organization of a computer is simultaneously explained and visualized by the interface. This readily tangible organization is, however, carried ad absurdum by the amorphous overall machine (with its several million CPUs), which shows us its topology in several layers at once (geographical, chronological, dialogical, etc.). Such multiple dimensions are not a design shortfall, but instead are quite necessary if users are to be able to tap their own memory potential in a constructive way, to the extent that knowledge is generated by forging links and insights are documented and expressed by creating structures.
Following the invention of the programmable machine, mathematical logic gave rise to computer science. A conflict emerged from the discussion of the possibilities opened up by algorithms for thought processes and the digitalization of knowledge. The discourse on the relationship between humans and machines is still taking place today. On the one hand, this no doubt stems from the increasingly faster and cheaper processors which make the unthinkable suddenly appear feasible; on the other, it is also a response to the claim put forward by numerous AI researchers that psychological processes and functions of thought can in principle be traced back to physiological realities, and can therefore in principle be given a mechanical form.
“Outsourcing” large parts of the memory – the existence of which is a prerequisite for intelligence – is conceivable, at least in the form of global information storage units, which can be accessed telematically by a kind of computerized collective memory. However, there is much uncharted terrain between memory and knowledge, despite the fact that the more radical AI researchers stop at nothing to find clues to this mystery, decrying all hypotheses that exclude a causal link and a purely empirical epistemological paradigm as the sentimental exaggeration of the human being as some mythical beast. Alan Turing’s definition of artificial intelligence – a perfect imitation which, while it presupposes human judgment, no longer depends on defining intelligence as the consequence of physical functions – helped free the issue from the constraints and concerns in the humanities of the day. Thus, on an experimental level, attempts are being made to devise hypothetical rules which substitute for unknown functions of human intelligence, and then determine whether the knowledge gleaned from the processed information is identical, or even similar, to human knowledge. By continually adjusting these rules, the wish is to achieve a degree of similarity which goes beyond what can be achieved through algorithms. The information collected during the processing phase is used for the further autonomous development of this phase. In contrast to Turing’s definition of an intelligent machine, in this approach the ability to learn is considered to be the basis for intelligence.
It is at this point that computer scientists are beginning to take an interest in learning theories, or more precisely: in models of knowledge. While the goal is not to make computers “human”, they are nonetheless to be furnished with modes of behavior which one normally expects from other members of society. This means, for example, that occasionally breaking the rules must be offset and individual characteristics (habits, preferences, knowledge, tolerance, etc.) must be taken into consideration. Here, the machine is no longer a purely reactive but instead an active system. Large software companies are financing projects with names such as “social interface” or “knowledge agent” which suggest that the discoveries made in the cognitive sciences will be of importance for the next generation of interfaces.
In his publication “The Society of Mind”, Marvin Minsky attempts – very much in keeping with Alsted’s method – to devise a kind of “Systema Cognita” and describe the phenomenon “intelligence” as the result of combined mechanisms. Minsky describes these entities as “agents”, which require memories in order to achieve a degree of consistency and to repeat past actions. To the extent that sociologists, for example, do not reject the existence of collective intelligence, things begin to overlap when a form of collective memory emerges thanks to a corresponding network and externalization, and group-specific “agencies” are created. Minsky distinguishes between “polynemic” and “isonomic” concepts of communication as employed by the various agents. While polynemes evoke an individual response from each recipient, isonomes leave the same impression on different recipients. Minsky writes: “Both isonomes and polynemes have to do with memories – however, in essence polynemes constitute memories themselves, while isonomes control the way in which memories are used. […] Therefore, the strength of polynemes is derived from the way in which they learn to stimulate a number of different processes at the same time, while isonomes are able to exploit abilities which many agencies already have in common.” If our communication is based on isonomic concepts (conventions and symbolic systems), the question then arises as to how collective polynemes (memories) which supposedly originate in a collective memory are transported in the first place. To the extent that the concept of agencies can be transposed onto this domain at all, the construction of polynemes would have to be a collective achievement.
The well-known “desktop” metaphor can once again serve as an example. The elements of known interfaces are standardized, but they increasingly allow individual specifications to be taken into account and unique ordering systems (or disorder) to be created which provide external reference points for the internal medium of remembering. Anyone who maintains a complex informational web will avail himself of such possibilities, just as he will not forfeit being able to organize his desktop or office (or to maintain his own chaos). This kind of structure is first generated by the relationships of the elements to one another (structural definition). In designing interfaces, being able to provide “functional definitions” of things is always problematic, because such definitions always imply a very general user who is nonetheless expected to be brought to understand them. Just as remembering the relationship between items of information serves to maintain structures, so, too, it must be possible for the user to define the relationships between the objects in interfaces if they are to be used as mnemonic systems.
While storage in an external memory presupposes merely motoric action (e.g., creating a notch in a stone with a chisel, or clicking a mouse), storage in a biological memory requires a capacity for understanding that simultaneously prevents the person remembering from getting lost in a jumble of unrelated information. Motoric storage is only minimally useful until storage and medium blend into an “intermedium” and allow the capacity for understanding to be “inscribed” along with the information; only in this way does the act of storing information become an action which creates a structure. Information media users, future computer specialists and producers of information must endeavor to ensure that these structures have a communicative value; only thus will they be able to function as agents within the matrix, helping to spawn new forms of cooperation and helping all participants to act more effectively. Graphic interfaces can be a hindrance rather than a help if they are not designed for this purpose.
The sum total of features which a user can describe for an object (stored information or its representation) is often very small. However, there are situations in which far more features are available than are assigned to an object in the interface, or can be perceived visually or aurally. This is especially true of the relationships among the objects, for only these allow us to establish a context in the first place. The “interaction grid” is usually determined in such a way that the user is unable to instrumentalize his ability to make statements about the objects. This has partly to do with the fact that the first requirement of graphic user interfaces was to replace the command prompt and visualize the file systems. From the present point of view, the command prompt had more to do with mnemonics than the graphic interfaces which replaced it, for when the user wanted to know what s/he had to enter next, s/he hardly had any other option than to rely on memory – a good memory was practically a prerequisite for being able to use the machine. This only superficially contradicts the demand that more differentiated forms of externalization be made feasible. The primary difference between the command prompt and the graphic interface was not that one fostered mnemonics better than the other.
Rather, it lay in the effort that had to be invested in learning how to use the computer, and in the very tasks that were to be solved using these devices – a further comparison would thus be completely inappropriate. At present, the tasks which users are to perform with the help of a computer are being redefined along these lines. The computer serves not only as a tool for producing media, but also as a tool for producing information itself.
However, there are reasons for claiming that the progressive externalization of memory will ultimately cause our ability to create mnemonic images to atrophy – if externalization is taken to mean that the function of these mnemonic images can be replaced. We need to design forms that stimulate and activate aspects of remembering. Such mnemonic constructions cannot always become “transsubjective”, nor can they be defined outside the particular context in which they are used, which may in part be a reason why until now interface design has only tentatively confronted this challenge. A part of the task must be delegated to the user, who thereby participates in the design process. As long as concepts for interfaces are based on models of computer functions rather than on models of users’ thought processes, we will not be able to get away from defining knowledge as an instrumental form of understanding. If individual characteristics and ways of working are not to exert any significant influence on interface design, then the user’s cognitive potential is wasted on learning particular mechanisms and modes of operation which are of use only for the medium in question.
If interface design does not confront this problem and continues to prioritize the hitherto usual claim that the interface is the exclusive agent, then the increasing capacity of the new media for audio-visual differentiation and their ever-greater flexibility in terms of design will force users into accepting an increasingly passive role. Users would then merely zap elegantly through possibilities instead of being empowered to understand the available information as raw material that is to be processed and worked up actively. Interface designers will then leave themselves open to the criticism that they are at best meeting their own needs rather than those of users. Using the computer as a medium will become an end in itself, and the attempt to make this appear legitimate by appealing to the inevitability of technological progress is merely an excuse.
Kittler, Friedrich / Matejovski, Dirk (Eds.): Literatur im Informationszeitalter, Frankfurt/Main 1996
Kuhlen, Rainer: Hypertext, ein nicht-lineares Medium zwischen Buch und Wissensbank, Berlin 1991
Schulmeister, Rolf: Grundlagen hypermedialer Lernsysteme: Theorie – Didaktik – Design, Bonn 1996
Minsky, Marvin: The Society of Mind, New York 1994, CD-ROM
Weizenbaum, Joseph: Computer Power and Human Reason, 1976
Bartels, Klaus: Die Welt als Erinnerung – Mnemotechnik und virtuelle Räume, in: Spuren, Nr. 41, April 1993, p.31 ff.
Spangenberg, Peter M.: Beobachtungen zu einer Medientheorie der Gedächtnislosigkeit, in: Kunstforum – Konstruktionen des Erinnerns, Bd. 127, Juli-September 1994, p. 120-123
 See Kuhlen, “Zur Virtualisierung von Bibliotheken und Büchern”, in: Kittler/Matejovski 1996, p. 116
 Steve Jobs interviewed by Gary Wolf: “The next insanely great thing”, in: Wired 4.02, February 1996, p. 102-107 and p. 158-163
 See “Das Postscript der Telekommunikation”, in: MACup, February 1994, p. 22 f.
 With reference to possible criticism of computer scientists by members of the humanities, it should be noted that there is only a minor difference between the model of man-computer interaction and that of computer-supported interpersonal interaction. In many cases, both models serve to present similar contents, although they emphasize different aspects. Communicative interaction in the man-computer interaction model is just as plausible as is the fact that computer-supported interaction between two humans can also involve only one person and one machine playing a role in solving a concrete design problem. Here, the concept of man-computer interaction has been borrowed from the information sciences; I might also add that interaction with a computer can go beyond an input-output model that is reminiscent of instruction, and the above-mentioned feedback control system does not necessarily have to represent a known system.
 A number of research project reports conclude with observations on possible changes in the social behavior of the test person or group. These observations differ radically depending on the project sponsor and the design of the experiment. No fundamental finding has been reached as to whether computer-supported media lead to positive or negative changes in social behavior. The novelty of the media makes it possible to take up plausible positions somewhere between techno-euphoria and cultural pessimism, positions which are not atypical in this connection.
 Critics of hypertext systems consider one of the greatest weaknesses of the concept to be the burden placed on the user by having to orient himself and navigate through the hypertext. This criticism has induced hypertext writers to reflect on the design of the structures, as well as new ways of representing information which would help counteract ambiguous navigation and orientation features. The “serendipity effect” describes a phenomenon that often occurs while navigating through hypertexts: a new goal will often take precedence over the original one, of which the user in turn loses sight. In most cases, however, an undesired loss of orientation occurs, although this can be seen as a particular degree of freedom in certain situations (see Kuhlen 1991, chap. 2.3.2, pp. 132-136).
 Mnemonics is also an issue in discussions on other areas of perception; the reduction to graphic interfaces is a paramount example, but we should not exclude the possibility that audio or haptic interfaces (or combinations thereof) can pose comparable design problems which likewise need to be solved, and that mnemonics also plays a role there. This is especially true in respect of the fact that our mind stores memories together with perceptive patterns of all kinds. Accidentally re-activating these patterns (e.g., a specific smell known from childhood) may bring up chains of memories.
 Structure is meant here to mean several concepts of networks: An information network involves organizing and designing information so as to help users access the contents; communicative networks define flows of information between individuals; institutional networks designate defined relationships between groups of people and corporations. This approach also supports the notion that these areas mutually define each other.
 Here, knowledge is not meant in the sense of wisdom, but rather information.
 Minsky’s definition of “agents” and “agencies” must be explained here (see Minsky, 1994, p. 328):
Agent: each element or process of the mind which, taken alone, is in itself simple enough to be understood – although the way in which such agents interact in groups can produce phenomena that are far more difficult to understand.
Agency: each grouping of elements according to what the grouping can accomplish as a unit, without taking into consideration the separate impact of each of its individual elements.
 See Minsky, 1994, p. 227
 Functional definition:
Defining an object with regard to its possible practical application, rather than in reference to its elements and the relationships between them (see Minsky 1994, p. 330)
 This becomes especially clear if you consider that it is almost impossible to quote someone in a language in which you are not fluent, or to copy characters from memory without knowing their meaning when you read them. For example, the children’s game “Memory” (remembering the positions of concealed pairs of cards) trains the ability to devise a means of orientation in an unconnected system (abstract illustrations and a random order) by developing a sort of translational strategy so as to make the connection from an image to a position.
 Interaction grid:
This refers to a collection of rules which determines the form the interaction will take in terms of the software design. Essentially, it establishes which steps a user must take to perform a certain action, and the alternatives that are available to him as he does so. Such an interaction grid allows certain sequences to be transferred analogously to other situations, thus making learning easier.
 Transsubjective: outside the subjective (which describes the object as distinct from the subject, although without any metaphysical implications)
 Kuhlen explains the basic principle of computer science (pragmatic primacy) as “taking into account the relevance of information. According to this basic understanding, information is knowledge in action. As a rule, information can only become relevant to action if we consider the conditions under which it is being used in a given context, e.g. individual capacity to process information or organizational goals. In order to meet the demands of pragmatic primacy in hypertext systems, the dialogical principle is proposed as a supplement to direct manipulation. Among other things, the user models developed in the context of artificial intelligence are useful here.” (see Kuhlen, 1991, p. 338).
 According to Weizenbaum, computer scientists are begging the question by pointing out that technological developments are inevitable and that we have no alternatives, rather than taking ethical responsibility for their actions (see Weizenbaum, 1976).
Additional information on the Internet
(unsorted selection, appended 1999; may be outdated)
William H. Calvin and George A. Ojemann
Conversations with Neil’s Brain – The Neural Nature of Thought & Language
Navigation in Textual Virtual Environments using a City Metaphor
From World-Wide Web to Super-Brain
(from Principia Cybernetica Web)
Mind Tools Ltd.
Memory Techniques and Mnemonics
Designing Organizational Memory: Preserving Intellectual Assets in a Knowledge Economy
The Effect of the Media User Interface on Interactivity and Content
Collected by Peter Hancock:
Workshop on Information Theory and the Brain Abstracts
Wayne L. Abbott
The power of the human brain – A Computer Hardware and Software Representation
Paul J. Werbos
Optimization methods for brain-like intelligent control
Die Metapher des ‘Netzes’
Von der Keilschrift bis zum Internet – Verschwinden die Subjekte im Speicher?
© Oliver Wrede 1996-1997 | Translation from German by Jeremy Gaines