<< preface

this blog is nina wenhart's collection of resources on the various histories of new media art. it consists mainly of unedited or only lightly edited material i found while flaneuring on the net, sometimes with my own annotations and comments, and sometimes of text passages i retyped from books that are out of print.

it is also meant as an additional resource of information and recommended reading for the students of my prehystories of new media class, which i teach at the school of the art institute of chicago in fall 2008.

the focus is on the time period from the beginning of the 20th century up to today.

2009-10-28

>> Cybernetic Serendipity, review in TIME, Friday Oct 4th, 1968

from: http://www.time.com/time/printout/0,8816,838821,00.html

"Can computers create? Maybe not, but many of their programmers have a lot of fun trying to make them behave as if they could. Some technicians feed a set of numbers into the computer which activates a mechanical arm which in turn plots designs on paper. Photographs, too, can be analyzed and stored in a computer's memory, then reorganized and distorted on electronic command. The results are often tantalizing facsimiles of op and pop. In addition, computers can be programmed to direct kinetic sculptures through any number of varied cycles.

Indeed, so widely has the computer's brain been applied to esthetic pursuits that London's Institute of Contemporary Art has mounted an entire exhibit devoted to "Cybernetic Serendipity." In seven weeks, it has packed in 40,000 London art lovers, schoolboys, mathematicians and Chelsea old-age pensioners, and from admissions alone has all but recouped its $45,000 cost.

Frog to a Phoenix. Visitors are caught up in a carnivalesque March of Progress from the moment they enter. At the door, they find that their bodies have been sighted by an electric eye, which in turn triggers the computer-generated voice that welcomes them in a deep monotone. They may be approached by R.O.S.A. (Radio Operated Simulated Actress) Bosom, a roving electronic robot who actually appeared with live performers in a 1966 London production of The Three Musketeers (R.O.S.A. played the Queen of France).

On the walls hang graceful, abstract designs that look like snail shells, plus computer variations on op designs by Jeffrey Steele and Bridget Riley. Ohio State University's Charles Csuri, a painter turned programmer, employs EDP (Electronic Data Processing) to sketch funhouse-mirror distortions of Leonardo da Vinci's drawing of a man in Vitruvian proportions. Japanese Engineer Fujio Niwa has produced a computer portrait of John F. Kennedy that converts a photograph into a series of dashes, all of which converge with sinister impact on the left ear.

From the ceiling hangs a huge mobile by Britain's Gordon Pask that responds electronically to lights flashed on it by visitors. Wen Ying Tsai's sonically activated bed of strobe-lit steel rods sways to each clap of the viewer's hands. Taped sounds of computer-composed music fill the air, and computer-made poetry is on view. Some of it reads rather like Alice in Wonderland as rewritten by Charles Olson.

One Hand Clapping. Even at its best, the show proves not that computers can make art, but that humans are more essential than ever. For each of the drawings, a detailed program, painstakingly prepared by a human, was needed; the computer did no more than fill in the requested dots and lines. No genuinely observant viewer could ever confuse a vibrant Riley or a vertigo-inducing Steele painting with the computer's dry, mechanical variants on the original works. And, elaborate though Tsai's kinetic sculpture may be, it too needs a human, in fact two: one to build it and one to clap it into life in the exhibition hall. EDP does not respond to ESP, and no esthetic results can be expected from the sound of one hand clapping."
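annotation: the "feed a set of numbers into the computer ... plots designs on paper" pipeline the TIME reviewer describes maps almost directly onto a few lines of modern code. the sketch below (python, with matplotlib standing in for the mechanical plotter arm) turns a handful of input numbers into a decaying lissajous curve, roughly the kind of "graceful, abstract designs that look like snail shells" the review mentions. the curve and the parameter names are my own illustration, not a reconstruction of any program from the 1968 show.

# a minimal sketch: a few input numbers -> one plotted abstract design,
# in the spirit of late-1960s plotter graphics (illustrative only,
# not any specific program shown at Cybernetic Serendipity)
import numpy as np
import matplotlib.pyplot as plt

def plot_design(a=3, b=4, decay=0.15, turns=12, points=4000):
    """draw a slowly collapsing lissajous figure; shell-like when decay > 0."""
    t = np.linspace(0, turns * np.pi, points)
    r = np.exp(-decay * t / np.pi)                # shrink the figure over "time"
    x = r * np.sin(a * t)
    y = r * np.sin(b * t + np.pi / 2)
    plt.figure(figsize=(5, 5))
    plt.plot(x, y, linewidth=0.6, color="black")  # thin pen-on-paper line
    plt.axis("equal")
    plt.axis("off")
    plt.show()

plot_design()   # the "set of numbers" is just the arguments above

changing a, b and decay is the whole "programming" here, which is roughly the reviewer's point: the human supplies the numbers and the rules, the machine only fills in the lines.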

2009-10-09

>> "ARTSPEAK -- A Computer Language For Young At Heart And The Art Lover", J.T.Schwartz

article by Jehosua Friedmann, published in "The Best of Creative Computing, vol.2", 1980, pp. 62-65

ARTSPEAK, by Jacob Theodore Schwartz (who passed away on March 2nd, 2009), Courant Institute of Mathematical Sciences, New York University

the article below is archived on atariarchives.org




[the article's three pages are reproduced as page scans]


publications about ARTSPEAK:

- "The Art of Programming ARTSPEAK: a computer graphics language ", Henry Mullish, Courant Institute of Mathematical Sciences, New York University, 1974

- "ARTSPEAK: A graphics language for artists", Caroline Wardle, in ACM SIGGRAPH Computer Graphics, Volume 10 , Issue 1, 1976; pp.32-39


- short passage about ARTSPEAK, quotes from H.Mullish in Robert Kaupelis, "Experimental Drawing", 1980, p. 180, 181
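annotation: since the article itself survives here only as page scans, here is a rough sense of what a 1970s graphics language for artists offered: a small vocabulary of pen commands that a program steps through to build up a line drawing. the toy interpreter below is python, and the command names (PEN, MOVE, TURN) are my invention for illustration, not ARTSPEAK's actual syntax; see Mullish's manual listed above for the real language.

# toy interpreter for a small ARTSPEAK-like pen language
# (illustrative only; the commands are invented, not ARTSPEAK's real syntax)
import math

def run(program):
    """execute pen commands and return the line segments they draw."""
    x, y, heading, pen_down = 0.0, 0.0, 0.0, False
    segments = []
    for line in program.strip().splitlines():
        cmd, *args = line.split()
        if cmd == "PEN":                    # PEN UP / PEN DOWN
            pen_down = (args[0] == "DOWN")
        elif cmd == "TURN":                 # TURN <degrees>, counterclockwise
            heading += math.radians(float(args[0]))
        elif cmd == "MOVE":                 # MOVE <distance> along current heading
            nx = x + float(args[0]) * math.cos(heading)
            ny = y + float(args[0]) * math.sin(heading)
            if pen_down:
                segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
    return segments

# a square, the classic first program in any drawing language
square = "PEN DOWN\n" + "MOVE 100\nTURN 90\n" * 4
print(run(square))    # four segments tracing a 100 x 100 square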

2009-10-08

>> "General Motors", Phil Morton, 1976




Phil Morton, "General Motors", 1976
made with the sandin image processor
digitized by joncates, SAIC, who also started and hosts the complete phil morton archive

enjoy a lot!

also see jon's entry on morton on wikipedia: http://en.wikipedia.org/wiki/Phil_Morton
as well as jon's online archive, which also includes the distribution religion, a predecessor to open source licences:
http://copyitright.wordpress.com/

>> "Hole in Space", Kit Galloway, Sherrie Rabinowitz, 1980



the project's website: http://www.ecafe.com/getty/HIS/
from this website:

"HOLE-IN-SPACE was a Public Communication Sculpture. On a November evening in 1980 the unsuspecting public walking past the Lincoln Center for the Performing Arts in New York City, and "The Broadway" department store located in the open air Shopping Center in Century City (LA), had a surprising counter with each other.

Suddenly head-to-toe, life-sized, television images of the people on the opposite coast appeared. They could now see, hear, and speak with each other as if encountering each other on the same sidewalk. No signs, sponsor logos, or credits were posted -- no explanation at all was offered. No self-view video monitors to distract from the phenomena of this life-size encounter. Self-view video monitors would have degraded the situation into a self-conscious videoconference.

If you have ever had the opportunity to see what the award winning video documentation captured then you would have laughed and cried at the amazing human drama and events that were played out over the evolution of the three evenings. Hole-In-Space suddenly severed the distance between both cities and created an outrageous pedestrian intersection. There was the evening of discovery, followed by the evening of intentional word-of-mouth rendezvous, followed by a mass migration of families and trans-continental loved ones, some of whom had not seen each other for over twenty years.

Created and produced by Kit Galloway and Sherrie Rabinowitz. Funded in part by grants from the National Endowment for the Arts and The Broadway Department Store, with support from Avery Fisher Hall, and the support of many companies including Western Union, General Electric and Wold Communications."

2009-10-06

>> "Baldessari sings LeWitt", John Baldessari, 1972

from ubuweb: http://www.ubu.com/film/baldessari_lewitt.html




>> "Sentences on Conceptual Art", Sol LeWitt, 1968

from ubuweb: http://www.ubu.com/papers/lewitt_sentences.html


1) Conceptual Artists are mystics rather than rationalists. They leap to conclusions that logic cannot reach.
2) Rational judgments repeat rational judgments.
3) Illogical judgments lead to new experience.
4) Formal art is essentially rational.
5) Irrational thoughts should be followed absolutely and logically.
6) If the artist changes his mind midway through the execution of the piece he compromises the result and repeats past results.
7) The artist’s will is secondary to the process he initiates from idea to completion. His willfulness may only be ego.
8) When words such as painting and sculpture are used, they connote a whole tradition and imply a consequent acceptance of this tradition, thus placing limitations on the artist who would be reluctant to make art that goes beyond the limitations.
9) The concept and idea are different. The former implies a general direction while the latter is the component. Ideas implement the concept.
10) Ideas alone can be works of art; they are in a chain of development that may eventually find some form. All ideas need not be made physical.
11) Ideas do not necessarily proceed in logical order. They may set one off in unexpected directions but an idea must necessarily be completed in the mind before the next one is formed.
12) For each work of art that becomes physical there are many variations that do not.
13) A work of art may be understood as a conductor from the artist's mind to the viewer's. But it may never reach the viewer, or it may never leave the artist's mind.
14) The words of one artist to another may induce a chain of ideas, if they share the same concept.
15) Since no form is intrinsically superior to another, the artist may use any form, from an expression of words (written or spoken) to physical reality, equally.
16) If words are used, and they proceed from ideas about art, then they are art and not literature; numbers are not mathematics.
17) All ideas are art if they are concerned with art and fall within the conventions of art.
18) One usually understands the art of the past by applying the conventions of the present, thus misunderstanding the art of the past.
19) The conventions of art are altered by works of art.
20) Successful art changes our understanding of the conventions by altering our perceptions.
21) Perception of ideas leads to new ideas.
22) The artist cannot imagine his art, and cannot perceive it until it is complete.
23) One artist may misperceive (understand it differently from the artist) a work of art but still be set off in his own chain of thought by that misconstruing.
24) Perception is subjective.
25) The artist may not necessarily understand his own art. His perception is neither better nor worse than that of others.
26) An artist may perceive the art of others better than his own.
27) The concept of a work of art may involve the matter of the piece or the process in which it is made.
28) Once the idea of the piece is established in the artist’s mind and the final form is decided, the process is carried out blindly. There are many side effects that the artist cannot imagine. These may be used as ideas for new works.
29) The process is mechanical and should not be tampered with. It should run its course.
30) There are many elements involved in a work of art. The most important are the most obvious.
31) If an artist uses the same form in a group of works and changes the material, one would assume the artist’s concept involved the material.
32) Banal ideas cannot be rescued by beautiful execution.
33) It is difficult to bungle a good idea.
34) When an artist learns his craft too well he makes slick art.
35) These sentences comment on art, but are not art.

NOTES

* Reprinted from Art-Language, Vol. 1, No. 1 (1969).

2009-10-01

>> "Systems Esthetics", Jack Burnham, 1968

from: http://www.dxarts.washington.edu/courses/470/current/reading/sys_aes.pdf

Reprinted from Artforum (September, 1968). Copyright 1968 by Jack Burnham.

A polarity is presently developing between the finite, unique work of high art, that is, painting or sculpture, and conceptions that can loosely be termed unobjects, these being either environments or artifacts that resist prevailing critical analysis. This includes works by some primary sculptors (though some may reject the charge of creating environments), some gallery kinetic and luminous art, some outdoor works, happenings, and mixed media presentations. Looming below the surface of this dichotomy is a sense of radical evolution that seems to run counter to the waning revolution of abstract and nonobjective art. The evolution embraces a series of absolutely logical and incremental changes, wholly devoid of the fevered iconoclasm that accompanied the heroic period from 1907 to 1925. As yet the evolving esthetic has no critical vocabulary so necessary for its defense, nor for that matter a name or explicit cause.

In a way this situation might be likened to the "morphological development" of a prime scientific concept-as described by Thomas Kuhn in The Structure of Scientific Revolutions (1962). Kuhn sees science at any given period dominated by a single "major paradigm"; that is, a scientific conception of the natural order so pervasive and intellectually powerful that it dominates all ensuing scientific discovery. Inconsistent facts arising through experimentation are invariably labeled as bogus or trivial-until the emergence of a new and more encompassing general theory. Transition between major paradigms may best express the state of present art. Reasons for it lie in the nature of current technological shifts.

The economist, J. K. Galbraith, has rightly insisted that until recently the needs of the modern industrial state were never served by complete expression of the esthetic impulse. Power and expansion were its primary aims.

Special attention should be paid to Galbraith's observation. As an arbiter of impending socio-technical changes his position is pivotal. For the Left he represents America's most articulate apologist for Monopoly Capitalism; for the Right he is the socialist eminence grise of the Democratic Party. In The New Industrial State (1967) he challenges both Marxist orthodoxies and American mythologies premised upon laissez-faire capitalism. For them he substitutes an incipient technocracy shaped by the evolving technostructure. Such a drift away from ideology has been anticipated for at least fifty years. Already in California think-tanks and in the central planning committees of each soviet, futurologists are concentrating on the role of the technocracy, that is, its decision-making autonomy, how it handles the central storage of information, and the techniques used for smoothly implementing social change. In the automated state power resides less in the control of the traditional symbols of wealth than in information.

In the emergent "superscientific culture" long-range decision-making and its implementation become more difficult and more necessary. Judgment demands precise socio-technical models. Earlier the industrial state evolved by filling consumer needs on a piecemeal basis. The kind of product design that once produced "better living" precipitates vast crises in human ecology in the 1960s. A striking parallel exists between the "new" car of the automobile stylist and the syndrome of formalist invention in art, where "discoveries" are made through visual manipulation. Increasingly "products"-either in art or life-become irrelevant and a different set of needs arise: these revolve around such concerns as maintaining the biological livability of the earth, producing more accurate models of social interaction, understanding the growing symbiosis in man-machine relationships, establishing priorities for the usage and conservation of natural resources, and defining alternate patterns of education, productivity, and leisure. In the past our technologically-conceived artifacts structured living patterns. We are now in transition from an object-oriented to a systems-oriented culture. Here change emanates, not from things, but from the way things are done.

The priorities of the present age revolve around the problems of organization. A systems viewpoint is focused on the creation of stable, on-going relationships between organic and nonorganic systems, be these neighborhoods, industrial complexes, farms, transportation systems, information centers, recreation centers, or any of the other matrices of human activity. All living situations must be treated in the context of a systems hierarchy of values. Intuitively many artists have already grasped these relatively recent distinctions, and if their "environments" are on the unsophisticated side, this will change with time and experience.

The major tool for professionally defining these concerns is systems analysis. This is best known through its usage by the Pentagon and has more to do with the expense and complexity of modern warfare, than with any innate relation between the two. Systems analysts are not cold-blooded logicians; the best have an ever-expanding grasp of human needs and limitations. One of the pioneers of systems applications, E. S. Quade, has stated that "Systems analysis, particularly the type required for military decisions, is still largely a form of art. Art can be taught in part, but not by the means of fixed rules...." Thus "The Further Dimensions" elaborated upon by Galbraith in his book are esthetic criteria. Where for some these become the means for tidying up a derelict technology, for Galbraith esthetic decision-making becomes an integral part of any future technocracy. As yet few governments fully appreciate that the alternative is biological self-destruction.

Situated between aggressive electronic media and two hundred years of industrial vandalism, the long held idea that a tiny output of art objects could somehow "beautify" or even significantly modify the environment was naive. A parallel illusion existed in that artistic influence prevails by a psychic osmosis given off by such objects. Accordingly lip service to public beauty remains the province of well-guarded museums. Through the early stages of industrialism it remained possible for decorative media, including painting and sculpture, to embody the esthetic impulse; but as technology progresses this impulse must identify itself with the means of research and production. Obviously nothing could be less true for the present situation. In a society thus estranged only the didactic function of art continues to have meaning. The artist operates as a quasipolitical provocateur, though in no concrete sense is he an ideologist or a moralist. L'art pour l'art and a century's resistance to the vulgarities of moral uplift have insured that.

The specific function of modern didactic art has been to show that art does not reside in material entities, but in relations between people and between people and the components of their environment. This accounts for the radicality of Duchamp and his enduring influence. It throws light on Picasso's lesser position as a seminal force. As with all succeeding formalist art, cubism followed the tradition of circumscribing art value wholly within finite objects.

In an advanced technological culture the most important artist best succeeds by liquidating his position as artist vis-a-vis society. Artistic nihilism established itself through this condition. At the outset the artist refused to participate in idealism through craft. "Craft-fetishism," as termed by the critic Christopher Caudwell, remains the basis of modern formalism. Instead the significant artist strives to reduce the technical and psychical distance between his artistic output and the productive means of society. Duchamp, Warhol, and Robert Morris are similarly directed in this respect. Gradually this strategy transforms artistic and technological decision-making into a single activity-at least it presents that alternative in inescapable terms. Scientists and technicians are not converted into "artists," rather the artist becomes a symptom of the schism between art and technics. Progressively the need to make ultrasensitive judgments as to the uses of technology and scientific information becomes "art" in the most literal sense. As yet the implication that art contains survival value is nearly as suspect as attaching any moral significance to it. Though with the demise of literary content, the theory that art is a form of psychic preparedness has gained articulate supporters.

Art, as an adaptive mechanism, is reinforcement of the ability to be aware of the disparity between behavioral pattern and the demands consequent upon the interaction with the environment. Art is rehearsal for those real situations in which it is vital for our survival to endure cognitive tension, to refuse the comforts of validation by affective congruence when such validation is inappropriate because too vital interests are at stake....

The post-formalist sensibility naturally responds to stimuli both within and outside the proposed art format. To this extent some of it does begin to resemble "theater," as imputed by Michael Fried. More likely though, the label of theatricality is a red herring disguising the real nature of the shift in priorities. In respect to Mr. Fried's argument, the theater was never a purist medium, but a conglomerate of arts. In itself this never prevented the theater from achieving "high art." For clearer reading, rather than maintaining Mr. Fried's adjectives, theatrical or literalist art, or the phrase used until now in this essay, post-formalist esthetic, the term systems esthetic seems to encompass the present situation more fully.

The systems approach goes beyond a concern with staged environments and happenings; it deals in a revolutionary fashion with the larger problem of boundary concepts. In systems perspective there are no contrived confines such as the theater proscenium or picture frame. Conceptual focus rather than material limits define the system. Thus any situation, either in or outside the context of art, may be designed and judged as a system. Inasmuch as a system may contain people, ideas, messages, atmospheric conditions, power sources, and so on, a system is, to quote the systems biologist, Ludwig von Bertalanffy, a "complex of components in interaction," comprised of material, energy, and information in various degrees of organization. In evaluating systems the artist is a perspectivist considering goals, boundaries, structure, input, output, and related activity inside and outside the system. Where the object almost always has a fixed shape and boundaries, the consistency of a system may be altered in time and space, its behavior determined both by external conditions and its mechanisms of control.

In his book, The New Vision, Moholy-Nagy described fabricating a set of enamel on metal paintings. These were executed by telephoning precise instructions to a manufacturer. An elaboration of this was projected recently by the director of the Museum of Contemporary Art in Chicago, Jan van der Marck, in a tentative exhibition, "Art by Telephone." In this instance the recorded conversation between artist and manufacturer was to become part of the displayed work of art. For systems, information, in whatever form conveyed, becomes a viable esthetic consideration.

Fifteen years ago Victor Vasarely suggested mass art as a legitimate function of industrial society. For angry critics there existed the fear of undermining art's fetish aura, of shattering the mystique of craft and private creation. If some forays have been made into serially produced art, these remain on the periphery of the industrial system. Yet the entire phenomenon of reproducing an art object ad infinitum is absurd; rather than making quality available to a large number of people, it signals the end of concrete objects embodying visual metaphor. Such demythification is the Kantian Imperative applied esthetically. On the other hand, a system esthetic is literal in that all phases of the life cycle of a system are relevant. There is no end product that is primarily visual, nor does such an esthetic rely on a "visual" syntax. It resists functioning as an applied esthetic, but is revealed in the principles underlying the progressive reorganization of the natural environment.

Various postures implicit in formalist art were consistently attacked in the later writings of Ad Reinhardt. His black paintings were hardly rhetorical devices (nor were his writings) masking Zen obscurities; rather they were the means of discarding formalist mannerism and all the latent illusionism connected with postrealistic art. His own contribution he described as:

The one work for the fine artist, the one painting, is the painting of the one-sized canvas... The single theme, one formal device, one color-monochrome, one linear division in each direction, one symmetry, one texture, one free-hand brushing, one rhythm, one working everything into dissolution and one indivisibility, each painting into one overall uniformity and nonirregularity.

Even before the emergence of the anti-formalist "specific object" there appeared an oblique type of criticism, resisting emotive and literary associations. Pioneered between 1962 and 1965 in the writings of Donald Judd, it resembles what a computer programmer would call an entity's list structure, or all the enumerated properties needed to physically rebuild an object. Earlier the phenomenologist, Maurice Merleau-Ponty, asserted the impossibility of conceptually reconstructing an object from such a procedure. Modified to include a number of perceptual insights not included in a "list structure," such a technique has been used to real advantage by the antinovelist, Alain Robbe-Grillet. A web of sensorial descriptions is spun around the central images of a plot. The point is not to internalize scrutiny in the Freudian sense, but to infer the essence of a situation through detailed examination of surface effects. Similar attitudes were adopted by Judd for the purpose of critical examination. More than simply an art object's list structure, Judd included phenomenal qualities which would have never shown up in a fabricator's plans, but which proved necessary for the "seeing" of the object. This cleared the air of much criticism centered around meaning and private intention.

It would be misleading to interpret Judd's concept of "specific objects" as the embodiment of a systems esthetic. Rather object art has become a stage towards further rationalization of the esthetic process in general-both by reducing the iconic content of art objects and by Judd's candidness about their conceptual origins. However, even in 1965 he gave indications of looking beyond these finite limits.

A few of the more general aspects may persist, such as the work's being like an object or even being specific, but other characteristics are bound to develop. Since its range is wide, three-dimensional work will probably divide into a number of forms. At any rate, it will be larger than painting and much larger than sculpture, which, compared to painting, is fairly particular.... Because the nature of three dimensions isn't set, given beforehand, something credible can be made, almost anything.

In the 1966 "68th American Show" at the Chicago Art Institute, the sculptor, Robert Morris, was represented by two large, L-shaped forms which were shown the previous year in New York. Morris sent plans of the pieces to the carpenters at the Chicago museum where they were assembled for less than the cost of shipping the originals from New York. In the context of a systems esthetic, possession of a privately fabricated work is no longer important. Accurate information takes priority over history and geographical location.

Morris was the first essayist to precisely describe the relation between sculpture style and the progressively more sophisticated use of industry by artists. He has lately focused upon material-forming techniques and the arrangement of these results so that they no longer form specific objects but remain uncomposed. In such handling of materials the idea of process takes precedence over end results: "Disengagement with preconceived enduring forms and orders of things is a positive assertion." Such loose assemblies of materials encompass concerns that resemble the cycles of industrial processing. Here the traditional priority of end results over technique breaks down; in a systems context both may share equal importance, remaining essential parts of the esthetic.

Already Morris has proposed systems that move beyond the confines of the minimal object. One work proposed to the City of New York last fall was later included in Willoughby Sharp's "Air Art" show in a YMHA gallery in Philadelphia. In its first state Morris's piece involved capturing steam from the pipes in the city streets, projecting this from nozzles on a platform. In Philadelphia such a system took its energy from the steam-bath room. Since 1966 Morris's interests have included designs for low relief earth sculptures consisting of abutments, hedges, and sodded mounds, visible from the air and not unlike Indian burial mounds. "Transporting" one of these would be a matter of cutting and filling earth and resodding. Morris is presently at work on one such project and unlike past sculptural concerns, it involves precise information from surveyors, landscape gardeners, civil engineering contractors, and geologists. In the older context, such as Isamu Noguchi's sunken garden at Yale University's Rare Book Library, sculpture defined the environment; with Morris's approach the environment defines what is sculptural.

More radical for the gallery are the constructions of Carl Andre. His assemblies of modular, unattached forms stand out from the works of artists who have comprised unit assembly with the totality of fixed objects. The mundane origins of Andre's units are not "hidden" within the art work as in the technique of collage. Andre's floor reliefs are architectural modifications-though they are not subliminal since they visually disengage from their surroundings. One of Andre's subtler shows took place in New York last year. The viewer was encouraged to walk stocking-footed across three areas, each 12 by 12 feet and composed of 144 one-foot-square metal plates. One was not only invited to see each of these "rugs" as a grid arrangement in various metals, but each metal grid's thermal conductivity was registered through the soles of the feet. Sight analysis diminishes in importance for some of the best new work; the other senses and especially kinesthesis makes "viewing" a more integrated experience. The scope of a systems esthetic presumes that problems cannot be solved by a single technical solution, but must be attacked on a multileveled, interdisciplinary basis. Consequently some of the more aware sculptors no longer think like sculptors, but they assume a span of problems more natural to architects, urban planners, civil engineers, electronic technicians, and cultural anthropologists. This is not as pretentious as some critics have insisted. It is a legitimate extension of McLuhan's remark about Pop Art when he said that it was an announcement that the entire environment was ready to become a work of art.

As a direct descendant of the "found object," Robert Smithson's identifying mammoth engineering projects as works of art ("Site-Selections") makes eminent sense. Refocusing the esthetic away from the preciousness of the work of art is in the present age no less than a survival mechanism. If Smithson's "Site-Selections" are didactic exercises, they show a desperate need for environmental sensibility on a larger than room scale. Sigfried Giedion pointed to specific engineering feats as objets d'art thirty years ago. Smithson has transcended this by putting engineering works into their natural settings and treating the whole as a time-bound web of man-nature interactions.

Methodologically Les Levine is possibly the most consistent exponent of a systems esthetic. His environments of vacuum-formed, modular plastic units are never static; by means of experiencing ambulation through them, they consistently alter their own degree of space-surface penetrability. Levine's Clean Machine has no ideal vantage points, no "pieces" to recognize, as are implicit in formalist art. One is processed as in driving through the Holland Tunnel. Certainly this echoes Michael Fried's reference to Tony Smith's night time drive along the uncompleted New Jersey Turnpike. Yet if this is theater, as Fried insists, it is not the stage concerned with focused-upon events. That has more to do with the boundary definitions that have traditionally circumscribed classical and post-classical art. In a recent environment by Levine rows of live electric wires emitted small shocks to passersby. Here behavior is controlled in an esthetic situation with no primary reference to visual circumstances. As Levine insists, "What I am after here is physical reaction, not visual concern."

This brings to mind some of the original intentions of the "Groupe de Recherche d'Art Visuel" in the early 1960s. The Paris-based group had sought to engage viewers kinesthetically, triggering involuntary responses through ambient-propelled "surprises." Levine's emphasis on visual disengagement is much more assured and iconoclastic; unlike the labyrinths of the GRAV, his possesses no individual work of art deflecting attention from the environment as a concerted experience.

Questions have been raised concerning the implicit anti-art position connected with Levine's disposable and infinite series. These hardly qualify as anti-art, as John Perreault has pointed out. Besides emphasizing that the context of art is fluid, they are a reductio ad absurdum of the entire market mechanism that controls art through the fiction of "high art." They do not deny art, they deny scarcity as a legitimate correlative of art.

The components of systems-whether these are artistic or functional-have no higher meaning or value. Systems components derive their value solely through their assigned context. Therefore it would be impossible to regard a fragment of an art system as a work of art in itself-as, say, one might treasure a fragment of one of the Parthenon friezes. This became evident in December 1967 when Dan Flavin designed six walls with the same alternate pattern of "rose" and "gold" eight-foot fluorescent lamps. This "Broad Bright Gaudy Vulgar System," as Flavin called it, was installed in the new Museum of Contemporary Art in Chicago. The catalog accompanying the exhibition scrupulously resolves some of the important esthetic implications for modular systems.

The components of a particular exhibition upon its termination are replaced in another situation. Perhaps put into non-art as part of a different whole in a different future. Individual units possess no intrinsic significance beyond their concrete utility. It is difficult either to project into them extraneous qualities, a spurious insight, or for them to be appropriated for fulfillment of personal inner needs. The lights are untransformed. There are no symbolic transcendental redeeming or monetary added values present.

Flavin's work has progressed in the past six years from light sources mounted on flat reliefs, to compositions in fluorescent fixtures mounted directly on walls and floors, and recently to totalities such as his Chicago "walk-in" environment. While the majority of other light artists have continued to fabricate "light sculpture"-as if sculpture were the primary concern-Flavin has pioneered articulated illumination systems for given spaces.

By the fact that most systems move or are in some way dynamic, kinetic art should be one of the more radical alternatives to the prevailing formalist esthetic. Yet this has hardly been the case. The best publicized kinetic sculpture is mainly a modification of static formalist sculpture composition. In most instances these have only the added bonus of motion, as in the case of Tinguely, Calder, Bury, and Rickey. Only Duchamp's kinetic output managed to reach beyond formalism. Rather than visual appearance there is an entirely different concern which makes kinetic art unique. This is the peripheral perception of sound and movement in space filled with activity. All too often gallery kinetic art has trivialized the more graspable aspect of motion: this is motion internalized and experienced kinesthetically.

There are a few important exceptions to the above. These include Otto Piene's early "Light Ballets" (1958-1962), the early (1956) water hammocks and informal on-going environments of Japan's Gutai group, some works by Len Lye, Bob Breer's first show of "Floats" (1965), Robert Whitman's laser show of "Dark" (1967), and most recently, Boyd Mefferd's "Strobe-Light Floor" (1968).

Formalist art embodies the idea of deterministic relations between a composition's visible elements. But since the early 1960s Hans Haacke has depended upon the invisible components of systems. In a systems context, invisibility, or invisible parts, share equal importance with things seen. Thus air, water, steam, and ice have become major elements in his work. On both coasts this has precipitated interest in "invisible art" among a number of young artists. Some of the best of Haacke's efforts are shown outside the gallery. These include his Rain Tree, a tree dripping patterns of water; Sky Line, a nylon line kept aloft by hundreds of helium-filled white balloons; a weather balloon balanced over a jet of air; and a large-scale nylon tent with air pockets designed to remain in balance one foot off the ground.

Haacke's systems have a limited life as an art experience, though some are quite durable. He insists that the need for empathy does not make his work function as with older art. Systems exist as on-going independent entities away from the viewer. In the systems hierarchy of control, interaction and autonomy become desirable values. In this respect Haacke's Photo-Electric Viewer Programmed Coordinate System is probably one of the most elegant, responsive environments made to date by an artist (certainly more sophisticated ones have been conceived for scientific and technical purposes). Boundary situations are central to his thinking.

A "sculpture" that physically reacts to its environment is no longer to be regarded as an object. The range of outside factors affecting it, as well as its own radius of action, reach beyond the space it materially occupies. It thus merges with the environment in a relationship that is better understood as a "system" of interdependent processes. These processes evolve without the viewer's empathy. He becomes a witness. A system is not imagined, it is real.

Tangential to this systems approach is Allan Kaprow's very unique concept of the Happening. In the past ten years Kaprow has moved the Happening from a rather self-conscious and stagy event to a strict and elegant procedure. The Happening now has a sense of internal logic which was lacking before. It seems to arise naturally from those same considerations that have crystallized the systems approach to environmental situations. As described by their chief inventor, the Happenings establish an indivisibility between themselves and everyday affairs; they consciously avoid materials and procedures identified with art; they allow for geographical expansiveness and mobility; they include experience and duration as part of their esthetic format; and they emphasize practical activities as the most meaningful mode of procedure... As structured events the Happenings are usually reversible. Alterations in the environment may be "erased" after the Happening, or as a part of the Happening's conclusion. While they may involve large areas of place, the format of the Happening is kept relatively simple, with the emphasis on establishing a participatory esthetic.

The emergence of a "post-formalist esthetic" may seem to some to embody a kind of absolute philosophy, something which, through the nature of its concerns, cannot be transcended. Yet it is more likely that a "systems esthetic" will become the dominant approach to a maze of socio-technical conditions rooted only in the present. New circumstances will with time generate other major paradigms for the arts.

For some readers these pages will echo feelings of the past. It may be remembered that in the fall of 1920 an ideological schism ruptured two factions of the Moscow Constructivists. The radical Marxists, led by Vladimir Tatlin, proclaimed their rejection of art's false idealisms. Establishing themselves as "Productivists," one of their slogans became: "Down with guarding the traditions of art. Long live the constructivist technician." As a group dedicated to historical materialism and the scientific ethos, most of its members were quickly subsumed by the technological needs of Soviet Russia. As artists they ceased to exist. While the program might have had some basis as a utilitarian esthetic, it was crushed amid the Stalinist anti-intellectualism that followed.

The reasons are almost self-apparent. Industrially underdeveloped, food and heavy industry remained the prime needs of the Soviet Union for the next forty years. Conditions and structural interdependencies that naturally develop in an advanced industrial state were then only latent. In retrospect it is doubtful if any group of artists had either the knowledge or political strength to meaningfully affect Soviet industrial policies. What emerged was another vein of formalist innovation based on scientific idealism; this manifested itself in the West under the leadership of the Constructivist emigres, Gabo and Pevsner.

But for our time the emerging major paradigm in art is neither an ism nor a collection of styles. Rather than a novel way of rearranging surfaces and spaces, it is fundamentally concerned with the implementation of the art impulse in an advanced technological society. As a culture producer, man has traditionally claimed the title, Homo Faber: man the maker (of tools and images). With continued advances in the industrial revolution, he assumes a new and more critical function. As Homo Arbiter Formae his prime role becomes that of man the maker of esthetic decisions. These decisions-whether they are made concertedly or not-control the quality of all future life on the earth. Moreover these are value judgments dictating the direction of technological endeavor. Quite plainly such a vision extends beyond political realities of the present. This cannot remain the case for long.

2009-09-30

>> Larry Cuba, computer-animated scene from "Star Wars", 1977

2009-09-24

>> "Mother of all Demos", Douglas Engelbart, 09.12.1968



presentation of the projects of the Augmentation Research Center (ARC), founded by Douglas Engelbart of the Stanford Research Institute (SRI), held @ Fall Joint Computer Conference (FJCC), December 9th, 1968, which became known as "the mother of all demos":
-- demo of NLS (= oNLine System)
-- "X-Y position indicator for a display system" = computer mouse (developed together with Bill English, 1967)
-- video/teleconference
-- hypertext

>> "Hyperland", Douglas Adams, 1990

50-minute documentary about hypertext, the internet, and more
written by Douglas Adams, produced by BBC2 in 1990

playlist in 5 parts on youtube:
http://www.youtube.com/view_play_list?p=E090E024E3E8E7A3



Douglas Adams' website for the project:
http://www.douglasadams.com/creations/hype.html
where it says:
"In this one-hour documentary produced by the BBC in 1990, Douglas falls asleep in front of a television and dreams about future time when he may be allowed to play a more active role in the information he chooses to digest. A software agent, Tom (played by Tom Baker), guides Douglas around a multimedia information landscape, examining (then) cuttting-edge research by the SF Multimedia Lab and NASA Ames research center, and encountering hypermedia visionaries such as Vannevar Bush and Ted Nelson. Looking back now, it's interesting to see how much he got right and how much he didn't: these days, no one's heard of the SF Multimedia Lab, and his super-high-tech portrayal of VR in 2005 could be outdone by a modern PC with a 3D card. However, these are just minor niggles when you consider how much more popular the technologies in question have become than anyone could have predicted - for while Douglas was creating Hyperland, a student at CERN in Switzerland was working on a little hypertext project he called the World Wide Web..."

>> brief history of the internet (excerpt)

from: http://www.isoc.org/internet/history/brief.shtml#Origins

"Origins of the Internet

The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology to the public. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic."
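annotation: the core idea that Kleinrock, Baran and Davies arrived at independently, packet switching, fits in a few lines of code. instead of holding a circuit open for a whole conversation, the message is chopped into numbered packets that travel independently, possibly arriving out of order, and are reassembled at the far end. the python sketch below illustrates only that idea, not the actual ARPANET/IMP or NCP protocols.

# bare-bones illustration of packet switching: split, scramble, reassemble
# (models the concept only, not any real ARPANET protocol)
import random

def to_packets(message, size=8):
    """chop a message into (sequence_number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """sort packets by sequence number and rebuild the message."""
    return "".join(chunk for _, chunk in sorted(packets))

msg = "THE FIRST HOST-TO-HOST MESSAGE WENT FROM UCLA TO SRI"
packets = to_packets(msg)
random.shuffle(packets)            # the network may deliver in any order
assert reassemble(packets) == msg  # sequence numbers restore the text
print(reassemble(packets))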

2009-09-21

>> "Man-Computer Symbiosis", J.C.R. Licklider, 1960

from: http://groups.csail.mit.edu/medg/people/psz/Licklider.html


Man-Computer Symbiosis


J. C. R. Licklider
IRE Transactions on Human Factors in Electronics,
volume HFE-1, pages 4-11, March 1960
Summary

Man-computer symbiosis is an expected development in cooperative interaction between men and electronic computers. It will involve very close coupling between the human and the electronic members of the partnership. The main aims are 1) to let computers facilitate formulative thinking as they now facilitate the solution of formulated problems, and 2) to enable men and computers to cooperate in making decisions and controlling complex situations without inflexible dependence on predetermined programs. In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking. Preliminary analyses indicate that the symbiotic partnership will perform intellectual operations much more effectively than man alone can perform them. Prerequisites for the achievement of the effective, cooperative association include developments in computer time sharing, in memory components, in memory organization, in programming languages, and in input and output equipment.

1 Introduction
1.1 Symbiosis

The fig tree is pollinated only by the insect Blastophaga grossorum. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative "living together in intimate association, or even close union, of two dissimilar organisms" is called symbiosis [27].

"Man-computer symbiosis is a subclass of man-machine systems. There are many man-machine systems. At present, however, there are no man-computer symbioses. The purposes of this paper are to present the concept and, hopefully, to foster the development of man-computer symbiosis by analyzing some problems of interaction between men and computing machines, calling attention to applicable principles of man-machine engineering, and pointing out a few questions to which research answers are needed. The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.
1.2 Between "Mechanically Extended Man" and "Artificial Intelligence"

As a concept, man-computer symbiosis is different in an important way from what North [21] has called "mechanically extended man." In the man-machine systems of the past, the human operator supplied the initiative, the direction, the integration, and the criterion. The mechanical parts of the systems were mere extensions, first of the human arm, then of the human eye. These systems certainly did not consist of "dissimilar organisms living together..." There was only one kind of organism-man-and the rest was there only to help him.

In one sense of course, any man-made system is intended to help man, to help a man or men outside the system. If we focus upon the human operator within the system, however, we see that, in some areas of technology, a fantastic change has taken place during the last few years. "Mechanical extension" has given way to replacement of men, to automation, and the men who remain are there more to help than to be helped. In some instances, particularly in large computer-centered information and control systems, the human operators are responsible mainly for functions that it proved infeasible to automate. Such systems ("humanly extended machines," North might call them) are not symbiotic systems. They are "semi-automatic" systems, systems that started out to be fully automatic but fell short of the goal.

Man-computer symbiosis is probably not the ultimate paradigm for complex technological systems. It seems entirely possible that, in due course, electronic or chemical "machines" will outdo the human brain in most of the functions we now consider exclusively within its province. Even now, Gelernter's IBM-704 program for proving theorems in plane geometry proceeds at about the same pace as Brooklyn high school students, and makes similar errors.[12] There are, in fact, several theorem-proving, problem-solving, chess-playing, and pattern-recognizing programs (too many for complete reference [1, 2, 5, 8, 11, 13, 17, 18, 19, 22, 23, 25]) capable of rivaling human intellectual performance in restricted areas; and Newell, Simon, and Shaw's [20] "general problem solver" may remove some of the restrictions. In short, it seems worthwhile to avoid argument with (other) enthusiasts for artificial intelligence by conceding dominance in the distant future of cerebration to machines alone. There will nevertheless be a fairly long interim during which the main intellectual advances will be made by men and computers working together in intimate association. A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, five years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.
2 Aims of Man-Computer Symbiosis

Present-day computers are designed primarily to solve preformulated problems or to process data according to predetermined procedures. The course of the computation may be conditional upon results obtained during the computation, but all the alternatives must be foreseen in advance. (If an unforeseen alternative arises, the whole process comes to a halt and awaits the necessary extension of the program.) The requirement for preformulation or predetermination is sometimes no great disadvantage. It is often said that programming for a computing machine forces one to think clearly, that it disciplines the thought process. If the user can think his problem through in advance, symbiotic association with a computing machine is not necessary.

However, many problems that can be thought through in advance are very difficult to think through in advance. They would be easier to solve, and they could be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperated, turning up flaws in the reasoning or revealing unexpected turns in the solution. Other problems simply cannot be formulated without computing-machine aid. Poincaré anticipated the frustration of an important group of would-be computer users when he said, "The question is not, 'What is the answer?' The question is, 'What is the question?'" One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems.

The other main aim is closely related. It is to bring computing machines effectively into processes of thinking that must go on in "real time," time that moves too fast to permit using computers in conventional ways. Imagine trying, for example, to direct a battle with the aid of a computer on such a schedule as this. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes 5 minutes to assembling your program and 47 seconds to calculating the answer to your problem. You get a sheet of paper 20 feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun. To think in interaction with a computer in the same way that you think with a colleague whose competence supplements your own will require much tighter coupling between man and machine than is suggested by the example and than is possible today.
3 Need for Computer Participation in Formulative and Real-Time Thinking

The preceding paragraphs tacitly made the assumption that, if they could be introduced effectively into the thought process, the functions that can be performed by data-processing machines would improve or facilitate thinking and problem solving in an important way. That assumption may require justification.
3.1 A Preliminary and Informal Time-and-Motion Analysis of Technical Thinking

Despite the fact that there is a voluminous literature on thinking and problem solving, including intensive case-history studies of the process of invention, I could find nothing comparable to a time-and-motion-study analysis of the mental work of a person engaged in a scientific or technical enterprise. In the spring and summer of 1957, therefore, I tried to keep track of what one moderately technical person actually did during the hours he regarded as devoted to work. Although I was aware of the inadequacy of the sampling, I served as my own subject.

It soon became apparent that the main thing I did was to keep records, and the project would have become an infinite regress if the keeping of records had been carried through in the detail envisaged in the initial plan. It was not. Nevertheless, I obtained a picture of my activities that gave me pause. Perhaps my spectrum is not typical--I hope it is not, but I fear it is.

About 85 per cent of my "thinking" time was spent getting into a position to think, to make a decision, to learn something I needed to know. Much more time went into finding or obtaining information than into digesting it. Hours went into the plotting of graphs, and other hours into instructing an assistant how to plot. When the graphs were finished, the relations were obvious at once, but the plotting had to be done in order to make them so. At one point, it was necessary to compare six experimental determinations of a function relating speech-intelligibility to speech-to-noise ratio. No two experimenters had used the same definition or measure of speech-to-noise ratio. Several hours of calculating were required to get the data into comparable form. When they were in comparable form, it took only a few seconds to determine what I needed to know.

Throughout the period I examined, in short, my "thinking" time was devoted mainly to activities that were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.

The main suggestion conveyed by the findings just described is that the operations that fill most of the time allegedly devoted to technical thinking are operations that can be performed more effectively by machines than by men. Severe problems are posed by the fact that these operations have to be performed upon diverse variables and in unforeseen and continually changing sequences. If those problems can be solved in such a way as to create a symbiotic relation between a man and a fast information-retrieval and data-processing machine, however, it seems evident that the cooperative interaction would greatly improve the thinking process.

It may be appropriate to acknowledge, at this point, that we are using the term "computer" to cover a wide class of calculating, data-processing, and information-storage-and-retrieval machines. The capabilities of machines in this class are increasing almost daily. It is therefore hazardous to make general statements about capabilities of the class. Perhaps it is equally hazardous to make general statements about the capabilities of men. Nevertheless, certain genotypic differences in capability between men and computers do stand out, and they have a bearing on the nature of possible man-computer symbiosis and the potential value of achieving it.

As has been said in various ways, men are noisy, narrow-band devices, but their nervous systems have very many parallel and simultaneously active channels. Relative to men, computing machines are very fast and very accurate, but they are constrained to perform only one or a few elementary operations at a time. Men are flexible, capable of "programming themselves contingently" on the basis of newly received information. Computing machines are single-minded, constrained by their "pre-programming." Men naturally speak redundant languages organized around unitary objects and coherent actions and employing 20 to 60 elementary symbols. Computers "naturally" speak nonredundant languages, usually with only two elementary symbols and no inherent appreciation either of unitary objects or of coherent actions.

To be rigorously correct, those characterizations would have to include many qualifiers. Nevertheless, the picture of dissimilarity (and therefore potential supplementation) that they present is essentially valid. Computing machines can do readily, well, and rapidly many things that are difficult or impossible for man, and men can do readily and well, though not rapidly, many things that are difficult or impossible for computers. That suggests that a symbiotic cooperation, if successful in integrating the positive characteristics of men and computers, would be of great value. The differences in speed and in language, of course, pose difficulties that must be overcome.
4 Separable Functions of Men and Computers in the Anticipated Symbiotic Association

It seems likely that the contributions of human operators and equipment will blend together so completely in many operations that it will be difficult to separate them neatly in analysis. That would be the case if, in gathering data on which to base a decision, for example, both the man and the computer came up with relevant precedents from experience and if the computer then suggested a course of action that agreed with the man's intuitive judgment. (In theorem-proving programs, computers find precedents in experience, and in the SAGE System, they suggest courses of action. The foregoing is not a far-fetched example.) In other operations, however, the contributions of men and equipment will be to some extent separable.

Men will set the goals and supply the motivations, of course, at least in the early years. They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. They will remember that such-and-such a person did some possibly relevant work on a topic of interest back in 1947, or at any rate shortly after World War II, and they will have an idea in what journals it might have been published. In general, they will make approximate and fallible, but leading, contributions, and they will define criteria and serve as evaluators, judging the contributions of the equipment and guiding the general line of thought.

In addition, men will handle the very-low-probability situations when such situations do actually arise. (In current man-machine systems, that is one of the human operator's most important functions. The sum of the probabilities of very-low-probability alternatives is often much too large to neglect.) Men will fill in the gaps, either in the problem solution or in the computer program, when the computer has no mode or routine that is applicable in a particular circumstance.

The information-processing equipment, for its part, will convert hypotheses into testable models and then test the models against data (which the human operator may designate roughly and identify as relevant when the computer presents them for his approval). The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs ("cutting the cake" in whatever way the human operator specifies, or in several alternative ways if the human operator is not sure what he wants). The equipment will interpolate, extrapolate, and transform. It will convert static equations or logical statements into dynamic models so the human operator can examine their behavior. In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.

In addition, the computer will serve as a statistical-inference, decision-theory, or game-theory machine to make elementary evaluations of suggested courses of action whenever there is enough basis to support a formal statistical analysis. Finally, it will do as much diagnosis, pattern-matching, and relevance-recognizing as it profitably can, but it will accept a clearly secondary status in those areas.
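As a rough sketch of such an "elementary evaluation," the lines below rank suggested courses of action by expected value; all of the course names, probabilities, and payoffs are invented for illustration, and a genuine decision-theoretic analysis would be far more elaborate:

def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs for one course of action
    return sum(p * payoff for p, payoff in outcomes)

courses = {
    "hold position": [(0.7, 10), (0.3, -40)],
    "advance":       [(0.5, 60), (0.5, -50)],
    "withdraw":      [(0.9, -5), (0.1, -10)],
}

# Rank the alternatives for the human operator's final judgment.
for name in sorted(courses, key=lambda n: expected_value(courses[n]), reverse=True):
    print(f"{name}: expected value {expected_value(courses[name]):+.1f}")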
5 Prerequisites for Realization of Man-Computer Symbiosis

The data-processing equipment tacitly postulated in the preceding section is not available. The computer programs have not been written. There are in fact several hurdles that stand between the nonsymbiotic present and the anticipated symbiotic future. Let us examine some of them to see more clearly what is needed and what the chances are of achieving it.
5.1 Speed Mismatch Between Men and Computers

Any present-day large-scale computer is too fast and too costly for real-time cooperative thinking with one man. Clearly, for the sake of efficiency and economy, the computer must divide its time among many users. Time-sharing systems are currently under active development. There are even arrangements to keep users from "clobbering" anything but their own personal programs.

It seems reasonable to envision, for a time 10 or 15 years hence, a "thinking center" that will incorporate the functions of present-day libraries together with anticipated advances in information storage and retrieval and the symbiotic functions suggested earlier in this paper. The picture readily enlarges itself into a network of such centers, connected to one another by wide-band communication lines and to individual users by leased-wire services. In such a system, the speed of the computers would be balanced, and the cost of the gigantic memories and the sophisticated programs would be divided by the number of users.
5.2 Memory Hardware Requirements

When we start to think of storing any appreciable fraction of a technical literature in computer memory, we run into billions of bits and, unless things change markedly, billions of dollars.

The first thing to face is that we shall not store all the technical and scientific papers in computer memory. We may store the parts that can be summarized most succinctly (the quantitative parts and the reference citations) but not the whole. Books are among the most beautifully engineered, and human-engineered, components in existence, and they will continue to be functionally important within the context of man-computer symbiosis. (Hopefully, the computer will expedite the finding, delivering, and returning of books.)

The second point is that a very important section of memory will be permanent: part indelible memory and part published memory. The computer will be able to write once into indelible memory, and then read back indefinitely, but the computer will not be able to erase indelible memory. (It may also over-write, turning all the 0's into 1's, as though marking over what was written earlier.) Published memory will be "read-only" memory. It will be introduced into the computer already structured. The computer will be able to refer to it repeatedly, but not to change it. These types of memory will become more and more important as computers grow larger. They can be made more compact than core, thin-film, or even tape memory, and they will be much less expensive. The main engineering problems will concern selection circuitry.
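The write-once behavior described here is easy to model. In the sketch below (the class and method names are mine, not Licklider's), indelible memory can be marked over from 0 to 1 but never erased, and published memory arrives already structured and is read-only:

class IndelibleMemory:
    def __init__(self, n_bits):
        self.bits = [0] * n_bits

    def write(self, address, bit):
        # Over-writing may only mark over what was written earlier:
        # 0 -> 1 is permitted, but a stored 1 can never go back to 0.
        if bit == 0 and self.bits[address] == 1:
            raise ValueError("indelible memory cannot be erased")
        self.bits[address] |= bit

    def read(self, address):
        return self.bits[address]

class PublishedMemory:
    def __init__(self, contents):
        self._contents = tuple(contents)  # structured before it is loaded

    def read(self, address):
        return self._contents[address]    # no write method is provided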

In so far as other aspects of memory requirement are concerned, we may count upon the continuing development of ordinary scientific and business computing machines. There is some prospect that memory elements will become as fast as processing (logic) elements. That development would have a revolutionary effect upon the design of computers.
5.3 Memory Organization Requirements

Implicit in the idea of man-computer symbiosis are the requirements that information be retrievable both by name and by pattern and that it be accessible through procedure much faster than serial search. At least half of the problem of memory organization appears to reside in the storage procedure. Most of the remainder seems to be wrapped up in the problem of pattern recognition within the storage mechanism or medium. Detailed discussion of these problems is beyond the present scope. However, a brief outline of one promising idea, "trie memory," may serve to indicate the general nature of anticipated developments.

Trie memory is so called by its originator, Fredkin [10], because it is designed to facilitate retrieval of information and because the branching storage structure, when developed, resembles a tree. Most common memory systems store functions of arguments at locations designated by the arguments. (In one sense, they do not store the arguments at all. In another and more realistic sense, they store all the possible arguments in the framework structure of the memory.) The trie memory system, on the other hand, stores both the functions and the arguments. The argument is introduced into the memory first, one character at a time, starting at a standard initial register. Each argument register has one cell for each character of the ensemble (e.g., two for information encoded in binary form) and each character cell has within it storage space for the address of the next register. The argument is stored by writing a series of addresses, each one of which tells where to find the next. At the end of the argument is a special "end-of-argument" marker. Then follow directions to the function, which is stored in one or another of several ways, either further trie structure or "list structure" often being most effective.

The trie memory scheme is inefficient for small memories, but it becomes increasingly efficient in using available storage space as memory size increases. The attractive features of the scheme are these:
1) The retrieval process is extremely simple. Given the argument, enter the standard initial register with the first character, and pick up the address of the second. Then go to the second register, and pick up the address of the third, etc.
2) If two arguments have initial characters in common, they use the same storage space for those characters.
3) The lengths of the arguments need not be the same, and need not be specified in advance.
4) No room in storage is reserved for or used by any argument until it is actually stored. The trie structure is created as the items are introduced into the memory.
5) A function can be used as an argument for another function, and that function as an argument for the next. Thus, for example, by entering with the argument "matrix multiplication," one might retrieve the entire program for performing a matrix multiplication on the computer.
6) By examining the storage at a given level, one can determine what thus-far similar items have been stored. For example, if there is no citation for Egan, J. P., it is but a step or two backward to pick up the trail of Egan, James ... .
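In present-day terms, Fredkin's scheme is the data structure now simply called a trie. The minimal Python sketch below is my own (the node representation and the end-of-argument marker are implementation choices, not Fredkin's): it stores an argument one character at a time, shares storage for common initial characters (feature 2), and supports the "thus-far similar" lookup of feature 6:

class TrieMemory:
    def __init__(self):
        self.root = {}  # the standard initial register

    def store(self, argument, function):
        register = self.root
        for ch in argument:                  # one character at a time
            register = register.setdefault(ch, {})
        register["$end"] = function          # end-of-argument marker -> function

    def retrieve(self, argument):
        register = self.root
        for ch in argument:                  # each cell holds the next address
            register = register[ch]
        return register["$end"]

    def similar(self, prefix):
        # Feature 6: list the thus-far similar items stored under a prefix.
        register = self.root
        for ch in prefix:
            register = register[ch]
        stack, found = [(register, prefix)], []
        while stack:
            node, so_far = stack.pop()
            for key, value in node.items():
                if key == "$end":
                    found.append(so_far)
                else:
                    stack.append((value, so_far + key))
        return found

memory = TrieMemory()
memory.store("Egan, J. P.", "citation A")
memory.store("Egan, James", "citation B")
print(memory.retrieve("Egan, James"))   # "citation B"
print(memory.similar("Egan, J"))        # both Egan entries share storage to here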

The properties just described do not include all the desired ones, but they bring computer storage into resonance with human operators and their predilection to designate things by naming or pointing.
5.4 The Language Problem

The basic dissimilarity between human languages and computer languages may be the most serious obstacle to true symbiosis. It is reassuring, however, to note what great strides have already been made, through interpretive programs and particularly through assembly or compiling programs such as FORTRAN, to adapt computers to human language forms. The "Information Processing Language" of Shaw, Newell, Simon, and Ellis [24] represents another line of rapprochement. And, in ALGOL and related systems, men are proving their flexibility by adopting standard formulas of representation and expression that are readily translatable into machine language.

For the purposes of real-time cooperation between men and computers, it will be necessary, however, to make use of an additional and rather different principle of communication and control. The idea may be highlighted by comparing instructions ordinarily addressed to intelligent human beings with instructions ordinarily used with computers. The latter specify precisely the individual steps to take and the sequence in which to take them. The former present or imply something about incentive or motivation, and they supply a criterion by which the human executor of the instructions will know when he has accomplished his task. In short: instructions directed to computers specify courses; instructions directed to human beings specify goals.

Men appear to think more naturally and easily in terms of goals than in terms of courses. True, they usually know something about directions in which to travel or lines along which to work, but few start out with precisely formulated itineraries. Who, for example, would depart from Boston for Los Angeles with a detailed specification of the route? Instead, to paraphrase Wiener, men bound for Los Angeles try continually to decrease the amount by which they are not yet in the smog.

Computer instruction through specification of goals is being approached along two paths. The first involves problem-solving, hill-climbing, self-organizing programs. The second involves real-time concatenation of preprogrammed segments and closed subroutines which the human operator can designate and call into action simply by name.

Along the first of these paths, there has been promising exploratory work. It is clear that, working within the loose constraints of predetermined strategies, computers will in due course be able to devise and simplify their own procedures for achieving stated goals. Thus far, the achievements have not been substantively important; they have constituted only "demonstration in principle." Nevertheless, the implications are far-reaching.
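The flavor of such goal-directed programs can be conveyed in a few lines. In the sketch below, which is purely my own demonstration in principle, the operator specifies only a goal, a quantity to be decreased, and the program finds its own course, Wiener-style:

def hill_climb(distance_from_goal, start, step=0.1, iterations=1000):
    # The operator supplies no itinerary, only the measure of how far
    # the current position is from the goal; the program repeatedly
    # moves to whichever neighboring position decreases that measure.
    x = start
    for _ in range(iterations):
        x = min((x - step, x, x + step), key=distance_from_goal)
    return x

# Goal: minimize (x - 3)^2. No route toward 3 is ever specified.
print(hill_climb(lambda x: (x - 3.0) ** 2, start=-20.0))  # approximately 3.0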

Although the second path is simpler and apparently capable of earlier realization, it has been relatively neglected. Fredkin's trie memory provides a promising paradigm. We may in due course see a serious effort to develop computer programs that can be connected together like the words and phrases of speech to do whatever computation or control is required at the moment. The consideration that holds back such an effort, apparently, is that the effort would produce nothing that would be of great value in the context of existing computers. It would be unrewarding to develop the language before there are any computing machines capable of responding meaningfully to it.
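A sketch of what such name-wise concatenation might look like, with an invented registry of closed subroutines (the segment names and the data below are hypothetical):

import statistics

def normalize(xs):
    peak = max(xs)
    return [x / peak for x in xs]

def smooth(xs):
    # Three-point moving average, clipped at the ends.
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

# Preprogrammed, closed subroutines, callable simply by name.
SUBROUTINES = {"normalize": normalize, "smooth": smooth, "mean": statistics.mean}

def run(utterance, data):
    # Concatenate the named segments like the words of a sentence.
    for word in utterance.split():
        data = SUBROUTINES[word](data)
    return data

print(run("normalize smooth mean", [2.0, 4.0, 6.0, 8.0]))  # 0.625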
5.5 Input and Output Equipment

The department of data processing that seems least advanced, in so far as the requirements of man-computer symbiosis are concerned, is the one that deals with input and output equipment or, as it is seen from the human operator's point of view, displays and controls. Immediately after saying that, it is essential to make qualifying comments, because the engineering of equipment for high-speed introduction and extraction of information has been excellent, and because some very sophisticated display and control techniques have been developed in such research laboratories as the Lincoln Laboratory. By and large, in generally available computers, however, there is almost no provision for any more effective, immediate man-machine communication than can be achieved with an electric typewriter.

Displays seem to be in a somewhat better state than controls. Many computers plot graphs on oscilloscope screens, and a few take advantage of the remarkable capabilities, graphical and symbolic, of the charactron display tube. Nowhere, to my knowledge, however, is there anything approaching the flexibility and convenience of the pencil and doodle pad or the chalk and blackboard used by men in technical discussion.

1) Desk-Surface Display and Control: Certainly, for effective man-computer interaction, it will be necessary for the man and the computer to draw graphs and pictures and to write notes and equations to each other on the same display surface. The man should be able to present a function to the computer, in a rough but rapid fashion, by drawing a graph. The computer should read the man's writing, perhaps on the condition that it be in clear block capitals, and it should immediately post, at the location of each hand-drawn symbol, the corresponding character as interpreted and put into precise type-face. With such an input-output device, the operator would quickly learn to write or print in a manner legible to the machine. He could compose instructions and subroutines, set them into proper format, and check them over before introducing them finally into the computer's main memory. He could even define new symbols, as Gilmore and Savell [14] have done at the Lincoln Laboratory, and present them directly to the computer. He could sketch out the format of a table roughly and let the computer shape it up with precision. He could correct the computer's data, instruct the machine via flow diagrams, and in general interact with it very much as he would with another engineer, except that the "other engineer" would be a precise draftsman, a lightning calculator, a mnemonic wizard, and many other valuable partners all in one.

2) Computer-Posted Wall Display: In some technological systems, several men share responsibility for controlling vehicles whose behaviors interact. Some information must be presented simultaneously to all the men, preferably on a common grid, to coordinate their actions. Other information is of relevance only to one or two operators. There would be only a confusion of uninterpretable clutter if all the information were presented on one display to all of them. The information must be posted by a computer, since manual plotting is too slow to keep it up to date.

The problem just outlined is even now a critical one, and it seems certain to become more and more critical as time goes by. Several designers are convinced that displays with the desired characteristics can be constructed with the aid of flashing lights and time-sharing viewing screens based on the light-valve principle.

The large display should be supplemented, according to most of those who have thought about the problem, by individual display-control units. The latter would permit the operators to modify the wall display without leaving their locations. For some purposes, it would be desirable for the operators to be able to communicate with the computer through the supplementary displays and perhaps even through the wall display. At least one scheme for providing such communication seems feasible.

The large wall display and its associated system are relevant, of course, to symbiotic cooperation between a computer and a team of men. Laboratory experiments have indicated repeatedly that informal, parallel arrangements of operators, coordinating their activities through reference to a large situation display, have important advantages over the arrangement, more widely used, that locates the operators at individual consoles and attempts to correlate their actions through the agency of a computer. This is one of several operator-team problems in need of careful study.

3) Automatic Speech Production and Recognition: How desirable and how feasible is speech communication between human operators and computing machines? That compound question is asked whenever sophisticated data-processing systems are discussed. Engineers who work and live with computers take a conservative attitude toward the desirability. Engineers who have had experience in the field of automatic speech recognition take a conservative attitude toward the feasibility. Yet there is continuing interest in the idea of talking with computing machines. In large part, the interest stems from realization that one can hardly take a military commander or a corporation president away from his work to teach him to type. If computing machines are ever to be used directly by top-level decision makers, it may be worthwhile to provide communication via the most natural means, even at considerable cost.

Preliminary analysis of his problems and time scales suggests that a corporation president would be interested in a symbiotic association with a computer only as an avocation. Business situations usually move slowly enough that there is time for briefings and conferences. It seems reasonable, therefore, for computer specialists to be the ones who interact directly with computers in business offices.

The military commander, on the other hand, faces a greater probability of having to make critical decisions in short intervals of time. It is easy to overdramatize the notion of the ten-minute war, but it would be dangerous to count on having more than ten minutes in which to make a critical decision. As military system ground environments and control centers grow in capability and complexity, therefore, a real requirement for automatic speech production and recognition in computers seems likely to develop. Certainly, if the equipment were already developed, reliable, and available, it would be used.

In so far as feasibility is concerned, speech production poses less severe problems of a technical nature than does automatic recognition of speech sounds. A commercial electronic digital voltmeter now reads aloud its indications, digit by digit. For eight or ten years, at the Bell Telephone Laboratories, the Royal Institute of Technology (Stockholm), the Signals Research and Development Establishment (Christchurch), the Haskins Laboratory, and the Massachusetts Institute of Technology, Dunn [6], Fant [7], Lawrence [15], Cooper [3], Stevens [26], and their co-workers, have demonstrated successive generations of intelligible automatic talkers. Recent work at the Haskins Laboratory has led to the development of a digital code, suitable for use by computing machines, that makes an automatic voice utter intelligible connected discourse [16].

The feasibility of automatic speech recognition depends heavily upon the size of the vocabulary of words to be recognized and upon the diversity of talkers and accents with which it must work. Ninety-eight per cent correct recognition of naturally spoken decimal digits was demonstrated several years ago at the Bell Telephone Laboratories and at the Lincoln Laboratory [4], [9]. To go a step up the scale of vocabulary size, we may say that an automatic recognizer of clearly spoken alpha-numerical characters can almost surely be developed now on the basis of existing knowledge. Since untrained operators can read at least as rapidly as trained ones can type, such a device would be a convenient tool in almost any computer installation.

For real-time interaction on a truly symbiotic level, however, a vocabulary of about 2000 words, e.g., 1000 words of something like basic English and 1000 technical terms, would probably be required. That constitutes a challenging problem. In the consensus of acousticians and linguists, construction of a recognizer of 2000 words cannot be accomplished now. However, there are several organizations that would happily undertake to develop an automatic recognizer for such a vocabulary on a five-year basis. They would stipulate that the speech be clear speech, dictation style, without unusual accent.

Although detailed discussion of techniques of automatic speech recognition is beyond the present scope, it is fitting to note that computing machines are playing a dominant role in the development of automatic speech recognizers. They have contributed the impetus that accounts for the present optimism, or rather for the optimism presently found in some quarters. Two or three years ago, it appeared that automatic recognition of sizeable vocabularies would not be achieved for ten or fifteen years; that it would have to await much further, gradual accumulation of knowledge of acoustic, phonetic, linguistic, and psychological processes in speech communication. Now, however, many see a prospect of accelerating the acquisition of that knowledge with the aid of computer processing of speech signals, and not a few workers have the feeling that sophisticated computer programs will be able to perform well as speech-pattern recognizers even without the aid of much substantive knowledge of speech signals and processes. Putting those two considerations together brings the estimate of the time required to achieve practically significant speech recognition down to perhaps five years, the five years just mentioned.


References

[1] A. Bernstein and M. deV. Roberts, "Computer versus chess-player," Scientific American, vol. 198, pp. 96-98; June, 1958.

[2] W. W. Bledsoe and I. Browning, "Pattern Recognition and Reading by Machine," presented at the Eastern Joint Computer Conf., Boston, Mass.; December, 1959.

[3] F. S. Cooper, et al., "Some experiments on the perception of synthetic speech sounds," J. Acoust. Soc. Amer., vol. 24, pp. 597-606; November, 1952.

[4] K. H. Davis, R. Biddulph, and S. Balashek, "Automatic recognition of spoken digits," in W. Jackson, Communication Theory, Butterworths Scientific Publications, London, Eng., pp. 433-441; 1953.

[5] G. P. Dinneen, "Programming pattern recognition," Proc. WJCC, pp. 94-100; March, 1955.

[6] H. K. Dunn, "The calculation of vowel resonances, and an electrical vocal tract," J. Acoust. Soc. Amer., vol. 22, pp. 740-753; November, 1950.

[7] G. Fant, "On the Acoustics of Speech," paper presented at the Third Internatl. Congress on Acoustics, Stuttgart, Ger.; September, 1959.

[8] B. G. Farley and W. A. Clark, "Simulation of self-organizing systems by digital computers," IRE Trans. on Information Theory, vol. IT-4, pp. 76-84; September, 1954.

[9] J. W. Forgie and C. D. Forgie, "Results obtained from a vowel recognition computer program," J. Acoust. Soc. Amer., vol. 31, pp. 1480-1489; November, 1959.

[10] E. Fredkin, "Trie memory," Communications of the ACM, vol. 3, pp. 490-499; September, 1960.

[11] R. M. Friedberg, "A learning machine: Part I," IBM J. Res. & Dev., vol. 2, pp. 2-13; January, 1958.

[12] H. Gelernter, "Realization of a Geometry Theorem Proving Machine," Unesco, NS, ICIP, 1.6.6, Internatl. Conf. on Information Processing, Paris, France; June, 1959.

[13] P. C. Gilmore, "A Program for the Production of Proofs for Theorems Derivable Within the First Order Predicate Calculus from Axioms," Unesco, NS, ICIP, 1.6.14, Internatl. Conf. on Information Processing, Paris, France; June, 1959.

[14] J. T. Gilmore and R. E. Savell, "The Lincoln Writer," Lincoln Laboratory, M. I. T., Lexington, Mass., Rept. 51-8; October, 1959.

[15] W. Lawrence, et al., "Methods and Purposes of Speech Synthesis," Signals Res. and Dev. Estab., Ministry of Supply, Christchurch, Hants, England, Rept. 56/1457; March, 1956.

[16] A. M. Liberman, F. Ingemann, L. Lisker, P. Delattre, and F. S. Cooper, "Minimal rules for synthesizing speech," J. Acoust. Soc. Amer., vol. 31, pp. 1490-1499; November, 1959.

[17] A. Newell, "The chess machine: an example of dealing with a complex task by adaptation," Proc. WJCC, pp. 101-108; March, 1955.

[18] A. Newell and J. C. Shaw, "Programming the logic theory machine," Proc. WJCC, pp. 230-240; March, 1957.

[19] A. Newell, J. C. Shaw, and H. A. Simon, "Chess-playing programs and the problem of complexity," IBM J. Res. & Dev., vol. 2, pp. 320-335; October, 1958.

[20] A. Newell, H. A. Simon, and J. C. Shaw, "Report on a general problem-solving program," Unesco, NS, ICIP, 1.6.8, Internatl. Conf. on Information Processing, Paris, France; June, 1959.

[21] J. D. North, "The rational behavior of mechanically extended man," Boulton Paul Aircraft Ltd., Wolverhampton, Eng.; September, 1954.

[22] O. G. Selfridge, "Pandemonium, a paradigm for learning," Proc. Symp. Mechanisation of Thought Processes, Natl. Physical Lab., Teddington, Eng.; November, 1958.

[23] C. E. Shannon, "Programming a computer for playing chess," Phil. Mag., vol. 41, pp. 256-275; March, 1950.

[24] J. C. Shaw, A. Newell, H. A. Simon, and T. O. Ellis, "A command structure for complex information processing," Proc. WJCC, pp. 119-128; May, 1958.

[25] H. Sherman, "A Quasi-Topological Method for Recognition of Line Patterns," Unesco, NS, ICIP, H.L.5, Internatl. Conf. on Information Processing, Paris, France; June, 1959.

[26] K. N. Stevens, S. Kasowski, and C. G. Fant, "Electric analog of the vocal tract," J. Acoust. Soc. Amer., vol. 25, pp. 734-742; July, 1953.

[27] Webster's New International Dictionary, 2nd ed., G. and C. Merriam Co., Springfield, Mass., p. 2555; 1958.
