Art has always been interactive; it's just the definition of interactivity that has changed. The experience of art is far from passive: whatever the artist's intent may be during the act of creation, the moment a work is shown to another, it serves as a communications device, transmitting information for the viewer to interpret. Meaning is constructed at the site of reception and therefore is determined by the viewer, making every work interactive.

In the last half of the 20th century, however, there has been an increasing emphasis on the viewer in both art and theory. As artists have moved away from the idealized individual contemplation of a work in the purified atmosphere of the museum, they have sought to include the viewer in new ways. Performances, happenings, installations, and time-based media such as video demand a new kind of viewer participation, ranging from physical, peripatetic exploration to actual involvement as an actor and even, at times, as a partner.

The new technologies take the artist-viewer relationship even further. Digital interactivity allows information to flow in both directions as the viewer acts and reacts to a work in dynamic dialogue with the artist and becomes, in effect, a collaborator. While the artist sets the stage--the parallels with stage or film direction are many in this new arena--by providing the environment and the programming structure, the viewer activates the piece, making decisions that create the specific experience. In this process, the viewer is empowered in a dramatic way.

Nonetheless, the artist is still in charge, setting the rules of play. Looking to the future, is it possible to make works of art that move beyond the limits set by the programmers so that the (v)user can achieve near equal stature with the artist by altering the very nature of the work being experienced?

The role of the artist shifts from one situation to another, and our thoughts and ideas about artists reflect deeply held cultural beliefs and ideologies. In different contexts, they have been seen as teachers, shamans, philosophers, prophets, therapists, magicians, and even demons. In most cases, they are seen as singular individuals with a unique vision to communicate. A seductive archetype is that of the lonely, misunderstood genius on the margins of society who prevails despite all odds (the Vincent van Gogh model). This archetype relies on the idea of an individual creatively working alone.

This definition obviously doesn't fit those working in electronic/digital media. They rely on a production team, much like a stage or film director. The exact nature of the collaboration varies in each case, but a team approach is required to realize each work. A quick look at the credits for each installation in this exhibition proves this point. While this relationship is evolving, and involves the viewer as a partner as well, it is not new. An artist with a large workshop was the norm during medieval, Renaissance, and Baroque times, with specialists required for each task: those who could mix and apply the plaster for frescoes, the pigment experts, the assistants who painted backgrounds or robes or faces. In some cases--Rubens, for example--whole works were done by assistants while the master came through to add a few touches, approve the work, and call it his own. This has led to generations of art historians trying to discern the various "hands" at work in each piece. The practice has continued throughout this century, with artists like Grant Wood hiring "cow," "barn," and "cloud" specialists, Andy Warhol and his cohorts making art in the Factory, or Mark Kostabi producing works on an assembly line. What remains the same with contemporary artists is that it is usually one individual--the artist--who gets the credit for the work. Can we imagine a scenario like the Academy Awards, where some of the awards go to a team of players? Or can we imagine a time when the (v)user gets to put his or her name on the work as well?

As we race toward the end of the millennium, we can't escape the Y2K hype. The media's obsession has built to an inescapable crescendo. Banks assure us that their software is fixed, that there will be an accurate accounting of funds on January 2nd. Reputable organizations like the Red Cross have issued lists of necessities to collect in case of massive computer malfunction. It is tempting to follow the example of those who, in 999, sold their possessions and fled to the hills to await the world's end. There is something fascinating about these fears, a technological twist on humanity's age-old helplessness in the face of overwhelming natural catastrophe.

The turn of the 20th century was marked by naive faith in technology and the inevitability of progress. The telephone, electric lights, moving pictures, phonographs, and motor cars were concrete signs that anything and everything was within the grasp of rational man. Some of the prognostications sound silly now: for example, that electrical currents would be able to silence crying babies, improve home life by reducing housework, control abusive husbands, eliminate the need for drugs, and even turn everyone's skin white. While the capabilities of electricity are enormous, the predictions reveal more about the writers and their times than they do about electricity itself. Today we are at a similar turning point, an arbitrary yet compelling historical marker. As we make our momentous entry into the 21st century, digital technologies inspire both optimistic predictions and fears about their place in our lives.

The development of photography and its pervasive influence provides an intriguing parallel to the effect of computers in contemporary life. It is doubtful that anyone could have predicted how the power and proliferation of images in the last century would change how we experience our lives. From an experimental beginning in the 1830s, photography expanded steadily beyond the small group of inventors and the growing number of journalist and artist users of the last century to the millions of amateurs in this century. From a magical process that seemed to accurately capture appearances, photography became the preferred medium of communication. The history of the century is captured by images, both still and moving; we strive to document our personal lives similarly because we believe that a series of photos will help us understand them. Because of our exposure to vast numbers of images, people often relate their dreams as if they were movies and describe experiences as though watching themselves on television. Images have invaded our consciousness at an unconscious level: while World Wars I and II were experienced through narrated newsreels and clips at the movies, through photographic journalism, and through radio announcers, television brought the battlefield images and victims of the Vietnam War into our living rooms.

Today we are assaulted with so many images on a daily basis that we cannot process them all. In 1989, it was estimated that "this morning 260,000 billboards will line the roads to work. This afternoon, 11,520 newspapers and 11,556 periodicals will be available for sale. And when the sun sets again, 21,689 theaters and 1,548 drive-ins will project movies; 27,000 video outlets will rent tapes; 162 million television sets will each play for 7 hours; and 41 million photographs will have been taken.... Every day...the average person is exposed to 1,600 ads. Although only eighty of those are noticed and only twelve provoke reaction, the atmosphere is thick with messages." (Image World: Art and Media Culture, Whitney Museum of American Art, New York, November 1989-February 1990, pp. 17-18.) And "tomorrow there will be more."

We suffer from visual overload (and those figures are ten years old); information overload has been added to this weight. Digital technologies from fax machines to personal computers to worldwide Internet access are radically changing our lives much as photography did in the last century. Our experience of time and space is being altered, virtual reality challenges our definition of reality, and power is being contested and redistributed. While buildings and communities are being wired for the future, dangers, real and imagined, are surfacing: the discovery that the Pentium III chip leaves a digital trail, or that the Defense Department's computers can be accessed (hackers, in fact, gain access 10 to 15 times a day), that children will grow up dependent on computer software for spelling, and that man-made viruses and weather conditions can shut down whole systems. The immediacy of communications, the possibility of creating alternate realities, and the opportunity for multi-tasking continue to alter the way we think and operate.

These changes, both those already in place and those not yet realized, are not appealing or available for everyone. A recent ABC news program revealed that 50% of Americans have no computer, that 66% are not connected to the Internet, and that 13% have no access to the high-speed phone lines necessary for Internet access. While Internet use has grown rapidly, the "digital divide" creates yet another gap between the haves and the have-nots. Perhaps the key shift today is that while images are inherently static, computers now allow interactivity. (Bob Hope called the television "the box in the living room that looks back at me.") At the turn of this century, lives were irrevocably changed by the new technologies, and the power of images dominated the century whether people owned cameras or televisions or not. Now we can act, not only determining what images and data we access but also producing and disseminating images and data. It is this interactivity that shifts power to the individual, thereby changing how we relate to technology.