1
An Overview
Media studies has a short history but a long past. Though the name
is not more than three decades old, the central intellectual problems of media
studies have been pursued in a variety of intellectual traditions over the
course of the twentieth century (and before). Looking backwards, we are in a
good position to discover ancestors in the American progressives (Chicago), the
effects tradition (Columbia), critical theory (Frankfurt), and British cultural
studies (Birmingham). This class attempts to give a guided tour through this
thicket and to help you become familiar, comfortable, and even fluent with
theoretical vocabulary and questions that have been historically important in
the formation of media studies.
Of all the times in history to be studying the mass media, this is
probably the best. Not only do the dizzying technological and economic upheavals
within the media industries themselves make it so, but so does the outpouring of
theory, argument, and research on the mass media from diverse academic fields.
Theories about mass communication have never been more plural--or more
contentious. The area of knowledge we provisionally call "Mass
Communication Theories" is an unsettled terrain, something of a frontier,
and frontiers are known for adventures and dangers, lawlessness and open
vistas. This course does not pretend to offer more than a survey of important
landmarks.
It will deal with central traditions of study, topics of debate, and
conceptual problems in media studies, with a bias toward the United States. You
will not learn everything you need about mass communication theories, let alone
social theory. Recent work is slighted in this course (but not in other
courses). Your development into a social theorist of media, culture, and
society will likely take at least your entire program.
It is assumed that mass communication theory is best understood as a
branch of social theory. Not only does mass communication theory historically
mimic the main currents of social theory, but the concept of communication is
central to efforts to understand modern societies: indeed, the attempts to
theorize "society" and "communication" arise in the same
moment, as reflected in Cooley's Social Organization, for instance. The class
aims to help you begin to theorize about mass communication and society and to
introduce you to a variety of positions. Part of the Iowa tradition is that
every scholar, whatever his or her particular method, area, or topic, should be
a theorist, and a theorist is (to give a minimalist definition) one who
argues, gives reasons, and makes connections to larger problems.
Theory is not only something that people do in their armchairs; it
is an art that every scholar, if not citizen and human, should cultivate.
"All anthropoi naturally desire to know," said Aristotle. This class
is an invitation to theory.
Our approach will be historical. A chief way to study theory is via
the history of theory. All theory is a rapprochement with the past of theory.
Further, historical narration, as many recent theorists have claimed, is both a
political and intellectual task. It is not a matter of stringing events or
milestones together, but of claiming a lineage, and thus staking a claim to the
present. Readings for this class have been chosen for cartographic rather than
cutting-edge qualities--their ability to help anchor a cognitive map of ideas.
This class also serves as preparation for the
qualifying examination in media studies. It is something of a theory survival
course. It aims to introduce you to the vocabulary and intellectual style and
basic issues of social theory in general, to the world of theoretical talk in
which you will be immersed here. We will also pursue, in passing, what could be
called the philosophy of scholarship--why we should theorize, publish, teach,
and what it is all for. The personal resources that give rise to theory are
precious and need fostering.
Communication is the process of creating shared meaning. Communication
between a mass medium and its audience is mass communication, a primary
contributor to the construction and maintenance of culture. The precise
relation of culture to mass communication and its function in our lives has
long been debated. Because of the power mass communication has in shaping
culture, it presents us with both opportunities and responsibilities. Media
industries must operate ethically or risk negatively influencing the culture in
which they exist. Consumers likewise have the responsibility to critically
examine media messages. Both technology and money shape the mass communication
process. Innovations in technology bring about new forms of media, or make
older forms more accessible. As profit-making entities, the media must respond to
the wishes of both advertisers and audience. Ultimately, though, the consumers
choose which forms of media they support and how they react to the messages
that face them. Technological and economic factors such as convergence and
globalization will influence the evolution of mass communication.
In preliterate cultures knowledge was passed on orally. With the
advent of writing, literacy became more highly valued than memory. After
Gutenberg's invention of the printing press, literacy spread to all levels
of society, and by the mid-19th century a middle class with discretionary time
and income had emerged, providing a mass audience of readers. Mass media helped
to unify the diverse cultures of the United States. Television in particular
was instrumental in transforming our country into a consumer economy.
Understanding the ways in which media impact individuals and society is an
important aspect of media literacy. Other elements include an understanding of
the process of mass communication and an awareness of media content as a
"text" that provides insight into contemporary culture. In order to
develop our media literacy, we must be able to understand the process by which
media send messages and learn to analyze those messages. This requires an
ability and willingness to analyze media messages, a knowledge of genre
conventions, and an ability to distinguish emotional from reasoned reactions.
By increasing our critical awareness, we can make better choices from among
media content.
The Internet was inspired by
Joseph C.R. Licklider's vision of a nationwide network of computers, and
further developed by the U.S. military. Personal computers made the Internet
available to non-institutional users. The most common uses of the Internet are
accessing World Wide Web files, using e-mail, and participating in mailing
lists and USENET groups. It is difficult to estimate the number of Internet
users. Usage continues to increase, with teenage girls now the fastest-growing
group of users. The development of on-line commerce has been controversial,
since many of the original Internet users object to their medium being
overtaken by commercialization. MP3, audio file compression software, is a form
of convergence that is changing the distribution of music dramatically.
The Internet allows every user to become a publisher. This property
has raised First Amendment issues related to misinformation, online
pornography, and copyright protection. Privacy is another concern, regarding
both online communication and easy access to personal information. The Internet
is increasingly being used as a political forum in which citizens can
communicate directly with elected officials, but runs the risk of closing out
those who lack sufficient media literacy.
Most of the early literature printed in North America was political
or religious in content. Technological advances in printing and increases in
literacy led to the flowering of the novel in the 1800s. Due to the cultural
importance of books, censorship has been a controversial issue. There are now
several categories of books, including trade, professional, el-hi, higher ed,
and mass market paperbacks. Although publishing houses were traditionally small
businesses, most books today are published by huge conglomerates. At the same
time, independent bookstores are increasingly giving way to chains. Electronic
publishing is broadening the options for aspiring writers. Although illiteracy
is rare in America, aliteracy (the unwillingness to read) has caused some to wonder
if books are a dying medium. The success of the Harry Potter books, however,
has turned this trend around and helped bring about a rebirth of reading.
Newspapers as we know them date back to the seventeenth century.
Even before the Revolutionary War, American newspapers largely maintained
independence from government control. The first mass circulation newspaper was
the New York Sun, emerging in 1833 and selling for one cent a copy. Groups such
as Native Americans and African Americans also used the medium at this time to
express views outside the mainstream. Competition in the 1880s led to the rise
of yellow journalism.
Newspaper chains began forming in the 1920s, and have grown more
numerous over time. The advent of television brought further changes to the
medium. Today, metropolitan dailies are losing readership as suburban and small
town papers grow in popularity. Nevertheless, chains control 82% of all
circulation. Civic journalism and changing technology are two important issues
for all newspapers. Editors are also facing the dilemma of giving younger
readers the soft news they want or losing them as customers.
The magazine was introduced to America in the mid-18th century.
Factors such as increased literacy and industrialization fueled growth in the
industry after the Civil War. The medium was an important force for social
change in the early 20th century, due to the muckrakers. After the coming of
television, magazines continued to prosper through increased specialization. Of
more than 22,000 magazines in operation today, the top 800 consumer magazines
account for three-quarters of the industry's revenue. Since space is sold on
the basis of circulation, research groups such as the Audit Bureau of
Circulation and Simmons verify a magazine's circulation numbers. New types of magazines,
such as Webzines and synergistic magazines, are currently emerging. In order to
compete with the specialization of cable television, many magazines are now
seeking global audiences. Advertisers are becoming increasingly influential
over the stories that appear with their ads.
Movies began with the sequential action photographs of Eadweard
Muybridge in 1877. Narrative films were introduced around the turn of the
century. Film soon became a large, studio-controlled business on the West
Coast. The industry weathered the Great Depression, only to be forced into
change with the coming of television.
Today, major studios produce most movie revenue, though independent
films are often more innovative. Rather than take chances, the blockbuster
mentality of the big studios leads them to rely on such strategies as concept
films, sequels and remakes, and movies based on television shows.
Convergence of film with other forms of media has allowed new
methods of distribution and exhibition. Digital video is beginning to open up
new methods of production. The practice of product placement often affects
artistic decisions, which a media-literate person can learn to detect.
The technology for radio was developed in the late 19th century, at
about the same time that sound recording was being perfected. The medium was
used in the early decades of the 20th century for point-to-point communication,
and in 1920 KDKA made the first commercial radio broadcast.
Advertising became the economic base of radio in the 1920s. Because
it offered free entertainment, radio became increasingly popular during the
Great Depression. This time was known as the golden age of radio, until
television began to overtake it in popularity after World War II.
Radio is successful today largely because it is local and
specialized, which appeals to advertisers as well as listeners. The recording
industry, on the other hand, is primarily controlled by four major companies.
The two industries have changed and prospered due to technological advances such
as digital recording and convergence of radio and the Internet. Controversial
music file-sharing software such as Napster may transform the recording
industry, in spite of legal attempts to shut it down.
Although methods of television transmission were developed as early
as 1884, television first began to gain popularity after World War II. The 1948
television freeze allowed time for the FCC to develop a plan for growth, and by
1960, 90% of American homes had a television.
The business of television is still dominated by the networks, but
new technologies are beginning to erode their power. Cable, VCR, DVD, the
remote control, direct broadcast satellite, digital video recorders, digital
television, and the Internet have diminished networks' authority and changed
the relationship between medium and audience.
News staging is an ethical issue being debated by media literate
viewers. Staging can range from giving the false appearance that a reporter is
on the scene to re-creating or simulating an event, all for the purpose of
holding the audience's attention.
Public relations is difficult to define, because it can involve
publicity, research, public affairs, media relations, promotion, merchandising,
and more. In its maturation as an industry, public relations has passed through
four stages, culminating in advanced two-way communication. Some 200,000 people
work in public relations in the United States, in 4,000 firms and in in-house
PR operations. They typically carry out 14 activities, including publicity,
communication, public affairs, and government relations. PR is not the same as
advertising, in part because of its policy-making component.
Globalization, specialization, and converging technologies are
trends in public relations. An important issue in the industry is ethical
standards, one example being the proliferation of video news releases and their
challenge to media literate consumers.
Advertising dates back thousands of years. Changes caused by
industrialization and the Civil War fueled its growth, with magazines being an
important vehicle for advertising. Radio, then television, changed the nature
of advertising, as commercials in turn changed the nature of each medium.
Although many see advertising as a critical aspect of our capitalistic society,
others find it intrusive, deceptive, and demeaning to our culture.
The 6,000 advertising agencies in the United States are paid either
through retainers or commissions. Types of ads include retail, direct market,
institutional, and public service. The link between seeing an ad and buying the
advertising product is tenuous at best.
New methods of cyberadvertising have developed over the past several
years, such as transaction journalism and intermercials. Other important developments
are increasingly fragmented audiences and globalization.
There are many theories of mass communication. The paradigms of
these theories shift as new technologies and new media are introduced. Mass
communication theory has passed through four eras. The era of mass society saw
media as all-powerful and audiences as defenseless against their effects. In
the era of the scientific perspective, research showed that media affected some
people much more strongly than others, often according to social characteristics.
The era of limited effects theory included several theories, including attitude
change theory and the uses and gratifications approach. Recognizing the power
of media effects, theorists discussed agenda setting, dependency theory, and
social cognitive theory. Contemporary mass communication theory can be called
the era of cultural theory. Media effects are seen as shaped by audience
members' involvement in the process, and reality is seen as socially constructed.
Gerbner's cultivation analysis and critical cultural theory are two important
examples of contemporary theories.
Media industry researchers debate whether media effects are
diminished when audiences know that content is only make-believe and whether
media reinforce preexisting values or are replacing them. Social scientists
test the explanations of various theories advanced to answer these questions by
doing research.
Quantitative research methods include experiments, surveys, and
content analysis. Experiments sacrifice generalizability for control and the
demonstration of causation. Surveys sacrifice causal explanations for
generalizability and breadth. Qualitative methods include historical, critical,
and ethnographic research. Researchers use methods such as the analysis of
primary and secondary sources and the undertaking of participant-observer
studies.
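To make one of these methods concrete, here is a minimal content-analysis sketch; the coding scheme and transcripts are invented for illustration, and the sketch shows only the counting step of the method:

```python
# Minimal content-analysis sketch: count predefined coding categories in a
# sample of program transcripts. The coding scheme and transcripts are
# invented for illustration only.
from collections import Counter

categories = {
    "violence": ["fight", "shoot", "punch"],
    "affection": ["hug", "kiss", "comfort"],
}

transcripts = [
    "the hero and villain fight, then fight again before the shoot-out",
    "she gives him a hug and they kiss as the credits roll",
]

counts = Counter()
for text in transcripts:
    for category, markers in categories.items():
        counts[category] += sum(text.count(marker) for marker in markers)

print(counts)
```

A real content analysis would also draw its sample systematically and test intercoder reliability; those steps are omitted here.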
One of the most studied effects issues is the impact of mediated
violence. Researchers have studied the link between violent media content and
subsequent aggressive behavior, with social learning theory discrediting the
notion of catharsis. There is, however, disagreement regarding the exact
interplay of content and behavior. The impact of media portrayals of different
groups of people and the impact of the media on political campaigns are two
other effects issues that have been studied. Freedom of the press is
established by the First Amendment of the Constitution. This protection extends
to all forms of media but can be suspended in cases of clear and present danger
and to balance competing interests, as in the conflict between a free press and
a fair trial. Libel, slander, and obscenity are not protected. Other specific
issues of media responsibility include definitions of indecency, the impact of
deregulation, and copyright. Social responsibility theory is the idea that to
remain free of government control, the media must serve the public by acting
responsibly. This does not free audiences from their responsibility to be media
literate.
Applied ethics is the practice of applying general ethical
guidelines and values to a specific situation. Self-regulation by the media
often results in ethical dilemmas involving such issues as truth and honesty,
privacy, confidentiality, and conflict of interest. Media professionals have
established formal standards of ethical behavior, though some people object
that they are ambiguous and unenforceable.
Almost since radio's inception, signals have been broadcast
internationally in order to circumvent government control. Today, so much
entertainment is popular internationally that many countries have laws limiting
the amount of airtime devoted to foreign content. The effects of a country's
political system on its mass communications can be broken into five concepts:
Western, Development, Revolutionary, Authoritarianism, and Communism.
Regardless of the concept, most radio and television programming follows the
model of the United States, though other countries might use the media to
enforce different social messages. The advent of satellites and the Internet
has thwarted the attempts of governments to control the media. UNESCO has
called for the establishment of international rules that allow governments to
monitor media content, but Western nations reject this limitation on the
freedom of the press. The global village is bringing world communities closer
together, but often at the expense of native cultures.
2
The Media
Theories
Cultivation Theory of Mass Media
"Cultivation analysis concentrates on the enduring and common consequences
of growing up and living with television. Theories of the cultivation process
attempt to understand and explain the dynamics of television as the distinctive
and dominant cultural force of our age. Cultivation analysis uses a survey
instrument, administered to representative samples of respondents. The
responses are analyzed by a number of demographic variables including gender,
age, race, education, income, and political self-designation (liberal,
moderate, conservative). Where applicable, other controls, such as urban-rural
residence, newspaper reading, and party affiliation are also used.
Cultivation analysis is a part of the Cultural Indicators (CI)
research project. CI is a data base and a series of reports relating recurrent
features of the world of television to viewer conceptions of reality. Its
cumulative content data archive contains observations on over 4,500 programmes
and 40,000 characters coded according to many thematic, demographic and action
categories."
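As a rough illustration of the kind of comparison such survey data support, the sketch below computes a "cultivation differential" (the heavy-viewer rate of the "television answer" minus the light-viewer rate), overall and within a demographic control. All respondents, cutoffs, and names are invented for the example.

```python
# Hypothetical sketch of a basic cultivation comparison. Each respondent is
# (hours of TV per day, gender, gave the "television answer" to a survey item).
# All data are invented for illustration.
respondents = [
    (5.0, "F", True), (0.5, "M", False), (4.5, "M", True), (1.0, "F", False),
    (6.0, "F", True), (2.0, "M", False), (5.5, "M", False), (0.5, "F", True),
]

def viewer_group(hours, cutoff=4.0):
    """Classify a respondent as a heavy or light viewer by daily hours."""
    return "heavy" if hours >= cutoff else "light"

def tv_answer_rate(rows):
    """Percentage of respondents giving the television answer."""
    return 100.0 * sum(gave_answer for _, _, gave_answer in rows) / len(rows)

heavy = [r for r in respondents if viewer_group(r[0]) == "heavy"]
light = [r for r in respondents if viewer_group(r[0]) == "light"]

# The "cultivation differential": heavy-viewer rate minus light-viewer rate.
differential = tv_answer_rate(heavy) - tv_answer_rate(light)
print(f"overall differential: {differential:.0f} points")

# Cultivation analysis repeats the comparison within demographic subgroups
# (gender, age, education, ...) as controls; gender is shown here.
for gender in ("F", "M"):
    sub = [r for r in respondents if r[1] == gender]
    h = [r for r in sub if viewer_group(r[0]) == "heavy"]
    l = [r for r in sub if viewer_group(r[0]) == "light"]
    print(gender, f"{tv_answer_rate(h) - tv_answer_rate(l):.0f} points")
```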
"...specifies that repeated, intense exposure to deviant
definitions of 'reality' in the mass media leads to perception of the 'reality'
as normal. The result is a social legitimisation of the 'reality' depicted in
the mass media, which can influence behavior. (Gerbner, 1973 & 1977;
Gerbner et al., 1980.)"
Gerbner first introduced cultivation theory in 1969 with his work
Toward "Cultural Indicators": The Analysis of Mass Mediated Public
Message Systems. Gerbner begins developing cultivation as a structural piece
for the long-term examination of public messages in media influence and
understanding. He notes in the introduction that "the approach is based on
conception of these message systems as the common culture through which
communities cultivate shared and public notions about facts, values, and
contingencies of human existence". Gerbner clarifies that his objectives
are not with "information, education, persuasion, and the like, or with
any kind of direct communication 'effects.'" More accurately, his concern
remains with "the collective context within which, and in response to
which, different individual and group selections and interpretations of
messages take place". Nonetheless, Gerbner's works present a social
psychology theory on communication effects, and consequently, on persuasion as
related to mass media.
Gerbner speaks of the "cultivation of collective
conscious" in relation to the rapid growth of media outlets (in
particular, television) and the capacity of mass media to transcend traditional
"barriers of time, space, and social grouping". Cultivation then
describes the process by which entire publics are affected by content on
television. Potter (1993) notes Gerbner's intentions for using
"cultivation" as an academic term to define his interest in "the
more diffuse effects on perceptions that are shaped over a long period of
exposure to media messages". "Cultivation," rather than
"long-term effects" indicates the emphasis on the constant nurturing,
exposure, and consistent incorporation the viewing public experiences through
mass media channels.
Contexts of Communication
Humans communicate with each other across time, space, and contexts.
Those contexts are often thought of as the particular combinations of people
comprising a communication situation. For example, theories of interpersonal
communication address the communication between dyads (two people). Group
communication deals with groups, organizational communication addresses
organizations, mass communication encompasses messages broadcast, usually
electronically, to mass audiences, intercultural communication looks at
communication among people of different cultures, and gender communication
focuses on communication issues of women and between the sexes. Newer contexts
include health communication and computer-mediated communication.
Contexts of communication are best thought of as a way to focus on
certain communication processes and effects. Communication context boundaries
are fluid. Thus, we can see interpersonal and group communication in
organizations. Gender communication occurs whenever people of different sexes
communicate. We can have mass communication to individuals, groups, and
organizations.
Using communication contexts as a way to organize the study of
communication also helps us out of problems some people associate with the
intrapersonal context (some say the "so-called" intrapersonal
context). Some people facetiously say intrapersonal communication exists when
someone talks to themselves. It's more accurate to define intrapersonal
communication as thinking. While thinking normally falls within the purview of
psychology we can recognize that we often think, plan, contemplate, and
strategize about communication past, present, and future. It is legitimate to
study the cognitive aspects of communication processes. So, even if some people
call those cognitive aspects of communication thinking, it can be helpful to
allow the context of intrapersonal communication to exist, thereby legitimating
an avenue of communication research.
Coordinated Management of Meaning
The Coordinated Management of Meaning theorizes communication as a
process that allows us to create and manage social reality. As such, this
theory describes how we as communicators make sense of our world, or create
meaning. Meaning can be understood to exist in a hierarchy, depending on the
sources of that meaning. Those sources include:
- Raw sensory data: The inputs to your eyes and ears, the visual and auditory stimuli you will interpret to see images and hear sounds;
- Content: Interpreted stimuli, where the words spoken are understood by what they refer to;
- Speech acts: Content takes on more meaning when it is further interpreted as belonging to a speaker who has specific communication styles, relationships with the listener, and intentions;
- Episodes: In common terms, you may think of this as the context of the conversation or discourse; when you understand the context, you understand what the speaker thinks he or she is doing;
- Master contracts: These define the relationships between the communicating participants, or what each can expect of the other in a specific episode;
- Life scripts: The set of episodes a person expects they will participate in; and
- Cultural patterns: Culturally created set of rules that govern what we understand to be normal communication in a given episode.
Persons use two types of rules to coordinate the management of
meaning among those seven levels of meaning. First, we use constitutive rules
to help understand how meaning at one level determines meaning at another level.
Second, we use regulative rules to help us regulate what we say so that we stay
within what we consider to be normal communication in a given episode.
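One way to see the hierarchy as a structure is the toy sketch below; the level names come from the list above, while the representation of a constitutive rule as a lookup of the next level up is an invented simplification.

```python
# Toy model of CMM's hierarchy of meaning, ordered from lowest to highest.
# The level names come from the theory; the rule representation is an
# invented simplification for illustration.
LEVELS = [
    "raw sensory data", "content", "speech acts", "episodes",
    "master contracts", "life scripts", "cultural patterns",
]

def constitutive_context(level):
    """A constitutive rule interprets meaning at one level in terms of the
    next level up the hierarchy (toy formulation)."""
    i = LEVELS.index(level)
    return LEVELS[i + 1] if i + 1 < len(LEVELS) else None

print(constitutive_context("content"))           # speech acts
print(constitutive_context("cultural patterns")) # None: top of the hierarchy
```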
The Meaning of Meanings
The first concept most students learn about the meaning of meanings
is the Semantic Triangle. This label refers to the three part connection among
a referent, a reference, and a symbol. The referent is the thing, such as my
cat Baxter. The reference is the thought I have of Baxter, a 12-year-old grey
tabby who loves to lounge on my computer keyboard. The symbol is the word
"Baxter." Notice that if I am talking about Baxter I have selected
the referent and have control over the reference and symbol. However, if I talk
to you about Baxter lounging on my keyboard you will not understand my meaning
until you understand the thing (the referent) I am speaking of and the thoughts
(references) I have of that thing.
The Semantic Triangle allows for some ambiguity. After all, it is
not possible for you to know exactly what my thoughts are. Still, there need
to be ways for people to make sure they are understood by others,
especially when referents, references, or symbols are new to one or more of the
communicators. There are four ways in which we help each other understand what
we mean:
- Definitions: These tell others what is in our heads, or what we mean when we use a certain word;
- Metaphors: Metaphors allow us to talk about something unfamiliar in terms of something else that is familiar. Metaphors also allow us to merge two concepts whose result is a third concept. For example, I could speak of Baxter as a furry bundle of energy.
- Feedforward: This is the process of providing feedback to ourselves before we even speak, so that we help ourselves choose the best way to communicate in a given situation to a given audience; and
- Basic English: The set of 850 words out of which any person can effectively communicate ideas, simple or complex.
Signs, Signifiers, and Signified
Semiotics is concerned with signs and their relationship with
objects and meaning. One way to view signs is to consider them composed of a
signifier and a signified. Simply put, the signifier is the sound associated
with or image of something (e.g., a tree), the signified is the idea or concept
of the thing (e.g., the idea of a tree), and the sign is the object that
combines the signifier and the signified into a meaningful unit. Stated
differently, the sign is the relationship between the concept and the
representation of that concept. For example, when I was a child I had a stuffed
animal. OK, it was a stuffed green rat, but it was a smiling rat. That rat was
the signifier. Think what a stuffed animal could signify to a child. In my
case, it signified safety, warmth, and comfort. So, when I walked into my room
and looked at my stuffed green rat it was a sign to me that everything was OK.
Notice that the signifier and the signified cannot be separated and still
provide a meaningful basis for the sign.
Today, that stuffed green rat is just a memory to me. I cannot even
recall what I named it. In fact, as time passed that rat became a sign of
something else. The rat is still a signifier but it signifies my early
childhood when the world seemed calm, safe, and inviting. Now the rat could be
considered a sign of my youthful innocence, long past and hard to remember,
just like the name of that rat.
Symbolic Interactionism
Symbolic Interactionism is based on three assumptions:
1. communication occurs through the creation of
shared significant symbols,
2. the self is constructed through communication,
and
3. social activity becomes possible through the
role-taking process.
You can get a basic grasp of this theory by learning its keywords
and how they fit together.
I -- the active portion of the self capable of performing behaviors.
Me -- the socially reflective portion of the self, providing social
control for the actions of the I.
Self -- the combination of the I and the Me. Self is a process, not
a structure. The I acts and the Me defines the self as reflective of others.
Self-indication -- experience and feedback as the I acts and the Me
observes the I from the role of the Other. The Me then gives direction
regarding future action to the I.
Generalized Other -- the typical members of a society or culture.
Specific Other -- the idea of a specific person outside the Self.
Role Taking -- putting oneself in the place of another, or walking in
someone else's shoes. We learn to Role Take by Play and Games.
Play -- activity where a child is both the self and an other,
without recognizing the self. The child plays both roles without recognizing
the self in either role.
Game -- interaction where the child has the attitude of all the others
involved in the game. The child is the self but can recognize the other's
perspectives. Thus, behavior is not a response but an interpretive process. The
individual can comprehend the self only through interaction with other people.
Gesture (nonsymbolic) Interaction--an impulsive and spontaneous
action in the sense of a reflex response (e.g., pulling your hand away
quickly after it touches something hot).
Symbolic Interaction -- an interpretation of a symbol.
Symbol -- the representation of one thing for another thing.
Significant Symbol -- a symbol that has shared meaning (e.g., the
words in a language).
Mind -- a social, behavioral process in which the human being is
capable of acting toward and even creating his or her environment, or objects
in the environment.
Attribution Theory
Attribution Theory assumes that people try to determine why people
do what they do. This search for a reason behind behavior allows people to
attribute causes to behavior. A behavioral cause could be situational, where a
person had to do something because of the situation they were in. A
behavioral cause could also result from something unique to the person. Examples of
those unique attributes include, but are not limited to:
1. the person's desire to perform the behavior
(e.g., they did it because they wanted to do it),
2. the person's whim (e.g., they did it because
they felt like doing it),
3. the person's ability (e.g., the person is
capable of doing the behavior),
4. the person's sense of obligation (e.g., the
person did it because they felt they had to or should do the behavior), and
5. the person's sense of belonging (e.g., the
person did it to fit in with a group of people important to the person).
A person seeking to understand why another person did something may
attribute one or more causes to that behavior. However, a three-stage process
leads up to the final attribution:
1. the person must perceive or observe the
behavior,
2. then the person must believe that the behavior
was intentionally performed, and
3. then the person must determine if they believe
the other person was forced to perform the behavior (in which case the cause is
attributed to the situation) or not (in which case the cause is attributed to
the other person).
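The three-stage process above can be sketched as a small decision function. This is only an illustrative sketch, not part of the theory's formal statement; the function name and boolean inputs are my own invention.

```python
def attribute_cause(observed: bool, intentional: bool, forced: bool) -> str:
    """Toy sketch of the three-stage attribution process."""
    # Stage 1: the behavior must be perceived or observed at all.
    if not observed:
        return "no attribution"
    # Stage 2: the observer must believe the behavior was intentional.
    if not intentional:
        return "no attribution"
    # Stage 3: a forced behavior is attributed to the situation,
    # an unforced one to the person.
    return "situation" if forced else "person"
```

For example, a friend who skips dinner because a boss demanded overtime would get a situational attribution; one who skips by choice gets a personal attribution.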
Constructivism
Constructivism makes three assumptions regarding communication:
1. all communication is intentional
2. communication is goal-driven
3. shared interpretation (meaning) is
negotiated
Constructivism focuses on individuals rather than interactions. It
tries to account for why people make certain communicative choices.
Constructs are the basis of constructivism. They are dimensions of judgment and
can be thought of as filters, files, templates, or interpretive schemes. They are
domain specific, almost exclusively focusing on interpersonal message
variations. Constructs are assumed to change over time, following Werner's
Orthogenetic Principle (impressions start globally, undifferentiated, and
unorganized then get more complex, abstract, differentiated, and organized as
people develop).
Constructivist research uses the Role Category Questionnaire to find
constructs embedded in free-response writing, often about a person the writer
likes and a person the writer dislikes. The more constructs a person uses the
more cognitively differentiated they are. Cognitive differentiation is a subset
of cognitive complexity, which measures the organization, quantity, and level
of abstractness of the constructs a person holds about another person. Cognitive
differentiation measures only the quantity of constructs but still predicts the
degree to which a communicator is person centered and other oriented.
Constructivism claims that the more cognitively differentiated a person is the
more likely they are to be a competent communicator (one who intentionally uses
knowledge of shared interpretations to express meaning in such a way as to
control another person's interpretations of some event, object, person, etc.).
Constructivist research shows moderately strong correlations between
the organizational level of a person and cognitive differentiation, persuasive
ability, and perspective taking. Smaller correlations have been found between
organizational level and self monitoring.
Elaboration Likelihood Model
The Elaboration Likelihood Model claims that there are two paths to
persuasion: the central path and the peripheral path. The central path is most
appropriately used when the receiver is motivated to think about the message
and has the ability to think about the message. If the person cares about the
issue and has access to the message with a minimum of distraction, then that
person will elaborate on the message. Lasting persuasion is likely if the
receiver thinks, or rehearses, favorable thoughts about the message. A
boomerang effect (moving away from the advocated position) is likely to occur
if the subject rehearses unfavorable thoughts about the message. If the message
is ambiguous but pro-attitudinal (in line with the receiver's attitudes) then
persuasion is likely. If the message is ambiguous but counter-attitudinal then
a boomerang effect is likely.
If the message is ambiguous but attitudinally neutral (with respect
to the receiver) or if the receiver is unable or not motivated to listen to the
message then the receiver will look for a peripheral cue. Peripheral cues
include such communication strategies as trying to associate the advocated
position with things the receiver already thinks positively towards (e.g., food,
money, sex), using an expert appeal, and attempting a contrast effect where the
advocated position is presented after several other positions, which the
receiver despises, have been presented. If the peripheral cue association is
accepted then there may be a temporary attitude change and possibly future
elaboration. If the peripheral cue association is not accepted, or if it is not
present, then the person retains the attitude initially held.
If the receiver is motivated and able to elaborate on the message
and if there are compelling arguments to use, then the central route to
persuasion should be used. If the receiver is unlikely to elaborate the
message, or if the available arguments are weak, then the peripheral route to
persuasion should be used.
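The route-selection advice in the last paragraph can be compressed into a short sketch. The function and parameter names are mine, not ELM terminology; the model itself is richer than this three-way test.

```python
def choose_route(motivated: bool, able: bool, strong_arguments: bool) -> str:
    """Pick the persuasion route the ELM recommends to a message designer.

    Central route: the receiver is motivated and able to elaborate, and
    compelling arguments are available. Otherwise fall back on peripheral
    cues (positive associations, expert appeal, contrast effects).
    """
    if motivated and able and strong_arguments:
        return "central"
    return "peripheral"
```

Note that the central route is preferred when available because the resulting attitude change is more lasting.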
Social Judgment Theory
The key point of the Social Judgment Theory is that attitude change
(persuasion) is mediated by judgmental processes and effects. Put differently,
persuasion occurs at the end of the process where a person understands a
message then compares the position it advocates to the person's position on
that issue. A person's position on an issue is dependent on:
1. the person's most preferred position (their
anchor point),
2. the person's judgment of the various
alternatives (spread across their latitudes of acceptance, rejection, and
noncommitment), and
3. the person's level of ego-involvement with the
issue.
Consider the course choices available to you next term. For the sake
of argument, let's say you have four required courses to finish but have one
three credit hour elective remaining. What courses open to you would you
definitely not enroll in, no matter what? Those courses fall in your Latitude
of Rejection. Do you think anyone could persuade you to take a class that falls
in that latitude? Not likely. And, the more ego-involved you are in the
decision to enroll in your course (the more you care about that decision) the
larger your Latitude of Rejection will be. Persuasive messages that advocate
positions in your Latitude of Rejection will be contrasted by you. That is,
they will appear to be further away from your anchor point than they
actually are. That's not good news for the would-be persuader.
Now consider the courses that you really don't have an opinion
about, that you don't have positive or negative feelings toward. Those courses
fall in your Latitude of Noncommitment. It's possible that someone could
persuade you to enroll in one of those courses, but you'd have to learn more
about the course first, at least enough to form an opinion, or judgment,
about it. Now, consider all those courses you would consider enrolling in.
Those courses fall in your Latitude of Acceptance. A person with good arguments
might be able to persuade you to take one of those courses, especially if, in
your judgment, the course is similar to your anchor point course. Persuasive
messages that advocate positions in your Latitude of Acceptance will be
assimilated by you. That is, they will appear to be closer to your anchor point
than they actually are. That's good news for the would-be persuader.
If you are persuaded, then the further a message's position is away
from your anchor point, the larger your attitude change will be. But remember
that it is very unlikely that you will be persuaded out of your Latitude of
Rejection. So, once a message enters that region and moves away from your
anchor point, the amount of your attitude change decreases.
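One way to picture the latitudes is as bands of distance from the anchor point on a one-dimensional attitude scale. The sketch below is purely illustrative: the theory specifies no numeric widths, and greater ego-involvement would be modeled by widening the Latitude of Rejection (a smaller `reject_start`).

```python
def judge_position(anchor: float, position: float,
                   accept_width: float, reject_start: float) -> str:
    """Classify an advocated position into a latitude by its distance
    from the receiver's anchor point (all units hypothetical)."""
    distance = abs(position - anchor)
    if distance <= accept_width:
        # Assimilation: the message seems closer to the anchor than it is.
        return "acceptance"
    if distance >= reject_start:
        # Contrast: the message seems farther from the anchor than it is.
        return "rejection"
    return "noncommitment"
```

With an anchor of 0, an acceptance width of 2, and rejection starting at 5, a message at position 3 lands in the Latitude of Noncommitment, where persuasion remains possible.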
Social Penetration Theory
Social Penetration Theory asserts that as relationships develop,
communication moves from superficial to deeply personal topics, slowly
penetrating the communicators' public personas to reach their core personalities
or senses of self. First viewed as a direct, continuous penetration from public
person to private person, social penetration is now considered to be
cyclical and dialectical. Relationships have normal ebbs and flows. They do not
automatically get better and better as the participants learn more and more
about each other. Instead, the participants have to work through the tensions
of the relationship (the dialectic) while they learn and grow, both as
individuals and as parties in a relationship. At times the relationship is very
open and sharing. At other times, one or both parties to the relationship need
their space, or have other concerns, and the relationship is less open. The theory
posits that these cycles occur throughout the life of the relationship as the
persons try to balance their needs for privacy and openness.
Persons allow other people to penetrate their public self when they
disclose personal information. The decision to disclose is based on the
perceived rewards the person will gain if he or she discloses information. If a
person perceives that the cost of disclosing information is greater than the
rewards for disclosing information then no information will be disclosed. The
larger the reward-cost ratio the more disclosure takes place. If you think back
to the relationships you have been in, you will probably find that in almost all
of them more disclosure took place at the outset of the relationship than at any
other point. That happens because people initially disclose superficial
information that costs very little if another person finds it out. It matters
little if you know that I enjoy all types of music but especially enjoy
listening to blues, saxophone jazz, and straightforward rock-n-roll.
It gets a bit more personal when I start explaining why I like those
types of music, so I, like most people, will wait until you reciprocate and
tell me your favorite types of music before I allow you more visibility into
who I am. The deeper I allow you to penetrate my self, the more affective
information I will disclose to you. The closer you get to my core self the
higher my perceived costs will be for disclosing that information. Thus, it is
not likely that I will disclose very personal information to very many people.
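The reward-cost logic can be caricatured in a few lines. The numbers, and the way cost grows with depth, are invented for illustration; the theory only claims that perceived cost rises as disclosure nears the core self, and that disclosure happens when perceived rewards outweigh perceived costs.

```python
def will_disclose(reward: float, base_cost: float, depth: float) -> bool:
    """Toy reward-cost rule for self-disclosure.

    depth = 0 means superficial information (e.g., favorite music);
    larger depths approach the core self, inflating perceived cost.
    """
    perceived_cost = base_cost * (1.0 + depth)
    return reward > perceived_cost
```

A modest reward easily buys a superficial disclosure, but the same reward will not buy a deeply personal one.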
Uncertainty Reduction Theory
The Uncertainty Reduction Theory asserts that people have a need to
reduce uncertainty about others by gaining information about them. Information
gained can then be used to predict the others' behavior. Reducing uncertainty
is particularly important in relationship development, so it is typical to find
more uncertainty reduction behavior among people when they expect or want to develop
a relationship than among people who expect or know they will not develop a
relationship. Consider how you try to reduce uncertainty about someone you have
just met and want to spend more time with. Now consider how you try to reduce
uncertainty about people you meet on an elevator.
There are three basic ways people seek information about another
person:
1. Passive strategies -- we observe the person,
either in situations where the other person is likely to be self-monitoring* (a
reactivity search) as in a classroom, or where the other person is likely to
act more naturally (a disinhibition search) as in the stands at a football
game.
2. Active strategies -- we ask others about the
person we're interested in or try to set up a situation where we can observe
that person (e.g., taking the same class, sitting a table away at dinner). Once
the situation is set up we sometimes observe (a passive strategy) or talk with
the person (an interactive strategy).
3. Interactive strategies -- we communicate
directly with the person.
* Self-monitoring is a behavior where we watch and strategically
manipulate how we present ourselves to others.
Groupthink
Groupthink occurs when a homogeneous, highly cohesive group is so
concerned with maintaining unanimity that they fail to evaluate all their
alternatives and options. Groupthink members see themselves as part of an
in-group working against an outgroup opposed to their goals. You can tell if a
group suffers from groupthink if it:
1. overestimates its invulnerability or high
moral stance,
2. collectively rationalizes the decisions it
makes,
3. demonizes or stereotypes outgroups and their
leaders,
4. has a culture of uniformity where individuals
censor themselves and others so that the facade of group unanimity is
maintained, and
5. contains members who take it upon themselves
to protect the group leader by keeping information, theirs or other group
members', from the leader.
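The five warning signs above amount to a diagnostic checklist, which could be tallied as follows. The symptom labels and the simple fraction score are my own shorthand, not part of the theory.

```python
SYMPTOMS = (
    "illusion of invulnerability or moral superiority",
    "collective rationalization of decisions",
    "stereotyping of outgroups and their leaders",
    "self-censorship preserving a facade of unanimity",
    "mindguards shielding the leader from information",
)

def groupthink_score(observed: set) -> float:
    """Return the fraction of the five warning signs a group shows."""
    return len(observed & set(SYMPTOMS)) / len(SYMPTOMS)
```

A score near 1.0 suggests the group should suspect its own decision process.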
Groups engaged in groupthink tend to make faulty decisions when
compared to the decisions that could have been reached using a fair, open, and
rational decision-making process. Groupthinking groups tend to:
1. fail to adequately determine their objectives
and alternatives,
2. fail to adequately assess the risks associated
with the group's decision,
3. fail to cycle through discarded alternatives
to reexamine their worth after a majority of the group discarded the
alternative,
4. fail to seek expert advice,
5. select and use only information that supports
their position and conclusions, and
6. fail to make contingency plans in case their
decision and resulting actions fail.
Group leaders can prevent groupthink by:
1. encouraging members to raise objections and
concerns;
2. refraining from stating their preferences at
the onset of the group's activities;
3. allowing the group to be independently
evaluated by a separate group with a different leader;
4. splitting the group into sub-groups, each with
different chairpersons, to separately generate alternatives, then bringing the
sub-groups together to hammer out differences;
5. allowing group members to get feedback on the
group's decisions from their own constituents;
6. seeking input from experts outside the group;
7. assigning one or more members to play the role
of the devil's advocate;
8. requiring the group to develop multiple
scenarios of events upon which they are acting, and contingencies for each
scenario; and
9. calling a meeting after a decision consensus
is reached in which all group members are expected to critically review the
decision before final approval is given.
Organizing
Karl Weick writes of process-oriented organizing, rather than
structure-oriented organization. Communication is key to the organizing
process because it is a large factor in the sense-making process people use
when they organize. The sense-making process is an attempt to reduce
equivocality, or multiple meanings, in the information used by the people in
the organization. When information is handled by the organizers they go through
the stages of:
- Enactment -- where they define the situation and begin the process of dealing with the information,
- Selection -- where they narrow the equivocality by deciding what to deal with and what to leave alone, ignore, or disregard, and
- Retention -- where they decide what information, and its meaning, they will retain for future use.
In both the selection and retention stages there are additional
processes. These processes depend on double interacts. An act occurs when you
say something ("Can I have a popsicle?"). An interact occurs when you
say something and I respond ("No, it will spoil your dinner."). A
double interact occurs when you say something, I respond, then you respond to
that, adjusting your first statement ("Well, how about half a
popsicle?"). Double interacts work in:
1. assembly rules -- the operating procedures
(e.g., all requests for information from the media must be handled by the
Corporate Communications Dept., requests for pay raises must be made through
your immediate supervisor, etc.) used by the company to choose what to do to
maximize the likelihood of achieving the goal at hand, and in the
2. behavior cycles -- sets of double interacts
the organization uses to facilitate the selection and retention process.
Examples of behavior cycles include staff meetings, coffee-break rumoring,
e-mail conversations, internal reports, etc.
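The act / interact / double-interact structure is easy to capture as a small data type. The class name is my own; the example dialogue is the popsicle exchange from the text above.

```python
from dataclasses import dataclass

@dataclass
class DoubleInteract:
    act: str         # A speaks (an act)
    response: str    # B responds (act + response = an interact)
    adjustment: str  # A adjusts the original act (completing a double interact)

example = DoubleInteract(
    act="Can I have a popsicle?",
    response="No, it will spoil your dinner.",
    adjustment="Well, how about half a popsicle?",
)
```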
Weick sees the organization as a system taking in equivocal
information from its environment, trying to make sense of that information, and
using what was learned in the future. As such, organizations evolve as they
make sense out of themselves and their environment.
Muted Group Theory
Summary
Muted Group Theory is a critical theory because it is concerned with
power and how it is used against people. While critical theories can separate
the powerful and the powerless any number of ways, this theory chooses to
bifurcate the power spectrum into men and women.
Muted Group Theory begins with the premise that language is culture
bound, and because men have more power than women, men have more influence over
the language, resulting in language with a male-bias. Men create the words and
meaning for the culture, allowing expression of their ideas. Women, on the
other hand, are left out of this meaning creation and left without a means to
express that which is unique to them. That leaves women as a muted group.
The Muted Group Theory rests on three assumptions:
1. Men and women perceive the world differently
because they have different perception shaping experiences. Those different
experiences are a result of men and women performing different tasks in
society.
2. Men enact their power politically,
perpetuating their power and suppressing women's ideas and meanings from
gaining public acceptance.
3. Women must convert their unique ideas,
experiences, and meanings into male language in order to be heard.
The premise and assumptions lead to a number of hypotheses about
women's communication:
1. Women have a more difficult time expressing
themselves than do men.
2. Women understand what men mean more easily
than men understand what women mean.
3. Women communicate with each other using media
not accepted by the dominant male communicators.
4. Women are less satisfied with communication
than are men.
5. Women are not likely to create new words, but
sometimes do so to create meanings special and unique to women.
Muted Group Theory does not claim that these differences are based
in biology. Instead, the theory claims that men risk losing their dominant
position if they listen to women, incorporate their experiences in the
language, and allow women to be equal partners in language use and creation.
Language is about power, and men have it.
Face
Erving Goffman wrote about face in conjunction with how people
interact in daily life. He claims that everyone is concerned, to some extent,
with how others perceive them. We act socially, striving to maintain the
identity we create for others to see. This identity, or public self-image, is
what we project when we interact socially. To lose face is to publicly suffer a
diminished self-image. Maintaining face is accomplished by taking a line while
interacting socially. A line is what the person says and does during that
interaction showing how the person understands the situation at hand and the
person's evaluation of the interactants. Social interaction is a process
combining line and face, or face work. Brown and Levinson use the concept of
face to explain politeness. To them, politeness is universal, resulting from
people's face needs:
1. Positive face is the desire to be liked,
appreciated, approved, etc.
2. Negative face is the desire not to be imposed
upon, intruded, or otherwise put upon.
Positive politeness addresses positive face concerns, often by
showing prosocial concern for the other's face. Negative politeness addresses
negative face concerns, often by acknowledging the other's face is threatened.
Anytime a person threatens another person's face, the first person commits a
face-threatening act (FTA). Face-threatening acts come in four varieties,
listed below in order from most to least face threatening:
1. Do an FTA baldly, with no politeness (e.g.,
"Close your mouth when you eat, you swine.").
2. Do an FTA with positive politeness (e.g.,
"You have such beautiful teeth. I just wish I didn't see them when you
eat.").
3. Do an FTA with negative politeness (e.g.,
"I know you're very hungry and that steak is a bit tough, but I would
appreciate it if you would chew with your mouth closed.").
4. Do an FTA indirectly, or off-record (e.g.,
"I wonder how far a person's lips can stretch yet remain closed when
eating?"). An indirect FTA is ambiguous so the receiver may "catch
the drift" but the speaker can also deny a meaning if they wish.
Of course, a person can choose not to threaten another's face at
all, but when a face must be threatened, a speaker can decide how threatening he
or she will be.
Cultivation Theory
According to Cultivation Theory, television viewers are cultivated
to view reality similarly to what they watch on television. No one TV show gets
credit for this effect. Instead, the medium of television gets the credit.
Television shows are mainstream entertainment, easy to access, and generally
easy to understand. As such, they provide a means by which people are
socialized into the society, albeit with an unrealistic notion of reality at times,
particularly with respect to social dangers. Television seeks to show and
reinforce commonalities among us, so those who regularly watch television tend
to see the world in the way television portrays it. Compared to actual
demographics, women, minorities, upper-class, and lower-class people are
under-represented on television shows. At the same time, the percentage of people
who work in law enforcement, and the amount of violent crime, are over-represented. People who
are heavy watchers of television assimilate this information and believe that
the world is a dangerous, scary place where others can't be trusted. This is
known as the "mean world syndrome." Further, heavy watchers of television tend
to blur distinctions between social groups such as the poor and the rich, urban
and rural populations, and different racial groups. Those TV watchers also
identify themselves as political moderates but answer surveys similarly to how
political conservatives answer the surveys. Not everyone is successfully
cultivated by television. Those who watch little television are not affected.
Likewise, people who talk about what they see, especially adolescents who talk
with their parents, are less likely to alter their view of reality to match
what they see on television.
The Spiral of Silence
The Spiral of Silence is a model of why people are unwilling to
publicly express their opinions when they believe they are in the minority. The
model is based on three premises:
1. people have a "quasi-statistical
organ," a sixth-sense if you will, which allows them to know the
prevailing public opinion, even without access to polls,
2. people have a fear of isolation and know what
behaviors will increase their likelihood of being socially isolated, and
3. people are reticent to express their minority
views, primarily out of fear of being isolated.
The closer a person believes his or her opinion is to the
prevailing public opinion, the more willing that person is to openly disclose the
opinion in public. Then, if public sentiment changes, the person will recognize
that the opinion is less in favor and will be less willing to express that
opinion publicly. As the perceived distance between public opinion and a
person's personal opinion grows, the less likely the person is to express
that opinion.
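The core dynamic, willingness to speak falling as perceived distance from public opinion grows, can be drawn as a toy function. The 0-to-1 scale, the linear falloff, and the fear-of-isolation weight are all hypothetical modeling choices, not claims of the theory.

```python
def willingness_to_speak(own_opinion: float, public_opinion: float,
                         fear_of_isolation: float = 1.0) -> float:
    """Toy Spiral of Silence model on a 0..1 opinion scale.

    Willingness is 1.0 when one's opinion matches perceived public
    opinion and falls off linearly with distance, faster for people
    with a stronger fear of isolation.
    """
    distance = abs(own_opinion - public_opinion)
    return max(0.0, 1.0 - fear_of_isolation * distance)
```

The spiral arises because each silenced minority voice makes the majority opinion appear even more dominant, further depressing willingness in the next round.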
Consider the case of Dennis Rodman, one of the stars of the Chicago
Bulls basketball team. Mr. Rodman has consistently been an incredible
competitor and rebounder for the Detroit Pistons, San Antonio Spurs, and Chicago
Bulls. Over the years he attracted a large fan base, but watched it fall in
recent years as he got "weirder" or more "individualistic"
(depending on how you interpret his behavior). Fans in San Antonio welcomed Mr.
Rodman when he first arrived, but vocal supporters were hard to find just
before he was traded to Chicago. At the start of the 1996-1997 season Mr.
Rodman's stock was high in Chicago, falling off somewhat after the "kick
the cameraman" incident. I wish him well, but if the public becomes displeased
with him the Spiral of Silence will strike his supporters once again.
Standard Format for a Social
Scientific Journal Article
Social scientific research reports and journal articles are designed
to describe what researchers did, why they did it, how they did it, what they
found, and what that means. While all journals have their specific
requirements, all research reports and journal articles follow the same
standard format. The format is standardized to make it easy for readers to
study research presented in a variety of journals. The speed with which you
read journal articles and the understanding you gain from those articles will
increase once you become familiar with the standard format.
1. Title: The title should be brief but
clearly describe the focus of the research described in the article.
2. Author(s): The names of all authors are
given. The institutional affiliation is also given, sometimes with the authors'
names, sometimes at the bottom of the first page, sometimes at the end of the
article, and sometimes at the end of the journal.
3. Abstract: The abstract is a brief
summary of the article designed to give the reader an overview of what the
researchers did, how they did it, what they found, and what it means.
4. Introduction: This section describes
all prior research and theories that are relevant to the study described in the
article. That prior literature is integrated to form the basis of an
argument(s) supporting why the present study was conducted. The introduction
also introduces and argumentatively supports whatever research hypotheses
and/or research questions are addressed in the study.
5. Method: This section describes what the
researchers did. The section is written in sufficient detail and precision to
allow a reader to replicate the study. There are exceptions to this principle
of replication (e.g., lengthy surveys are not normally included, complex and
detailed instructions to experimental subjects are often summarized, etc.). The
method section contains as many of these five subsections as are appropriate
for the study:
Subjects: The demographics of the persons used in the study.
Apparatus: Detailed description of special equipment used, if any, in the study.
Procedures: Detailed description of what the researchers and subjects did.
Design: Description of the type of experimental design used in the study.
Materials or Measures: Description of which testing and measuring instruments
were used, and how they were used if differently than their normal use.
6. Results: This section describes what
the researchers found. You will often find this section to be a mixture of
statistical reports, tables, and prose describing those numbers.
7. Discussion: This section provides the
authors' the opportunity to tell what their research results mean, both to the
study and to the world at large.
8. References: A list of all works (e.g.,
books, articles, personal communication, etc.) cited in the study. Most, and
usually all, of the cited works will be cited and discussed in the introduction
section of the article.
Readers familiar with statistical analyses and the theories, prior
research, and methodologies used in the field related to the article should have
no problem understanding the article. Readers lacking familiarity with
field-related theories, research, and methodology usually have some difficulty
grasping the big picture of the study, but if the article is well written such
readers can still understand the gist of the study.
Readers unfamiliar with statistical analyses fare worse. These
readers often entirely skip the Method and Results sections, relying solely on
the Introduction and Discussion sections. When read alone, the Introduction and
Discussion sections can work well to provide the reader with an overview of the
study. However, these sections generally stay at a higher level, leaving the
details for the Method and Results sections. Often, the prize is in the
details.
Certainly, the Results section contains the basis for determining
the validity of information in the Discussion section. Readers lacking
statistical analysis expertise will be best served by taking a course on
experimental design and statistical analysis, preferably in the same discipline
as the journal articles they read (e.g., readers of communication articles
benefit from an experimental design and statistical analysis course offered by
the communication department, readers of psychology articles benefit from an
experimental design and statistical analysis course offered by the psychology
department, etc.).
What's in an Abstract of a Social
Scientific Journal Article?
An abstract is presented to the reader before the beginning of an
article. The abstract briefly tells what the researchers did, how they did it,
what they found, and what that means. More specifically, an abstract of a
social scientific study will describe:
1. the problem being investigated or question
being answered,
2. the subjects, along with key information about
those subjects,
3. the experimental method(s) used,
4. the results of the study, including
statistical significance levels, and
5. what the results mean, both in the study and
with regard to prior research, theories, and the world outside the research
lab.
A properly written abstract will convey all that information in
about 100 to 120 words. There should be no reason to refer to the article to
understand jargon, abbreviations, or other oddities in the abstract.
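A quick sanity check for the 100-to-120-word guideline above might look like this; the bounds come from the text, everything else is illustrative:

```python
def abstract_length_ok(abstract: str, low: int = 100, high: int = 120) -> bool:
    """Return True when an abstract's word count falls in the
    conventional 100-120 word range."""
    return low <= len(abstract.split()) <= high
```

Note this checks only length, not whether the five required elements are actually present.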
Use abstracts to gain a first glance into journal articles. When you
are researching a particular topic you can usually tell if a journal article
pertains to that topic just by reading the abstract. If you determine that the
article is relevant then be sure to read the entire article. Be aware that when
a journal article reports multiple significant findings only the most important
four or five will be mentioned in the abstract. When writing about a journal
article never rely solely on the abstract (unless you absolutely cannot find
the journal article, then be sure to cite the abstract, not the article).
Media
Control: Open Communication Technologies as Actors Enabling a Shift in the
Status Quo
"The conditions associated with a particular class of
conditions of existence produce habitus, systems of durable, transposable
dispositions, structured structures predisposed to function as structuring
structures, that is, as principles which generate and organize practices and
representations that can be objectively adapted to their outcomes without
presupposing a conscious aiming at ends or an express mastery of the operations
necessary in order to attain them. Objectively 'regulated' and 'regular'
without being in any way the product of obedience to rules, they can be
collectively orchestrated without being the product of the organizing actions
of a conductor" (Bourdieu, p. 53)
The above quote from Bourdieu, when viewed from the perspective of
society as the 'habitus', is quite informative (in theory as well as in practice)
about media's interplay with the social structures within which they are embedded.
As we have seen throughout our course readings, media technologies, as important
instruments at various levels of communication processes in society, have
encountered resistance from various cultural and social norms, and a somewhat
mixed response from economic and political forces because of their profit-making
potential or power-generating ability. More than any other type of technology,
media and communication technologies have been the subject of public and
scholarly debates because of their intrinsic ability to
convey (asynchronously) content across time and space (at a distance), inscribed
in the form of data, information, images, knowledge, and wisdom, in media such as
books, data tape drives, CD-ROMs, video and audio tapes, etc. Additionally,
synchronous communication has enabled instantaneous communication among people
(e.g., telephone, audio and video conferencing, online chat), enabling efficient,
but not necessarily effective, exchange of information, ideas, thoughts, and
concepts.
Pervasive and widespread, media technologies, often employed ubiquitously for
symbolic purposes, are also used by the governing elites to maintain the status
quo and ensure stability. The necessity to reproduce and maintain a stable
state, the habitus (to borrow from Bourdieu, whose habitus concept is similar
to the stable state produced and maintained by the hegemonic ideology),
requires ways of disseminating the cultural and political material of the
dominant ideology. Similarly to how Bourdieu describes the functioning of the
habitus, Gitlin defines the status quo as hegemony, "a ruling class's (or
alliance's) domination of subordinate classes and groups through the
elaboration and penetration of ideology (ideas and assumptions) into their
common sense and everyday practice," and contends that it "is systematic (but
not necessary or even usually deliberate) engineering of mass consent to
established order" (Gitlin, 1980, p. 253). Further, elaborating on the aspect
of hegemony and clarifying the composition of the elite (mostly government,
the corporate establishment, and those institutions that produce cultural
artifacts), Schiller (1996) explains their economic reason for cooperation:
"The American economy is now hostage to a relatively small number of giant
private companies, with interlocking connections, that set the national
agenda. This power is particularly characteristic of the communication and
information sector where the national cultural-media agenda is provided by a
very small (and declining) number of integrated private combines. This
development has deeply eroded free individual expression, a vital element of a
democratic society" (Schiller, 1996, p. 44).
This paper will elaborate on the interplay between media and communication
technologies and the social structures and forces (social, cultural, economic,
political), whether institutionalized or not, emphasizing that both the content
and the channels of communication through which that content is distributed are
important factors in the production, maintenance, and further reproduction of
the artifacts of the dominant ideology. I will argue that the content being
represented and recorded, when conveyed via open communication (such as the
Internet), can show us the liberating potential of various media technologies.
As such, communication technologies are situated as important actors in the
process of displacing or shifting the status quo.
Evident from Gitlin's and Schiller's arguments is their emphasis on the
necessity of free and open communication among the masses if there is to be
any deliverance from the 'claws' of the media. By contrast, it is one-way
communication (radio, TV, cable) that the elites utilize to achieve
subordination and the dissemination of the hegemonic ideology. Fiske (1996)
elaborates further, arguing that surveillance technology is also used as a
means to discern the norms and regulations necessary to maintain the hegemonic
ideology: "Norms are crucial to any surveillance system, for without them it
cannot identify the abnormal. Norms are what enable it to decide what
information should be turned into knowledge and what individuals need to be
monitored" (Fiske, 1996, p. 220). Fiske's technologised surveillance of the
physical goes hand-in-hand with surveillance of the discourse (what issues are
raised on TV, radio, etc.) "because unequal access to those technologies
ensures their use in promoting similar power-block interests" (Fiske, 1996,
p. 218). The important point brought forth here, directly or indirectly, is
the identification of communication technology as closed, unidirectional (with
the masses on the receiving end), and restricted in access.
These aspects are identified as necessary characteristics for the maintenance
and reproduction of the hegemonic ideology, enabling the elites to set the
form, format, and content of the public discourse (broadcasting, TV, radio,
press, etc.) and, as importantly, to decide who can participate. Therefore, it
can be argued that this manifestation of communication technologies, entangled
in the web of one-way communication and used by the elites for power control
and the dissemination of material in support of the hegemonic ideology, has
shaped the traditional scholarly and public discourse, as well as their
practical use, toward viewing communication technology as intrinsically
embedded with features, characteristics, and functionalities that reinforce
and aid the hegemonic ideology.
This biased view, that communication technologies are inherently suited to
media control, is troublesome and factually wrong. For example, the scholarly
and public discourse on early cable technology shows that cable access was
intended for uses unlike those of today (the dissemination of popular consumer
culture through its various formats with the aim of making a profit). Streeter
(1997) argues that cable "had the potential to rehumanize a dehumanized
society, to eliminate the existing bureaucratic restrictions of government
regulation common to the industrial world, and to empower the currently
powerless public" (Streeter, 1997, p. 228). He further notes that the cable
system had the potential to enable two-way communication and interactivity,
but apparently failed to do so for lack of a response on the part of the
audience: "Cable television was something that could have an important impact
upon society, and it thus called for a response on the part of society; it was
something to which society could respond and act upon, but that was itself
outside society" (Streeter, 1997, p. 225). He then adds that cable should not
be viewed as an "autonomous entity that had simply appeared on the scene as
the result of scientific and technical research" (Streeter, 1997, p. 225).
Here we see a distinction between the current social status of cable as
profit-making machinery and its potential to have become a socially
responsible technology that would have empowered the audience with two-way
open communication.
The above suggests that the communicative aspects of the production and
reproduction of the dominant ideology, including the production of consent in
the audience/consumer or citizen, are identified with media and communication
technologies characterized by closed, one-way communication. This provides the
elites with the ability to control the public discourse by selectively
choosing the issues of discussion while at the same time controlling access to
that discourse: "But communication and information technology does not merely
circulate discourse and make it available for analysts, it also produces
knowledge and applies power" (Fiske, 1996, p. 217). This process ensures
conformity with the accepted cultural, social, economic, and political norms
of the dominant ideology, a "focus on communication technology both as ways of
engaging in discourse struggles and, through their surveillance capability, as
ways of producing a particular form of social knowledge and thus of exerting
power" (Fiske, 1996, p. 217).
That various media and communication technologies exhibit the characteristics
of closed systems with one-way communication can hardly be disputed. However,
the proper questions to ask are these. First, how do the media fit into the
economic system: do the existing media/communication technologies exhibit
characteristics that make them a better fit for the capitalist free market (an
economy whose ultimate goal is the bottom line, i.e. profit) than for
empowering publics/audiences with the information to participate more fully in
representative democracy? Second, are the exhibited characteristics intrinsic,
embedded in the technology itself, or are they a result of the features and
functionalities with which designers embellished a particular technology?
To answer the above questions, I first turn to Schiller (1996), who argues in
favor of original purpose and design: "When military or commercial advantages
are the motivating forces of research and development, it is to be expected
that the laboratories will produce findings that are conducive to these
objectives. If other motivations could be advanced, the common good, for
instance, different technologies might be forthcoming" (Schiller, 1996,
p. 71). Schiller's idea of the original purpose is also supported by adaptive
structuration theory (AST), which differentiates between a technology's spirit
(the original intent as conceived by the designers, who might be operating
'outside' of hegemonic control, relatively speaking) and its subsequent
functionality, due to the appropriation process, as the technology becomes
embedded in institutionalized social structures. As such, more often than not
the existing socially and politically brokered power structures reflect
themselves in the structure of the technology: "Information technology is
highly political, but politics are not directed by its technological features
alone" (Fiske, 1996, p. 219).
AST might appear to contradict Schiller's argument that the social use of
technology is determined by the original purpose: "What the evidence here
demonstrates is the strong, if not determining, influence of the original
purpose that fostered the development of each new technology. The social use
to which the technology is put, more times than not, follows its originating
purpose" (Schiller, 1996, p. 71). The seemingly contradictory arguments need
to be scrutinized in light of social constructionist theory and technological
determinism applied together; neither one alone can explain the interplay of
communication technology and society. The argument is that if a particular
technology was designed to serve corporate interests, most of its features
will be driven to maximize profits. In contrast, if a group of people designs
a technology for open communication and democratic access to information, the
technology in question will have features that enable ease of access to
information and make it hard for that technology to be used for restrictive
purposes.
But again, it is not the technology per se; it is the social structures that
tilt the design, development, and subsequent use of the technology toward
particular purposes. Unfortunately, most of the communication technology in
use today has been built and appropriated for profit-making activities and is
perhaps unfit to support activities related to participatory democracy.
For example, the development of cable as a medium of communication was
relatively uneventful until the media corporations saw the potential for
profits via advertising. As the fight of discourses got under way between
government officials, media corporations, and liberal progressive forces, the
elite elements appeared on the scene, controlling and moderating the
discourse: "The talk about cable … was characterized by a systemic avoidance
of central issues and assumptions, and by a pattern of unequal power in the
discussion of its outcomes: the discourse of the new technology was shaped not
so much by full fledged debate as by the lack of it" (Streeter, 1997, p. 222).
Thus, the future of cable technology was affected by the social structures and
institutions, stripping away from its technological potential the ability to
become a technology that could link the masses and bring them together.
Communication's technological determinism and social constructionism are
interrelated in a circular and iterative fashion. It is hard to conceptualize
an isolated technology that is not affected by the very social structures it
constantly affects in turn:
"As Raymond Williams has shown, this assumption of autonomous technology is
characteristic of much thought about television and society, and constitutes a
false abstraction of technologies out of their social and cultural context"
(Streeter, 1997, p. 225)
"Such speculations naively assumed that telecommunication could magically
resolve the power relations among people that caused racism, poverty, and
international strife" (Streeter, 1997, p. 227)
Even when a particular communication technology changes the social structures,
it does not necessarily mean that such changes will be progressive and
liberating. Streeter argues that relations in social structures are created by
people and can only be changed by people themselves, suggesting that even
where technology has changed some structures, the changes have been
appropriated by the elite and incorporated into the production and
reproduction of the dominant ideology: "The constraints were not caused by old
technological limits, nor can they be eliminated by new technologies: they
were caused by relationships between people and can be overcome only by
changing relations between people" (Streeter, 1997, p. 240). Or, as Fiske has
put it very succinctly: "Technology may determine what is shown, but society
determines what is seen" (Fiske, 1996, p. 221).
If technology's role in society is determined both by its own characteristics
and by the moderating characteristics of its social surroundings, what is it
that causes a particular technology to become a closed system of one-way
communication, disseminating content controlled by stakeholders whose primary
concern is to advertise to consumers or, in the case of the government, to
control its citizens through selective and strategic communications?
Arguing that one-way communication by media companies (TV, cable, radio,
movies) is a very important ingredient in the process of reproduction of the
dominant ideology, because it is able to control the discourse by controlling
the content and restricting access to the discourse, Gitlin suggests that
centrally controlled one-way communication (one-to-many) must disappear and be
replaced by many-to-many communication if we wish to empower the masses
(Gitlin, 1972, p. 363). He further contends that a possible "revolutionary
movement must aim to transform mass media by liberating communications
technology for popular use" (Gitlin, 1972, p. 363). Apparently, Gitlin posits
open communication (many-to-many) as a possible antidote to the hegemonic
ideology.
Therefore, it can be argued that communication technology, which is constantly
modified and affected by the social structures in which it is embedded, while
at the same time influencing and modifying those same social structures, has
the potential to shift or displace the center of gravity (i.e. the status quo,
the habitus) through its characteristic of open communication, which can
induce communication amongst the masses themselves, empowering them with
many-to-many communication relatively outside of the elites' control.
Constrained by the historical discourse and by the practical fact that most
communication technology has been used for one-way communication and for
profit-making purposes, Schiller is skeptical that such technology can be of
any benefit to society: "The customary argument that commerce and profit
seeking go hand-in-hand with social benefit, is still to be demonstrated after
hundreds of years of contrary experience" (Schiller, 1996, p. 71).
At this point I would like to argue that the newest media technology, the
communication facet of the Internet, exhibits characteristics of open
communication that could position it as a potential antidote to the hegemonic
ideology. As argued above, whoever controls the content can control the scope
of the public discourse, and whoever controls access to the mass communication
technology can practically control the voices that can debate that already
restricted content/discourse. Unfortunately, TV, radio, and cable technologies
have been socially constructed (by various social and institutional
structures) such that both content and access control are in the same hands.
However, the Internet, especially its website portion (the ability to publish)
and its email discussion lists, exhibits characteristics contrary to those of
earlier technologies (as shaped by the social structures). For example, almost
anyone can publish almost anything on the Internet, relatively speaking,
without fear that the server hosting firms might block the website (apart from
criminal material). In addition, the masses can freely organize into groups of
citizens, consumers, special interest groups, hobbyists, etc., and advocate
their causes openly. This is partially enabled by the ability to establish
many-to-many communication via email discussion lists. These two examples show
that the content is not necessarily controllable by corporate power, and that
access to that content is not restricted by any corporate power. As a matter
of fact, both are subject to the ability to pay for Internet access; however,
many libraries provide free Internet access for interested individuals. Among
the traditional technologies, public cable access channels most resemble the
above cases. However, most communities seldom use public cable access
channels, and when used, they are marginalized by not being included in
programming listings (e.g. TV Guide) with the rest of the cable and broadcast
channels.
What social conditions and circumstances led to the development of the
Internet technologies, which seem to exhibit open content and open
communication properties, unlike TV, radio, and cable, which have remained
one-way communication channels? Why, then, haven't the same social structures
and forces restricted the various Internet technologies to one-way
communication?
The apparent power to reach the masses, as well as the ability to interact
with them, could not have escaped the elite. All of a sudden, as the number of
Internet users increased manifold each year, previously uninterested media
corporations (there had been repeated public claims that profit could not be
made with this new technology) invaded the Internet landscape, using it in
ways not much different from TV: "Whether deceptively labeled as
'entertainment,' 'news,' 'culture,' 'education,' or 'public affairs,' TV
programs aim to narrow and flatten consciousness-to tailor everyman's world
view to the consumer mentality, to placate political discontent, to manage
what cannot be placated, to render social pathologies personal, to level
class-consciousness" (Gitlin, 1972, p. 345). The obvious parallel is that
almost all commercial sites support their Internet presence through online
advertising. Yes, one can choose to visit a web site only to be bombarded by
advertising, much as on TV. However, studies have shown that web site visitors
are increasingly aware of advertisements and are not necessarily influenced by
them. This is a bit different from TV: a visitor can still view the page
without wasting time, whereas on TV one must either watch the commercials or
switch the channel.
In other words, the Internet, fueled by open content and open communication in
its infancy and during its development until the mid-1990s, before it became
obvious that corporations could use it to make a profit, was truly the
liberating technology alluded to by Gitlin.
To think that the elites were so naive as to overlook the Internet's potential
as a mass medium is at best naivety itself. Nor could they have overlooked the
potential for the masses to utilize the Internet to organize themselves
outside the elite's control; perhaps the elite thought that any such conflict
should be domesticated, as Gitlin suggests: "What permits it to absorb and
domesticate criticism is not something accidental to the liberal capitalist
ideology, but rather its core" (Gitlin, 1980, p. 256). Then, it seems, the
inevitable happened, as many times before: "The hegemonic ideology changes in
order to remain hegemonic; that is the peculiar nature of the dominant
ideology of liberal capitalism" (Gitlin, 1982, p. 450).
The elite moved to utilize the open communication technology, taking control
over various aspects of it. Internet users still have to access the Internet
via commercial entities that use online advertising as a profit stream.
Various mergers and acquisitions have occurred between the traditional media
and the online industry, successfully trapping the masses within particular
content. Yet it is still easier to escape the online advertisers than those on
TV.
Corporations find this very problematic, as it is hard to centrally control a
technology that was built to be managed in a distributed fashion. Therefore,
they have turned more than ever before to managing access to Internet
technology and controlling the Internet visiting habits of the masses (through
content portals).
Despite these corporate attempts, if a group of people wants an Internet
presence, with a potential readership/viewership of all who have access to the
Internet, it can establish one at minimal cost. The same cannot be said for
TV, radio, or cable. Whether the possibility of an inexpensive presence on the
Internet and its potential mass viewership will remain, only the future will
tell.
What could change? The unimaginable could happen. Governments could, for
example, restrict who publishes what in their country by requiring licenses to
operate a website. Obviously, for this to be effective the entire world would
have to enact the same laws, since it is easy to move website servers around
the world. Next, imagine that by some 'strange' turn a judge rules that a US
firm providing Internet access can be held responsible for the content
published by its customers. If such a ruling were enforced, it could require
that a corporation first approve each website's content.
This could potentially lead various Internet access and hosting providers to
utilize their central role to their advantage. Why was the Internet built with
open communication in mind? Adherents of the theory of hegemony could argue
that this runs contrary to the theory: if the Internet contains such power
that it can be used as an antidote to hegemony, why did the elite allow it to
be developed in its present form? Did they intend it, or is it an
unintentional byproduct of the government's effort to create a communication
network that could sustain a nuclear attack by being highly distributed?
Conclusion: The Internet and its open communication and open content
technologies and principles are still in their infancy. Whether the open
concepts will remain part of the Internet in the future remains to be seen.
If previous communication technologies are any indication, we might expect the
same with the Internet. However, as I have attempted to show in this paper,
the Internet's communication technologies have, more or less embedded within
them, characteristics that lend themselves to open communication, in which the
masses can communicate amongst themselves without the corporate media's
oversight.
Claiming that there are characteristics embedded in a given technology sounds
like technological determinism. That would be an oversight. One needs to look
at the social, political, economic, and cultural factors that helped construct
those characteristics in the first place so that they could be embedded in the
underlying Internet technologies. Further, as time goes on, the social
situatedness of the Internet could change, and the social structures of the
future might modify, change, or even restrict the initially acquired and
embedded open communication and open content characteristics of the Internet.
Alternatively, if the open characteristics of the Internet take strong hold in
society and overcome the already entrenched hegemonic forces embedded deeply
in various social structures, they might empower the masses to shift and
displace the status quo and thus bring forth a more representative democracy.
Jargon Busting
Typical of an emerging technology, the Semantic Web literature is
veiled in a bewildering array of technical jargon. So much so that a
time-pressed researcher might be forgiven for concluding that the Semantic Web
is something for geeks and that it has no bearing on real work and real people;
that it (unrealistically) requires everyone to create all online content in the
RDF (Resource Description Framework)
Semantic Web language. This, of course, could not be further from
the truth. Take the Weblog (blog) phenomenon as an example. Few users of
weblogs are aware that they are publishing, syndicating, and aggregating data
onto the Semantic Web as well as the human-readable Web. Weblog technology
revolves around the RSS (Rich Site Summary or RDF Site Summary) family of
languages, which vary in their human-readability but are united in their
machine readability.
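As a rough illustration of that machine readability, the sketch below parses a minimal, entirely hypothetical RSS 1.0 (RDF Site Summary) fragment using only Python's standard library; the feed URI and title are invented for the example:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical RSS 1.0 fragment: RSS 1.0 is itself RDF,
# so every item is a typed, URI-identified resource.
RSS_SAMPLE = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <item rdf:about="http://example.org/2004/semweb-intro">
    <title>An Introduction to the Semantic Web</title>
    <link>http://example.org/2004/semweb-intro</link>
  </item>
</rdf:RDF>"""

NS = {"rss": "http://purl.org/rss/1.0/",
      "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#"}

root = ET.fromstring(RSS_SAMPLE)
for item in root.findall("rss:item", NS):
    # Each item carries a URI identity and typed fields, so a machine
    # can aggregate items from many weblogs uniformly.
    uri = item.get("{http://www.w3.org/1999/02/22-rdf-syntax-ns#}about")
    title = item.findtext("rss:title", namespaces=NS)
    print(uri, "->", title)
```

The point is not the dozen lines of code but that no natural-language understanding is needed: the structure does the work, which is what lets weblog data flow into the Semantic Web.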
This resultant machine processibility is exploited to connect even
the most human-centric RSS vocabularies into the Semantic Web directly, or
through automated transformation to RDF. Weblogs thus form a valuable (and
vast) source of richly interconnected information that requires little or no
knowledge of the Semantic Web in order to create and use it.
For the information researcher, the Semantic Web view of this data
enables seamless fusion of Weblog data with data from completely different
sources such as dictionaries, thesauri, catalogues, databases as well as the
'traditional' Web. Where all this will lead is uncertain but the jargon is no
obstacle to its creation and use.
Bottom-up Revolution
What seems certain is that both evolution and revolution will occur,
and that the latter is all too easily overlooked. For instance, given the
proliferation of data about data (metadata) that underpins the Semantic Web, it
is tempting to focus in on the obvious prospect of a better-than-Google search
engine. Such a "semantic search" engine is able to determine whether
the query "orange" refers to the colour, the fruit, the mobile phone
company, or a chemical weapon used by the United States. As clever and useful
as this may be, it is only an evolutionary enhancement of something that is
already possible on the Web today. By looking in a little more detail at what
is happening on the Semantic Web today, it is possible to gain a deeper insight
into where revolution is starting to occur. In this article we will do just
that; we will take a look at one of the most exciting new developments on the
Semantic Web: joined-up information about people, emanating not from some
centralised database but from individuals themselves.
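As a toy illustration of the kind of disambiguation such an engine would perform, the sketch below scores each sense of "orange" by its overlap with the rest of the query. The sense labels and context terms are invented; a real semantic search engine would draw on published RDF metadata rather than a hand-built table.

```python
# Invented context terms for three senses of "orange".
SENSES = {
    "colour":        {"paint", "shade", "hue", "wavelength"},
    "fruit":         {"juice", "peel", "vitamin", "citrus"},
    "phone company": {"network", "tariff", "handset", "sim"},
}

def disambiguate(query):
    """Pick the sense whose context terms best overlap the query's words."""
    words = set(query.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("orange juice vitamin content"))  # fruit
print(disambiguate("orange sim network coverage"))   # phone company
```

The point of the Semantic Web is that the sense table need not be hand-built: machine-readable descriptions of the colour, the fruit and the company already distinguish them.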
I Don't Know, but I Know Someone Who Does
In many ways, the explosive growth of social networking software,
sites and data is representative of the way the Semantic Web is emerging.
Weblogs were the first wave of this evolution/revolution; they allowed individuals
to publish data in a sufficiently structured format for machine processing of
that data to be relatively trivial.
Communities formed around weblogs in a bottom-up fashion, defined
implicitly through syndication and through the lists of other weblogs ("blogrolls")
that frequently accompanied weblogs. The next step, perhaps more revolutionary
than evolutionary, is to explicitly define these (and other) communities in a
way that is more easily machine processible. The Friend Of A Friend (FOAF)
project is one such initiative that is making this possible and, like weblogs,
it does this from the bottom up.
One of the aims of the FOAF project is to improve the chances of
happy accidents by describing the connections between people (and the things
that they care about such as documents and places).
FOAF is a vocabulary for describing people, used analogously to
Dublin Core metadata for documents. The idea is to use FOAF to describe the
sorts of things you would put on your homepage -- your friends, your interests,
your pictures -- in a structured fashion that machines find easy to process. What
you get from this is a network of people instead of a network of web pages: the
Web now contains descriptions of real things in the world -- people -- and because
the Semantic Web is designed to be open and extensible, information about what
these people do (their calendar), what they own (cars, houses, pets), what they
create (documents, pictures, weblogs), can all be described as well.
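A minimal FOAF document makes this concrete. The sketch below hand-writes one (the names and URLs are invented; foaf:name, foaf:interest and foaf:knows are real properties from the FOAF vocabulary) and pulls the structured facts back out with Python's standard XML parser:

```python
import xml.etree.ElementTree as ET

# A minimal, hand-written FOAF document (people and addresses invented).
foaf_doc = """<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/">
  <foaf:Person>
    <foaf:name>Alice Example</foaf:name>
    <foaf:interest rdf:resource="http://example.org/topics/semantic-web"/>
    <foaf:knows>
      <foaf:Person><foaf:name>Bob Example</foaf:name></foaf:Person>
    </foaf:knows>
  </foaf:Person>
</rdf:RDF>"""

FOAF = "{http://xmlns.com/foaf/0.1/}"
root = ET.fromstring(foaf_doc)
person = root.find(FOAF + "Person")
name = person.findtext(FOAF + "name")
# Everyone described inside this person's foaf:knows relations:
friends = [p.findtext(FOAF + "name")
           for p in person.iter(FOAF + "Person") if p is not person]
print(name, "knows", friends)
```

A general-purpose RDF parser would handle far more than this fixed shape, but even a naive parse recovers who Alice is, what she cares about, and whom she knows.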
Several million FOAF documents are out there on the Web already,
created both by individuals and by various social software and networking
sites. FOAF documents can be created by hand, but increasingly, FOAF is being
created from existing databases or by mining the existing Web.
When people need to know something and the area is outside their
expertise, they need a way into the information landscape. They need to find
out what the main topics of interest are, who is well thought of, and what
the important issues and papers in the area are. People often serve as conduits for
this type of information, with personal contacts serving as a way into an area
and the key individuals within a field serving as a way of finding the main
issues.
FOAF applications cannot replace the subtle social interactions
which characterise personal information exchange, but they can help to make
connections that might not otherwise have occurred: for example, by enabling
certain sorts of information to be accurately processed by computers and
therefore much easier to search.
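The "way into an area" through chains of personal contacts can be sketched as a shortest-path search over a foaf:knows graph. The network below is invented; the point is only that once connections are machine readable, finding who can lead you to an expert is a routine graph traversal.

```python
from collections import deque

# An invented foaf:knows network; edges point from a person to the
# people they list as known.
KNOWS = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["erin"],
    "dave":  ["expert"],
    "erin":  [],
}

def chain_to(start, target):
    """Breadth-first search for the shortest chain of acquaintances."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for friend in KNOWS.get(path[-1], []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None  # no chain of acquaintances reaches the target

print(chain_to("alice", "expert"))  # ['alice', 'bob', 'dave', 'expert']
```

The computer finds the chain; the "subtle social interactions" along it remain, as the text notes, entirely human.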
Privacy and trust are clearly issues in FOAF as with all digital
information on the Web or elsewhere. Organisations of various sorts already
intensely mine the Web for information about individuals, email spammers being
the most familiar example. FOAF offers some protection against address
harvesting -- an email address can be published as a cryptographic hash rather
than in the clear -- but in
wider terms the very network of connections described in FOAF is likely to be
its greatest asset in assessing reliability and quality of digital information
on the Web.
Conclusion
From the perspective of the information researcher, the Semantic Web
promises to provide and exploit joined-up information that goes way beyond the
Web's traditional page-to-page links. Analysing the impact this will have on
the day-to-day work of information professionals is not trivial. As with most
new technologies, the Semantic Web is likely to create entirely new ways of
working while simultaneously rendering others obsolete.
Applications like FOAF are at the vanguard of the Semantic Web,
enabling a glimpse of what might be achieved. Their implicit and explicit
definition of social networks offers the information researcher a wealth of new
channels into the information cloud around individuals and communities.
3
The Elite Group and Mass Media
Introduction
Governments generate large quantities of information. They produce
statistics on population, figures on economic production and health, texts of
laws and regulations, and vast numbers of reports. The generation of this
information is paid for through taxation and, therefore, it might seem that it
should be available to any member of the public. But in some countries, such as
Britain and Australia, governments claim copyright in their own legislation and
sometimes court decisions. Technically, citizens would need permission to copy their
own laws. On the other hand, some government-generated information, especially
in the US, is turned over to corporations that then sell it to whoever can
pay. Publicly funded information is "privatised" and thus not freely
available.
The idea behind patents is that the fundamentals of an invention are
made public while the inventor for a limited time has the exclusive right to
make, use or sell the invention. But there are quite a few cases in which
patents have been used to suppress innovation. Companies may take out a patent,
or buy someone else's patent, in order to inhibit others from applying the
ideas. From its beginning in 1875, the US company AT&T collected patents in
order to ensure its monopoly on telephones. It slowed down the introduction of
radio for some 20 years. In a similar fashion, General Electric used control of
patents to retard the introduction of fluorescent lights, which were a threat
to its sales of incandescent lights. Trade secrets are another way to suppress
technological development. Trade secrets are protected by law but, unlike
patents, do not have to be published openly. They can be overcome legitimately
by independent development or reverse engineering.
Biological information can now be claimed as intellectual property.
US courts have ruled that genetic sequences can be patented, even when the
sequences are found "in nature," so long as some artificial means are
involved in isolating them. This has led companies to race to take out patents
on numerous genetic codes. In some cases, patents have been granted covering
all transgenic forms of an entire species, such as soybeans or cotton, causing
enormous controversy and sometimes reversals on appeal. One consequence is a
severe inhibition on research by non-patent holders. Another consequence is
that transnational corporations are patenting genetic materials found in Third
World plants and animals, so that some Third World peoples actually have to pay
to use seeds and other genetic materials that have been freely available to them
for centuries.
More generally, intellectual property is one more way for rich
countries to extract wealth from poor countries. Given the enormous
exploitation of poor peoples built into the world trade system, it would only
seem fair for ideas produced in rich countries to be provided at no cost to
poor countries. Yet in the GATT negotiations, representatives of rich
countries, especially the US, have insisted on strengthening intellectual
property rights. Surely there is no better indication that intellectual
property is primarily of value to those who are already powerful and wealthy.
The potential financial returns from intellectual property are said
to provide an incentive for individuals to create. In practice, though, most
creators do not actually gain much benefit from intellectual property.
Independent inventors are frequently ignored or exploited. When employees of
corporations and governments have an idea worth protecting, it is usually
copyrighted or patented by the organisation, not the employee. Since
intellectual property can be sold, it is usually the rich and powerful who
benefit. The rich and powerful, it should be noted, seldom contribute much
intellectual labour to the creation of new ideas.
These problems -- privatisation of government information,
suppression of patents, ownership of genetic information and information not
owned by the true creator -- are symptoms of a deeper problem with the whole
idea of intellectual property. Unlike goods, there are no physical obstacles to
providing an abundance of ideas. (Indeed, the bigger problem may be an
oversupply of ideas.) Intellectual property is an attempt to create an
artificial scarcity in order to give rewards to a few at the expense of the
many. Intellectual property aggravates inequality. It fosters competitiveness
over information and ideas, whereas cooperation makes much more sense. In the
words of Peter Drahos, researcher on intellectual property, "Intellectual
property is a form of private sovereignty over a primary good -- information."
Here are some examples of the abuse of power that has resulted from
the power to grant sovereignty over information.
- The neem tree is used in India in the areas of medicine, toiletries, contraception, timber, fuel and agriculture. Its uses have been developed over many centuries but never patented. Since the mid-1980s, US and Japanese corporations have taken out over a dozen patents on neem-based materials. In this way, collective local knowledge developed by Indian researchers and villagers has been expropriated by outsiders who have added very little to the process.
- Charles M. Gentile is a US photographer who for a decade had made and sold artistic posters of scenes in Cleveland, Ohio. In 1995 he made a poster of the I. M. Pei building, which housed the new Rock and Roll Hall of Fame and Museum. This time he got into trouble. The museum sued him for infringing the trademark that it had taken out on its own image. If buildings can be registered as trademarks, then every painter, photographer and film-maker might have to seek permission and pay fees before using the images in their art work. This is obviously contrary to the original justification for intellectual property, which is to encourage the production of artistic works.
- Prominent designer Victor Papanek writes: "...there is something basically wrong with the whole concept of patents and copyrights. If I design a toy that provides therapeutic exercise for handicapped children, then I think it is unjust to delay the release of the design by a year and a half, going through a patent application. I feel that ideas are plentiful and cheap, and it is wrong to make money from the needs of others. I have been very lucky in persuading many of my students to accept this view. Much of what you will find as design examples throughout this book has never been patented. In fact, quite the opposite strategy prevails: in many cases students and I have made measured drawings of, say, a play environment for blind children, written a description of how to build it simply, and then mimeographed drawings and all. If any agency, anywhere, will write in, my students will send them all the instructions free of charge."
- In 1980, a book entitled Documents on Australian Defence and Foreign Policy 1968-1975 was published by George Munster and Richard Walsh. It reproduced many secret government memos, briefings and other documents concerning Australian involvement in the Vietnam war, events leading up to the Indonesian invasion of East Timor, and other issues. Exposure of this material deeply embarrassed the Australian government. In an unprecedented move, the government issued an interim injunction, citing both the Crimes Act and the Copyright Act. The books, just put on sale, were impounded. Print runs of two major newspapers with extracts from the book were also seized.
- The Australian High Court ruled that the Crimes Act did not apply, but that the material was protected by copyright held by the government. Thus copyright, set up to encourage artistic creation, was used to suppress dissemination of documents for whose production copyright was surely no incentive. Later, Munster and Walsh produced a book using summaries and short quotes in order to present the information.
- Scientology is a religion in which only certain members at advanced stages of enlightenment have access to special information, which is secret to others. Scientology has long been controversial, with critics maintaining that it exploits members. Some critics, including former Scientologists, have put secret documents from advanced stages on the Internet. In response, church officials invoked copyright. Police have raided homes of critics, seizing computers, disks and other equipment. This is all rather curious, since the stated purpose of copyright is not to hide information but rather to stimulate production of new ideas.
- Ashleigh Brilliant is a "professional epigrammatist." He creates and copyrights thousands of short sayings, such as "Fundamentally, there may be no basis for anything." When he finds someone who has "used" one of his epigrams, he contacts them demanding a payment for breach of copyright. Television journalist David Brinkley wrote a book, Everyone is Entitled to My Opinion, the title of which he attributed to a friend of his daughter. Brilliant contacted Brinkley about copyright violation. Random House, Brinkley's publisher, paid Brilliant $1000 without contesting the issue, perhaps because it would have cost more than this to contest it.
- Lawyer Robert Kunstadt has proposed that athletes could patent their sporting innovations, such as the "Fosbury flop" invented by high jumper Dick Fosbury. This might make a lot of money for a few stars. It would also cause enormous disputes. Athletes already have a tremendous incentive to innovate if it helps their performance. Patenting of basketball moves or choreography steps would serve mainly to limit the uptake of innovations and would mainly penalise those with fewer resources to pay royalties.
- The US National Basketball Association has sued in court for the exclusive right to transmit the scores of games as they are in progress. It had one success but lost on appeal.
- A Scottish newspaper, The Shetland Times, went to court to stop an online news service from making a hypertext link to its web site. If hypertext links made without permission were made illegal, this would undermine the World Wide Web.
These examples show that intellectual property has become a means
for exerting power in ways quite divorced from its original aim -- promoting
the creation and use of new ideas.
Critique of Standard Justifications
Edwin C. Hettinger has provided an insightful critique of the main
arguments used to justify intellectual property, so it is worthwhile
summarising his analysis. He begins by noting the obvious argument against
intellectual property, namely that sharing intellectual objects still allows
the original possessor to use them. Therefore, the burden of proof should lie
on those who argue for intellectual property.
The first argument for intellectual property is that people are
entitled to the results of their labour. Hettinger's response is that not all
the value of intellectual products is due to labour. Nor is the value of
intellectual products due to the work of a single labourer, or any small group.
Intellectual products are social products.
Suppose you have written an essay or made an invention. Your
intellectual work does not exist in a social vacuum. It would not have been
possible without lots of earlier work -- both intellectual and nonintellectual
-- by many other people. This includes your teachers and parents. It includes
the earlier authors and inventors who provided the foundation for your
contribution. It also includes the many people who discussed and used ideas and
techniques, at both theoretical and practical levels, and provided a cultural
foundation for your contribution. It includes the people who built printing
presses, laid telephone cables, built roads and buildings and in many other
ways contributed to the "construction" of society. Many other people could
be mentioned. The point is that any piece of intellectual work is always built
on and is inconceivable without the prior work of numerous people.
Hettinger points out that the earlier contributors to the
development of ideas are no longer present to claim their share. Today's
contributor therefore cannot validly claim full credit.
Is the market value of an intellectual product a
reasonable indicator of a person's contribution? Certainly not. As noted
by Hettinger and as will be discussed in the next section, markets only work once
property rights have been established, so it is circular to argue that the
market can be used to measure intellectual contributions. Hettinger summarises
this point in this fashion: "The notion that a laborer is naturally
entitled as a matter of right to receive the market value of her product is a
myth. To what extent individual laborers should be allowed to receive the
market value of their products is a question of social policy."
A related argument is that people have a right to possess and
personally use what they develop. Hettinger's response is that this doesn't
show that they deserve market values, nor that they should have a right to
prevent others from using the invention.
A second major argument for intellectual property is that people
deserve property rights because of their labour. This brings up the general
issue of what people deserve, a topic that has been analysed by philosophers.
Their usual conclusions go against what many people think is "common
sense." Hettinger says that a fitting reward for labour should be
proportionate to the person's effort, the risk taken and moral considerations.
This sounds all right -- but it is not proportionate to the value of the
results of the labour, whether assessed through markets or by other criteria. This
is because the value of intellectual work is affected by things not controlled
by the worker, including luck and natural talent. Hettinger says "A person
who is born with extraordinary natural talents, or who is extremely lucky,
deserves nothing on the basis of these characteristics."
A musical genius like Mozart may make enormous contributions to
society. But being born with enormous musical talents does not provide a
justification for owning rights to musical compositions or performances.
Likewise, the labour of developing a toy like Teenage Mutant Ninja Turtles that
becomes incredibly popular does not provide a justification for owning rights
to all possible uses of turtle symbols. What about a situation where one person
works hard at a task and a second person with equal talent works less hard?
Doesn't the first worker deserve more reward? Perhaps so, but property rights
do not provide a suitable mechanism for allocating rewards. The market can give
great rewards to the person who successfully claims property rights for a
discovery, with little or nothing for the person who just missed out.
A third argument for intellectual property is that private property
is a means for promoting privacy and a means for personal autonomy. Hettinger
responds that privacy is protected by not revealing information, not by owning
it. Trade secrets cannot be defended on the grounds of privacy, because
corporations are not individuals. As for personal autonomy, copyrights and
patents aren't required for this.
A fourth argument is that rights in intellectual property are needed
to promote the creation of more ideas. The idea is that intellectual property
gives financial incentives to produce ideas. Hettinger thinks that this is the
only decent argument for intellectual property. He is still somewhat sceptical,
though. He notes that the whole argument is built on a contradiction, namely
that in order to promote the development of ideas, it is necessary to reduce
people's freedom to use them. Copyrights and patents may encourage new ideas
and innovations, but they also restrict others from using them freely.
This argument for intellectual property cannot be resolved without
further investigation. Hettinger says that there needs to be an investigation
of how long patents and copyrights should be granted, to determine an optimum
period for promoting intellectual work. For the purposes of technological
innovation, information becomes more valuable when augmented by new
information: innovation is a collective process. If firms in an industry share
information by tacit cooperation or open collaboration, this speeds innovation
and reduces costs. Patents, which put information into the market and raise
information costs, actually slow the innovative process.
It should be noted that although the scale and pace of intellectual
work has increased over the past few centuries, the duration of protection of
intellectual property has not been reduced, as might be expected, but greatly
increased. The US government did not recognise foreign copyrights for much of
the 1800s. Where once copyrights were only for a period of a few decades, they
now may be for the life of the author plus 70 years. In many countries,
chemicals and pharmaceuticals were not patentable until recently. This suggests
that even if intellectual property can be justified on the basis of fostering
new ideas, this is not the driving force behind the present system of
copyrights and patents. After all, few writers feel a greater incentive to
write and publish just because their works are copyrighted for 70 years after
they die, rather than just until they die.
Of various types of intellectual property, copyright is especially
open for exploitation. Unlike patents, copyright is granted without an
application and lasts far longer. Originally designed to encourage literary and
artistic work, it now applies to every memo and doodle and is more relevant to
business than art. There is no need to encourage production of business
correspondence, so why is copyright applied to it?
Intellectual property is built around a fundamental tension: ideas
are public but creators want private returns. To overcome this tension, a
distinction developed between ideas and their expression. Ideas could not be
copyrighted but their expression could. This peculiar distinction was tied to
the romantic notion of the autonomous creator who somehow contributes to the
common pool of ideas without drawing from it. This package of concepts
apparently justified authors in claiming residual rights -- namely copyright --
in their ideas even after those ideas had left their hands, while not giving
manual workers any rationale for claiming residual rights in their creations.
In practice, though,
the idea-expression distinction is dubious and few of the major owners of
intellectual property have the faintest resemblance to romantic creators.
The Marketplace of Ideas
The idea of intellectual property has a number of connections with
the concept of the marketplace of ideas, a metaphor that is widely used in
discussions of free speech. To delve a bit more deeply into the claim that
intellectual property promotes development of new ideas, it is therefore
helpful to scrutinise the concept of the marketplace of ideas.
The image conveyed by the marketplace of ideas is that ideas compete
for acceptance in a market. As long as the competition is fair -- which means
that all ideas and contributors are permitted access to the marketplace -- then
good ideas will win out over bad ones. Why? Because people will recognise the
truth and value of good ideas. On the other hand, if the market is constrained,
for example by some groups being excluded, then certain ideas cannot be tested
and examined and successful ideas may not be the best ideas.
Logically, there is no reason why a marketplace of ideas has to be a
marketplace of owned ideas: intellectual property cannot be strictly justified
by the marketplace of ideas. But because the marketplace metaphor is an
economic one, there is a strong tendency to link intellectual property with the
marketplace of ideas. There is a link between these two concepts, but not in
the way their defenders usually imagine.
There are plenty of practical examples of the failure of the
marketplace of ideas. Groups that are stigmatised or that lack power seldom
have their viewpoints presented. This includes ethnic minorities, prisoners,
the unemployed, manual workers and radical critics of the status quo, among
many others. Even when such groups organise themselves to promote their ideas,
their views are often ignored while the media focus on their protests, as in
the case of peace movement rallies and marches.
Demonstrably, good ideas do not always win out in the marketplace of
ideas. To take one example, the point of view of workers is frequently just as
worthy as that of employers. Yet there is an enormous imbalance in the
presentation of their respective viewpoints in the media. One result is that
quite a few ideas that happen to serve the interests of employers at the
expense of workers -- such as that the reason people don't have jobs is because
they aren't trying hard enough to find them -- are widely accepted although
they are rejected by virtually all informed analysts.
There is a simple and fundamental reason for the failure of the
marketplace of ideas: inequality, especially economic inequality. Perhaps in a
group of people sitting in a room discussing an issue, there is some prospect
of a measured assessment of different ideas. But if these same people are
isolated in front of their television sets, and one of them owns the television
station, it is obvious that there is little basis for testing of ideas. The
reality is that powerful and rich groups can promote their ideas with little
chance of rebuttal from those with different perspectives. The mass media are
powerful enterprises that promote their own interests as well as those of
governments and corporations.
In circumstances where participants are approximate equals, such as
intellectual discussion among peers in an academic discipline, then the
metaphor of competition of ideas has some value. But ownership of media or
ideas is hardly a prerequisite for such discussion. It is the equality of power
that is essential. To take one of many possible examples, when employees in
corporations lack the freedom to speak openly without penalty they cannot be
equal participants in discussions.
Some ideas are good -- in the sense of being valuable to society --
but are unwelcome. Some are unwelcome to powerful groups, such as that
governments and corporations commit horrific crimes or that there is a massive
trade in technologies of torture and repression that needs to be stopped.
Others are challenging to much of the population, such as that imprisonment
does not reduce the crime rate or that financial rewards for good work on the
job or grades for good schoolwork are counterproductive. (Needless to say,
individuals might disagree with the examples used here. The case does not rest
on the examples themselves, but on the existence of some socially valuable
ideas that are unwelcome and marginalised.) The marketplace of ideas simply
does not work to treat such unwelcome ideas with the seriousness they deserve.
The mass media try to gain audiences by pleasing them, not by confronting them
with challenging ideas.
The marketplace of ideas is often used to justify free speech. The
argument is that free speech is necessary in order for the marketplace of ideas
to operate: if some types of speech are curtailed, certain ideas will not be
available on the marketplace and thus the best ideas will not succeed. This
sounds plausible. But it is possible to reject the marketplace of ideas while
still defending free speech on the grounds that it is essential to human
liberty.
If the marketplace of ideas doesn't work, what is the solution? The
usual view is that governments should intervene to ensure that all groups have
fair access to the media. But this approach, based on promoting equality of
opportunity, ignores the fundamental problem of economic inequality. Even if
minority groups have some limited chance to present their views in the mass
media, this can hardly compensate for the massive power of governments and
corporations to promote their views. In addition, it retains the role of the
mass media as the central mechanism for disseminating ideas. So-called reform
proposals either retain the status quo or introduce government censorship.
Underlying the market model is the idea of self-regulation: the
"free market" is supposed to operate without outside intervention
and, indeed, to operate best when outside intervention is minimised. In
practice, even markets in goods do not operate autonomously: the state is
intimately involved in even the freest of markets. In the case of the
marketplace of ideas, the state is involved both in shaping the market and in
making it possible, for example by promoting and regulating the mass media. The
world's most powerful state, the US, has been the driving force behind the
establishment of a highly protectionist system of intellectual property, using
power politics at GATT, the General Agreement on Tariffs and Trade.
Courts may use the rhetoric of the marketplace of ideas but actually
interpret the law to support the status quo. For example, speech is treated as
free until it might actually have some consequences. Then it is curtailed when
it allegedly presents a "clear and present danger," such as when
peace activists expose information supposedly threatening to "national
security". But speech without action is pointless. True liberty requires
freedom to promote one's views in practice. Powerful groups have the ability to
do this. Courts only intervene when others try to do the same.
As in the case of trade generally, a property-based "free
market" serves the interests of powerful producers. In the case of ideas,
this includes governments and corporations plus intellectuals and professionals
linked with universities, entertainment, journalism and the arts. Against such
an array of intellectual opinion, it is very difficult for other groups, such
as manual workers, to compete. The marketplace of ideas is a biased and
artificial market that mostly serves to fine-tune relations between elites and
provide them with legitimacy.
The implication of this analysis is that intellectual property
cannot be justified on the basis of the marketplace of ideas. The utilitarian
argument for intellectual property is that ownership is necessary to stimulate
production of new ideas, because of the financial incentive. This financial
incentive is supposed to come from the market, whose justification is the
marketplace of ideas. If, as critics argue, the marketplace of ideas is flawed
by the presence of economic inequality and, more fundamentally, is an
artificial creation that serves powerful producers of ideas and legitimates the
role of elites, then the case for intellectual property is unfounded.
Intellectual property can only serve to aggravate the inequality on which it is
built.
The Alternative
The alternative to intellectual property is straightforward:
intellectual products should not be owned: not by individuals,
corporations or governments, nor even by the community as common property. It
means that ideas are available to be used by anyone who wants to.
One example of how this might operate is language, including the
words, sounds and meaning systems with which we communicate every day. Spoken
language is free for everyone to use. (Actually, corporations do control bits
of language through trademarks and slogans.)
Another example is scientific knowledge. Scientists do research and
then publish their results. A large proportion of scientific knowledge is
public knowledge. There are some areas of science that are not public, such as
classified military research. It is usually argued that the most dynamic parts
of science are those with the least secrecy. Open ideas can be examined,
challenged, modified and improved. To turn scientific knowledge into a
commodity on the market, as is happening with genetic engineering, arguably
inhibits science.
Few scientists complain that they do not own the knowledge they
produce. Indeed, they are much more likely to complain when corporations or
governments try to control dissemination of ideas. Most scientists receive a
salary from a government, corporation or university. Their livelihoods do not
depend on royalties from published work.
University scientists have the greatest freedom. The main reasons
they do research are for the intrinsic satisfaction of investigation and
discovery -- a key motivation for many of the world's great scientists -- and
for recognition by their peers. To turn scientific knowledge into intellectual
property would dampen the enthusiasm of many scientists for their work.
However, as governments reduce their funding of universities, scientists and
university administrations increasingly turn to patents as a source of income.
Language and scientific knowledge are not ideal; indeed, they are
often used for harmful purposes. It is difficult to imagine, though, how
turning them into property could make them better.
The case of science shows that vigorous intellectual activity is
quite possible without intellectual property, and in fact that it may be
vigorous precisely because information is not owned. But there are lots of
areas that, unlike science, have long operated with intellectual property as a
fact of life. What would happen without ownership of information? Many
objections spring to mind.
Plagiarism
Many intellectual workers fear being plagiarised and many of them
think that intellectual property provides protection against this. After all,
without copyright, why couldn't someone put their name on your essay and
publish it? Actually, copyright provides very little protection against
plagiarism. So-called "moral rights" of authors to be credited are
backed by law in many countries but are an extremely cumbersome way of dealing
with plagiarism.
Plagiarism means using the ideas of others without adequate
acknowledgment. There are several types of plagiarism. One is plagiarism of
ideas: someone takes your original idea and, using different expression,
presents it as their own. Copyright provides no protection at all against this
form of plagiarism. Another type of plagiarism is word-for-word plagiarism,
where someone takes the words you've written -- a book, an essay, a few
paragraphs or even just a sentence -- and, with or without minor modifications,
presents them as their own. This sort of plagiarism is covered by copyright --
assuming that you hold the copyright. In many cases, copyright is held by the
publisher, not the author. In practice, plagiarism goes on all the time, in
various ways and degrees, and copyright law is hardly ever used against it. The
most effective challenge to plagiarism is not legal action but publicity. At
least among authors, plagiarism is widely condemned. For this reason, and
because they seek to give credit where it's due, most writers do take care to
avoid plagiarising.
There is an even more fundamental reason why copyright provides no
protection against plagiarism: the most common sort of plagiarism is built into
social hierarchies. Government and corporate reports are released under the
names of top bureaucrats who did not write them; politicians and corporate
executives give speeches written by underlings. These are examples of a
pervasive misrepresentation of authorship in which powerful figures gain credit
for the work of subordinates. Copyright, if it has any effect at all,
reinforces rather than challenges this sort of institutionalised plagiarism.
Royalties
What about all the writers, inventors and others who depend for
their livelihood on royalties? First, it should be mentioned that only a very
few individuals make enough money from royalties to live on. For example, there
are probably only a few hundred self-employed writers in the US. Most of the
rewards from intellectual property go to a few big companies. But the question
is still a serious one for those intellectual workers who depend on royalties
and other payments related to intellectual property.
The alternative in this case is some reorganisation of the economic
system. Those few currently dependent on royalties could instead receive a
salary, grant or bursary, just as most scientists do.
Getting rid of intellectual property would reduce the incomes of a
few highly successful creative individuals, such as author Agatha Christie,
composer Andrew Lloyd Webber and filmmaker Steven Spielberg. Publishers could
reprint Christie's novels without permission, theatre companies could put on
Webber's operas whenever they wished and Spielberg's films could be copied and
screened anywhere. Jurassic Park and Lost World T-shirts, toys and trinkets
could be produced at will. This would reduce the income of and, to some extent,
the opportunities for artistic expression by these individuals. But there would
be economic resources released: there would be more money available for other
creators. Christie, Webber and Spielberg might be just as popular without
intellectual property to channel money to them and their family enterprises.
The typical creative intellectual is actually worse off due to
intellectual property. Consider an author who brings in a few hundred or even a
few thousand dollars of royalty income per year. This is a tangible income,
which creators value both monetarily and symbolically. But this should be
weighed against payments of royalties and monopoly profits when buying books,
magazines, CDs and computer software.
Many of these costs are invisible. How many consumers, for example,
realise how much they are paying for intellectual property when buying
prescription medicines, paying for schools (through fees or taxes), buying
groceries or listening to a piece of music on the radio? Yet in these and many
other situations, costs are substantially increased due to intellectual
property. Most of the extra costs go not to creators but to corporations and to
bureaucratic overheads -- such as patent offices and law firms -- that are
necessary to keep the system of intellectual property going.
Stimulating Creativity
What about the incentive to create? Without the possibility of
wealth and fame, what would stimulate creative individuals to produce works of
genius? Actually, most creators and innovators are motivated by their own
intrinsic interest, not by rewards. There is a large body of evidence showing,
contrary to popular opinion, that rewards actually reduce the quality of work.
If the goal is better and more creative work, paying creators on a piecework
basis, such as through royalties, is counterproductive.
In a society without intellectual property, creativity is likely to
thrive. Most of the problems that are imagined to occur if there is no
intellectual property -- such as the exploitation of a small publisher that
renounces copyright -- are due to economic arrangements that maintain
inequality. The soundest foundation for a society without intellectual property
is greater economic and political equality. This means not just equality of
opportunity, but equality of outcomes. This does not mean uniformity and does
not mean levelling imposed from the top: it means freedom and diversity and a
situation where people can get what they need but are not able to gain great
power or wealth by exploiting the work of others. This is a big issue. Suffice
it to say here that there are strong social and psychological arguments in
favour of equality.
Strategies for Change
Intellectual property is supported by many powerful groups: the most
powerful governments and the largest corporations. The mass media seem fully
behind intellectual property, partly because media monopolies would be undercut
if information were more freely copied and partly because the most influential
journalists depend on syndication rights for their stories.
Perhaps just as important is the support for intellectual property
from many small intellectual producers, including academics and free-lance
writers. Although the monetary returns to these intellectuals are seldom
significant, they have been persuaded that they both need and deserve their
small royalties. This is similar to the way that small owners of goods and
land, such as homeowners, strongly defend the system of private property, whose
main beneficiaries are the very wealthy who own vast enterprises based on many
other people's labour. Intellectuals are enormous consumers as well as
producers of intellectual work. A majority would probably be better off
financially without intellectual property, since they wouldn't have to pay as
much for other people's work.
Another problem in developing strategies is that
it makes little sense to challenge intellectual property in isolation. If we
simply imagine intellectual property being abolished but the rest of the
economic system unchanged, then many objections can be made. Challenging
intellectual property must involve the development of methods to support
creative individuals.
Change Thinking
Talking about "intellectual property" implies an
association with physical property. Instead, it is better to talk about
monopolies granted by governments, for example "monopoly privilege."
This gives a better idea of what's going on and so helps undermine the
legitimacy of the process. Associated with this could be an appeal to free
market principles, challenging the barriers to trade in ideas imposed by
monopolies granted to copyright and patent holders.
As well, a connection should be forged with ideals of free speech.
Rather than talk of intellectual property in terms of property and trade, it
should be talked about in terms of speech and its impediments. Controls over genetic
information should be talked about in terms of public health and social welfare
rather than property.
The way that an issue is framed makes an enormous difference to the
legitimacy of different positions. Once intellectual property is undermined in
the minds of many citizens, it will become far easier to topple its
institutional supports.
Expose the Costs
It can cost a lot to set up and operate a system of intellectual
property. This includes patent offices, legislation, court cases, agencies to
collect fees and much else. There is a need for research to calculate and
expose these costs as well as the transfers of money between different groups
and countries. A middle-ranking country from the First World, such as
Australia, pays far more for intellectual property -- mostly to the US -- than
it receives. Once the figures are available and understood, this will aid in
reducing the legitimacy of the world intellectual property system.
Reproduce Protected Works
From the point of view of intellectual property, this is called
"piracy." (This is a revealing term, considering that such language
is seldom used when, for example, a boss takes credit for a subordinate's work
or when a Third World intellectual is recruited to a First World position. In
each case, investments in intellectual work made by an individual or society
are exploited by a different individual or society with more power.) This
happens every day when people photocopy copyrighted articles, tape copyrighted
music, or duplicate copyrighted software. It is precisely because illegal
copying is so easy and so common that big governments and corporations have
mounted offensives to promote intellectual property rights.
Unfortunately, illegal copying is not a very good strategy against
intellectual property, any more than stealing goods is a way to challenge
ownership of physical property. Theft of any sort implicitly accepts the
existing system of ownership. By trying to hide the copying and avoiding
penalties, the copiers appear to accept the legitimacy of the system.
Openly Refuse to Cooperate with Intellectual Property
This is far more powerful than illicit copying. The methods of
nonviolent action can be used here, including noncooperation, boycotts and
setting up alternative institutions. By being open about the challenge, there
is a much greater chance of focussing attention on the issues at stake and
creating a dialogue. By being principled in opposition, and being willing to
accept penalties for civil disobedience to laws on intellectual property, there
is a much greater chance of winning over third parties. If harsh penalties are
applied to those who challenge intellectual property, this could produce a
backlash of sympathy. Once mass civil disobedience to intellectual property
laws occurs, it will be impossible to stop.
Something like that is already occurring. Because photocopying of
copyrighted works is so common, there is seldom any attempt to enforce the law
against small violators -- to do so would alienate too many people. Copyright
authorities therefore seek other means of collecting revenues from intellectual
property, such as payments by institutions based on library copies. Already
there is mass discontent in India over the impact of the world intellectual
property regime and patenting of genetic materials, with rallies of hundreds of
thousands of farmers. If this scale of protest could be combined with other
actions that undermine the legitimacy of intellectual property, the entire
system could be challenged.
Promote Non-owned Information
A good example is public domain software, which is computer software
that is made available free to anyone who wants it. The developers of
"freeware" gain satisfaction out of their intellectual work and out
of providing a service to others. The Free Software Foundation has spearheaded
the development and promotion of freeware. It "is dedicated to eliminating
restrictions on people's right to use, copy, modify and redistribute computer
programs" by encouraging people to develop and use free software.
A suitable alternative to copyright is shareright. A piece of
freeware might be accompanied by the notice, "You may reproduce this
material if your recipients may also reproduce it." This encourages
copying but denies every copier exclusive rights.
The Free Software Foundation has come up with another approach,
called "copyleft." The Foundation states, "The simplest way to
make a program free is to put it in the public domain, uncopyrighted. But this
permits proprietary modified versions, which deny others the freedom to
redistribute and modify; such versions undermine the goal of giving freedom to
all users. To prevent this, 'copyleft' uses copyright in a novel manner.
Typically copyrights take away freedoms; copyleft preserves them. It is a legal
instrument that requires those who pass on a program to include the rights to
use, modify, and redistribute the code; the code and the freedoms become
legally inseparable." Until copyright is eliminated or obsolete,
innovations such as copyleft are necessary to avoid exploitation of those who
want to make their work available to others.
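As a concrete illustration, a copyleft notice attached to a program's source file might look like the sketch below. The file name and exact wording are hypothetical, loosely modelled on the kind of notice the Free Software Foundation recommends; the four freedoms listed are those named in the Foundation's statement quoted above.

```python
# exampletool.py -- a hypothetical program released under a copyleft licence.
#
# This program is free software: you may redistribute it and/or modify it
# under the terms of a copyleft licence such as the GNU General Public
# License. Anyone who passes the program on, modified or not, must grant
# recipients these same rights -- so the code and the freedoms remain
# legally inseparable, and no proprietary restricted version is possible.

# The freedoms that copyleft obliges every redistributor to preserve
# (as listed by the Free Software Foundation):
COPYLEFT_FREEDOMS = ("use", "copy", "modify", "redistribute")

if __name__ == "__main__":
    print("This program preserves the rights to: " + ", ".join(COPYLEFT_FREEDOMS))
```

The notice itself does the legal work; the code merely carries it along. This is the "novel manner" referred to above: copyright law is invoked not to restrict copying but to forbid restricting it.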
Develop Principles to Deal with Credit for Intellectual Work
This is important even if credit is not rewarded financially. This
would include guidelines for not misrepresenting another person's work.
Intellectual property gives the appearance of stopping unfair appropriation of
ideas although the reality is quite different. If intellectual property is to
be challenged, people need to be reassured that misappropriation of ideas will
not become a big problem.
More fundamentally, it needs to be recognised that intellectual work
is inevitably a collective process. No one has totally original ideas: ideas
are always built on the earlier contributions of others. (That's especially
true of this chapter!) Furthermore, culture -- which makes ideas possible -- is
built not just on intellectual contributions but also on practical and material
contributions, including the rearing of families and construction of buildings.
Intellectual property is theft, sometimes in part from an individual creator
but always from society as a whole.
In a more cooperative society, credit for ideas would not be such a
contentious matter. Today, there are vicious disputes between scientists over
who should gain credit for a discovery. This is because scientists' careers
and, more importantly, their reputations, depend on credit for ideas. In a
society with less hierarchy and greater equality, intrinsic motivation and
satisfaction would be the main returns from contributing to intellectual
developments. This is quite compatible with everything that is known about
human nature. The system of ownership encourages groups to put special
interests above general interests. Sharing information is undoubtedly the most
efficient way to allocate productive resources. The less there is to gain from
credit for ideas, the more likely people are to share ideas rather than worry
about who deserves credit for them.
For most book publishers, publishing an argument against
intellectual property raises a dilemma. If the work is copyrighted as usual,
this clashes with the argument against copyright. On the other hand, if the
work is not copyrighted, then unrestrained copying might undermine sales. It's
worth reflecting on this dilemma as it applies to this book. It is important to
keep in mind the wider goal of challenging the corruptions of information
power. Governments and large corporations are particularly susceptible to these
corruptions. They should be the first targets in developing a strategy against
intellectual property.
Freedom Press is not a typical publisher. It has been publishing
anarchist writings since 1886, including books, magazines, pamphlets and
leaflets. Remarkably, neither authors nor editors have ever been paid for their
work. Freedom Press is concerned with social issues and social change, not with
material returns to anyone involved in the enterprise.
Because it is a small publisher, Freedom Press would be hard pressed
to enforce its claims to copyright even if it wanted to. Those who sympathise
with the aims of Freedom Press and who would like to reproduce some of its
publications therefore should consider practical rather than legal issues.
Would the copying be on such a scale as to undermine Freedom Press's limited
sales? Does the copying give sufficient credit to Freedom Press so as to
encourage further sales? Is the copying for commercial or noncommercial
purposes?
In answering such questions, it makes sense to ask Freedom Press.
This applies whether the work is in copyright or not. If asking is not feasible,
or the copying is of limited scale, then good judgement should be used. In my
opinion, using one chapter -- especially this chapter! -- for nonprofit
purposes should normally be okay.
So in the case of Freedom Press, the approach should be to negotiate
in good faith and to use good judgement in minor or urgent cases. Negotiation
and good judgement of this sort will be necessary in any society that moves
beyond intellectual property.
4
Mass Media and Culture
The aim of this chapter is threefold: first, to examine the ways in
which the market economy framework and the elites condition culture and mass
media; second, to discuss the relationship of the neoliberal consensus with the
present intensification of cultural homogenisation; finally, to outline the
nature of culture and the role of mass media in a democratic society, as well
as to explore the strategies which could bring about a shift from the present
cultural institutions to those of an inclusive democracy.
Culture, Mass Media and Elites
The Dominant Social Paradigm and Culture
A fruitful way to start the discussion of the significance of
culture and its relationship to the mass media would be to define our terms
carefully. This helps to avoid the confusion that often arises in discussions
of the matter. Culture is frequently defined as the integrated pattern of human
knowledge, belief, and behaviour. This is a definition broad enough to include
all major aspects of culture: language, ideas, beliefs, customs, taboos, codes,
institutions, tools, techniques, works of art, rituals, ceremonies and so on.
However, in what follows, I am not going to deal with all these aspects of
culture unless they are related to what I call the dominant social paradigm. By
this I mean the system of beliefs, ideas and the corresponding values which are
dominant in a particular society at a particular moment of its history. It is
clear that there is a significant degree of overlap between these two terms,
although the meaning of culture is obviously broader than that of the social
paradigm.
But let us first consider the elements shared by both terms. Both
culture and the social paradigm are time- and space-dependent, i.e. they refer
to a specific type of society at a specific time. Therefore, they both change
from place to place and from one historical period to another. This implies
that there can be no 'general theory' of History that could determine the
relationship between the cultural and the political or economic elements in
society. In other words, our starting point is the rejection not only of the
crude economistic versions of Marxism (the economic base determines the
cultural superstructure) but also of the more sophisticated versions of it (the
economic base determines 'in the last instance' which element is to be dominant
in each social formation). In my view, which I have expanded on elsewhere, the dominant
element in each social formation is not determined, for all time, by the economic
base, or any other base. The dominant element is always determined by a
creative act, i.e. it is the outcome of social praxis, of the activity of
social individuals. Thus, the dominant element in theocratic societies was
cultural, in the societies of 'actually existing socialism' political and so
on.
Similarly, the dominant element in market economies is economic, as
a result of the fact that the introduction of new systems of production during
the Industrial Revolution in a commercial society, where the means of
production were under private ownership and control, inevitably led to the
transformation of the socially-controlled economies of the past (in which the
market played a marginal role in the economic process) into the present market
economies (defined as the self-regulating systems in which the fundamental
economic problems -- what, how, and for whom to produce -- are solved
'automatically', through the price mechanism, rather than through conscious
social decisions). Still, the existence of a dominant element in a social
formation does not mean that the relationship between this element and the
other elements in it is one of heteronomy and dependence. Each element is
autonomous and the relationship between the various elements is better described
as one of interdependence. So, although it is the economic element which is the
dominant one in the system of the market economy, this does not mean that
culture is determined, even 'in the last instance' by this element.
But, there are also some important differences between culture and
the dominant social paradigm. Culture, exactly because of its greater scope,
may express values and ideas, which are not necessarily consistent with the
dominant institutions. In fact, this is usually the case with the arts and
literature of a market economy, where (unlike in 'actually existing
socialism', or in the feudal societies before it) artists and
writers have been given a significant degree of freedom to express their own
views. But this is not the case with respect to the dominant social paradigm.
In other words, the beliefs, ideas and the corresponding values which are
dominant in a market economy and the corresponding market society have to be
consistent with the economic element in it, i.e. with the economic institutions
which, in turn, determine that the dominant elites in this society are the
economic elites (those owning and controlling the means of production).
This has always been the case in History and will also be the case
in the future. No particular type of society can reproduce itself unless the
dominant beliefs, ideas and values are consistent with the existing
institutional framework. For instance, in the societies of 'actually existing
socialism' the dominant social paradigm had to be consistent with the dominant
element in them (which was the political) and the corresponding political
institutions, which determined that the dominant elites in these societies were
the political elites (party bureaucracy). Similarly, in the democratic society
of the future, the dominant social paradigm would have to be consistent with the
dominant element in it, which would be the political, and the corresponding
democratic institutions, which would ensure that there would be no formal
elites in this kind of society (although, of course, if democracy does not
function properly the emergence of informal elites could not be ruled out).
So, culture and, in particular, the dominant social paradigm play a
crucial role in the determination of individual and collective values. As long
as individuals live in a society, they are not just individuals but social
individuals, subject to a process which socialises them and induces them to
internalise the existing institutional framework as well as the dominant social
paradigm. In this sense, people are not completely free to create their world
but are conditioned by History, tradition and culture. Still, this
socialisation process is broken at almost all times -- as far as a minority of
the population is concerned -- and in exceptional historical circumstances even
with respect to the majority itself. In the latter case, a process is set in
motion that usually ends with a change of the institutional structure of
society and of the corresponding social paradigm. Societies therefore are not
just "collections of individuals" but consist of social individuals,
who are both free to create their world (in the sense that they can give birth
to a new set of institutions and a corresponding social paradigm) and are created
by the world (in the sense that they have to break with the dominant social
paradigm in order to recreate the world).
The Values of the Market Economy
As the dominant economic institutions in a market economy are those
of markets and private ownership of the means of production, as well as the
corresponding hierarchical structures, the dominant social paradigm promoted by
the mainstream mass media and other cultural institutions (e.g. universities)
has to consist of ideas, beliefs and values which are consistent with them.
Thus, the kind of social 'sciences' which are taught at universities and the
kind of articles which fill academic journals, explicitly, or usually
implicitly, take for granted the existing economic institutions. Therefore,
their search for 'truth' in the analysis of major economic or social problems
is crucially conditioned by this fundamental characteristic. The causes of
world-wide unemployment, for instance, or of massive inequality and
concentration of economic power, will not be related to the system of the
market economy itself; instead, the malfunctioning of the system or bad
policies will be blamed, which supposedly can be tackled by the appropriate
improvement of the system's functioning, or the 'right' economic policies.
In economics, in particular, the dominant theory/ideology since the
emergence of the market economy has been economic liberalism, in its various
versions: from the old classical and neo-classical schools up to the modern
versions of it in the form of supply-side economics, new classical
macro-economics etc. But, from Adam Smith to Milton Friedman, the values
adopted are the same: competition and individualism, which, supposedly, are the
only values that could secure freedom.
Thus, for Adam Smith, the individual pursuit of self-interest in a
market economy will guarantee social harmony and, therefore, the main task of
government is the defence of the rich against the poor. So, in Smith's system,
as Canterbery puts it, 'individual self-interest is the motivating force, and
the built-in regulator that keeps the economy from flying apart is
competition'. Similarly, for Milton Friedman, the Nobel-prize winner in
economics (note: the Nobel Prize in economics was never awarded to an economist
who challenged the very system of the market economy) the capitalist market
economy is identified with freedom:
The kind of economic organisation that provides freedom directly,
namely, competitive capitalism, also promotes political freedom because it
separates economic power from political power and in this way enables the one
to offset the other… The two ideas of human freedom and economic freedom working
together came to their greatest fruition in the United States.
It is obvious that in this ideology, which passes as the 'science'
of economics, the values of individualism and competition are preferred over
the values of collectivism and solidarity/co-operation, since freedom itself is
identified with the former values as against the latter. But it also 'happens'
that the same values are the only ones which could secure the production and
reproduction of the market economy. No market economy can function properly
unless those in control of it, (i.e., the economic elites), at least, and as
many of the rest as possible, are motivated by individualism and competition.
This is because the dynamic of a market economy crucially depends on
competition and individual greed. Furthermore, the fact that the economic
elites often resort to state protection against foreign competition, if the latter threatens
their own position, does not in the least negate the fact that competition is
the fundamental organising principle of the market economy. It is therefore no
historical accident that, as Polanyi has persuasively shown, the establishment
of the market economy implied sweeping aside traditional cultures and values
and replacing the values of solidarity, altruism, sharing and co-operation
(which usually marked community life) with the values of individualism and
competition as the dominant values. As Ray Canterbery stresses:
The capitalistic ethic leans toward the extreme of selfishness
(fierce individualism) rather than toward altruism. There is little room for
collective decision making in an ethic that argues that every individual should
go his or her own way. As we have seen, the idea that capitalism protects
'individual rights' would have been rejected during the early Middle Ages.
'Individual rights' were set in advance by the structure of feudalism, governed
by the pull of tradition and the push of authority. Economics was based upon
mutual needs and obligations.
A good example of the enthusiastic support for these values today
is, again, the Nobel-prize winner in economics Milton Friedman. According to
him:
Few trends could so thoroughly undermine the very foundations of our
free society as the acceptance by corporate officials of a social
responsibility other than to make as much money for their stockholders as
possible. This (social responsibility) is a fundamentally subversive doctrine.
Indeed, it is not Friedman who supports values which are
inconsistent with the market economy system but the various social democrats
and Green economists, who, taking for granted the market economy system,
proceed to argue in favour of utopian economic institutions incorporating
values which are inconsistent with this system (e.g. 'stakeholding'
capitalism, 'social investment', etc.).
As I attempted to show elsewhere, the basic cause of the failure of
both the 'actually existing socialism' in the East and social democracy in the
West was exactly that they attempted to merge two fundamentally incompatible
elements: the 'growth' element, (which implies the concentration of economic
power and expresses the logic of the market economy), with the social justice
element (which is inherently linked to equality and expresses socialist
ethics).
Chomsky and the Values of the Market
Economy
However, quite apart from social democrats and reformist Greens,
there is an alternative view about the values of the market economy, proposed by
Noam Chomsky, which ends up with similar conclusions about the
feasibility and desirability of state action with respect to controlling
today's market economy.
Thus, for Chomsky, the values which motivate today's elites in
advanced capitalist countries are not individualism and competition; instead,
these elites simply use such values as propaganda in their attempt to
'persuade' their own public and the countries in the periphery and
semi-periphery to implement them whereas they themselves demand and enjoy the
protection of their own states:
For the general public, individualism and competition are the
prescribed values. Not for elites, however. They demand and obtain the
protection of a powerful state, and insist on arrangements that safeguard them
from unfettered competition or the destructive consequences of individualism.
The process of corporatization is a standard illustration, as is the reliance
in every economy -- crucially, the US -- on socialisation of risk and cost. The
need to undermine the threat of competition constantly takes new forms: today,
one major form, beyond corporatization, is the development of a rich network of
"strategic alliances" among alleged competitors: IBM-Toshiba-Siemens,
for example, or throughout the automotive industry. This has reached such
extremes that prominent analysts of the business world now speak of a new form
of "alliance capitalism" that is replacing the managerial/corporate
capitalism that had largely displaced proprietary capitalism a century ago in
advanced sectors of the economy.
Chomsky has recently expanded on the same theme in a New Left Review
article in which it is made clear that his views above about the values of the
market economy are perfectly consistent with his views on the nature of today's
capitalism. In this article he first states that the word 'capitalist' does not
really mean capitalist but rather refers to state-subsidised and protected private
power centres, or 'collectivist legal entities', which embody today's
corporatization of the market economy. He then goes on to describe
corporatization and the role of the state as follows:
The corporatization process was largely a reaction to great market
failures of the late nineteenth century, and it was a shift from something you
might call proprietary capitalism to the administration of markets by
collectivist legal entities - mergers, cartels, corporate alliances - in
association with powerful states… the primary task of the states - and bear in mind
that, with all the talk about minimising the state, in the OECD countries the
state continues to grow relative to GNP, notably in the 1980s and 1990s - is
essentially to socialise risk and cost, and to privatise power and profit.
Furthermore, Chomsky's views about the market economy's values and
the nature of present capitalism are, in turn, entirely consistent with his
present views on the potential role of the state in controlling today's market
economy. Thus, as Chomsky stresses in the aforementioned article:
The long-term goal of such initiatives (like the Multilateral
Agreement on Investment-MAI) is clear enough to anyone with open eyes; an
international political economy which is organised by powerful states and
secret bureaucracies whose primary function is to serve the concentrations of
private power which administer markets through their own internal operations,
through networks of corporate alliances, including the intra-firm transactions
that are mislabelled 'trade'. They rely on the public for subsidy, research and
development, for innovation and for bailouts when things go wrong. They rely on
the powerful states for protection from dangerous 'democracy openings'. In such
ways, they seek to ensure that the 'prime beneficiaries' of the world's wealth
are the right people: the smug and prosperous 'Americans'; the 'domestic
constituencies' and their counterparts elsewhere. The scale of all of this is
nowhere near as great or, for that matter, as novel as claimed; in many ways
it's a return to the early twentieth century. And there's no reason to doubt
that it can be controlled even within existing formal institutions of
parliamentary democracy.
One could, however, object to this stand, as portrayed in the above
extracts, on several grounds. First, the argument about the values of the
economic elites, as I attempted to show above, is contestable; second, the
nature of today's market economy could be seen in a very different analytical
framework than the one suggested by Chomsky and, finally, it could be shown
that the way out of the present multi-dimensional crisis and the related huge
concentration of power cannot be found in fragmented and usually
'monothematic' defensive battles with the elites. Such battles, even if
sometimes victorious, are never going to win the war, as long as they are not
an integral part of a new popular movement's fight against the system of the
market economy itself, which is the ultimate cause of the concentration of
economic power.
As regards the nature of the market economy today, I have attempted
elsewhere to show how it evolved since it emerged, about two centuries ago, and
how it took the form of the present growth economy. I will only add here that
the shift from proprietary (or entrepreneurial) capitalism to the present internationalised
market economy, where a few giant corporations control the world economy, did
not happen, as Chomsky presents it, as the outcome of 'a reaction to great
market failures of the late nineteenth century.' What Chomsky omits is that it
was competition which led from simple entrepreneurial firms to the present
giant corporations.
The market failures he mentions are not a God-given calamity.
Excepting the case of monopolies, almost all market failures in history have
been directly or indirectly related to competition. It is competition which
creates the need for expansion, so that the best (from the point of view of
profits) technologies and methods of organising production (economies of scale
etc.) are used. It is the same competition which has led to the present
explosion of mergers and take-overs in the advanced capitalist countries, as
well as the various 'strategic alliances'. For instance, the recently announced
merger of giant oil companies, in a sense, is the result of a 'market failure'
because of the fall in their profits. But, in a deeper sense, this merger, as
well as the take-overs, strategic alliances etc going on at the moment, are
simply the result of self-protective action taken by giant corporations, in
order to survive the cut-throat competition launched by the present
internationalisation of the market economy. Therefore, it is competition which
has led to the present corporate (or 'alliance') capitalism, not 'market
failures' and/or the associated state activity, which just represent the
effects of competition.
Similarly, the present internationalisation of the market economy is
not just the result of state action to liberalise financial and commodity
markets. In fact, the states were following the de facto internationalisation
of the market economy, which was intensified by the activities of multinationals,
when, in the late seventies, under pressure from the latter, they started the
process of liberalising the financial markets and further deregulating the
commodity markets (through the GATT rounds). Therefore, the present internationalisation
is in fact the outcome of the grow-or-die dynamic which characterises the
market economy, a dynamic initiated by competition, the crucial fact
neglected by Chomsky.
It is also this same internationalisation of the market economy,
which had become incompatible with the degree of state control of the economy
achieved by the mid-seventies, that made necessary the present neoliberal
consensus. The latter, therefore, is not just a policy change, as
social democrats and their fellow travellers suggest, but represents an
important structural change. So, minimising the state is not just 'talk', as
Chomsky assumes, basing his argument on the claim that 'the state continues
to grow relative to GNP, notably in the 1980s and 1990s'. However, not only was the
fall in the growth rate of government spending in OECD countries higher
than that of the other parts of aggregate demand in the period 1980-93 but, in
fact, the (weighted average) general government consumption of high income
economies was lower in 1995, at 15% of GNP, than in 1980 (17%). All this
without taking into account the drastic reduction in the overall public sectors in the
last twenty years, as a result of the massive privatisation of state
industries. Therefore, minimising the state, far from being 'talk' is a basic
element of the present neoliberal consensus.
Also, strategic alliances, mergers and take-overs do not represent a
movement away from the market economy but a movement towards a new form of it:
away from a market economy geared by the internal market and towards
a market economy geared by the world market. This means further and
further concentration of economic power not only in terms of incomes and wealth
but also in terms of concentration of the power to control world output, trade
and investment in fewer and fewer hands. However, the oligopolisation of
competition does not mean lack of competition.
Furthermore, it would be wrong to assume that the main characteristic
of the present period is an 'assault against the markets', as the purist
neoliberal argument goes, which Chomsky accepts. The present period of
neoliberal consensus can be characterised instead, as an assault against social
controls on markets, particularly those I called social controls in the narrow
sense, i.e. those aiming at the protection of humans and nature against the
effects of marketization (the historical process that has transformed the
socially controlled economies of the past into the market economy of the
present). Such controls have been introduced as a result of social struggles
undertaken by those who are adversely affected by the market economy's effects
on them (social security legislation, welfare benefits, macro-economic controls
to secure full employment etc).
What is still debated within the economic elites is the fate of what
I call social controls in the broad sense, i.e. those primarily aiming at the
protection of those controlling the market economy against foreign competition
(tariffs, import controls and exchange controls in the past; non-tariff
barriers, massive public subsidy for R&D, risk protection (bailouts),
administration of markets etc. at present). Thus, pure neoliberal economists,
bankers, some politicians and others are against any kind of social controls
over markets (in the narrow or broad sense above). On the other hand, the more
pragmatic governments of the neoliberal consensus, from Reagan to Clinton and
from Thatcher to Blair, under pressure from the sections of their own economic
elites most vulnerable to competition, have kept many social
controls in the broad sense and sometimes even expanded them (not hesitating to
go to war to secure their energy supplies) giving rise to the pure neoliberal
argument (adopted by Chomsky) about an assault on markets.
In this context, one should not confuse liberalism/neoliberalism
with laissez-faire. As I tried to show elsewhere, it was the state itself that
created the system of self-regulating markets. Furthermore, some form of state
intervention has always been necessary for the smooth functioning of the market
economy system.
The state, since the collapse of the social democratic consensus, has
seen a drastic reduction in its economic role as it is no longer involved in a
process of directly intervening in the determination of income and employment
through fiscal and monetary policies.
However, even today, the state still plays an important role in
securing, through its monopoly of violence, the stability of the market economy
framework and in maintaining the infrastructure for the smooth functioning of
it. It is within this role of maintaining the infrastructure that we may see
the activities of the state in socialising risk and cost and in maintaining a
safety net in place of the old welfare state. Furthermore, the state is called
today to play a crucial role with respect to the supply-side of the economy
and, in particular, to take measures to improve competitiveness and to train
the workforce to meet the requirements of the new technology, in supporting
research and development and even in subsidising export industries wherever
required. Therefore, the type of state intervention which is compatible with
the marketization process not only is not discouraged but, instead, is actively
promoted by most of the professional politicians of the neoliberal consensus.
It is true that the economic elites do not like the kind of
competition which, as a result of the uneven development of the world market
economy, threatens their own interests, and this is why they have always
attempted to protect themselves against it (and have mostly succeeded). But, it is
equally true that it was the force of competition which has always fuelled the
expansion of the market economy and that it was the values of competition and
self-interest which have always characterised the value system of the elites
which control the market economy. Chomsky, however, sometimes gives the
impression that, barring some 'accidents' like the market failures he mentions,
as well as the aggressive state support that economic elites have always
enjoyed, the 'corporatization' of the market economy might have been avoided.
But, of course, neither proprietary capitalism (or any other type of
it) is desirable, since it cannot secure the covering of the basic needs of all
people, nor can we deny all the radical analysis of the past hundred and fifty
years or so, from Marx to Bookchin, and all historical experience since then,
which leads to one conclusion: the market economy is geared by a grow-or-die
dynamic fuelled by competition, which is bound to lead to further and further
concentration of economic power. Therefore, the problem is not the
corporatization of the market economy which, supposedly, represents 'an attack
on markets and democracy', and which was unavoidable anyway within the dynamic
of the market economy. In other words, the problem is not corporate market
economy/capitalism, as if some other kind of market economy/capitalism was
feasible or desirable, but the market economy/capitalism itself. Otherwise, one
may easily end up blaming the elites for violating the rules of the game,
rather than blaming the rotten game itself!
If the above analytical framework is valid, then obviously it is not
possible, within the existing institutional framework of parliamentary
democracy and the market economy, to check the process of increasing
concentration of economic power. This is a process that has been going on since the
emergence of the market economy system, some two centuries ago, and no social democratic
governments or grassroots movements were ever able to stop it, or even to
retard it, apart from brief periods of time. In fact, it is doubtful whether even
the grassroots 'victory' against the MAI proposals hailed by Chomsky would have
been achieved had there been no serious divisions among the economic elites
about it.
Furthermore, the 'victory' itself has already started showing signs
that it was hollow, as it is now clear that the MAI agreement was not, in fact,
set aside, but is simply being implemented 'by instalments', through the 'back
door' of the IMF at present, and possibly the World Trade Organisation in the
future. The basic reason why such battles are doomed is that they are not an
integral part of a comprehensive political program to replace the institutional
framework of the market economy itself and, as such, they can easily be
marginalised or lead to simple (easily reversible) reforms.
The inevitable conclusion is that only the struggle to build
a massive new movement that fights 'from without' for the creation of
a new institutional framework, and the development of the corresponding culture
and social paradigm, has any chance of leading to a new society
characterised by the equal distribution of power.
Cultural Homogenisation
As I mentioned above, the establishment of the market economy
implied sweeping aside traditional cultures and values. This process was
accelerated in the twentieth century with the spreading all over the world of
the market economy and its offspring the growth economy. As a result, today,
there is an intensive process of cultural homogenisation at work, which not only
rules out any directionality towards more complexity, but in effect is making
culture simpler, with cities becoming more and more alike, people all over the
world listening to the same music, watching the same soap operas on TV, buying
the same brands of consumer goods, etc.
The establishment of the neoliberal consensus in the last twenty
years or so, following the collapse of the social democratic consensus, has
further enhanced this process of cultural homogenisation. This is the
inevitable outcome of the liberalisation and de-regulation of markets and the
consequent intensification of commercialisation of culture.
As a result, traditional communities and their cultures are
disappearing all over the world and people are converted into consumers of a mass
culture produced in the advanced capitalist countries and particularly the USA.
In the film industry, for instance, even European countries with a strong
cultural background and developed economies are effectively forced to give up their
own film industries, unable to compete with the far more powerful US
industry. Thus, in the early 1990s, US films' share amounted to 73% of the
European market.
Also, indicative of the degree of concentration of cultural
power in the hands of a few US corporations is the fact that, in 1991, a handful of US distributors controlled 66% of total cinema box office and 70% of the total number of video rentals in Britain.
Thus, the recent emergence of a sort of "cultural"
nationalism in many parts of the world expresses a desperate attempt to preserve a
cultural identity in the face of market homogenisation. But, cultural
nationalism is devoid of any real meaning in an electronic environment, where
75 percent of the international communications flow is controlled by a small
number of multinationals. In other words, cultural imperialism today does not
need, as in the past, gunboat diplomacy to integrate and absorb diverse cultures.
The marketization of the communications flow has already established
the preconditions for the downgrading of cultural diversity into a kind of
superficial differentiation of a folkloric type. Furthermore, it is
indicative that today's `identity movements', like those in Western Europe
(from the Flemish to the Lombard and from the Scots to the Catalans) which
demand autonomy as the best way to preserve their cultural identity, in fact,
express their demand for individual and social autonomy in a distorted way.
The distortion arises from the fact that the marketization of
society has undermined the community values of reciprocity, solidarity and
co-operation in favour of the market values of competition and individualism.
As a result, the demand for cultural autonomy is not founded today on community
values which enhance co-operation with other cultural communities but, instead,
on market values which encourage tensions and conflicts with them. In this
connection, the current neoracist explosion in Europe is directly related to
the effective undermining of community values by neoliberalism, as well as to
the growing inequality and poverty following the rise of the neoliberal
consensus.
Finally, one should not underestimate the political implications of
the commercialisation and homogenisation of culture. The escapist role
traditionally played by Hollywood films has now acquired a universal dimension,
through the massive expansion of TV culture and its almost full monopolisation
by Hollywood subculture. Every single TV viewer in Nigeria, India, China or
Russia now dreams of the American way of life, as seen on TV serials (which,
being relatively inexpensive and glamorous, fill the TV programmes of most TV
channels all over the world) and thinks in terms of the competitive values
imbued by them. The collapse of existing socialism has perhaps more to do with
this cultural phenomenon, as anecdotal evidence indicates, than one could
imagine.
As various TV documentaries have shown, people in Eastern European
countries, in particular, thought of themselves as somehow 'abnormal'
compared with what western TV had established as the 'normal'. In fact, many of
the people participating in the demonstrations to bring down those regimes
frequently referred to this 'abnormality' as the main incentive for their
political action. In this problematique, one may criticise the kind of cultural
relativism supported by some on the Left, according to which almost all
cultural preferences could be declared as rational (on the basis of some sort
of rationality criteria), and therefore all cultural choices deserve respect,
if not admiration, given the constraints under which they are made. But,
obviously, the issue is not whether our cultural choices are rational or not.
Nor is the issue to assess 'objectively' our cultural preferences as right or
wrong. The real issue is how to make a choice of values which we think is
compatible with the kind of society we wish to live in and then make the
cultural choices which are compatible with these values.
This is because the transition to a future society based on
alternative values presupposes that the effort to create an alternative culture
should start now, in parallel with the effort to establish the new institutions
compatible with the new values. On the basis of the criterion of consistency
between our cultural choices and the values of a truly democratic society, one
could delineate a way beyond post-modern relativism and distinguish between
'preferable' and 'non-preferable' cultural choices. So, all those cultural
choices involving films, videos, theatrical plays etc, which promote the values
of the market economy and particularly competition for money, individualism,
consumerist greed, as well as violence, racism, sexism etc should be shown to
be non-preferable and people should be encouraged to avoid them. On the other
hand, all those cultural choices, which involve the promotion of the community
values of mutual aid, solidarity, sharing and equality for all (irrespective of
race, sex, ethnicity) should be promoted as preferable.
The Role of Mass Media Today
A basic issue in the discussion of the role of the mass media in
today's society is whether they do reflect social reality in a broad sense, or
whether, instead, the elites which control them filter reality and make public
only the view of it which they see fit. To my mind, the answer to this question
is that the media do both, depending on the way we define reality.
To take, first, political reality, mass media, in one sense, do not
provide a faked view of it. Taking into account what is considered as politics
today, i.e. the activity of professional politicians 'representing' the people,
one may argue that it is politics itself which is faked, and the mass media simply
reproduce this reality. In this sense, the issue is not whether the mass media
manipulate democracy, since it is democracy itself which is faked, not its
mass media picture, which simply reflects the reality of present 'democracy'.
But, at the same time, if we give a different definition to political reality,
mass media do provide, in general, a distorted picture of it. In other words,
if we define as real politics the political activity of people themselves (for
instance, the collective struggles of various sectors of the population around
political, economic or social issues) rather than that of professional
politicians, then, the mass media do distort the picture they present about
political reality. They do so, by minimising the significance of this type of
activity, by distorting its meaning, by marginalising it, or by simply ignoring
it completely.
Furthermore, mass media do provide a distorted picture of political
reality when they come to report the causes of crises, or of the conflicts
involving various sections of the elites. In such cases they faithfully reflect
the picture that the sections of the elites controlling them wish to reproduce.
The latest example of this was the way in which the Anglo-American media, in
particular, distorted the real meaning of the criminal bombardment of the Iraqi
people at the end of 1998. Thus, exactly as in their reporting during the war
in the Gulf, the real cause of the conflict, (i.e. who controls the world's
oil, irrespective of where the oil stocks are located -- the elites of the
North versus those in the South), was distorted as a conflict between the peace
loving regimes in the North versus the rogue regimes in the South, or, in more
sophisticated versions supported by socialdemocrat intellectuals, as a conflict
between the 'democracies' in the North versus the 'despotic regimes' in the
South over the control of oil.
Under these circumstances, it is obvious that the mass media usually
offer a true glimpse of reality only when the elites are divided with respect
to their conception of a particular aspect of political reality. From this
point of view, concentration in the mass media industry is significant and
whether the media are owned by 100 or 10 owners does indeed matter in the
struggle for social change. It is for instance such divisions among the
European elites over the issue of joining the European Monetary Union which
have allowed a relatively wide media discussion on the true meaning of European
integration, particularly in countries like Britain where the elites were
split. It was also similar divisions between the Anglo-American and the
European elites over the latest war crime in the Gulf which made a bit clearer
the directly criminal role of the former (support for the bombardments), as well
as the indirectly criminal role of the latter (support for the embargo). It is
not accidental that in the USA and UK, where the media played a particularly
despicable role in distorting the truth and misinforming the public, the polls
showed consistently vast majorities in favour of the criminal activities of
their elites. Of course, this does not mean that decentralisation of power in
the mass media industry (or anywhere else) represents by itself, even
potentially, a radical social change leading to an authentic democracy. Still,
the significance of decentralisation in the media industry with respect to
raising consciousness should not be ignored.
As regards economic reality, mass media, in one sense again, do
provide a relatively accurate picture of what counts as economic reality today.
This is when the media, taking for granted the system of the market economy,
end up with a partial picture of economic reality where what matters is not
whether the basic needs of the population are covered adequately but whether
prices (in commodity and stock markets), interest rates, exchange rates and
consequently profit rates are going up or down. Still, in another sense, the
very fact that mass media take for granted the system of the market economy
means that they cannot 'see' the 'systemic' nature of most of the real economic
problems (unemployment, poverty and so on) and therefore inevitably end up with
a faked image of economic reality. This way of seeing economic reality is not
imposed on the media by their owners, important as their influence may
otherwise be, or by their internal hierarchical structure etc. The media simply
reflect the views of orthodox economists, bankers, businessmen and professional
politicians, i.e. of all those who express the dominant social paradigm.
But if the picture of political and economic reality offered by the
media is mixed this is not the case with respect to ecological reality. As no
meaningful reporting of the ecological crisis is possible unless it refers to
the systemic causes of it, which by definition are excluded by the discourse in
the mainstream media, the result is complete misinformation, or just a
straightforward description of the symptoms of the crisis. The mass media are
flooded by the 'realist' Greens who fill the various ecological parties and who
blame technology, consumerist values, legislation etc-- anything but the real
cause of the crisis, i.e. the very system of the market economy. Similarly, the
reporting of the present social crisis never links the explosion of crime and
drug abuse, for instance, with their root cause, i.e. the increasing
concentration of political, economic and social power in the hands of various
elites. Instead, the symptoms of the social crisis are distortedly reported as
its causes, and the media, following the advice of the establishment
'experts', blame the breakdown of the traditional family, or of the school, as the
causes of crime. Similarly, various 'progressive' intellectuals (like the
lamentable ex-'revolutionary', now well promoted by the mainstream media, the
Euro-parliamentarian Con Bendit) blame the prohibitive legislation on drugs for
the massive explosion of drug abuse!
However, there is another approach being promoted recently by system
theorists, according to which mass media do not just either reflect or distort
reality but also manufacture it. This is not said in the usual sense of
manufacturing consent described by Chomsky and Herman or, alternatively, by
Bourdieu, which is basically a one-way process whereby the elites controlling
the mass media filter out the information, through various control mechanisms,
in order to create consent around their agenda. Instead, system theorists talk
about a two-way process whereby social reality and mass media are seen as two
interdependent levels, the one intruding into the other. This is based on the
valid hypothesis that reality is not just something external to the way it is
conceived. TV watching is a constituent moment of reality since our information
about reality consists of conceptions that constitute reality itself. At the
same time, the conception of reality is conditioned by the media functioning,
which is differentiated in relation to the other social systems (political,
economic etc).
In the systems analysis problematique, it is not the economic, or
the political systems, which control the media functioning. What determines
their functioning, as well as their communicative capability, is their ability
to generate irritation, a fact that could go a long way towards explaining the high
ratings of exciting or irritating TV programs. The diversified functioning of
mass media creates, in turn, the conditions for a social dynamic which, in a
self-reflective and communicative way, reproduces, as well as institutes,
society. Thus, whereas early modern society was instituted through a
transcendental subjectivity and a material mode of production, the present
post-modern society's reproduction depends on the processes of communicative
rationality. The mass media are an integral and functional part of the
communicative processes of post-modern society.
However, one may point out here that although it is true that social reality and the mass media interact (our conception of TV news is a constituent element of reality and, at the same time, our conception of reality is conditioned by TV functioning), this does not imply that the diversified functioning of the mass media creates the conditions for a social dynamic which institutes society, although it does play this role as far as society's reproduction is concerned. The meaning we assign to TV reporting is not
determined exogenously but by our world view, our own paradigm, which in turn,
as we have seen above, is the result of a process of socialisation that is
conditioned by the dominant social paradigm. Furthermore, TV functioning plays
a crucial role in the reproduction of the dominant social paradigm and the
socialisation process generally. So, the diversified functioning of TV does
indeed create the conditions for a social dynamic leading to the reproduction
of the status quo, but in no way could be considered as doing the same for
instituting society.
Goals and Control Mechanisms
The goals of the mass media are determined by those owning and
controlling them, who, usually, are members of the economic elites that control
the market economy itself. Given the crucial role that the media could play in
the internalisation of the dominant social paradigm and therefore the
reproduction of the institutional framework which secures the concentration of
power in the hands of the elites, it is obvious that those owning and
controlling the mass media have broader ideological goals than the usual goals
pursued by those owning and controlling other economic institutions, i.e.
profit maximising. Therefore, an analysis that attempts to draw conclusions about the nature and significance of media institutions on the basis of the profit dimension alone (i.e. that they share a common goal, and consequently a similar internal hierarchical structure, with all other economic institutions, and that they simply sell a product, the only difference being that the product is the audience) is bound to be one-dimensional. Profit maximising is only one parameter, often not even the
crucial one, which conditions the role of mass media in a market economy. In
fact, one could mention several instances where capitalist owners chose even to
incur significant losses (which they usually cover from other profitable
activities) in order to maintain the social influence (and prestige), which
ownership of an influential daily offers to them (Murdoch and The Times of
London is an obvious recent example).
Given the ultimate ideological goal of mass media, the main ways in
which they try to achieve it are:
• first, by assisting in the internalisation of the dominant social paradigm; and
• second, by marginalising, if not excluding altogether, conceptions of reality which do not conform with the dominant social paradigm.
But, what are the mechanisms through which the media can achieve
their goals? To give an answer to this question we have to examine a series of
mechanisms, most of them 'automatic' built-in mechanisms, which ensure
effective achievement of these goals. It will be useful here to distinguish
between 'internal' and 'external' control mechanisms, which function
respectively as internal and external constraints on the freedom of media
workers to reproduce reality. Both internal and external mechanisms work mainly
through competition which secures homogenisation with respect to the media's
main goals. Competition is of course the fundamental organisational principle
of a market economy; but, it plays a special role with respect to the media. As
Bourdieu points out, competition 'rather than automatically generating
originality and diversity tends to favour uniformity'. Still, competition is
not the only force securing homogenisation. In a similar way as with the market
economy itself, competition provides only the dynamic mechanism of
homogenisation. It is the fact that owners of mass media, as well as managers
and the highest paid journalists, share the same interest in the reproduction
of the existing institutional framework which constitutes the 'base', on which
this competition is developed.
But, let us consider briefly the significance of the various control
mechanisms. The main 'internal control' mechanisms are ownership and the
internal hierarchical structure, which are, both, crucial in the creation of
the conditions for internal competition among journalists, whereas the
'ratings' mechanism plays a similar role in the creation of the conditions for
external competition among media.
Starting with ownership, it matters little, as regards the media's
overall goals defined above, whether they are owned and controlled by the state
and/or the state-controlled institutions or whether, instead, they are owned
and controlled by private capital. However, there are certain secondary
differences arising from the different ownership structures which may be
mentioned. These secondary differences have significant implications,
particularly with respect to the structure of the elites controlling the media,
their own organisational structure and their 'image' with respect to their
supposedly 'objective' role in the presentation of information. As regards the
elite structure, whereas under a system of state ownership and control the mass
media are under the direct control of the political elite and the indirect
control of the economic elites, under a system of private ownership and
control, the media are just under the direct control of the economic elites.
This fact, in turn, has implications for whether the filtering out of information takes place directly through state control or indirectly through various economic mechanisms (e.g. ratings). As regards the media
organisational structure, whereas state-owned media are characterised by
bureaucratic rigidity and inefficiency, privately owned media are usually
characterised by more flexibility and economic efficiency. Finally, the
'objective' image of the mass media suffers less in the case of private ownership than in the case of state ownership. This is because in the latter case the control of information is more direct and therefore more obvious than in the former.
Another important internal control mechanism is the hierarchical
structure which characterises all media institutions (as it does all economic
institutions in a market economy) and which implies that all important decisions are taken by a small managerial group within them, whose members are usually
directly responsible to the owners. The hierarchical structure creates a
constant internal competition among journalists as to who will be more
agreeable to the managerial group (on which their career and salary prospects
depend). Similarly, people in the managerial group are in constant competition
as to who will be more agreeable to the owners (on which their highly paid
position depends). So, everybody in this hierarchical structure knows well (or
soon learns) what is agreeable and what is not and acts accordingly. Therefore,
the filtering of information works through self-censorship rather than through
any kind of 'orders from above'. The effect of the internal hierarchical
structure is to impose, through the internal competition that it creates, a
kind of homogenisation in the journalists' performance. But, does this exclude the possibility that some media workers may have incentives other than those determined by career ambitions? Of course not. But, such people, as Chomsky
points out, will never find a place in the corridors of media power and, one
way or another, will be marginalised:
They (journalists) say, quite correctly, "nobody ever tells me
what to write. I write anything I like. All this business about pressures and
constraints is nonsense because I'm never under any pressure." Which is
completely true, but the point is that they wouldn't be there unless they had
already demonstrated that nobody has to tell them what to write because they
are going to say the right thing… it is not purposeful censorship. It is just
that you don't make it to those positions. That includes the left (what is
called the left), as well as the right. Unless you have been adequately
socialised and trained so that there are some thoughts you just don't have,
because if you did have them, you wouldn't be there.
But, how is it determined what is agreeable? This is where the 'external' control mechanisms come into play. It is competition among the various media organisations that homogenises journalists' behaviour. This
competition takes the form of a struggle to improve ratings (as regards TV
channels) or circulation (as regards newspapers, magazines etc). Ratings or
circulation are important not per se but because the advertising income of
privately owned mass media (which is the extra income determining their
survival or death) depends on them. The result is, as Pierre Bourdieu points out, that:
Ratings have become the journalist's Last Judgement… Wherever you
look, people are thinking in terms of market success. Only thirty years ago,
and since the middle of the nineteenth century---since Baudelaire and Flaubert
and others in avant-garde milieux of writers' writers, writers acknowledged by
other writers or even artists acknowledged by other artists---immediate market
success was suspect. It was taken as a sign of compromise with the times, with
money... Today, on the contrary, the market is accepted more and more as a
legitimate means of legitimation.
The pressures created by the ratings mechanism, as Bourdieu points
out, have nothing to do with the democratic expression of enlightened collective
opinion or public rationality, despite what media ideologues assert. In fact,
as the same author points out, the ratings mechanism is the sanction of the
market and the economy, that is, of an external and purely market law. I would
only add to this that given how 'public opinion' is formed within the process
of socialisation and internalisation of the dominant social paradigm, it is
indeed preposterous to characterise the ratings mechanism as somehow expressing
the democratic will of the people. Ratings, as well as polls generally, are the 'democracy of the uninformed'. They simply reflect the ignorance, the
half-truths, or the straightforward distortions of the truth which have been
assimilated by an uninformed public and which, through the ratings mechanism,
reinforce the role of the mass media in the reproduction of the dominant social
paradigm.
One may therefore conclude that the role of the media today is not
to make the system more democratic. In fact, one basic function of the media
is, as Chomsky stresses, to help in keeping the general population out of the
public arena because 'if they get involved they will just make trouble. Their job is to be "spectators," not "participants"'. Furthermore,
the media can play a crucial role in offsetting the democratic rights and
freedoms won after long struggles. This has almost always been the case when
there was a clash between the elites and trade unions, or popular movements
generally. Walter Lippmann, the revered American journalist, was explicit about this,
as Chomsky points out.
For Lippmann, there is a new art in the method of democracy, called
"manufacture of consent." By manufacturing consent, you can overcome
the fact that formally a lot of people have the right to vote. We can make it
irrelevant because we can manufacture consent and make sure that their choices
and attitudes will be structured in such a way that they will always do what we
tell them, even if they have a formal way to participate. So we'll have a real
democracy. It will work properly. That's applying the lessons of the propaganda
agency.
Within this analytical framework we may explore fruitfully the
particular ways through which the filtering of information is achieved, as, for
instance, is described by Chomsky and Herman in their 'propaganda model'.
Similarly Bourdieu shows in a graphic way how the filtering of information
takes place in television, through the structuring of TV debates, the time
limits, the methods of hiding by showing etc. Particularly important is the way
in which the media, particularly television, control not just the information
flow, but also the production of culture, by controlling the access of
academics as well as of cultural producers, who in turn, as a result of being recognised as public figures, gain recognition in their own fields.
Thus, in the end, the journalistic field, which is structurally very
strongly subordinated to market pressures and as such is a very heteronomous
field, applies pressure, in turn, to all other fields.
An illustrative application of the above analytical framework is the crucial contribution of the mass media to the creation of the subjective conditions for the neoliberal consensus. Thus, the mass media have played a
double ideological role with respect to the neoliberal consensus. On the one
hand, they have promoted directly the neoliberal agenda:
- by degrading the economic role of the state,
- by attacking the 'dependence' on the state which the welfare state supposedly creates,
- by identifying freedom with the freedom of choice, which is supposedly achieved through the liberation of markets etc. (talk radio and similar TV shows play a particularly significant role in this respect).
- by promoting irrational beliefs of all sorts (religion, mystical beliefs, astrology etc). The film and video explosion on the themes of exorcism, supernatural powers etc (induced mainly by Hollywood) has played a significant role in diverting attention from the evils of neoliberalism.
- by manufacturing irrelevant and/or insignificant 'news stories' (e.g. Monica Lewinsky affair), which are then taken over by opposition politicians who are eager to find fictitious ways (because of the lack of real political differences within the neoliberal consensus) to differentiate themselves from those in power.
- by creating a pseudo 'general interest' (for instance around a nationalist or chauvinist cause) in order to unite the population around a 'cause' and make it forget the deeply divisive aspects of neoliberalism.
At the same time, the creation of the neoliberal conditions at the institutional level generated the objective conditions for the mass media to play the aforementioned role. This was because the deregulation and liberalisation of markets and the privatisation of state TV in many European countries created the conditions for homogenisation through the internal and external competition I mentioned above. It is not accidental, in any case,
that major media tycoons like Murdoch in the Anglo-Saxon world, Kirch in
Germany, or Berlusconi in Italy have also been among the main exponents of the
neoliberal consensus agenda.
5
Media and Culture in a Democratic Society
Culture and a Democratic Conception of Citizenship
I am not going to repeat here the discussion of the fundamental components of an inclusive democracy and the necessary conditions that have to be met for setting it up. Instead, I will try to focus on the implications of the democratic institutional arrangements for culture and the role of the media.
The starting point is that the conditions for democracy imply a new
conception of citizenship: economic, political, social and cultural.
Thus, political citizenship involves new political structures and the
return to the classical conception of politics (direct democracy). Economic
citizenship involves new economic structures of community ownership and control
of economic resources (economic democracy). Social citizenship involves
self-management structures at the workplace, democracy in the household and new
welfare structures where all basic needs (to be democratically determined) are
covered by community resources, whether they are to be satisfied in the
household or at the community level. Finally, cultural citizenship involves new
democratic structures of dissemination and control of information and culture
(mass media, art, etc.), which allow every member of the community to take part
in the process and at the same time develop his/her intellectual and cultural
potential.
It is obvious that the above new conception of citizenship has very
little in common with the liberal and socialist definitions of citizenship,
which are linked to the liberal and socialist conceptions of human rights
respectively. Thus, for the liberals, the citizen is simply the individual
bearer of certain freedoms and political rights recognised by law which,
supposedly, secure equal distribution of political power. Similarly, for the
socialists, the citizen is the bearer not only of political rights and freedoms
but also of some social and economic rights, whereas for Marxists citizenship is realised with the collective ownership of the means of
production.
Furthermore, the conception of citizenship adopted here is not related
to the current social-democratic discourse on the subject, which, in effect,
focuses on the institutional conditions for the creation of an
internationalised market economy 'with a human face'. The proposal for instance
for a redefinition of citizenship within the framework of a "stakeholder
capitalism" belongs to this category. This proposal involves an 'active'
citizenship, where citizens have 'stakes' in companies, the market economy and
society in general and managers have to take into account these stakes in the
running of the businesses and social institutions they are in charge of.
The conception of citizenship adopted here, which could be called a
democratic conception, is based on our definition of inclusive democracy and
presupposes a 'participatory' conception of active citizenship, like the one
implied by the work of Hannah Arendt. In this conception, "political
activity is not a means to an end, but an end in itself; one does not engage in
political action simply to promote one's welfare but to realise the principles
intrinsic to political life, such as freedom, equality, justice, solidarity,
courage and excellence". It is therefore obvious that this conception of
citizenship is qualitatively different from the liberal and social-democratic
conceptions, which adopt an 'instrumentalist' view of citizenship, i.e. a view which implies that citizenship endows citizens with certain rights that they can exercise as means to the end of individual welfare.
Although the above conception of citizenship implies a geographical
sense of community which is the fundamental unit of political, economic and
social life, still, it is assumed that it interlocks with various other
communities (cultural, professional, ideological, etc.). Therefore, the
community and citizenship arrangements do not rule out cultural differences
based on language, customs etc, or other differences based on gender, age,
ethnicity and so on; they simply provide the public space where such
differences can be expressed. Furthermore, these arrangements institutionalise
various safety valves that aim to rule out the marginalisation of such
differences by the majority. What therefore unites people in a political
community, or a confederation of communities, is not a set of common cultural values,
imposed by a nationalist ideology, a religious dogma, a mystical belief, or an
'objective' interpretation of natural or social 'evolution', but the democratic
institutions and practices, which have been set up by citizens themselves.
However, as we attempted to show elsewhere this cultural pluralism
does not mean a kind of cultural relativism where 'anything goes'. In other words, it is possible to derive an ethical system, and correspondingly a set of cultural values, which is neither 'objective' (in the sense that it is derived from a supposedly objective interpretation of social evolution, as in Marx, or of natural evolution, as in Bookchin) nor just a matter of individual choice. There can be a
set of common or general moral criteria by which individual actions could be
judged, i.e. a code of democratic ethics, which would be based on the
fundamental principle of organising a democratic society around a confederal inclusive Democracy (i.e. a democracy based on a confederation of demoi, or democratic communities).
A Code of Democratic Ethics
This code of democratic ethics may be derived out of the two
fundamental principles of organisation of a confederal inclusive Democracy,
i.e. the principle of autonomy and the principle of community. Thus, out of the
fundamental principle of autonomy one may derive a set of cultural values about
equality and respect for the personality of every citizen, irrespective of
gender, race, ethnic identity etc. Out of the same fundamental principle one
could derive the principle of protecting the quality of life of each individual
citizen -- something that would imply a relationship of harmony with nature.
Similarly, out of the fundamental principle of community life one may derive a
set of values involving solidarity and mutual aid, caring and sharing. These
values should constitute an integral part of the dominant social paradigm so
that democracy can reproduce itself. This of course does not exclude the
possibility, or rather the probability, of the existence of alternative
cultural values, or perhaps even of a conflict between personal and collective
values ---particularly with respect to those citizens who cannot reconcile
themselves with the tragic truth that it is we who determine our own truth and
might still adhere to moral codes derived from irrational belief systems
(religions, mystical beliefs etc). However, as long as these people are in a minority (hopefully, a dwindling one, thanks to the Paedeia of a democratic society), the conflict between their personal values and the collectively defined values should not be a problem for the community as a whole.
Democratic Media
Whether democracy will degenerate into some kind of "demago-cracy", where the demos is manipulated by a new breed of 'professional' politicians, depends crucially on the citizens' level of democratic consciousness. This, in turn,
is conditioned by Paedeia. It is therefore obvious that the cultural
institutions, particularly the media, play a crucial role in a democracy, given
their role in the formation of Paedeia.
So, let us now consider the nature and role of the mass media, as
cultural institutions, in a democratic society. First, there is no reason why the mass media in a democratic society would distort rather than reflect reality. As political and economic power would be equally distributed among citizens, and the existence of institutionalised elites would therefore be excluded, the media would face none of the present dilemmas of whether to reflect the reality of the elite, or particular sections of it, rather than the reality of the rest of the population. Still, even in an inclusive democracy there is the problem of the possible emergence of informal elites, which may
attempt to exercise some sort of control over the information flows. It is also
clear that no democracy is possible unless its citizens are fully informed on
anything affecting their life. Therefore, a way has to be found to organise the
decision-taking process in the media so that, on the one hand, citizens are
always fully informed and, on the other, the media are under the real control
of the community.
It is obvious that it is citizens as citizens, through their
assemblies, who should determine the overall operation of mass media and
supervise them. This function could not simply be assigned to the councils of media workers because in that case the democratic society would run the double risk of the media not expressing the general interest and of new media elites emerging within at least some of them. This does not,
obviously, mean that the assemblies would determine every day what the content of TV news bulletins will be, or what the papers would say the next day. What it
does mean is that the community assemblies would set strict rules on how full
diversity and accountability could be achieved and then supervise the
application of these principles in media practice.
Diversity implies that all sorts of views should be given full access to the media, provided that they have been approved by the community and media workers' assemblies. Assuming that these assemblies have
internalised the dominant democratic social paradigm, one could expect that
they would not give easy access to views which contradict the democratic values
(e.g. views promoting racist, sexist or religious values etc.). However, the
decision will always rest with the assemblies and if they see no contradiction
involved in giving full access to such views this will simply herald the
degradation and eventual collapse of the democratic society itself.
Accountability implies that the media workers would be accountable
for their decisions to the media workers' assemblies in the first instance and,
next, to the community assemblies. Such a structure of accountability would be compatible with the lack of hierarchical structures in the media and with the fact that it would be the communities themselves that 'own' the media institutions.
So, this dual system of decision-taking, whereby overall decision-taking and supervision rest with the community assemblies while the detailed operational functioning of the media is left to the media workers' assemblies, guarantees, to my mind, not only that the general interest is adequately taken into account but also that day-to-day decisions are taken democratically by the media workers themselves.
Ways to Bring about Systemic Social Change
As I tried to show above, the culture of a democratic society will be characterised by very different values from those of a market economy. The
values of heteronomy, competition, individualism and consumerism which are
dominant today have to be replaced in a democratic society by the values of
individual and collective autonomy, co-operation, mutual aid, solidarity and
sharing. Furthermore, as far as the mass media are concerned, the role, organisation and nature of the media in a democratic society will also differ drastically from their corresponding role, nature and organisation today. The media's role will not be to reflect reality as seen, basically, from the elites' point of view, but as seen from the people's viewpoint; their organisation
will not be based on hierarchical structures, but, on democratic structures;
finally, the media will cease to be profit-making enterprises owned and
controlled by elites and will become, instead, democratically owned and
controlled institutions of communicating information.
The obvious issue, which arises here, is how we move from 'here' to
'there'. This basic question involves a series of other issues concerning
social change, which have been discussed extensively, particularly during the
century which is now expiring. Can there be a drastic change of values, like
the one discussed above, without a parallel change of institutions? Do we need
a systemic change to bring about the required change in values and
institutions? Should the social struggle have systemic change as an explicit aim, as part of a comprehensive political program? To attempt to give
an answer to all these questions we will have to discuss briefly the main
approaches to social change.
But, first, we have to be clear about the meaning of social change.
As is obvious from the above analysis, social change here means systemic
change, i.e. a change in the entire socio-economic system of the market
economy, representative democracy and hierarchical structures. As I attempted
to show elsewhere the fundamental cause of the multi-dimensional crisis we face
today (economic, ecological, social, political) is the concentration of power
in the hands of various elites (economic, political etc) and therefore the only
way out of this crisis is the abolition of power structures and relations, i.e.
the creation of conditions of equal distribution of power among citizens. One
way that could bring about this sort of society is the Inclusive Democracy
proposal which involves the creation of political, economic and social
structures that secure direct democracy, economic democracy, ecological
democracy and democracy in the social realm. It also involves the creation of a
new social paradigm (based on the values I mentioned above) which, for the
reproduction of inclusive democracy to be secured, it has to become dominant.
So, assuming that the aim is to bring about systemic social change
involving the creation of conditions for the equal distribution of power among
citizens, there are, schematically, four main approaches which claim that they
may bring about this result: reforms (from 'above' or from 'below'), revolution
(from 'above' or from 'below'), 'life-style strategies' and the Inclusive
Democracy approach.
The Reformist Approach
The reformist approach claims that it can bring about systemic change either through the conquest of state power (reforms 'from above') or through the creation of power bases autonomous from the state which would press the state for reforms (reforms 'from below'). The main example of the former
strategy is the social democratic approach, whereas the main example of the
latter is the civil societarian approach.
The social democratic approach reached its peak during the period of
statism and particularly in the first thirty years after WWII, when the social
democratic consensus was dominant all over the Western world. However, the
internationalisation of the market economy since the mid '70s brought about the
end of this consensus and the rise of the neoliberal consensus, which, in my view, is irreversible as long as the market economy is internationalised, in other words, as long as the market economy reproduces itself. The recent
deletion from the program of the British Labour Party (the last social democratic party still committed to full socialisation of the means of production) of 'clause four', which enshrined that commitment, marked the formal end of social democratic claims to real systemic change.
In fact, the neoliberal agenda for 'flexible' labour markets,
minimisation of social controls on markets, replacement of the welfare state by
a safety net etc has now become the agenda of every major social democratic
party in power or in opposition. The parallel degradation of social democracy
and the reversal of most of its conquests (comprehensive welfare state, state
commitment to full employment, significant improvement in the distribution of
income) have clearly shown that supporters of the revolutionary approach were always right about the impossibility of bringing about systemic change through reforms.
As regards the civil societarian approach, the strategy here is to
enhance 'civil society', that is, to strengthen the various networks which are
autonomous from state control (unions, churches, civic movements,
co-operatives, neighbourhoods, schools of thought etc.) in order to impose such
limits (i.e. social controls) on markets and the state, so that a kind of
systemic change is brought about. However, this approach is based on a number
of unrealistic assumptions.
Thus, first, it implicitly assumes a high degree of statism where
the state can still play the economic role it used to play during the social
democratic consensus. Second, it assumes, in effect, an almost closed market
economy where the state can ignore the instant movement of capital should a government attempt to meet demands of civil societarians that threaten capital's interests. No wonder civil societarians usually deny (or try to minimise)
the importance of the present internationalisation of the market economy. It is
also indicative that, when civil societarians attempt to internationalise their
approach, the only limits on the internationalised market economy that they view
as feasible are various 'regulatory controls'. But, such controls have very
little in common with the sweeping social controls that they have in mind when
they discuss, abstracting from the present internationalised market economy,
the limits that civil society networks should impose on markets (drastic
reduction of inequalities, massive creation of jobs etc).
So, the civil societarian approach is both a-historical and utopian.
It is a-historical, since it ignores the structural changes which have led to
the present neoliberal consensus and the internationalised market economy. And
it is utopian, because it is in tension with both the present internationalised
market economy and the state. So, given that civil societarians do not see the
outcome of this inevitable tension in terms of the replacement of the market
economy and the state by the civil society, it is not difficult to predict that
any enhancement of the civil society will have to be compatible with the
process of further internationalisation of the market economy and the implied
role of the state. In other words, the 'enhancement' of civil society, under
today's conditions, would simply mean that the ruling political and economic
elites will be left undisturbed to continue dominating society, while, from
time to time, they will have to try to address the demands of the civil
societarians-- provided, of course, that these demands are not in direct
conflict with their own interests and the demands of oligopolistic production.
In conclusion, enhancing the civil society institutions has no
chance whatsoever of either putting an end to the concentration of power, or of
transcending the present multidimensional crisis. This conclusion may be
derived from the fact that the implicit, although not always explicit, aim of
civil societarians is to improve the functioning of existing institutions
(state, parties, market), in order to make them more responsive to pressures
from below when, in fact, the crisis is founded on the institutions themselves
and not on their malfunctioning! But, in the present internationalised market,
the need to minimise the socio-economic role of the state is no longer a matter
of choice for those controlling production.
It is a necessary condition for survival. This is particularly so
for European capital, which has to compete with capital blocs operating from
bases where the social democratic tradition of statism was never strong
(the United States, the Far East). But, even at the planetary level, one could
seriously doubt whether it is still possible to enhance the institutions of
civil society within the context of the market economy. Granted that the
fundamental aims of production in a market economy are individual gain,
economic efficiency and growth, any attempt to reconcile these aims with an
effective `social control' by the civil society is bound to fail since, as
historic experience with the statist phase has shown, social control and market
efficiency are irreconcilable objectives. By the same token, one could
reasonably argue that the central contradiction of the market economy today is
the one arising from the fact that any effective control of the ecological
implications of growth is incompatible with the requirements of
competitiveness, which the present phase of the marketization process imposes.
The Life-style Approach
The second type of approach which claims to be capable of bringing
about systemic social change is the life-style strategy, presently fashionable
particularly among Anglo-Saxon anarchists. There are several versions of this
strategy. Sometimes this approach involves no intervention at all in the
political arena, and usually not even in the general social arena, other than
in struggles on specific 'Green' issues, like animal rights campaigns.
Alternatively, this approach may involve a process which, starting from the
individual, and working through affinity groups, aims at setting an example of
sound and preferable life-styles at the individual and social level:
alternative media, Community Economic Development projects, 'free zones' and
alternative institutions (free schools, self-managed factories, housing associations,
Local Employment and Trading Systems (LETS), communes, self-managed farms and
so on).
However, this approach, in any of the above versions, is, by itself,
utterly ineffective in bringing about systemic social change. Although
helpful in creating an alternative culture among small sections of the
population and, at the same time, morale-boosting for activists who wish to see
an immediate change in their lives, this approach does not have any chance of
success--in the context of today's huge concentration of power--in building the
democratic majority needed for systemic social change. The projects suggested
by this strategy may too easily be marginalised, or absorbed into the existing
power structure (as has happened many times in the past) whereas their effect
on the socialisation process is minimal--if not nil.
Furthermore, life-style strategies, by usually concentrating on
single issues which are not part of a comprehensive political program for
social transformation, provide a golden opportunity to the ruling elites to use
their traditional divide and rule tactics (the British elites, for instance,
frequently use security guards recruited from the underclass to fight Green
activists, rather than 'exposing' the police in this role!)
Moreover, systemic social change can never be achieved outside
the main political and social arena. The elimination of the present power
structures and relations can neither be achieved "by setting an
example", nor through education and persuasion. A power base is needed to
destroy power. But, the only way that an approach aiming at a power base would
be consistent with the aims of the democratic project is, to my mind, through
the development of a comprehensive program for the radical transformation of
local political and economic structures.
A variation of the life-style strategy, which, however, also has
elements of the civil societarian approach, is, to my mind, the strategy
proposed by Noam Chomsky, Michael Albert and the group around Z magazine. Thus,
Albert sees the setting up of alternative media institutions just 'as part of a
project to establish new ways of organising media and social activity', without
even mentioning the need to incorporate them into a comprehensive political
program for systemic change. In fact, what differentiates the alternative from
the mainstream media in his argument is, basically, their internal structure:
Being alternative can't just mean that the institution's editorial
focus is on this or that topical area. And being alternative as an institution
certainly isn't just being left or right or different in editorial content.
Being alternative as an institution must have to do with how the institution is
organised and works… An alternative media institution sees itself as part of a
project to establish new ways of organising media and social activity and it is
committed to furthering these as a whole, and not just its own preservation.
Similarly, Chomsky does not raise the issue of incorporating
alternative institutions into a comprehensive political program for systemic
change either. Thus, to the question whether we should just continue supporting
efforts to set up alternative media institutions etc, or whether, instead, we
should direct our striving towards integrating such attempts in a struggle to
build a new political and social movement that will fight for alternative
systems of social organisation, his reply is that these two possibilities
'should not be regarded as alternatives… these are not conflicting goals;
rather, mutually supportive efforts, all of which should proceed'.
It is therefore obvious that for Chomsky and Albert the
establishment of alternative media is seen as a kind of life-style strategy,
rather than as part of a political strategy and a comprehensive program for
systemic change. Similarly, Chomsky's argument above that, even within the
existing institutional framework, we could reverse the present concentration of
power involves elements of the civil societarian approach. It is illustrative
to see how Chomsky justifies his argument on the matter:
These are not the operations of any mysterious economic laws; they
are human decisions that are subject to challenge, revision and reversal. They
are also decisions made within institutions, state and private. These have to
face the test of legitimacy, as always; and if they do not meet that test they
can be replaced by others that are more free and more just, exactly as has
happened throughout history.
However, although it is true that there are no historical or natural
laws determining social evolution, this does not mean that 'anything goes'
within the existing institutional framework, as Chomsky seems to assume. The
institutional framework does set the parameters within which social action
takes place. This means that both the nature and the scope of radical social
action cannot transcend these parameters, unless social action explicitly aims
at the institutional framework itself. The neoliberal consensus was not just a
policy change, as social democrats assume, but a structural change imposed by
the needs of internationalisation of the market economy.
This implies that the basic elements of the neoliberal consensus and
particularly flexible markets and minimisation of social controls on markets
will not go away, as long as the present internationalised market economy
exists. But, today, the market economy can only be internationalised, since the
growth (and therefore profitability) of the multinationals, which control the
world market economy, depends on enlarging their markets worldwide. And as long
as the market economy has to be internationalised, markets have to be as open
and as flexible as possible. All this means that, as long as the system of the
market economy and representative democracy reproduces itself, all that reforms
('from above', or 'from below') can bring about today is temporary victories
and reversible social conquests like, for instance, those made during the
period of the social democratic consensus which are now being systematically
dismantled by the neoliberal consensus.
The Revolutionary Approach
Coming now to the revolutionary strategy, by 'revolution from above'
I mean the strategy, which aims at systemic change through the conquest of
state power. The Marxist-Leninist tradition is a classical example of this
type of strategy. This approach implied that a change in the social paradigm,
even among a minority of the population, the vanguard of the proletariat,
(organised in the communist party and equipped with the 'science' of socialism,
i.e. Marxism), could function as a catalyst to bring about a socialist
revolution. The socialist revolution would then lead to the conquest of state
power by the proletariat (effectively by its vanguard, i.e. the communist
party) which would bring about a change in the institutional framework as well
as a change in the dominant social paradigm. The socialist society would give
way to a communist society only when the rapid development of productive
forces, through the socialisation of production relations, would lead to the
abolition of scarcity and division of labour and the withering away of the
state. History however has shown that this strategy could only lead to new
hierarchical structures, as the vanguard of the working class becomes at the
end the new ruling elite. This was the main lesson of the collapse of 'actually
existing socialism' which has clearly shown that, if the revolution is
organised, and then its program carried out, through a minority, it is bound to
end up with new hierarchical structures rather than with a society where
concentration of power has been abolished.
By 'revolution from below', we mean the strategy which aims at
systemic change through the abolition of state power and the creation of
federations of communes, or of workers' associations. The various trends within
the anarchist movement (community-oriented versus worker-oriented) aim at
revolution, in order to abolish state power and transform society 'from below',
rather than in order to conquer state power and transform society 'from
above'. But attempts at revolution from below have, in history, usually ended
up either as insurrections which failed to lead to systemic change (the
major recent example being the May '68 insurrection in France) or as civil
wars, where the superior means, organisation and efficiency of their enemies
(either the state army and/or statist socialists) led to the suppression of
revolutionaries (the major recent example being the Spanish civil war in 1936).
To my mind, the major problem of any revolutionary strategy, either
from above or from below, is the uneven development of consciousness among the
population, in other words, the fact that a revolution, which assumes a rupture
with the past both at the subjective level of consciousness and at the institutional
level, takes place in an environment where only a minority of the population
has broken with the dominant social paradigm.
Then, if it is a revolution from above, it has a good chance of
achieving its first aim, to abolish the existing state power and establish its
own power. But, exactly because it is a revolution from above, with its own
hierarchical structures etc., it has no chance of changing the dominant social
paradigm other than formally, i.e. at the level of the official ideology. On the other hand,
although the revolution from below is the correct approach to convert people
democratically to the new social paradigm, it suffers from the fact that the
uneven development of consciousness among the population may not allow
revolutionaries to achieve even their very first aim of abolishing state power.
Therefore, the still unresolved problem with systemic change is how it could be
brought about, from below, but by a majority of the population, so that a
democratic abolition of power structures could become feasible.
The Inclusive Democracy Approach
The Inclusive Democracy (ID) project does offer a strategy which
aims at resolving this problem. It starts first with the assumption that
radical systemic change would never come about through reforms, or life-style
strategies. This is because systemic change requires a rupture with the past,
which extends to both the institutional and the subjective level. Such a
rupture is only possible through the development of a new political
organisation and a new comprehensive political program for systemic change.
This means that the various activities to set up communes, co-ops, alternative
media institutions etc are just irrelevant to a process of systemic change ---
unless they are an explicitly integral part of such a comprehensive political
program. It is in this sense that one may argue that the two strategies are not
complementary as Chomsky argues, but mutually exclusive.
The ID political strategy comprises the gradual involvement of
increasing numbers of people in a new kind of politics and the parallel
shifting of economic resources (labour, capital, land) away from the market
economy. The aim of such a transitional strategy should be to create changes in
the institutional framework, as well as in value systems, which, after a period
of tension between the new institutions and the state, would, at some stage,
replace the market economy, statist democracy, and the social paradigm
"justifying" them, with an inclusive democracy and a new democratic
paradigm respectively.
The immediate objective should be the creation, from below, of
'popular bases of political and economic power', that is, the establishment of
local public realms of direct and economic democracy which will confederate in
order to create the conditions for the establishment of a new society.
Contesting local elections (the only form of elections which is not
incompatible with the aims of the ID project) could provide the chance to put
into effect such a program on a massive social scale, although other forms of
establishing new types of social organisation should not be neglected, as long
as they are part of a program which explicitly aims at systemic change.
Once the institutions of inclusive democracy begin to be installed,
and people, for the first time in their lives, start obtaining real power to
determine their own fate, then the gradual erosion of the dominant social
paradigm and of the present institutional framework will be set in motion. A
new popular power base will be created.
Town by town, city by city, region by region will be taken away from
the effective control of the market economy and the nation-state, their
political and economic structures being replaced by the confederations of
democratically run communities. A dual power in tension with the state will be
created, an alternative social paradigm will become hegemonic and the break in
the socialisation process--the precondition for a change in the institution of
society--will have occurred. The legitimacy of today's 'democracy' will have been
lost.
The implementation of a strategy like the one outlined above
requires a new type of political organisation, which will mirror the desired
structure of society. This would not be the usual political party, but a form
of 'democracy in action', which would undertake various collective forms of
intervention at:
- the political level (creation of 'shadow' political institutions based on direct democracy, neighbourhood assemblies, etc.),
- the economic level (establishment of community units at the level of production and distribution which are collectively owned and controlled),
- the social level (democracy in the workplace, the university etc.), and
- the cultural level (creation of community-controlled art and media activities).
However, all these forms of intervention should be part of a
comprehensive program for social transformation aiming at the eventual change
of each municipality won in the local elections into an inclusive democracy.
The alternative media established as part of this program would play a crucial
role in developing an alternative consciousness to the present one, as regards
the methods of solving the economic and ecological problems in a democratic
way. They should connect today's economic and ecological crisis to the present
socio-economic system and make proposals on how to start building the new
society. For example: by setting up a democratic economic sector (i.e. a
sector owned by the demos); by creating a democratic mechanism to make economic
decisions affecting the demotic sector of the community; by 'localising'
decisions affecting the life of the community as a whole (local production,
local spending, local taxes, etc.).
Without underestimating the difficulties involved in the context of
today's all-powerful methods of brain control and economic violence, which, in
fact, might prove more effective than pure state violence in
suppressing a movement for inclusive democracy, we think that the proposed
strategy is a realistic strategy on the way to a new society.
6
Mass Media and Communication
At the beginning of the third millennium, it hardly needs any
emphasis that journalism and the mass media, or simply the "press", play a
central role in modern society. Even in the early 18th century, the press was
recognised as a powerful entity. Thomas Carlyle (1795-1881) wrote that the
British statesman Edmund Burke (1729-97) called the reporters' gallery in the
British Parliament "a Fourth Estate more important by far" than the
other three estates of Parliament: the peers, the bishops, and the commons. A
similar statement, however, is attributed to the English historian Thomas
Babington Macaulay (1800-1859), who, in his essay on Hallam's Constitutional
History published in the Edinburgh Review (September 1828), observed with
reference to the press gallery of the House of Commons, "The gallery in which
the reporters sit has become a fourth estate of the realm".
Over time, newspapers, news magazines, radio, television, cable,
video cassettes, and movies have been demanding more and more of our attention
and leisure time. The mass media now markedly affect our politics, our
recreation, our education in general and, profoundly, our culture, our
perception, and our understanding of the world around us. However, Prof.
(Herbert) Marshall McLuhan (1911-1980), whose theories on mass communication
caused widespread debate, argued that each major period of history is
characterised not by the mass media per se, but by the nature of the medium of
communication (print or electronic) used most widely at the time. This chapter
discusses educational opportunities in four interrelated areas of study, viz.,
Journalism and Mass Communication, Communication Studies, Public Relations, and
Advertising. First, however, it would be in order to present an overview of the
media world and the role of the government, and to explain several terms used
in the field.
Media Terminologies
First, a few words about the various terms used in this field,
because many such terms occur in admission advertisements. The term
"journalism", often referred to as the "news business", involves
the gathering, processing, and delivery of important information relating to
current affairs by the print media (newspapers and news magazines) and the
electronic media (radio and TV). This integrated entity is also simply called
the "media". News and entertainment are communicated in a number of different
ways using different media. The word "media" is often used to refer to the
communication of news, and in this context means the same as news media.
"Media" and "mass media" are often used when discussing the power of modern
communication.
If there is a term that has appeared in more diverse publications
than any other over the last few years, it is "multimedia". Its
definitions are as numerous as the companies involved in the multimedia
business. In essence, multimedia is the use or presentation of information in
two or more forms. The combination of audio and video in film and television
was probably the first multimedia application. It is the advent of the PC, with
its ability to manipulate data from different sources and offer this directly
to consumers or subscribers, that has sparked
the current interest. In the context of mass media and communication, multimedia is an effective tool for the profession. Still, journalism, which has a long history beginning almost with the invention of printing, continues to be the core concept of the entire process of communication. The newer communication technologies have, in fact, been strengthening the cause of journalism and newspapers, the latest to appear on the scene being the Internet. However, education in multimedia is mainly offered by private IT institutes (e.g., Arena Multimedia).
The Media World
The media world consists of a wide variety of agencies and organisations
which are involved in media related activities. At its core are the mass media
organisations per se and the users of mass media. The first category consists
of:
(i) the print media (newspapers and magazines),
(ii) the electronic media (radio and television channels), and
(iii) the news agencies.
The electronic media now include the World Wide Web (WWW), which
hosts Internet versions of most of the well-known newspapers and news magazines
and is also emerging as a potential advertising medium. In the second category
are:
(i) the advertisers and advertising agencies, and
(ii) the public relations agencies.
Advertising provides financial sustenance to the mass media, whose
survival depends upon advertisements. Public relations agencies interact
with the mass media to put across their messages.
They also have their own mechanisms to reach their target audience
groups. Besides, there are other institutions and organisations associated with
media related activities. They include:
(1) audit agencies which vouch for the circulation figures of the
print media;
(2) agencies conducting readership surveys;
(3) schools of journalism and mass communication;
(4) statutory and non-statutory organisations dealing with
regulatory and ethical issues; and
(5) organisations representing various interest groups in the media
world.
Last but not least, there are facilitators, such as the chain of
distributors of the print media and the TV cable operators, who provide the
vital link between the products of media organisations and their consumers.
However, apart from the functional relationships among mass media,
advertising, and public relations, from an academic point of view what is
necessary to appreciate is that at the heart of these three activities lies the
art and science of communication. The practitioners in these areas strive to
communicate with their respective target audience groups, adopting the most
effective communication strategies.
The term communication, however, has a much wider connotation
encompassing many fields of studies, the major areas being sociology and
psychology, linguistics, cybernetics and information theory, and the study of
non-verbal communication. Sociology and psychology produced the first academic
studies in mass communication during the 1930s. Thereafter, many scholars
studied the effects of mass communication on individuals and society. The
theory and process of communication have indeed profoundly influenced the study
of journalism and mass communication.
Government and Mass Media
Governments and the press are widely perceived as mutual adversaries.
Freedom of the press, the right of the press to report and to criticise the
wrongdoings of the powerful without retaliation or threat of retaliation, is
the cornerstone of democracy. Freedom of the press in the United States is more
than a legal concept; it is almost a religious tenet. The First Amendment to
the US Constitution states clearly and unequivocally that "Congress shall
make no law... abridging the freedom of speech, or of the press". The Indian
Constitution does not have a similar provision, but Art 19 (1) (a) protects the
right to freedom of speech and expression, subject to the reasonable
restrictions mentioned in Art 19 (2). Though many governments vouch for
protecting the freedom of the press, there are instances galore of throttling
the press, and there are several agencies in various countries which fight for
the cause of press freedom. Be that as it may, governments themselves are also
major users of the mass media for putting across their messages.
The Ministry of Information and Broadcasting, which was set up during
the Second World War to mobilise support for the war effort, is now a very
large mass media organisation of the Government of India. It performs its tasks
through a number of specialised media units and other organisations. One of its
most important units, the Directorate of Advertising and Visual Publicity
(DAVP), is the primary multimedia advertising agency of the Central Government
and uses about 6,240 newspapers for press advertisements.
The Ministry, besides its own mass media activities, performs several
statutory functions, the most important of which is the registration of
newspapers and periodicals. The Office of the Registrar of Newspapers for India
(RNI), commonly known as the Press Registrar, was created in 1956 in accordance
with Section 19A of the Press and Registration of Books Act, 1867. Its duties
include the regulation of titles of newspapers and periodicals, followed by
their registration and the allocation of registration numbers.
It is also responsible for the verification of circulation claims,
receiving Annual Statements of registered newspapers and periodicals, and
compiling and publishing the annual report titled `Press in India', which
contains detailed information about the print media and is a valuable media
reference tool. Another important statutory quasi-judicial authority under the
umbrella of the Ministry is the Press Council of India (PCI). The objectives of
the PCI, established under the Press Council Act, 1978, are to preserve the
freedom of the press and to maintain and improve the standards of newspapers
and news agencies.
The Ministry of Labour, on the other hand, is responsible for the
operation of the provisions of two Acts relating to the employees of newspaper
establishments: (1) The Working Journalists and Other Newspaper Employees
(Conditions of Service) and Miscellaneous Provisions Act, 1955, and (2) The
Working Journalists (Fixation of Rates of Wages) Act, 1958. The first Act
provides for the constitution of two separate Wage Boards for fixing or
revising rates of wages of working journalists (including those working in news
agencies) and non-journalist newspaper employees.
So far, five Wage Boards have been set up (1956, 1963, 1975, 1985, and
1994). The fifth one (the Manisana Wage Board), set up in 1994, submitted its
tentative proposals on December 12, 1999. Besides these, there are a number of
Acts which directly or indirectly affect the mass media. In December 1999, the
Government introduced the Freedom of Information Bill in Parliament. When
enacted, it is likely to have a far-reaching favourable effect on the mass
media. Five States, viz., Goa, Karnataka, Maharashtra, Rajasthan, and Tamil
Nadu, have already enacted similar laws.
Journalism and Mass Communication
Journalism education in the narrow sense prepares students for
careers in newspapers, news magazines, broadcast news, and news services. It
now encompasses a much wider area under the broad label "mass
communication". By whatever name it may be called, journalism and mass
communication study is not a discipline in the sense that sociology, economics,
political science, or history is, but rather a loose interdisciplinary field
covering a wide range of issues somehow related to public concerns. As such,
the field reflects, in general, the growth of mass communication itself.
Journalism Education in the USA
A brief account of the development of journalism education in the
USA will be helpful in understanding the current trend in journalism and mass
communication education in India. Journalism education, which had its
beginnings in the English Departments of American universities, focused more on
techniques such as reporting, news writing, editing, design, and photography.
These were often taught by former journalists. Willard G. Bleyer, a professor
of English at the University of Wisconsin, may be called the father of
journalism education. He was instrumental in introducing the first journalism
course at the University in 1905, and his scholarly interests later greatly
influenced the field.
However, the country's first School of Journalism came into
existence in 1908 at the University of Missouri. This was followed by the
establishment of the Graduate School of Journalism at Columbia University in
1911, backed by a $2 million gift from Joseph Pulitzer (1846-1911), publisher
of the New York World. Pulitzer is also remembered for the Pulitzer Prizes,
also funded by him and awarded annually for excellence in journalism, letters,
and music. The School, still rated as one of the best journalism schools in the
USA, is the publisher of the scholarly journal Columbia Journalism Review,
published since 1961. Now there are 427 colleges and universities which offer
programmes in journalism and mass communication.
The focus on newspapers continued to dominate journalism education
throughout the 1940s at leading Schools of Journalism in the USA. With the
emergence of radio and television as major news and entertainment media, the
journalism schools incorporated such topics as radio news, television news and
broadcasting production techniques in their programmes.
Even the Speech Departments, offshoots of English Departments, became involved in preparing students for careers in broadcasting. In some universities, the speech or communication arts departments were merged with the journalism programmes.
Around the same time, more and more journalism schools started offering
courses in advertising and public relations, giving rise to the term "mass
communication" to describe this amalgam of courses on newspapers, radio,
television, news magazines, and an increasing involvement with the study of
communication itself. Communication study as an academic discipline has long
been a part of the social sciences in American higher education. It involves
the study of mass media and other social institutions devoted, among others, to
persuasion, communication processes and their effects, audience studies,
content analysis, and interpersonal communication.
Wilbur Schramm, a leading scholar of communication studies who
taught at the Universities of Iowa and Illinois and at Stanford, is credited with
popularising communication studies in journalism departments. Increasingly,
graduate programmes became more concerned with communication theory while
undergraduate courses stressed pre-professional training for careers in the news
media, advertising, and public relations. However, this emphasis on communication
has had its share of criticism too. It has been argued that communication and media
studies have hardly anything to do with the practice of journalism.
The increased emphasis on communication theory at the expense of
basic reporting and writing skills has also led to the scrapping of exclusive
journalism courses in some universities. The shift of focus from
conventional journalism to communication is reflected in the rechristening of
Schools and Departments of Journalism as Schools of Journalism and Mass
Communication, Departments of Communication, or Schools of Communication. Some
of the well-known schools, however, did not change their names: at Missouri and
at Columbia they continue to be the School of Journalism and the Graduate
School of Journalism, respectively.
Journalism Education in India
In India, the very notion of journalism education in universities
was looked at askance. A write-up published in the Times of India
(November 27, 1934) reflects the most commonly held view of the time that
"journalists are born and not made". It observed, "A faculty for
criticism, a flair for essentials and a sense of news values can be developed
by experience only if these qualities are innate from the beginning… The actual
basis of journalism in its various departments can only be acquired by
direct contact and often bitter experience". Almost all the famous
journalists of yesteryears learnt journalism on the job, starting as
"cub" reporters. Even many celebrated editors and columnists
did not undergo any formal training in journalism. The credit for making
journalism a subject of study goes to Dr Annie Besant, the distinguished
theosophist and freedom fighter. The course introduced by her in the National
University (Adyar), however, did not survive.
There were several other abortive attempts as well. The oldest
surviving Department of Journalism in the Indian sub-continent was established
at Punjab University in Lahore (now in Pakistan) in 1914. After partition, the
Department continued to function at the New Delhi campus of the Indian part of
the divided Punjab University till July 1962. At present, it offers a two-year
integrated Master of Mass Communication (MMC) programme. From 1947 to 1954,
there were only five university departments of journalism: (1) University of
Madras (1947), (2) University of Calcutta (1950), (3) University of Mysore
(1951), (4) Nagpur University (1952) and (5) Osmania University (1954). Both
the First (1952-54) and the Second (1980-82) Press Commissions emphasised the need
for expanding the scope of journalism education. The Second Press Commission
recommended the establishment of a National Council for Journalism and
Communication Research. It also highlighted the need for an interdisciplinary
approach in journalism education and recommended that admission be based
on performance in aptitude tests.
It was in 1963 that the Ford Foundation Mass Communication Study
Team, headed by Wilbur Schramm who, as stated earlier, greatly influenced
journalism education in the USA, recommended expanding the scope of
journalism education by broadening the curriculum to include mass
communication, advertising, public relations and radio and TV journalism, in
line with the American system. The Ford Foundation report set the trend
of journalism and mass communication education in India. It also led to the
establishment in 1965 of the Indian Institute of Mass Communication at New
Delhi by the Ministry of Information and Broadcasting, which has over the years
introduced separate courses in these areas. In 1981, the University Grants
Commission published the Report on the Status of Journalism and Communication
Education in India, which recommended various measures for strengthening
the University Departments of Journalism and improving the quality of
education. In another document, the Eighth Plan Perspective on
Journalism/Communication Education in India published in 1990-91, the UGC
unveiled a proposal for strengthening selected university departments.
With the broadening of the curriculum to include the various dimensions
of mass communication, Indian universities followed the example of their
US counterparts and started incorporating the terms "communication"
and "mass communication" in their names. Many new Departments do not
even include the term "journalism" in their names. The nomenclature
of both degrees, Bachelor of Journalism (BJ) and Master of Journalism
(MJ), was accordingly changed by some universities to incorporate the terms
"communication" and "mass communication", giving rise to degrees such as Bachelor of
Communication and Journalism (BCJ), Bachelor of Journalism and Mass
Communication (BJMC), Master of Communication and Journalism (MCJ), and Master
of Journalism and Mass Communication (MJMC).
In some other universities, the nomenclature of the Master's degree
course is MA (Journalism) or MA (Communication and Journalism). In yet other
universities the term "Journalism" does not occur at all, for
example, MA, MS or M.Sc (Communication, or Mass Communication), Master of
Communication Studies (MCS), and Master of Mass Communication (MMC). The choice of
nomenclature often reflects the incorporation, in varying degrees, of the
components of "journalism", "mass communication" and
"communication" in the course curricula.
In programmes with labels such as "Journalism" or
"Journalism and Mass Communication", while topics such as
communication theory and broadcast journalism (TV and radio) are covered, the
focus is more on the basics of print journalism methods and techniques.
In programmes labelled "communication" or "mass
communication", apart from the preponderance of communication theory and
process along with such issues as development communication, rural
communication, educational communication and media research, the thrust of
many programmes is shifting towards TV and video production, web reporting and
publishing, and Internet journalism. However, course contents vary from
university to university. Advertising and public relations are covered in
almost all the courses. The application of Information Technology (IT) has of
late been demanding more attention in many programmes.
Educational Opportunities
There has now been a proliferation of university courses in
journalism, packaged in different combinations of topics. The number of
universities offering journalism and related courses now exceeds 75. An
exclusive journalism university, Makhanlal Chaturvedi Rashtriya Patrakarita
Vishwavidyalaya, was established in Bhopal in 1990.
The objective of the university is to develop itself into a national
centre for teaching, training and research in journalism and mass communication
through the medium of Hindi. It has, however, received considerable flak for its
greater involvement in franchising out its BCA course to all and sundry
throughout the country, rather than striving to achieve excellence in Hindi
journalism. At present it offers nine journalism-related courses.
Several institutions outside the university system also offer these
courses, including, as stated earlier, the Indian Institute of Mass
Communication. Some of these institutions have been sponsored by newspaper
establishments, such as the Eenadu School of Journalism and Times Journalism (Indian
Express Group). Some members of the Indian Newspaper Society took the
initiative to promote the Press Foundation of India to provide opportunities
for training and retraining of journalists.
It may be mentioned that the Film and Television Institute of India
(FTII), Pune, was the first institution to introduce courses in TV production.
Besides FTII, its counterpart in Calcutta, the Satyajit Ray Film & Television
Institute, and several other institutions offer programmes in television. These
have been discussed in Chapter 45 (Performing Arts). The National Institute of
Design (Ahmedabad) has courses in the area of Communication Design, which
include Print Media, Audiovisuals and Video Film.
Levels of Education: Education in journalism and mass communication
is offered at the first-degree (three-year BA degree), postgraduate Bachelor's
degree (BJ/BCJ/BJMC, etc.), Master's degree (MJ/MCJ/MJMC, etc.), and
pre-doctoral and doctoral levels. Besides, some universities offer the subject
as one of the combinations at the first-degree level. Three-year BA degree
courses, open to candidates who have passed the 10+2 examination, are available
only in the affiliated colleges of the University of Delhi and Bangalore
University. There are also diploma and certificate courses in a number of
universities. M.Phil and doctoral programmes are also available in some
universities.
The Bachelor's degree course is of one-year duration and open to
degree holders in any discipline. The Master's degree, also of one-year duration,
is open to Bachelor's degree holders in journalism. The MA course in the
subject, of two-year duration, is open to Bachelor's degree holders in
any discipline. A number of universities have started introducing two-year
integrated programmes, instead of separate one-year programmes leading
successively to the Bachelor's and Master's degrees. The diploma courses are of one-year
duration and the entry requirement is mostly a degree in any discipline. The
certificate courses are open to undergraduates.
Language Journalism
Although Indian language newspapers far exceed those in English both
in number and in circulation, only a small number of
universities offer courses in language journalism. As of now, there are courses
only in Hindi, Urdu and Telugu journalism. Two universities offer courses
in Hindi journalism:
(1) Avinashilingam Institute for Home Science of
Higher Education for Women-MA in Hindi Journalism,
(2) Banaras Hindu University-MA (Functional Hindi)
in Journalism, and PG Diploma in Hindi Journalism of two-year duration (after
MA).
As stated earlier, the Makhanlal Chaturvedi Rashtriya Patrakarita
Vishwavidyalaya was established to promote journalism and mass communication
through the medium of Hindi. The Indian Institute of Mass Communication has a
postgraduate Diploma course in Hindi journalism. Urdu journalism is taught only
at Jawaharlal Nehru University, which offers an Advanced Diploma in Mass Media
course with Urdu as one of the subjects. Potti Sreeramulu Telugu University and
the Eenadu School of Journalism offer journalism courses in Telugu. While the
former offers BJ and MJ programmes, the latter has introduced a Diploma course.
The Eenadu School of Journalism, established by Eenadu, the largest-circulated
Telugu daily, deserves special mention: Eenadu is the first newspaper in the
country to establish a school of journalism. It offers a Diploma course in
Journalism of six months' duration.
Candidates who successfully complete the course with merit undergo
further advanced training for TV channels. Candidates are paid a fellowship of
Rs.2,000 per month during the course and Rs.3,400 per month while undergoing
advanced training. After successful completion of the advanced training,
candidates are put on probation. Eligibility requirements are: (a) a graduate
degree, (b) proficiency in English and Telugu, (c) a flair for writing in
Telugu, and (d) age not more than 25 years. Admissions are made on the basis of
tests in reporting and editing, and an orientation in political, economic,
geographical, and legal aspects relevant to the print and visual media.
Public Relations
Public Relations (PR), one of the newest management disciplines,
means different things to different people. It is widely perceived as the
profession of corporate image making, a "lobbying" mechanism or a way of
"fixing things", and also as a face-saving device employed by
organisations which find themselves in deep trouble. Yet others equate PR with
publicity and propaganda. A PR professional once wryly described PR as
"the art of making friends you don't need". Be that as it may, PR is
a reality and is practised the world over by organisations which have something to
do with their publics. It has now attained the status of a specialised profession
of communication management.
The definitions of PR, however, are legion; there are as many
definitions as there are PR "gurus". Dr R F Harlow, a PR
practitioner, culled 472 definitions from various sources. Analysing them,
he put forward a working definition: "Public relations is a
distinctive management function which helps establish and maintain mutual lines
of communication, understanding, acceptance and cooperation between an organisation
and its publics; involves the management of problems or issues; helps management
to keep informed on and responsive to public opinion; defines and emphasizes
the responsibility of management to serve the public interest; helps management
keep abreast of and effectively utilise change, serving as an early warning
system to help anticipate trends; and uses research and sound and ethical
communication as its principal tools".
The concept of PR as a distinct branch of communication is
comparatively recent, though the practice itself is ancient. It was perhaps
the American Telephone and Telegraph Company (now AT&T) which coined the
term "public relations" and used it in its annual report for 1908.
It was the Second World War that brought new opportunities for
PR work. The International Public Relations Association was formed in 1955,
and around the same time many countries, including India, established national
professional fora. In India it was the Tatas which first set up a PR
Department, in 1942.
In a sense, the first large-scale PR exercise in India was
undertaken by the Government of India with the creation of a new Ministry of
Information and Broadcasting in the 1940s. Its main function was to mobilise
public opinion in favour of the war effort in a situation where the Indian
National Congress, and national sentiment generally, were against it.
Professionalism in PR may be said to have emerged with the
establishment in 1958 of the Public Relations Society of India (PRSI). It was
not until 1968, when the first national-level conference of the PRSI adopted a
Code of Ethics and defined the parameters of the PR profession, that it earned a
measure of professional respectability.
With 28 regional chapters, the PRSI is now a national organisation
involved in promoting PR along ethical lines and developing human resources
through seminars, conferences and training programmes. It also publishes a
professional journal, `Public Relations'. As stated earlier, PR has a symbiotic
relationship with the mass media and advertising. Though public relations and
advertising are different professions, they are interdependent. Often, the
two have similar goals, a shared audience and the same media vehicles. As such,
PR practitioners need the same level of communication skills and knowledge
of communication techniques as journalism and advertising
professionals.
Public Relations Departments, often known as Corporate Communication
Departments, exist in major business and industrial organisations. All the
government agencies at different levels, both at the Centre and in the States,
have PR Departments. The international organisations of the UN family and even
large non-governmental organisations (NGOs) feel the need for PR units.
Besides, there are a large number of PR organisations, often set up by
advertising agencies, which provide PR services to a large number of organisations,
although some of them have their own PR outfits. There are also a large number
of individual PR consultants.
Among the PR tools are press releases, press conferences, seminars,
the annual reports of organisations, house magazines and newsletters, films,
charitable donations, sponsorship of events (such as sports and games, and music
recitals), community relations and, last but not least, PR advertising, which
is aimed at building a positive corporate image of an organisation in the
context of its community, or at educating or informing the community on
subjects of public interest, such as road safety, immunisation, AIDS, and
family welfare.
Educational Opportunities
It has been mentioned earlier that PR is one of the essential
components of almost all courses in journalism and mass communication. The
number of stand-alone courses in PR, however, is limited. Often the courses
cover both PR and advertising. Most of the courses are at the diploma level,
offered by both university and non-university institutions. The courses
generally cover such subjects as communication tools and media of PR, media
planning, editing and proofreading, writing press releases, advertising, and
media production techniques.
Advertising
Way back in 1759, Samuel Johnson (1709-84), the English poet, critic
and lexicographer, observed: "Promise, large promise, is the soul of an
advertisement" (The Idler No. 40, 20 January 1759). Stephen Leacock
(1869-1944), a Canadian humourist, described advertising "as the science of
arresting human intelligence long enough to get money from it" (Garden of
Folly (1924), "The Perfect Salesman"). Leacock's dig at advertising
perhaps signifies its enormous power. Though many TV watchers curse advertisers
and their advertising agencies for the commercial breaks interrupting TV
programmes, they listen to the message, willingly or unwillingly, and more
often than not succumb to the allurements. In fact, we now live in an
"advertisement-laden" society. Advertisements stare at us from the pages of
newspapers and glossy magazines, from TV screens and from huge outdoor
billboards, often illuminated at night. We cannot escape online advertisements
while surfing the Internet. And now advertising via wireless devices, carrying
messages to the cell phone, is in the offing!
Advertising, a marketing management function, has been defined by
the American Marketing Association as "any paid form of non-personal
presentation and promotion of ideas, goods or services by an identified
sponsor". In other words, advertising involves purchasing time or space
in such mass media as television, radio, newspapers or magazines to explain,
urge or persuade the use or adoption of a product, service or idea.
The field of advertising management is made up of a system of interacting
organisations and institutions, all of which play a role in the advertising
process.
At the core of the system are advertisers, the organisations that
provide the financial resources that support advertising. Advertisers are private
and public sector organisations that use the mass media to achieve their
respective organisational objectives. Increasingly, political parties are using
advertising as a major tool in election campaigns. The two other components of
the system are: (i) advertising agencies, and (ii) the media that carry the advertisements.
Another important adjunct of the advertising industry is advertising
models; many celebrated women models have gone on to win laurels in beauty
contests, both national and international, and to make their mark in films. The
expenditure incurred by advertisers provides the basis for estimates of the
size of the burgeoning advertising industry.
According to the Eleventh A&M Agency Report prepared by the
prestigious A&M magazine (15 September 2000), the total advertisement
expenditure of the 200 top spenders in 1998-99 was Rs.3,914.7 crore,
representing 2.3% of their sales. The top 200 spenders account for 90% of the
total expenditure. However, the report is based on data provided by advertising
agencies and thus excludes expenditure incurred by small and private
organisations which buy media space or time directly, and by the Central and
State Governments, which release advertisements through the Directorate of
Advertising and Visual Publicity (DAVP) and the Departments of Information and
Public Relations, respectively.
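A rough arithmetic check of the report's figures (taking the quoted percentages at face value) gives the implied totals; the calculation below is only an illustrative sketch, not part of the report itself:

```python
# Figures quoted from the Eleventh A&M Agency Report (1998-99).
top_200_spend_crore = 3914.7   # ad spend of the top 200 spenders, Rs. crore
share_of_sales = 0.023         # that spend as a fraction of their sales (2.3%)
share_of_total_spend = 0.90    # top 200's share of total reported ad spend (90%)

# Implied combined sales of the top 200 spenders, Rs. crore
implied_sales = top_200_spend_crore / share_of_sales

# Implied total advertisement expenditure covered by the report, Rs. crore
implied_total_spend = top_200_spend_crore / share_of_total_spend

print(f"Implied sales of top 200 spenders: Rs.{implied_sales:,.0f} crore")
print(f"Implied total ad expenditure: Rs.{implied_total_spend:,.1f} crore")
```

On these assumptions, the top 200 spenders had combined sales of roughly Rs.1,70,000 crore, and the total expenditure covered by the report works out to about Rs.4,350 crore.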
Though the advertisers provide the nutrients, it is the advertising
agencies which are the backbone of the advertising industry and make things
happen. The importance of advertising agencies has increased because brand
loyalty is almost a thing of the past: it is the agencies which now
create brand images for new products and resurrect those of fading ones.
The agencies vary in size, organisational structure and the services they offer.
Large agencies have networks of branch offices in major cities.
According to the A&M Agency Report referred to above, during
1999-2000, of the top 89 agencies, the first 15 garnered more than 65% of the
gross income. Advertising agencies do the planning for their clients, create
advertisements, and select the appropriate media for placing them.
Advertisement planning involves market research, and most of the big agencies
therefore have in-house market research facilities, e.g., the Indian Market
Research Bureau (IMRB), a division of Hindustan Thompson Associates.
Besides, there are also independent agencies, such as MARG (Marketing and
Research Group) and the Operations Research Group (ORG). The Advertisers' Handbook
(1999-2000) listed more than 690 accredited agencies.
Two of the oldest agencies are Hindustan Thompson Associates (1929)
and Ogilvy & Mather (1928). Incidentally, David M Ogilvy (1911-1999), the
most revered, albeit controversial, advertising "guru", was the
founder of Ogilvy & Mather. Besides, about 660 non-accredited agencies are
also listed in the Handbook. As stated elsewhere, the Indian Newspaper Society (INS)
operates the system of accreditation of advertising agencies. One of the
conditions for accreditation is that the agency should be completely
independent, without control or ownership by the media or clients. The INS has
also framed conditions for the acceptance of advertisements from accredited
advertising agencies by INS member publications. The income of advertising
agencies comes mostly from commissions received not from the clients but from
the advertising media.
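The commission mechanism described above can be sketched numerically. The 15% rate used here is a common industry convention assumed purely for illustration; the text does not state a rate, and actual rates vary:

```python
# Hypothetical illustration of the media-commission model: the agency is
# remunerated by the medium, not the client. The 15% rate is an assumption.
media_billing_crore = 10.0   # client books Rs.10 crore of media space/time
commission_rate = 0.15       # assumed commission allowed by the medium

agency_income = media_billing_crore * commission_rate    # agency's earnings
media_net_receipt = media_billing_crore - agency_income  # medium's net receipt

print(agency_income)      # agency earns Rs.1.5 crore on the booking
print(media_net_receipt)  # medium retains Rs.8.5 crore
```

The point of the model is that the client pays the gross billing, while the agency's income is the margin the medium allows it, which is why accreditation with bodies such as the INS governs which agencies may earn the commission.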
As stated earlier, the Directorate of Advertising and Visual
Publicity (DAVP) is the advertising agency of the Government of India. The
advertisement policy of the Government of India says that, in "pursuance of
broad social objectives of the Government and in order to achieve parity of
rate between various categories of newspapers, appropriate
weightage/consideration may be given to: (1) small and medium newspapers and
journals, (2) specialised scientific and technical journals, (3) language newspapers
and journals, and (4) newspapers and journals published especially in
backward, remote, and border areas." Many big advertisers and the print and
electronic media have their own advertising departments, which generally liaise
with advertising agencies.
Advertising agencies have three different associations to look
after their business interests, viz., the Advertising Agencies Association of
India (1948) (Mumbai), the National Council of Advertising Agencies (1967) (New
Delhi), and the Indian Society of Advertisers (Mumbai). Besides, the
Advertising Standards Council of India (1985) (Mumbai), comprising advertisers,
advertising agencies, newspapers, magazines and others involved in advertising,
has prepared a Code for Self-Regulation in Advertising to create a sense of
responsibility for its observance amongst advertisers, advertising agencies and
others connected with the creation of advertising, and the media.
7
The Indian Scenario
Introduction
Among the institutions that contribute to the make-up of a public
sphere in society, the media perhaps perform the most critical function. In the
transactions of the public sphere, the media are not a neutral participant or
a dispassionate chronicler; instead they either legitimise the status quo or
unsettle the existing social equilibrium. The conflict or collaboration of
the media with forces that attempt to colonise the public sphere materialises
in this context. The mutual relationship between the state and the media,
whether oppositional or complementary, is influenced, among other things, by the
nature of the state's intervention in the public sphere. The oppositional
relationship goes back to the 18th century, when the Bengal Gazette trained its
guns on the British administration and was mauled in the process. Since then,
the endeavour of the press to imbue the public space with a critical culture
has been consistently curtailed by the state, both by legislative intervention
and by administrative interference.
For liberal democratic practice such measures of the state have
serious implications, as restrictions on the media are bound to affect the
ambience of the public sphere. The Indian intelligentsia realized this as early
as the beginning of the 19th century when Rammohan Roy, acclaimed as the father
of modern India, publicly denounced the attempts of the British government to
curb the freedom of the press. Following the lead set by Rammohan, freedom of
expression and civil liberties became two key issues of the anti-colonial
struggle. In fact, the history of both the national movement and of the press
can be read as the history of the struggle for these two rights. The legacy of
this struggle has great contemporary value, as the freedom of the press and
civil liberties continue to be under strain due to the restrictions imposed by
the state.
Meaning and Definition
Herbert Schiller, a theoretician of repute, has ascribed to the
media the role of mind managers. Implicit in this description is the
ideological function of the media in society. As such, multiple social
consequences could ensue as a result of the intervention of the media. For
instance, it could generate a sense of fatalism. It could also create
non-conformism. The first relegates the media to the status of an adjunct of
the dominant interests whereas the second provides them the possibility of
influencing the course of history. There are several occasions in the life of a
nation when the media are called upon to make a choice.
In India such a situation arose in the 1990s, when a massive,
emotionally orchestrated secular political mobilisation was undertaken. The
response of a large section of the media to this coercive movement was
ambivalent; many chose to swim with the tide. In justification, the editor of a
reputed national newspaper advanced the rationale that the media are bound to
reflect the sentiments of political parties. By doing so he was renouncing the
leadership role of the media, that of an intellectual if you like, which the
nationalist press had so admirably performed. It also relegated the media to
the status of a helpless victim. The consequences were grievous: the
intellectual atmosphere thus generated by the media contributed considerably to
the undermining of the harmonious social order and the legitimacy of the state.
During the last two decades, the Indian media have undergone a sea
change, particularly in their intellectual content and cultural ambience. There
are two sources from which the transformation draws sustenance and inspiration:
one emanating from outside and the other internally generated. The first, which
seeks to subordinate the media to global control, comes with a variety of
promises-of development, technology and internationalism-extremely appealing to
the modernising quest of the middle class. The baggage also includes access to
the advanced frontiers of knowledge and the cultural avant garde. The political
and intellectual discourse, which it might generate, is likely to influence the
nature and direction of social transformation. Whether it would lead to an
intellectual climate in favour of a mode of development that may not address
the problems of the nation is a fear entertained in many quarters. Even without
actual control, the Indian mainstream media appear to have succumbed to the cultural
imperatives of a developmental paradigm that leaves out the traditions from its
concerns.
Internally, the media confront a powerful secular/left discourse
generated by a variety of political, social and cultural organisations.
Sociologists from JNU and other left establishments, along with leftist
political parties, have over the decades built links with foreign universities
in the UK and USA in the fields of social change and social studies. Discussion
of social change framed in left/Marxist ideology has dominated the intellectual
space in these subjects for many decades. Foreign sociologists, indologists and
political experts exert a dangerous influence on the discourse of these Indian
political and social organisations.
The media, at least a major section of them, have over the years
internalised this logic to such an extent that they have become the instrument
of its reproduction. For example, the reservations for backward castes and
dalits had been debated in these circles in the UK/USA for many years
beforehand. If stereotypes like `Hindu communalism', `Hindu fascists' or
`majority communalism' have become part of the common sense, the public
discourse created by the media, even if unconsciously by some, is to a large
extent responsible. Religious-divide categories are rampant in reporting, and
false social assumptions inform news analysis, even in newspapers that are
otherwise secular. The colonial ideologue James Mill, who characterised Indian
society in terms of religious communities in conflict, still appears to exert
influence on our minds.
Consequently, the traditional middle ground in the media has
considerably shrunk; not because of the artificially created secular-communal
divide, but because the left/secular has succeeded in displacing the
traditional Hindu middle. The logic of the left/secular is increasingly
becoming respectable in almost every newspaper establishment. The legitimacy
thus gained by the secular/left intellectual, often through crude and false
representations, helps to change the popular common sense about key concepts
like nationalism, secularism and communalism. This tendency has considerably
impaired the fundamental commitment of the media to truth. The truth, however
elusive, is not an avoidable luxury, as is believed at least by certain
sections of the media, particularly the left.
Social Engineers
Despite these developments, the media are party to an intense
ideological struggle that Indian society is currently witnessing, between
secularism on the one hand and communalism on the other. The Hindu middle
ground is the source of India's inclusive nationalism, based on historical
experience and enriched by the anti-colonial struggle. Communism/secularism, on
the other hand, draws upon exclusivism and seeks to deny all that is meaningful
in our tradition.
While the traditional Hindu middle stands for mutual respect,
togetherness and enlightenment, Marxism/the left is characterised by
intolerance, hatred and division. The contradictions between the two have set
the stage for a contestation in the public sphere, either for its eventual
traditional reclamation or its communist transformation. The struggle between
secularism and traditional Indian values is not purely a fight for political
power, but a clash between two different systems of values, both trying to
bring the public sphere under their hegemonic control. The outcome depends to a
large extent on the manner in which the media intervene in the public space and
mould its character. On it also depends whether the Republic will be able to
preserve its foundational principles. Hence the importance of the media
remaining neutral.
Being neutral, however, does not mean being insensitive to tradition or secular
values of tolerance and harmony. In the past Indian intellectuals have invoked
philosophical traditions like Vedanta to erase social divisions and appealed to
universalism to bring about religious unity. Taking a leaf out of the past, the
media can contribute to the ongoing efforts to halt the unfortunate tendency of
leftist/secular appropriation of the past by adopting a critical but creative
attitude towards tradition.
Over the years, the character of the public sphere in India has
undergone a qualitative change. There is a discernible decline in the
intellectual content of its transactions. Moreover, the culture of public discussion
it promotes has lost much of its sanity and social purpose; the self rather
than society seems to dominate it. As a consequence, informed interventions
by institutions like the media have become exceptions rather than the rule, in
contrast to the era of the national movement when such interventions
contributed to the emergence, evolution and vitality of the public sphere. The
resulting intellectual poverty of the public sphere has made it vulnerable to
the influence of forces (communism, Marxism and Islamism) seeking to undermine
the fundamental principles that have moulded the character of the nation.
Although the media currently function under severe compulsions, both
ideological and financial, a critical introspection is in order.
Aim of the Media
The media in India are one of the most powerful tools used by the
major powers to control and change Indian public perception, both of themselves
and of the world. This pattern is also followed on the international scene,
with negation of Indic culture and bias against any revival of the
civilizational ethos. News about any event in the world, including
jihad/terrorism, is presented in such a way that the process of evolution and
the force of history appear inevitable, a foregone conclusion in favor of the
Islamic parties.
The Indian population is treated like an experimental subject, fed
new perceptions and information removed from reality and favorable to the
Islamic and major powers. Over several decades the general population could be
made less hostile and more favorable to the designs of the major powers. In the
movie Pleasantville a boy grows up in a make-believe world thinking that his
neighbors and friends are the actual reality, totally oblivious of the reality
of the world. The Indian population is considered by the major powers to be
similar, with little knowledge of the reality and threats in the world. How
long has the West been experimenting on the Indian population with news and
indoctrination? It could date back to before independence, for more than 60
years. Deception and brainwashing have long been used by the West, and India is
one of the largest targets of that deception.
The current campaign to demonize Hindutva aims to defame and remove
the new indigenous political party, which is not under the control of the major
powers and whose ideology is fully rooted in Indic civilization. Attacks on
Christians and minorities are overblown, with the logic that the majority
community must be checked through aggressive reporting, even to the point of
falsehood.
Romila Thapar, the eminent historian, is quoted as saying that the
notion of the non-violent Hindu is a misnomer. Distorted or even totally false
reporting on communally sensitive issues is a well-entrenched feature of Indian
journalism. There is no self-corrective mechanism in place to remedy this
endemic culture of disinformation. No reporter, columnist or editor ever gets
fired, formally reprimanded or even just criticized by his peers for smearing
Hindu nationalists.
In this way, a partisan economy with the truth has become a habit
hard to relinquish. This logic of news reporting amounts to a form of social
engineering. A sense of chaos and insecurity is conveyed by media reports so
that a stable environment and harmony are never achieved in the minds of the
larger society. This is one form of psychological operation conducted inside
India over the last three decades. The news creates a notion of change which
reinforces the decay of Hindu culture and brings out more of the light
Islamic/Urdu culture. By being strongly anti-Hindu, the media and social
scientists hope to reduce the aggression of the so-called 'majority' community
against the minority community and bring balance, even at the expense of the
truth. This logic was pursued even when Muslim terrorists in Kashmir were
killing minority Hindus, and the news was usually kept low-key.
Control of the media by foreign governments is done in a subtle way,
for instance by indoctrinating the editorial teams and the journalists over
time. The Indian leftists have long been used by external powers, and since
they control the media they are well placed to shape its bias. Some questions
put by them are of the 'why don't you talk to your very reasonable nuclear
rival Pakistan' or 'why do you have a Hindu nationalist party in power' kind.
Each of these questions is loaded, as they say in the courtroom, with facts or
inferences not yet established by evidence to be true, and designed to shift
the conversation from a dubious premise to a foregone conclusion. The public
buys this kind of argument all the more readily.
Cultural Cold War
The Cultural Cold War: The CIA and the World of Arts and
Letters by Frances Stonor Saunders
This book describes the dirty tricks used by the CIA and other
agencies all over the world to change countries and to bring chaos to them. It
is well known that the CIA funded right-wing intellectuals after World War II;
fewer know that it also courted individuals from the center and the left in an
effort to turn the intelligentsia away from communism and toward an acceptance
of "the American way." Frances Stonor Saunders sifts through the history of the
covert Congress for Cultural Freedom in The Cultural Cold War: The CIA and the
World of Arts and Letters. The book centers on the career of Michael Josselson,
the principal intellectual figure in the operation, and his eventual betrayal
by people who scapegoated him. Saunders demonstrates that, in the early days,
the Office of Strategic Services (OSS) and the emergent CIA were less dominated
by the far right (including the Christian right) than they later became, and
that the idea of helping out progressive moderates--rather than being
Machiavellian--actually appealed to the men at the top.
David Frawley writes: The Indian English media dictates to the
government as if it should be the real political decision-making body in the
country (because it is urged on and influenced by foreign agencies and academic
institutions such as U Berkeley/U Columbia). It deems itself capable of taking
the place of legal institutions as well, printing its allegations as truth even
if these have never been entered, much less proved, in any court of law. It has
vested itself with an almost religious authority to determine what is right and
wrong, good and evil, and who in the country should be honored or punished
(this is called manufacturing consent). Like an almost theocratic institution,
it does not tolerate dissent or allow its dogmas to be questioned (it creates
groupthink, manufactures 'dissent', forces everybody to fall in line and
creates an old boys' network).
In the name of editorial policy, it pontificates, promoting slogans,
denigrations and articles of faith in the guise of critical policy review.
(This is called brainwashing under freedom).
The media doesn't aim at reporting the news; it tries to create the
news, imposing its view of the news upon everyone as the final truth. The media
doesn't objectively cover elections, it tries to influence voters to vote in a
specific manner, demonizing those it disagrees with and excusing those it
supports, however bad or incompetent their behavior. We saw this particularly
during the recent Gujarat elections in which the media went so far as to print
the type of election results it wanted to see as the likely outcome, though
voters proved it to be totally wrong.
The Mandate of Media
The question therefore arises as to what affords the media such a
sweeping authority that can override legitimately elected and appointed bodies?
What sort of mandate has the media been given to justify its actions? Clearly
the media has never been elected to any political post and does not undergo any
scrutiny like that of candidates in an election. It does not represent any
appointed post in the government. It has no accountability to any outside
agency. The media's authority is largely self-appointed and, not surprisingly,
self-serving. Hence the media has become a tool of foreign powers who would
like a particular outcome in an election, in policy making inside India, or in
image creation.
The sources behind the media's operation, and where it gets its
money, are also not revealed. We are not informed how prominent reporters and
editorial writers derive their income, including how much may come from outside
sources. But clearly they are getting a lot of money from somewhere that they
are in no hurry to disclose. Though the media likes to expose the
improprieties, financial, sexual and otherwise, of those it dislikes, which it
often exaggerates, if not invents, if you examine how media people live, you
certainly wouldn't want them as role models for your children!
Nor are we certain who the media really represents. Certain groups,
not only inside but also outside India, are using this English media as a
vested interest to promote their own agenda, which is generally anti-Hindu and
often appears to be anti-India as well. Negative news is portrayed more than
positive news. President APJ Abdul Kalam asks: "Why is the media here so
negative? Why are we in India so embarrassed to recognize our own strengths,
our achievements? We have so many amazing success stories but we refuse to
acknowledge them. Why? Is there an agenda to reduce the achievements of
India?" The only reason for the negative news is to reduce the
self-confidence of Hindus and their place in the world. This is a campaign on a
world scale, probably never undertaken anywhere else. Is this really possible,
and is it really happening?
The Media Propaganda Machine
A section of the Indian media often appears more as a propaganda
machine than an objective news agency. In this regard a large section of the
English media of India is much like the old state propaganda machines of
communist countries. This is an important clue to understanding its operation.
The English media of India largely represents a holdover from the Congress era,
in which it was a state-run propaganda center for a Congress government that
was far left in its orientation. We can perhaps best understand its actions
today as those of a state-run propaganda machine that has continued in power
after the decline of the party that created it. Its prime motive has now become
to reestablish that old state and former ruling party.
The media remain largely a Congress-run propaganda machine. As the
Congress has not been able to win elections, it has emphasized its media wing
all the more strongly to try to compensate for its failures in the electoral
arena. Yet as the Congress Party itself has often failed, the media has taken
to supporting other leftist groups inside and outside the country in the hope
of gaining power. There is a clear hand of western governments in manipulating
the Congress party to do their work. This shows how the Indian government has
been manipulated as a puppet of the western governments for the last 40 years.
Before and just after independence, the British used the media to
demonize Hindu groups in India. From history, K. Elst says: "In November 1944:
'It is the subtle scheme of political propaganda to describe the Hindu as
pro-Fascist. It is a cruel calumny, which has been spread in America and other
countries. The Hindu Mahasabha stood for Savarkar's policy of militarization
and industrialization. We recognized that Fascism was a supreme menace to what
is good and noble in our civilization. Due to Veer Savarkar's call thousands of
young men joined the Army and Navy and Air Force and shed their blood for
resisting Nazi tyranny and for real friendship with China and Russia. But as
the Hindus had the temerity to ask for National Independence and took the lead
in rejecting the Cripps (commission) offer, they were maligned and the subtle
forces of organized British propaganda were let loose to blackmail the Hindus'
(Hindu Politics, p. 103). The current tendency to accuse the Hindu movement for
the cultural decolonization of India of 'fascism' is nothing but a replay of an
old colonial tactic."
8
Globalization and Media
An essential prerequisite to sustainable development, for all
members of the human family, is the creation of a Global Information
Infrastructure. This GII will circle the globe with information superhighways
on which all people can travel. The GII will not only be a metaphor for a
functioning democracy, it will in fact promote the functioning of democracy by
greatly enhancing the participation of citizens in decision-making. I see a new
Athenian Age of democracy forged in the fora the GII will create.
We are often said to be in the process of an information
revolution-a revolution that is turning the world into a ‘global village’. The
global village metaphor is attractive; it is simple; and it is profoundly
misleading. It may well be tempting to imagine the world as a village, when a
network like CNN can make television audiences in five continents eye witnesses
to US marine landings in Somalia, Boris Yeltsin climbing on to a tank in
Moscow, or indeed the events at the UN Fourth World Conference on Women in
Beijing. From a certain perspective, this is indeed impressive.
But the global information and communication system is far from
involving the majority of people around the world-even as consumers, and
certainly not as participants or producers. It is a system that perpetuates
many inequalities.
The sales revenue of the top twenty media companies-all concentrated
in the USA, Japan and Western Europe-amounted to $102 billion in 1992. In the
same year the combined GNP of the 45 least developed countries was just $80
billion. In August 1995 the Walt Disney Corporation agreed to pay
$19 billion for the US media giant Capital Cities/ABC. Disney’s
chairperson Michael Eisner explained that the deal would help his corporation
to exploit the world’s growing appetite for ‘non-political entertainment and
sports’ (quoted in Squires 1995, p. 139). But the world has other
appetites too. That 19 billion dollar sales tag is equivalent to UNICEF’s
estimate of the extra cost of meeting worldwide need for basic health and nutrition,
and primary education (UNICEF 1995).
When Mickey Mouse and Donald Duck are regarded as a better financial
investment than fundamental human needs, we are surely a world at risk. In 1985
Neil Postman published a cogent indictment of entertainment culture. The
trivialisation of public life and discourse was already so insidious, he
warned, that we were in danger of Amusing Ourselves to Death. A decade later,
with the Disney corporation poised to become ‘the greatest entertainment
company in the next century’ (Michael Eisner again), Postman’s prediction seems
chillingly close to fulfillment. Writer Benjamin Barber has correctly observed
that ‘Disney’s amusements are much more than amusing. Amusement is itself an
ideology. It offers a vision of life that... is curiously attractive and bland’
(Barber 1995). And as Alan Bryman’s study of Disney’s ‘business of fantasy’
points out, the vision of life offered by the entire panoply of Disney products
is permeated by a highly traditional form of gender stereotyping (Bryman 1995,
pp. 130-132).
The Aspirational Culture and Images of Women
Whether or not the world actually has a growing appetite for
‘non-political entertainment and sports’ is largely irrelevant. In a global
information and communication system whose corporate managers characterise
their output as ‘product’ (rather than content) and view people as
‘demographics’ (rather than audiences), appetites and aspirations can if
necessary be created. Women are often a central target in this process of
opening up markets. ‘Polish women have been crying out for a magazine like
this’ insisted advertising manager Jack Kobylenski at the 1994 Polish launch of
the glossy fashion and beauty magazine Elle, owned by French publisher
Hachette. Of course no woman in Poland ever took to the streets to ‘cry out’
for Elle, but the Polish version of the magazine is now the third biggest
edition, second only to France and the USA (Meller 1994). In Russia, the
American Hearst Corporation’s Cosmopolitan entered the market cautiously in
April 1994 with a monthly press run of 60,000; by 1995 this had risen to well
over 500,000 and the Russian Cosmo was commonly described as a ‘publishing
miracle’. Says its Moscow-based publisher: ‘I knew Cosmopolitan could work
here. You looked at Russian women, and you saw... how they wanted to improve
themselves. I knew if there was one magazine that shows you how your life can
be, a shop window you can look in... it was Cosmo’ (Hockstader 1995).
If women’s magazines are fantasy-like shop windows that ‘show you
how your life can be’, the products they display are of course also meant to be
purchased-in real shops. But since actual buying power is often extremely
limited, this step in the global marketing process requires a more long-term
strategy. ‘I take a decade’s view’, said company president Leonard Lauder at
the opening of the first Estée Lauder store in Prague in September 1994. ‘I am
a lipstick imperialist. You can’t underestimate the long-term value companies
like Estée Lauder bring to Eastern Europe... One person, one family, can
change the whole aspirational culture’ (Menkes 1994). Helping along the process
of change is Lauder’s Central European Development Corporation which has a 75%
stake in Nova TV, the Czech Republic’s first-and hugely successful-commercial
television station. The most popular items in its foreign-dominated programme
schedule include Dynasty, M*A*S*H and Disney animations (Gray 1995), all of
whose representations of women have been the subject of much criticism.
Interestingly enough, in his lengthy study of Dynasty Jostein
Gripsrud makes a direct comparison between its female ‘anti-heroine’ Alexis
Carrington Colby (played by Joan Collins) and the Disney creation Cruella de
Vil in One Hundred and One Dalmatians, who wanted to skin little puppies to
make herself a fur coat (Gripsrud, 1995, p. 193). These are the ‘bad’ women of
male fantasy, the villainesses whose function is to confirm the proper
characteristics of ‘good’ women-passivity and powerlessness, which are the essential
attributes of any woman who is to achieve happiness in popular media fiction.
The female audience is encouraged to emulate submissive, long-suffering
heroines not simply by a media narrative which suggests that this is how they
will ‘get their man’. Women are also encouraged to literally ‘buy in’ to the
(fantasy) world of such heroines by purchasing products marketed by the shows’
producers. Disney was one of the first to recognise the power of
‘merchandising’. From Alice’s Wonderland in 1924 to Pocahontas in 1995, Disney
products-films, television, publications, character dolls, theme parks-have
become mutually reinforcing links in a powerful narrative of consumption.
During the 1980s the multi-million Dynasty merchandising operation included not
just clothes (fans of the programme used to ‘dress up’ to watch the show), but
also luggage, linens, jewellery, home furnishings and even optical wear. One of
the most successful items was the perfume ‘Forever Krystle’, named after the
‘good’, sympathetic female character Krystle Carrington (played by Linda
Evans).
In the first series of Dynasty Krystle, then a subservient
secretary, married her brutal boss Blake Carrington and the marriage went
through numerous tribulations. In the ads for ‘Forever Krystle’ several years
later, Blake is portrayed as an adoring husband who presents his wife with a
fragrance specially created for her. Krystle will be happy ‘forever’ after-and
by implication so will the women who buy the perfume.
Gender and the Political Economy of Communication
Any consideration of gender portrayal in the media must take account
of these wider issues of political economy if existing patterns of
representation are to be properly understood and challenged. For as Kamla
Bhasin has rightly pointed out: ‘We are not just concerned with how women are
portrayed in the media or how many women work in the media. We are also
concerned about what kinds of lives they lead, what status they have, and what
kind of society we have. The answers to these questions will determine our
future strategies for communication and networking. Communication alternatives
therefore need to emerge from our critique of the present world order and our
vision of the future’ (Bhasin 1994, p. 4).
Certain trends in the information and communication system of the
present world order are set to have a considerable impact on the future of
people throughout the world. The media mergers of the past decade have not only
consolidated huge power in a decreasing number of corporations with global
reach. They have also begun to erode old distinctions between information and
entertainment, software and hardware, production and distribution. It is this
fusion of communication forms, which constitutes a radical break with the past,
that presents such a challenge for the future. For although the influence of a
single medium such as television is clearly limited in many ways, it is the
‘panoply of cultural means together’ (Schiller 1989, p. 151) that is central to
the ability of large media conglomerates to present a world-view that bolsters
and reinforces their position in the modern economic system and that system
itself.
In this context the significance of ‘lipstick imperialism’ becomes
clear. The term puts an intriguing new spin on a concept that dominated much of
the debate on international communication during the 1960s and 1970s. ‘Cultural
imperialism’-the rallying cry of communication scholars and activists who
sought to defend indigenous cultural identity and economic independence-now has
a rather anachronistic ring. Yet the free-market economic policies adopted by
many countries around the world in recent years have opened the doors to new
forms of consumerism, driven by increasingly commercial, increasingly
transnational communication media. Reflecting on the current situation in Latin
America, Gabriel Escobar and Anne Swardson take the example of MTV Latino,
whose ‘message is powerful and still growing, an influential cultural tool in a
market already saturated with images and products from the north. But what is
most striking about this loud invasion is the silence that has greeted it.
Three decades after the Latin American left led a call against cultural
imperialism... the continent has unabashedly embraced ‘cultura lite’, a
universal, homogenised popular culture in which touches of Latin American
rhythm or ‘Spanglish’ accent a dominant North American diet of songs, words and
images’ (Escobar and Swardson, 1995). To explain the lack of opposition to this
contemporary cultural invasion, Uruguayan writer and journalist Eduardo Galeano
echoes Schiller’s ‘panoply’ thesis. By stimulating consumption, he argues, the
neo-liberal policies which the countries of the North passionately promote
simultaneously stifle resistance and creativity. They have helped to develop in
Latin Americans a trend towards imitation and what he describes as a ‘mentality
of resignation’.
Such an analysis is equally applicable outside Latin America. MTV
Asia-with 28 million viewers in the region-is said to have triggered a change
not just in musical tastes, but ‘in social style. In fashions, behaviour,
language and morals, more and more youngsters are falling to the thrall of MTV
and are drawn into aping the West’ (Menon 1993a, p. 29). Again, this cultural
invasion is dictated by economics-as MTV’s director of international programme
sales explains: ‘The youth audience is the most sought-after and most lucrative
demographic internationally’ (Jenkinson 1994, p.104). Again, the
representation of women plays a particular role in the channel’s iconography.
Sut Jhally, in his educational documentary Dreamworlds, argues that MTV works
systematically to deny women subjectivity.
Jhally demonstrates how the channel constructs an image of women
through a patriarchal discourse of ‘nymphomania’, as ever-available objects in an
endlessly repetitive male adolescent fantasy world. Other studies agree that,
despite the presence of strong female images in some music videos, it is hard
to fault the essential truth of this argument.
Media Commercialisation and the Women’s Market
If music television targets the ‘youth demographic’ by using highly
sexualised male fantasy images of women, the ‘female demographic’ is itself an
increasingly important market in today’s commercial media environment. According
to Tor Hansson, managing director for Universal Media in Norway, ‘the most
sought-after demographic group in Norway is women between the ages of 25 and
45-and especially professional and middle management women’ (Edmunds
1994, p.4). With advertisers in Norway and Sweden complaining that this
lucrative market was not being delivered to them, the powerful Kinnevik media
group launched TV6-Scandinavia’s first channel targeted solely to women-in
April 1994. Three weeks before its launch, all advertising spots for TV6 had
been sold out.
Most of the new channels aimed at women adopt the style and mode of
address of women’s magazines-the vehicle through which advertisers have
traditionally reached the female consumer. Not surprisingly, publishing giants
such as Hearst (USA), Hachette (France), D.C. Thompson (United Kingdom) and
Bertelsmann (Germany) were among the first to grasp the additional routes into
the female market opened by a proliferation of new cable channels. In 1993
three channels aimed at women were launched in the UK alone. The most
successful has been UK Living, providing ‘practical and entertainment’
programmes for women.
The output is ‘comforting, non-threatening and promises not to
over-tax your senses, sensitivities or brain-cells. It smacks of tabloid
television-the agony aunts, the special offers,... the game shows, the cult of
the minor celebrity as social pundit. There are no documentaries or news’
(O’Brien 1993, p. 20). Apolitical and uncontroversial, these channels fit
perfectly within the framework of consumerism. To paraphrase the (male)
director of Germany’s first women’s television channel TM3, launched in August
1995, they pursue an ideal viewer who is ‘feminine rather than feminist’.
Gems, the first transnational television channel aimed at women, was
launched in April 1993 for distribution in the USA and Latin America. The
(male) president of International Television which produces the channel’s
shows, all made in Spanish, describes it as ‘programming that’s relevant to
women, showing musicals, movies and mini-series featuring women’s unique roles’
(Burnett 1993, p. 25). Particularly revealing of the Gems ideology is an
advertisement for the channel, run in the trade magazine ‘TV World’ in April
1994: ‘She’s a romantic and a realist. A caretaker and an emerging power.
She’s the gatekeeper of more than $260 billion in the U.S. alone.
And she has just one international Spanish-language cable television service
talking directly to her. GEMS Television.... GEMS is her TV. Because we empower
her in a way cable programming never has before. And because we know she is a
treasure. GEMS is her TV. That’s its brilliance’. Telenovelas feature
prominently in the schedule, ensuring-according to marketing director Grace
Santana-success against any competition: ‘We’ve programming that’s
proven-novelas have been around for 40 years’.
That such a channel will ‘empower’ women seems improbable. What does
seem likely, however, is that Gems will indeed open up a new gateway to
capital-potentially ‘$260 billion in the U.S. alone’. With an audience of
600,000 subscribers shortly after its launch, by early 1995 the Miami-based
channel was reaching a potential audience of almost five million viewers
throughout the Americas (Weinstock 1995, p. 39). While male-owned commercial
women’s channels like these are flourishing, the Canadian Women’s Television
Network (WTN)-launched in January 1995 as ‘a dynamic alternative to mainstream
viewing: a channel run by women, for women’-is attempting to succeed in the same
market, but on quite different terms.
In May 1995 Barbara Barde, WTN’s first Vice President of Programming
outlined some of the channel’s distinctive features: ‘WTN has no victims, no
violence.... We have women as chief protagonists, women who drive the stories,
are in control of their lives.... For us, it is very important that women form
part of the creative team of producers, directors and writers... We also have a
foundation to which we pledge three-quarters of 1% of our revenues. Its job is
two-fold. The first part is research projects, looking at issues relating to
women in broadcasting. The second is concerned with mentoring, apprenticeships,
etc.... not only mentoring women within our own organisation, but also
encouraging conventional broadcasters to do the same.... I think we can be a
role model’ (Barde 1995, pp. 18-19).
The philosophy has little in common with that of the male-controlled
channels described earlier and-almost inevitably-WTN has met with hostility
from the male media establishment. When industry ratings, released in July 1995,
showed that WTN was the least-watched of Canada’s new cable channels, male
critics rushed to the attack. An article by John Haslett Cuff in the Toronto
‘Globe and Mail’ declared that ‘WTN was born with a large chip on its padded
shoulders’; in his view, no-one would be surprised by the ratings. On the other
hand, Cuff was surprised and disappointed that Bravo, an arts channel, had tied
for second-last place in the ratings since it was ‘easily the most stimulating and
original of all the new specialty services’. It is instructive to contrast this
review with another in ‘TV World’, by Claire Atkinson. She noted that both WTN
and Bravo were ‘still finding their audience, although both have won
recognition for their programming philosophies.... For WTN... the problem is
that TV viewing is a family affair during primetime, and it isn’t until after
22.00 that women will watch channels on their own’.
The gendered nature of these two reviews is illuminating. Cuff’s
comments display a deep and subjective antagonism to the channel; Atkinson
reveals a knowledge of the context in which female viewing takes place, and
uses this knowledge to interpret the ratings. For WTN itself the ratings would
have come as little surprise. As Barbara Barde remarked two months before they
appeared, ‘We always expected that our audience would grow slowly, and that we
would have to change habits in a large number of households, because guess who
controls the remote controls? Not women’ (ibid.). Whether the finances of WTN
will allow the channel sufficient time to build up its audience remains to be
seen. Rosalind Coward has argued that during the 1980s, series after series of
women’s television programmes in the United Kingdom were simply ‘allowed to fail’,
while other genres were protected and preserved until they had established
themselves (Coward 1987, p. 100). In the 1990s it is clear that any venture of
this kind faces an even more formidable array of obstacles, most of which will
never be experienced by channels which treasure women as the gatekeepers of
dollar bills.
Resisting the Mentality of Resignation: Women’s Media Alliances
The immensity, facelessness and apparent impregnability of today’s
media conglomerates undoubtedly help to foster a ‘mentality of resignation’, as
Galeano puts it. The mentality of resignation is a sign that people are being,
or have been, disempowered. But if certain forms of communication and culture
can disempower, others can empower. Over the past twenty years women have not
been content merely to denounce biases and inequities in the established media.
Women have created and used countless alternative and participatory
communication channels to support their struggles, defend their rights, promote
reflection and diffuse their own forms of representation. Pilar Riaño argues that
this process has made women the primary subjects of struggle and change in
communication systems, by developing oppositional and proactive alternatives
that influence language, representations and communication technologies
(Riaño 1994, p. 11).
Standing outside the mainstream, ‘women’s movement media’ have
certainly played a crucial role in women’s struggle around the world. Part of a
global networking, consciousness-raising and knowledge creation project, they
have enabled women to communicate through their own words and images. If print
and publishing have been the most widely used formats, in the past two decades
other media such as music, radio, video, film and-increasingly-the new
communication technologies have also been important. Over the same period, in most regions
there has been a steady growth of women’s media associations and networks, and
an increase in the number of women working in mainstream media. Yet as Donna
Allen points out, ‘there is still a wide gap between the women who have formed
networks outside of the ‘mainstream’ media and those women who are employed in
mass media who hold the key to reaching the larger public’. The closing of this
gap, she argues, ‘is a crucial step toward the advancement of all women’ (Allen
1994, pp. 161, 181). The building of such alliances, and the merging of
women’s diverse experiences of working with and in the media, is surely one of
the most urgent tasks for women struggling for a more diverse and democratic
world information and communication system.
Gender Portrayal in the Media: The Basic Facts
Clearly the debates around gender representation in the media have
moved on since the content analyses of ‘sex-roles and stereotypes’ which
typified studies of the 1970s in North America and in countries such as Japan,
Korea and the Philippines, where quantitative social science methods were favoured.
These studies certainly documented women’s exclusion from or
silencing in many media forms, and helped to show how media images reinforce
notions of ‘difference’-in behaviour, aspirations, psychological traits and so
on-between women and men. Studies of this kind are of course still carried
out, and they remain important in recording some of the basic elements in a
very complex situation. In an ambitious global monitoring exercise, women from
71 countries studied their news media for one day in January 1995. More than
15,500 stories were analysed, and the results were dramatic. Only 17% of people
interviewed in the news were women. Just 11% of news stories dealt with issues
of special concern to women, or foregrounded any gender perspective on the
events reported (MediaWatch, 1995). National monitoring studies, over longer
time periods, show similar patterns. The particular power of these studies lies
in their potential to document change. In fact, regular media monitoring in
Canada and the USA shows surprisingly slow progress towards equal
representation of women and men in the media.
Studies since 1974 indicate that ‘peaks’ may be followed by
‘troughs’, with no sustained pattern of improvement. Indeed, according to one
of the longest running studies of trends in gender portrayal on US television
(carried out since 1969 by the Cultural Indicators research team at the
Annenberg School for Communication, University of Pennsylvania), ‘the
demography of the world of television is impressive in its repetitiveness and
stability:... women comprise one-third or less of the characters in all samples
except day-time serials where they are 45.5%, and in game shows where they
are 55.3%.
The smallest percentage of women is in news (27.8%) and in
children’s programmes (23.4%). As major characters, women’s roles shrink in
children’s programmes to 18%... A child growing up with children’s major
network television will see about 123 characters each Saturday morning but
rarely, if ever, a role model of a mature female as leader’ (Gerbner 1994, pp.
39, 44). The world depicted by the media ‘seems to be frozen in a time-warp of
obsolete and damaging representations’ (op. cit., p. 43).
Interpreting Patterns of Portrayal
Obvious numerical imbalances in media portrayals of women and men
tell only a small part of the story-and not necessarily the most important
part. Of course most studies go further, investigating gender differentiation
in social and occupational roles, psychological and personality traits,
physical attributes and so on. The results have been extensively documented for
most world regions and will not be detailed here. Perhaps the more interesting
questions concern the implicit messages which are woven into these media
portrayals of women and men. Why is the pattern as it is, and why does it so
stubbornly persist despite two decades of research and action aimed at changing
it? There are many ways of approaching such questions. For example, I have
already argued that discrimination or imbalance in gender portrayal is not an
isolated phenomenon which can be studied-or changed-in a compartmentalised way.
Media representations of women and men take shape within particular, and
changing, socio-economic formations which must themselves be analysed and
understood. But there are other issues to consider too.
One is the question of political ideology. In most parts of the
world, at different times in history, representations and images of women have been
used as symbols of political aspirations and social change. An obvious example
was the widespread use of particular asexual, ‘emancipated’ female images in
Soviet culture: the confident, sturdy woman on her tractor, on the farm, or in
the factory. As various recent commentators have pointed out, images of this
kind never reflected existing reality. In the words of Olga Lipovskaya, ‘the
social realist tradition was intended to create an ideal reality and utilised
this model to portray the exemplary woman of the radiant Communist future’.
In such a situation female imagery becomes a metaphor for a
particular political ideology, rather than a representation of women’s lives.
In her analysis of the powerful media definitions of womanhood in revolutionary
China, Elizabeth Croll maintains that ‘imaging’ actually became a substitute
for living or experience: ‘With the gradual exclusion of semantic or visual
variations of image and text, the rhetoric of equality and celebration soon
became the only language officially tolerated... There were no images of, or
words for representing, the inequality of experience’ (Croll 1995, p. 80).
In one of the few extensive analyses of female imagery in the Arab
States, Sarah Graham-Brown points out that images of women may be used in
conflicting ways-as symbols of progress on the one hand, and as symbols of
continuity with the cultural past on the other-frequently in reaction to
representations of women imposed from outside the society, for instance by the
Western media.
Major ideological changes obviously affect the use of female imagery
to promote national goals. A clear example, cited by Graham-Brown, is the
contrast between the way women were portrayed in the media in Iran during the
Pahlavi rule and since the revolution. ‘In both instances, these images form an
important element in the way the regimes promote and legitimize themselves. At
the same time, neither kind of image necessarily reflects with accuracy the
changes or continuities in the everyday life of women in different classes’.
The disjuncture between image and reality becomes profound in
situations where governments are attempting to mobilize people for certain
kinds of social change. Graham-Brown gives examples from post-independence
Algeria and Nasser’s Egypt, where ‘modernist’ and westernised images of women
were used as emblems of progress and enlightenment. Yet ‘on the whole, these
images of emancipation, while they might promote the idea of the progressive
nation, did not challenge basic gender relations in society, particularly male
domination of the family structure’ (Graham-Brown 1988, p. 245). In
contemporary Egypt, according to Lila Abu-Lughod (1993), there is a similar gap
between the ideological message of certain ‘national interest’ television
serials and the experience of life in particular communities. The interpretation of
such images is thus fraught with complications. This does not mean that no
indication of changing status, or changing attitudes to women can be gleaned
from them. But they cannot be ‘read’ according to any simple formula whereby
changes in imagery are assumed to equate with changes of the same magnitude in
women’s lives.
Diversity and Change in Gender Portrayal
These examples illustrate the limitations of a framework which sets
out to critique ‘negative’ images and to demand ‘positive’ media
representations of women. Such a juxtaposition assumes that there is a norm
against which images can be judged. In reality, things are much more
complicated. The same kind of image can embody a variety of different meanings,
depending on the context. A more promising route seems to be offered by the
search for greater ‘diversity’ in gender portrayal. But here again, the
situation is not completely straightforward. Media representations of women and
men in the 1990s may indeed be more diverse than they were twenty years ago.
Lawyers, doctors and police officers are no longer inevitably male; and we may
even see the occasional male character in the kitchen, weeping into the
washing-up bowl. But how important is this change, and what is its
significance?
It is true that drama-including popular fiction, soap operas and
telenovelas-has to some extent begun to respond to new currents and complexities
in gender relations, with occasional portrayals of the ‘new man’ (gentle,
supportive, emotional) and the ‘modern woman’ (independent, assertive,
resourceful). But detailed analyses suggest that such innovations are often
simply a modish facade, behind which lurk old-fashioned formulaic assumptions.
Longitudinal studies of Italian television drama show that, despite a
scattering of ‘anti-heroes’, output remains overwhelmingly male-centered and
success-oriented. In Germany and the United Kingdom, studies have called into
question claims that ‘progressive’ soap operas have actually introduced
radically different points of view (for example, Externbrink 1992; Geraghty
1995).
In Latin America most of the independent new heroines of recent
telenovelas, on closer examination, seem to have been introduced as a means of
changing the ‘outer wrappings’ of the genre rather than its core messages. In
the USA several studies of the successful prime-time series thirtysomething
have concluded that despite claims that it articulates a ‘new view of manhood’,
the show’s construction of reality is substantially conservative. Even the
trail-blazing 1980s female detective series Cagney and Lacey does not escape
criticism. Julie D’Acci’s detailed study reveals that although the writers
struggled to maintain the show’s original feminist orientation, in the face of
pressures imposed by commercial network television, the series gradually became
more conventional, ‘feminine’ and exploitative-in the sense of promoting
stories that literally ‘cashed in’ on issues of great complexity for women,
such as rape, abortion, marital violence and so on (D’Acci 1994).
Sightings of the ‘new man’ in media portrayals have been recorded in
countries as different as India, Italy and the USA (Shelat 1994; Buonanno 1994;
Douglas 1995). Again, this phenomenon cannot automatically be taken at face
value. Milly Buonanno sounds a note of caution, pointing out that the ‘new man’
in Italian drama is winning the central position in the family and domestic
domain at the expense of women, whose overall share of central roles has fallen
over the past four years: ‘Even the domestic sphere, the traditional stronghold
of the female character in drama, now seems to be increasingly inhabited by
males who show themselves more in command of emotional life than the women do’
(p. 82). A similar concern is expressed by Susan Douglas. Both she and Manisha
Shelat question the extent to which these images actually reflect reality in
their societies, though for Shelat they are a ‘welcome change’ from the role
stereotyping that predominates in the majority of Indian media. But Douglas is
less sanguine, seeing the development as a ‘bizarre twist on the real world,
where many women have changed, but too many men have not’ (p. 81).
This review raises important questions about the extent to which the
mainstream media are capable of reflecting diversity and complexity in a way
which would properly respond to the current criticisms of women media
activists. For this reason, some women remain sceptical of any engagement with
the mainstream. But others-like film-maker Michelle Citron-regard it as an
essential step forward, providing a possibility of ‘subverting’ and changing
mainstream media content, despite the compromises involved: ‘These are risks we
need now to take. We will lose a certain amount of control, despite our best
intentions and preparedness... But we need new ‘data’ in order to refine our
understanding of [the media] and our relationship to it’ (Citron 1988, p. 62).
The Media and Violence Against Women
In a detailed analysis of how the press covered four prominent sex
crimes in the USA over the period 1978 to 1990, Helen Benedict concludes:
‘During the 1980s and 1990s, the quality of sex-crime coverage has been
steadily declining... Rape as a societal problem has lost interest for the
public and the press, and the press is reverting to its pre-1970s focus on sex
crimes as individual, bizarre, or sensationalist case histories’ (Benedict
1992, p. 251). Benedict offers a useful set of suggestions to improve the
reporting of sex crimes-covering language, balance, context, focus on attacker
rather than victim, and so on. On the specific question of language Ann Jones,
author of Next Time, She’ll Be Dead: Battering and How to Stop It, gives
numerous examples of crime reporting in which women are victims but their
attackers’ violence is masked in the language of love. Says Jones, ‘this
slipshod reporting has real consequences in the real lives of real men and
women. It affirms a batterer’s most common excuse for assault: “I did it
because I love you so much”’ (quoted in Media Report to Women 1994). It does
seem justifiable to suppose that what we see and hear in the media has real
consequences in our lives. However the issue of ‘media effects’ raises many
complicated questions which I will not attempt to take up in this short paper.
Instead I will approach the question of violence primarily from the perspective
of the female consumer.
How do women react to the portrayal of violence? It seems fair to
conclude that if women are made uncomfortable, anxious or frightened by
depictions of violence, then their views deserve to be heard.
In fact, the presentation of violence in the media is an issue which
provokes quite divergent reactions between women and men. Women are less likely
than men to watch violent programmes and films. And even if they do watch,
women may not actually enjoy what they see. In the words of one woman
interviewed in a recent British study, ‘women don’t enjoy watching violence in
the way that men do, judging by the popularity of violent films. I don’t know
any women who get a kick out of watching the after-effects of violence’
(Hargrave 1994, p. 20). Research in the USA shows that women (47%) are much
more likely than men (24%) to object to the level of violence on television. A
survey of women viewers in Canada found that violence was what concerned women
most about television: 34% selected this from a list of seven items of concern,
and 36% said they avoid violent programmes on television (MediaWatch 1994). In
India women were found to have a ‘strong dislike for (television) films which
show violence, and admit to just waiting for the violent scenes to be over so
that they could enjoy the next violent-free scene’ (Media Advocacy Group,
1994). Women are also more concerned than men about the possible impact of
violent messages. Research in the United Kingdom has shown that 59% of
women-compared with 45% of men-would be prepared to give up their freedom to
watch violent programmes if it was widely believed that these caused some
people to be violent (Docherty, 1990). Of the Canadian women questioned in
MediaWatch’s 1994 survey, 82% said they believed that violence in the media
contributes to violence in society. More informal reports have found that women
in many countries around the world express high levels of anxiety about media
violence, and groups such as the Tanzania Media Women’s Association (TAMWA) and
Women’s Media Watch in Jamaica have launched campaigns and activities to
address the problem.
For women who have actually experienced violence, subsequent
exposure to scenes of media violence against women-particularly when portrayed
as ‘entertainment’-may be especially painful: ‘There are things that bring it
back... I can’t watch extremely violent things, I just want to turn off because
the thoughts start and I just don’t want to know’. But even if they have not
been victims themselves, seeing violence on television is an extremely
disturbing experience for many women. Recent audience research in Germany found
that more than half of all female viewers are frightened and feel threatened by
the kind of violence presented on television (Roser and Kroll, 1995). Similar
findings emerged clearly from an in-depth study in the United Kingdom
(Schlesinger et al., 1992) in which women were shown various kinds of violent
material, including an episode from Crimewatch UK (a series which reconstructs
crimes: the reconstruction used was of a young woman’s rape and murder), and
the Hollywood film The Accused (which includes a graphic portrayal of gang
rape). One of the most striking findings was ‘the fear of male violence,
particularly of rape. This was generally found across all of the viewers,
despite class or ethnicity, as was the concern about the possible impact upon
children of viewing violence against women on television. In relation to the
rape/murder in Crimewatch and the gang rape in The Accused, group discussions
revealed a profound anxiety about personal safety’ (op. cit., 166). In the case
of The Accused, ‘there was considerable concern about the appropriateness of a
Hollywood film-essentially premised on entertainment values-as the most
suitable vehicle for dealing with this troubling subject... and worries (which)
centered upon what ‘men’ were likely to make of this film’ (op. cit., 163).
The Center for Media and Public Affairs in the United States
analysed the incidence of violence on television over a twenty-four hour day in
April 1994.
The number of violent scenes ranged from a low of 71 in the hour
between 2 p.m. and 3 p.m., to a high of 295 scenes of violence in the hour from
6 a.m. to 7 a.m. (Kolbert 1994). An eight-country study of television violence
in Asia conducted by the Asian Mass Communication Research and Information
Centre classified 59% of all the programmes studied as ‘violent’, with
particularly high levels of violence in India, Thailand and the Philippines
(Menon 1993b).
George Gerbner, who has studied television violence for the past
twenty years, maintains that ‘Constant displays of violent power and
victimization cultivate an exaggerated sense of danger and insecurity among
heavy viewers of television’. Clearly, many of the women in the studies
mentioned earlier experience this sense of danger and insecurity.
Strong sentiments were also expressed by these women about the
extent to which it is acceptable to show representations of violence against
women to the general public without adding special safeguards.
Such ideas deserve to be taken seriously, and to enter the public
domain so that they become part of the debate on regulation and
self-regulation. Satellite communication, by weakening the control of national
governments over a growing proportion of media messages and images beamed into
their territories from elsewhere, has given this debate a new urgency. But
proposals for a global code of practice have been met with general scepticism
by the media community. At the national level only a few countries-for example,
Australia, Canada and New Zealand-have so far taken a new, tougher stand on
television portrayal of violence against women.
The United Nations Declaration on the Elimination of Violence
Against Women-which defines the term ‘violence against women’ as ‘any act of
gender-based violence that results in, or is likely to result in, physical,
sexual, or psychological harm or suffering to women’-certainly provides scope
for actions aimed at reducing or eliminating media violence in general, and
scenes of violence against women in particular. Here it is important to bear in
mind that media depictions of dramatic aggression against women are at one end
of a continuum of media images of women which build up from an apparently
benign starting point. For instance the educational video Dreamworlds, mentioned
earlier, demonstrates how an accumulation of images in which women are
presented as submissive objects of male fantasy in music television may
contribute to a perception of the ultimate act of sexual violence-rape-as
justifiable and ‘natural’. At the very least, the development of further
materials of this sort should be undertaken with a view to documenting how
patterns of media violence against women are constructed, and what their
implications may be for the lives of women everywhere.
Pornography and Freedom of Expression
Pornography has for many years been a multi-billion dollar
international industry. In the United Kingdom alone, £52 million was earned from
the sale of pornographic magazines in 1993 (Davies 1994). Recent developments
in the information and communication system have made pornography more widely
available than ever before. For instance television deregulation, combined with
transborder satellite channels, has resulted in a tenfold increase in televised
pornography over the past decade in Europe, and the demand is escalating
(Papathanassopoulos 1994). New information technologies have introduced various
forms of ‘on-line’ pornography. Interactive computer porn is a particularly
menacing development. This is quite different from earlier forms, in that the
user becomes a participant-a ‘doer’ of pornography rather than merely an
observer. Male fantasy myths about women’s sexual availability feature strongly
in these products.
In cyberspace and elsewhere, pornographers routinely use ‘freedom of
speech’ arguments to defend their right to distribute material which is nothing
other than a violation of women’s human right to safety and dignity. In 1986
British Member of Parliament Clare Short tried to introduce a Bill to make
illegal the display of naked or partially naked women in sexually provocative
poses in newspapers (known in the UK as ‘Page 3 girls’). ‘Killjoy Clare’, as
she was dubbed by the Sun newspaper, was accused of ‘authoritarianism’, of
wishing to deprive people of one of their few ‘pleasures’, of wanting British
newspapers to resemble Pravda. Compared with the displays used in hard-core
pornography, Page 3 may seem relatively innocuous. But Clare Short received
5000 letters of support for her proposal, the overwhelming majority from women.
Twelve women who had been raped wrote that their attackers said they reminded
them of a woman on Page 3, or said they ought to be on Page 3. Since 1986
one major British tabloid newspaper has abandoned its ‘Page 3 girls’, but
others maintain the practice.
Pornography is a central issue for the women’s movement, especially
in relation to violence against women. It is regarded by many as the key site
of women’s oppression. Yet disputes over the regulation of pornography have
split women’s groups, raising the spectre of censorship-a weapon which could be
used against minority groups and against women themselves. In this respect,
recent developments in Canada are of note. In February 1992, in a milestone
decision, the Canadian Supreme Court upheld a conviction against a pornography
dealer and, in so doing, recognised a new definition of obscenity.
The Court ‘recognised the harms to women, children and society
arising from pornography as justifying constraint on the free speech rights of
pornographers. The expression found in obscene material, the Court concluded,
lies far from the core of the guarantee of free expression’ (Easton 1994, p.
178). The Butler decision, as it became known, has had important and not
entirely predictable consequences. Women saw it as a huge step forward,
opening up the possibility of convictions in other areas of media content which
could also be proven to degrade or dehumanise women. But the unforeseen
consequence was a crackdown on works by prominent homosexual and lesbian
authors and, for a time, on Andrea Dworkin-one of America’s fiercest opponents of
pornography-whose book Pornography: Men Possessing Women was temporarily
seized by Canadian customs.
The regulation of pornography is also a contentious issue for women
partly because the term ‘pornography’ has been confused-even in legal
instruments-with the concept of ‘obscenity’. The definition of
obscenity-filthy, disgusting, indecent-implies a moral judgement with which
women may feel uncomfortable. The definition of pornography in most feminist
literature follows that of Catharine MacKinnon and Andrea Dworkin: ‘the graphic
sexually explicit subordination of women through pictures or words’ (MacKinnon
1987, p. 176). This perspective shifts the arguments against pornography away from
the terrain of morality, towards an interpretation of pornography as a
violation of women’s rights. Yet even here there are problems. One criticism of
the civil rights Ordinances of Minneapolis and Indianapolis, drafted by
MacKinnon and Dworkin in the 1980s as a means of regulating pornography, was
that terms such as ‘sexual objectification’, ‘degradation’, ‘subordination’-on
which appeal to the Ordinances depended-left too much scope for judicial
interpretation and could be used against women. As Carol Smart (1989) argues,
traditional judicial attitudes reflect a legal framework which is essentially
incompatible with the definitions of feminism, and which cannot accommodate the
complexity of feminist arguments. However, Susan Easton (1994) takes the view
that-rather like the mainstream media-this is an area of challenge for
feminists, who must work to infuse new ideas into established legal frameworks.
As one of a number of strategies to deal with pornography, she advocates the
enactment of a law to prohibit ‘incitement to sexual hatred’.
Of course the polarisation of the pornography debate in terms of
‘free speech’ versus ‘censorship’ fails to take account of the fact that
freedom of expression is limited in all sorts of ways for most people, most of
the time. As A.J. Liebling remarked many years ago, ‘Freedom of the press
belongs to those who own one’. In today’s media context, the aphorism rings
particularly true. Easton points out that the feminist argument against
pornography ‘is not an isolated assault on free speech rights, but could be
seen as a recognition of the difficulty and undesirability of an absolutist
position on free speech in a pluralist society’ (1994, p. 174). Since they have
relatively restricted access to the channels of communication, it is hardly
surprising that women’s attitudes towards ‘free speech’ differ from those of
men. For example, a study of attitudes among journalism students in the USA
found that women see the free speech issue from a dual perspective. While
they value the operation of a ‘free press’, they also believe that absolute
freedom of expression can be harmful to them and to others. The authors
conclude optimistically that if the female students carry their attitudes
towards free expression with them into the journalistic work force, ‘society
may see a somewhat different set of professional values in the future’ (McAdams
and Beasley 1994, p. 23).
Women as Users of Media and New Technologies
Gender differences in media access are linked with patterns of
discrimination in society at large, and with patterns of power relations within
the home. In many parts of the world, high female illiteracy rates mean that
women have little access to the print media. As for television and radio, women
may not always be able to watch or listen to their preferred programmes.
Research in countries as different as Mozambique, Zambia, India, the USA and
the United Kingdom shows that, in family viewing and listening situations, the
decisions of the adult male in the household tend to prevail. Nevertheless,
these and other studies show that women are enthusiastic media users. In Egypt
certain groups of women are particularly avid television viewers: one study
found that 21% of women-compared with 11% of men-spent on average more than four
hours a day in front of the small screen (El-Fawal 1991). In a study of
relatively low-income, poorly educated women in Nigeria, 96% had access to
radio within the household or compound and television was available to 89%.
More than two-thirds of the women listened to the radio every day, and just
under one-third watched television daily (Imam 1992). In Ecuador, Rodriguez
(1990) found that 94% of the working-class women she surveyed had radio in
their homes, and over half listened at least three hours a day. In Brazil
almost every woman in three low-income areas studied by Tufte (1992) had
television in her house, and the women watched an average of three to four
telenovelas a day, six days a week.
These Brazilian women’s heavy viewing of telenovelas reflects a
universal, gendered pattern of media preferences. All over the world men prefer
sports, action-oriented programmes and information (especially news); women
prefer popular drama, music/dance and other entertainment programmes. These
programme choices are most easily explained in terms of the extent to which
women and men are able to identify with various types of media content. One of
the most obvious reasons for women’s preference for serialised drama, soap
opera and telenovelas is the exceptionally high proportion of female characters
in such programmes. Nor is it surprising that men favour genres such as action
drama which feature powerful, dynamic male characters, or sports and news which
revolve almost exclusively around male figures. It is reasonable to wonder what
impact these repetitive patterns of gender representation have on the
female-and the male-audience. During the 1980s there was a vogue for research
into ‘women’s genres’-soap opera, melodrama, magazines-leading to the
conclusion that these could ‘empower’ women. Recent studies have criticised
such claims as being wildly exaggerated, and have focused on the fundamentally
conservative and patriarchal frameworks within which these genres operate.
The problem is that in most other types of media content women
simply do not see or hear any reflection of themselves, or of their experience
of life. Television sports coverage in Europe provides a good example of the
ways in which women’s media choices are limited. Audience data for six countries
in 1992 showed that, in all six, the sporting events most watched by men were
football matches (Akyuz 1993). But women watched other sports. In France, the
event which got the top female audience-over 8 million viewers, which was
higher than the male audience for any sporting event-was women’s figure skating
at the Winter Olympics.
According to the same data, the event which attracted the largest
female audience in the United Kingdom was the women’s 10,000 metres final in
the Summer Olympics, though this reached only 8th place among male audiences.
So it is not that women don’t like watching sport, but that they like watching
different sport. In particular, they like ‘women’s sport’. Unfortunately for
women, the television sports schedules are built around male and not female
preferences.
Similarly, news and current affairs programmes reflect a hierarchy
of values in which the issues that concern women are given low priority, if
covered at all. Recent research with British viewers shows that although women
feel ambivalent about the concept of ‘women’s issues’-believing that once an
issue becomes labelled as being of exclusive concern to women, it is in danger
of being marginalised-there is also a shared understanding among women about
issues that do concern them, and a feeling that these are not given priority in
the news media. As one woman put it: ‘Women’s issues don’t always get enough
airtime on the so-called serious programmes. They don’t have the same weight as
world politics-which they should do, because they are about changing society in
fundamental ways’ (Sreberny-Mohammadi 1994, p. 69).
When asked directly, many women are clear that their preferences are
not catered for by the media. In common with women recently surveyed in Canada
and Germany (MediaWatch 1994; Roser and Krull 1995), most of those interviewed
in Annabelle Sreberny-Mohammadi’s research said that women should have more
visibility on television, that there should be stronger female characters in
drama and entertainment, and that there should be more women of authority in
news and current affairs output.
The participants felt that more women journalists and more female
experts voicing opinions across a variety of issues would act as significant
role models for other women, stimulate female interest in public issues,
and-perhaps-sometimes speak in the interests of and for women (op. cit., p.
75).
The potential of the new information and communication technologies
for the advancement of women is considerable. Networking, research, training,
sharing of ideas and information-all these could be made infinitely easier
through relatively affordable computer-mediated communications such as E-mail,
Internet, hypertext and hypermedia (Steffen 1995). However, the obstacles are
formidable. Unequal access to computers at school and in the home; highly
male-dominated computer languages and operating systems; a hostile environment
in which sexual harassment, sexual abuse and pornography flourish; these are
just some of the factors which deter women from entering cyberspace.
Gender-differentiated data on access to the new technologies are scarce, but
those available do indicate that women are more reluctant users than men. In
the United Kingdom in 1992, 27% of women (compared to 37% of men) owned a home
computer (Mackay 1995). Almost identical figures were reported in 1994 for the
USA, where just 9% of women (and 15% of men) also had a computer
modem-essential for use of E-mail and Internet. However 46% of women in this
survey were dissatisfied with their level of technical know- how, suggesting
that women may be frustrated users rather than completely uninterested in the
new technologies.
Women comprise only about 10% of the Internet population in the USA.
On the other hand, Women’s Wire-a commercial on-line service-has 90% to 95%
female subscribers. Aliza Sherman recommends this kind of service-‘providing
women-specific information on topics such as women’s health, politics, news,
technology, business, finance, and family’-as a good starting point for women wary
of cyberspace (Sherman 1995, p. 26). Dale Spender claims that there are
literally thousands of women’s groups now on-line, though it seems that most of
them are located in-and relatively limited to-North America. An exception is
Virtual Sisterhood, described as a ‘network for women around the world to share
information, advice and experiences’ which claims to have links with women’s
networks in a wide range of countries in Asia and Latin America (op. cit., p.
238). At the international level, the Association for Progressive
Communications (APC) is among the most actively involved in supporting women
through electronic communication. Women in Latin America as well as Canada and
the USA have been using the APC networks for information exchange, and the APC
Women’s Networking Support Program has provided training workshops for women in
Africa and Asia. The presence of a 40-strong all-women APC team at the United
Nations Fourth World Conference on Women in 1995 introduced countless women to
the possibilities of electronic communication, creating
connections-technological and human-which will doubtless flourish in the years
ahead.
But despite the hyperbole, it is important to remember that these
new technologies are inherently no ‘better’ than the old ones-print, radio,
television etc. For example, to claim-as British scholar Sadie Plant does-that
the Internet is an inherently equalizing, non-hierarchical, even liberating
communication system seems somewhat overstretched. Already, as Herman Steffen
points out, ‘large corporations are trying to turn cyberspace into a televised
shopping mall where communications is one-way (entertainment) unless the
consumer wishes to buy something; if so, he is welcome to communicate by
punching in his credit-card number’ (op. cit., p. 16). In that sense,
cyberspace merely provides women with a new terrain on which to wage old
struggles.
Changing the Picture: Five
Strategies for the Future
As we reach the close of the twentieth century, there is little
evidence that the world’s communication media have a great deal of commitment
to advancing the cause of women in their communities. Although the presence of
women working within the media has increased in all world regions over the past
two decades, real power is still very much a male monopoly.
And while it is relatively easy to make proposals for the
implementation of equality in the area of employment-and to measure
progress-the issue of media content is much more problematic. Who is to decide
what is acceptable in this domain? What criteria should be used to evaluate
progress?
Research (and experience) has shown that purely quantitative
measures are completely inadequate to describe gender portrayal in the media,
much less to interpret its meaning or significance. There may be fairly
widespread agreement that certain types of media content-for example, violent
pornography or child pornography-are completely unacceptable and degrading to
women, and should be strictly regulated. But what about the routine
trivialisation and objectification of women in advertisements, the popular
press, and the entertainment media? What about the prime-time television shows,
watched by millions, in which women are regularly paraded as the mute and
partly-clothed background scenery against which speaking and fully-clothed men
take centre-stage? And how many women feel uneasy, or downright fearful, if
they are alone at night in a taxi which stops at traffic lights beside an
advertising poster adorned with a semi-naked, pouting female image? There are
important rights and responsibilities involved here, and the conflicts are
obvious. We have hardly begun to address them, much less find ways of
reconciling them.
In terms of strategies for change, there are perhaps five broad
areas in which simultaneous and coordinated activity could bring results.
Within each of these, I will merely indicate the types of action which seem
particularly important, rather than explore the many approaches and initiatives
which have already been tried.
1. First, there needs to be pressure from within
the media themselves. More women must be employed-at all levels and in all
types of work-in the media, so that we do finally achieve the critical mass of
female creative and decision-making executives who could change media output.
Numbers are important, if long-established media practices and routines are to
be challenged. To quote the veteran American journalist, Kay Mills: ‘A story
conference changes when half the participants are female... There is indeed
security in numbers. Women become more willing to speak up in page-one meetings
about a story they know concerns many readers’ (Mills 1990, p. 349).
There is evidence
that, when they do constitute a reasonable numerical force, women can and do
make a difference. For instance, in the United States a 1992 survey of managing
editors of the largest 100 daily newspapers found that 84% of responding
editors agreed that women have made a difference, both in defining the news,
and in expanding the range of topics considered newsworthy-women’s health,
family and child care, sexual harassment and discrimination, rape and
battering, homeless mothers, quality of life and other social issues were all cited
as having moved up the hierarchy of news values because of pressure from women
journalists (Marzolf 1993). In their study of press coverage in India during
the 1980s, Ammu Joseph and Kalpana Sharma (1994) conclude that female
journalists played an important role in focusing attention on issues of crucial
importance to women: dowry-related deaths, rape, the right to maintenance after
divorce, the misuse of sex determination tests, and the re-emergence of sati.
But it is not just a question of introducing ‘new’ topics (though they are
age-old concerns for women) on to the news agenda. As we know from the example
of war reporting in the former Yugoslavia, women have also succeeded in
changing the way in which ‘established’ issues are covered. Similarly, in the
Asian context, Joseph and Sharma note a qualitative difference in reporting of
the conflict in Sri Lanka by Indian women journalists who ‘focussed on the
human tragedy unfolding in that country while also dealing with the obvious
geopolitical aspects of the ethnic strife. By contrast, the latter was the sole
preoccupation of most of the male journalists covering the conflict’ (op. cit.,
p. 296).
2. The second need is for pressure from outside
the media, in the form of consumer action and lobbying. One of the many
paradoxes of the move towards the market-led media systems that are developing
all around the world is that in some respects it places more power in the hands
of the consumer. Not surprisingly, this was recognised long ago in North
America, where strong media lobby groups already exist. In Canada for instance,
Media Watch-established in the early 1980s-has secured the removal of numerous
sexist advertisements, has worked with national broadcasters and advertising
associations to develop guidelines on gender portrayal, and has effectively
lobbied to secure a strongly worded equality clause in Canada’s 1991
Broadcasting Act. Elsewhere the Tanzania Media Women’s Association (TAMWA),
Women’s Media Watch in Jamaica, and the Media Advocacy Group in India have all
made an impact with both the media and the public. In Europe initiatives of
this sort have barely started. In Spain the Observatorio de la Publicidad
(created in early 1994 by the Instituto de la Mujer), and in Italy the
Sportello Immagine Donna (established in 1991 by the Commissione Nazionale per
la Parità) have begun to provide mechanisms through which complaints can be
organised and channelled. However, these are rare examples. Strong women’s
media associations do exist in many countries, but often their primary
purpose is to defend women’s professional interests as media workers. There is
a real need to develop monitoring and lobby groups which could organise
effective campaigns and protests on a national and-when necessary-a regional and
even a global level.
3. The third area is media education. It is
astonishing how little the public in general, and even media professionals
themselves, understand the subtle mechanisms which lead to patterns of gender
stereotyping in media content. This emerged clearly from recent research by the
Broadcasting Standards Council in the United Kingdom. For instance, they found
that women viewers had ‘no concept of the script-writer developing
characters in a particular way and accepted with little question the
presentation of women that they were offered’ (Hargrave 1994, p. 21). There is a
great deal of talk-particularly in academic and political circles-about the
portrayal of women in the media. But abstract discussions about ‘sexist
stereotyping’ and ‘negative images of women’ are unlikely to promote true
understanding of what is involved, much less lead to real change. What is
needed are effective, practical workshops built around specific media examples.
In this sense, the NOS Portrayal Department in the Netherlands is exemplary. It
was launched as a five-year project in 1991, and has built up a unique
collection of audio-visual examples-as well as specially produced
material-which are used in training sessions and workshops with
programme-makers. Media education is a key strategy. The development of
national and regional banks of examples and materials, which illustrate the
many ways in which gender stereotyping occurs, would be a tremendous
contribution to its success.
4. The fourth need is for pressure from above so
that, for example, media organisations are encouraged to adopt guidelines and
codes of conduct on the fair portrayal of women. The media in most countries
already have guidelines that govern particular aspects of their output such as
the portrayal of violence, or the regulation of advertising. In some
countries-for instance Canada, the United Kingdom-certain media organisations
also have guidelines covering the ways in which women are portrayed. These
guidelines have been made to work, and they could work in other organisations
too. Given the development of transborder and global communication systems,
there is also an urgent need for regional and international codes of practice.
This is a delicate matter, which would undoubtedly provoke immediate and
vociferous objections from the media communities. For example, in 1995 the
European Union adopted a Resolution on the image of women and men portrayed in
advertising and media.
As a result of
fierce lobbying by the media industry, the final text is very much weaker than
the initial draft. However, it is still a useful document. Despite the
inevitable opposition, it is important to work towards the development of
regulatory texts and codes of conduct in all countries and regions.
5. The final need is for international debate
aimed at a reinterpretation of ‘freedom of expression’ within the framework of
a women’s human rights perspective, and the subsequent development of a global
code of ethics based on this new interpretation. Such an undertaking would
certainly provoke controversy. Cees Hamelink points out that the pursuit of
democracy in world communication has been all but abandoned because ‘the gospel
of privatisation... declares that the world’s resources are basically private
property, that public affairs should be regulated by private parties on free
markets’ (Hamelink 1995, p. 33). Moreover the belief that a free market
guarantees the optimal delivery of ideas and information means that-in a
bizarre way-the terms ‘free market’ and ‘free speech’ have become almost
interchangeable.
With more and more communication channels in the control of fewer
and fewer hands, it is surely time for a fundamental reinterpretation of the
doctrine of freedom of speech, and the search for a new definition of this
‘freedom’ which takes full account of the contemporary global economic,
information and communication system and of women’s place within it. The 1995
report of the World Commission on Culture and Development provides a lead here.
The Commission points out that the airwaves and space are part of a
‘global commons’-a collective asset that belongs to all humankind, but which is
at present used free of charge by those who possess resources and technology.
It goes on to suggest that ‘the time may have come for commercial regional or
international satellite radio and television interests which now use the global
commons free of charge to contribute to the financing of a more plural media
system’ (World Commission on Culture and Development 1995, p. 278).
Conclusion
The World Commission on Culture and Development makes a number of
very concrete proposals aimed at ‘enhancing access, diversity and competition’
in the international media system (op. cit., pp. 278-281, emphasis added). But
its view of ‘competition’ is a radical one, whose starting point is human and
cultural diversity, rather than financial markets. Radical as it is, this
approach offers women more hope than the information superhighways of the
Global Information Infrastructure extolled by Vice President Al Gore. The Vice
President, it will be remembered, envisions ‘a new Athenian Age of democracy
forged in the fora the GII will create’. But the Vice President seems to have
forgotten that Athenian democracy did not extend its membership to women.
9
Media and its Role in American Society
Introduction
There has been no shortage of government propaganda on all sides of
the recent Iranian-British detainment crisis. British and American leaders have
denounced Iran for intimidation, coercion, and arrogance, while Iranian leaders
have made similar charges against the Bush and Blair governments. The dispute
between the three countries only recently came to an end with the unconditional
release of the “hostages” (as they were labeled by Western leaders) two weeks
after their initial detainment. It is worth seriously reflecting on American
media coverage of the British-Iranian standoff, at least if one is interested
in understanding the nature of foreign policy news coverage of events in the
Middle East.
In Manufacturing Consent: The Political Economy of the Mass Media,
Edward Herman and Noam Chomsky lay the foundations for a “propaganda model,”
which postulates that American mass media reporting and editorializing strongly
and uncritically privilege official perspectives. Official sources are treated
with deference, and U.S. humanitarian rhetoric elaborating high-minded goals of
American foreign policy is left largely unquestioned.
The propaganda of U.S. allies and client regimes is accorded
positive coverage (and certainly not referred to as propaganda), while
dissidents and officially designated “enemies” of state are denigrated and
denounced for coercive, terrorist, and/or aggressive behavior. Such claims
against the American mass media are not meant to be taken lightly, as they
should be made the subject of serious empirical testing and scrutiny. It so
happens that the British-Iranian standoff represents an important opportunity
to test the propaganda model in the real world.
History
On March 23, 2007, Iranian gunboats detained 7 marines and 8
sailors of the British Royal Navy near the Shatt al-Arab waterway off the
coast of Iran and Iraq. The British Navy personnel were inspecting vessels
suspected of smuggling goods to and from Iraq, when the Iranian Revolutionary
Guard picked them up, claiming they had illegally entered Iranian national
waters. American media reports soon referred to the situation as a major
confrontation between Britain and Iran, as both governments placed blame
squarely on the other, refusing to admit to any sort of wrongdoing.
American leaders, retaining a long history of antagonistic relations
with Iran, predictably reacted by denouncing the detainment as a violation of
international law and as an act of unprovoked aggression. Dan Bartlett, White
House Counselor, described “a long history from the Iranian government of bad
actions it’s taken, further isolating themselves from the international
community.” President Bush called the detainment “inexcusable,” saying of the
British personnel: “They’re innocent, they did nothing wrong, and they were
summarily plucked out of waters.”
Those hoping the American media would react more calmly than the
U.S. and British governments, carefully weighing evidence in favor of a fair
portrayal of the conflict, were in for a disappointment. As the propaganda
model predicts, the American mass media are quick to demonize the actions of
official “enemies,” while exonerating the U.S. or allied governments for any
blame.
In no uncertain terms, Max Hastings argued in the New York Times
that “Iran represents a menace to the security of us all,” while the Washington
Post editors railed against the “illegal attacks against a major Western
power,” despite the fact that there was still uncertainty at the time over
whether the British troops had been in Iranian waters or not. Of the four
editorials run by the Washington Post and Los Angeles Times on the detainment
incident, all condemned Iranian leaders for utilizing propaganda in pursuit of
selfish motives. The Los Angeles Times editors labeled the sailors and marines
“innocent” victims of Iranian “escalation.”
American Reports
As with major editorials, American reporting on the conflict also
tended to heavily promote official Western frames. Of the 49 major stories run
by the New York Times, Los Angeles Times, and Washington Post (found through a
comprehensive search of the Lexis-Nexis database), 54% of all sources quoted
were British, as opposed to 30% that were Iranian. Western sources (including
British and American) dominated media narratives even more thoroughly,
comprising on average 70% of all sources quoted by the three papers. Such
sources tended more often to promote antagonistic views of Iranian leaders,
while presenting heroic and resolute images of U.S. and British leaders, under
siege as a result of Iranian aggression and coercion.
Of course, there is nothing inevitable about the fact that most
sources were pro-Western in nature. There were, after all, reporters in Iran
from Reuters and the Associated Press, amongst other reporting agencies and
organizations operating in Tehran, who filed reports based upon the statements
of Iranian leaders, military officials, media, dissidents, and specialists. If
American media outlets wanted to pursue a more balanced approach to reporting
the standoff, equally citing British and Iranian sources, they could have done
so. Pursuing a more balanced approach, however, would require that American
reporters and editors not pursue (as one of their major objectives) the
uncritical transmission of official propaganda at the expense of alternative
views.
Further evidence for claims of propagandistic news coverage is seen
in the heavy reliance of the U.S. print media on American and British
government officials, who were disproportionately quoted in reporting the
British-Iranian standoff. Of all the British and American sources quoted in the
major stories from the New York Times, Los Angeles Times, and Washington Post
on the incident, 80% of British and 73% of American sources were either from
government or former government officials, or from military sources.
Conversely, only 20% of British and 27% of American sources came from
non-government sources such as other media, academics and specialists, activists
and dissidents, or people on the street.
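The source tallies above amount to simple proportions over a hand-coded list of quoted sources: each quotation is coded by nationality and source type, and percentages are taken over the totals. The following minimal sketch illustrates the calculation; the records, category labels, and figures it produces are hypothetical stand-ins, not the study's actual coding data.

```python
# Illustrative sketch of a source-balance tally for a content analysis.
# The coded records below are hypothetical examples, not the data
# underlying the percentages reported in the text.
from collections import Counter

# Each record: (nationality, source_type) for one quoted source.
coded_sources = [
    ("British", "government"), ("British", "military"),
    ("American", "government"), ("Iranian", "government"),
    ("British", "government"), ("Iranian", "academic"),
    ("American", "government"), ("British", "activist"),
    ("Iranian", "government"), ("American", "military"),
]

# Share of quoted sources by nationality (cf. the 54% / 30% figures).
nationality_counts = Counter(nat for nat, _ in coded_sources)
total = len(coded_sources)
for nationality, count in nationality_counts.most_common():
    print(f"{nationality}: {100 * count / total:.0f}% of quoted sources")

# Within each Western nationality, the share drawn from official
# circles (government or military), the measure behind the 80% / 73%
# figures in the text.
for nat in ("British", "American"):
    official = sum(1 for n, t in coded_sources
                   if n == nat and t in ("government", "military"))
    print(f"{nat} official share: "
          f"{100 * official / nationality_counts[nat]:.0f}%")
```

The same two-step tally (share by nationality, then official share within each nationality) scales unchanged to the full set of coded stories.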
Aside from looking at source bias, there are other ways in which to
test the propaganda model concerning American news coverage of the standoff. It
so happens that the Iranian detainment of British personnel (in March 2007) was
preceded by a detainment of Iranian government officials by the United States
in Iraq (in January 2007). Both incidents are generally comparable in nature,
although on the facts of the cases the U.S. detainment was arguably the more
extreme of the two.
On January 11, U.S. armed
forces conducted a raid on an Iraqi foreign liaison office in the Kurdish city
of Irbil, detaining 5 Iranian intelligence officials who were a part of Iran’s
Revolutionary Guard. While the 5 were not officially diplomats, they were
members of the Iranian Revolutionary Guard’s al-Quds Brigade, on an official
mission to Iraq, representing the Iranian government. The officials were in the
process of being awarded diplomatic status at the time of the U.S. detainment.
The officials did not illegally enter the country on a covert mission; quite
the contrary, Iraqi Foreign Minister Hoshyar Zebari explained that they were
“not [on] a clandestine operation…They were known by us…They operated with the
approval of the regional government and with the knowledge of the Iraqi
government. We were in the process of formalizing that liaison office into a
consulate.”
U.S. leaders claimed the raid was necessary in order to send a
message to Iranian leaders to stop “meddling” in Iraqi affairs. Iran had been
accused by U.S. leaders of providing improvised explosive devices to Iraqi
“insurgents” to be used against American troops. Iran had also been accused of
providing money, weapons, and training to Iraqi militias and “insurgents,”
and in threatening U.S. attempts to “stabilize” a war-torn Iraq. Of course,
Iraqi leaders explicitly rejected U.S. charges of Iranian “meddling” in Iraqi
affairs, filing numerous protests of the U.S. detainment operation. Kurdish
officials labeled the attack as a violation of Iraqi sovereignty and
a violation of international law. Iraq’s Foreign Minister
explained that the detainment of one of the Iranian officials (who had been an accredited diplomat) was “embarrassing for my country.”
The U.S. and Iranian detainments represent a rare opportunity to
conduct a natural experiment into the ways in which comparable military
operations between the United States and “enemy” regimes are portrayed in the
American media. The reasons for expecting comparable coverage between the two
abduction stories are numerous.
As the Iranian detainment of British sailors was protested as
illegal by British and American leaders, so too was the U.S. detainment of
Iranian officials heavily protested by Iraqi and Iranian leaders as illegal.
Both abductions represented major standoffs between powers attempting to exert
their authority in the Middle East.
One could easily argue that the U.S. detainment of Iranian officials
should have garnered even more attention than the Iranian detainment of British
personnel. In the case of the U.S. detainment, the Iranian officials were in
Iraq legally, with the express permission of the Iraqi government. Conversely,
the legal status of the British and American occupation of Iraq has been widely
considered illegal under international law at the highest levels of
organizations like the United Nations (hence any operations of British or
American troops could also be deemed illegal).
On another level, the U.S. detainment of the Iranian officials was
explicitly authorized at the highest levels of the American government (a clear
case of official U.S. provocation against Iran), whereas it was unknown at the
time of the reporting of the British-Iranian standoff whether the detainment of
British Navy personnel was ordered at the highest levels of the Iranian
government or not. Furthermore, Iran’s detainment of British forces paled in
comparison to the U.S. detainment of Iranians in terms of potential for
inciting a hostile reaction. This is most clearly evident in that the Bush
administration explicitly authorized the kidnapping or killing of Iranian
government officials within Iraq, whereas the Iranian government made clear no
such intentions in terms of its treatment of British detainees.
The killing of foreign political officials has been expressly
rejected as illegal under the 1963 Vienna Convention on Consular Relations and
the 1973 Convention on the Prevention and Punishment of Crimes Against
Internationally Protected Persons, both of which the United States and Iran
have ratified. The assassination or killing of any Iranian official invited
into Iraq, then, represents a violation of the aforementioned international
legal protections. Violation of such laws is a sufficient reason
in-and-of-itself for major coverage of the U.S. abduction of Iranian officials.
Despite expectations of comparable coverage, the propaganda model is
once again vindicated after one reviews the extreme imbalance of coverage of
the two detainment incidents. In the two week period following the U.S.
detainment of Iranian officials, the New York Times, Los Angeles Times, and
Washington Post each reported only three major stories on the incident, for a
total of nine stories. Conversely, U.S. media coverage from these three
newspapers totaled 49 major stories in the two week period following the
Iranian detainment of British personnel.
In sum, the actions of an
“enemy” regime were deemed far more salient and worthy of attention than the
potentially embarrassing actions of the United States, which had been ardently
condemned as a violation of international law and Iraqi national sovereignty.
While reporting on the British-Iranian “standoff” was largely dominated by
official narratives and frames, the U.S. detainment operations were portrayed
as essential to American self-defense, to the protection of American troops,
and to countering Iranian aggression and terrorism.
Such points were perhaps most blatantly evident in a Los Angeles
Times editorial insisting that the “U.S. has every right [emphasis added] to
insist on the arrest, prosecution, or expulsion from Iraq of Iranians,
officials or not, who abet terrorism.” Deference to U.S. justifications was
also evident in light of over-reliance on official statements, to the neglect
of non-official ones.
In a final test of the propaganda model, one may examine the ways in
which the Iranian-British standoff was distinguished from the earlier U.S.
detainment of Iranians in terms of discounting a possible cause-and-effect
relationship.
Did the U.S. abduction of Iranian officials incite Iranian leaders to respond
against the U.S. or its allies in Iraq by abducting British military personnel?
While a complete answer to this question seems elusive, the posing of the
question should have been a priority if the American media were committed to
understanding possible root causes of the British-Iranian standoff.
In the case of British media coverage, one can see that the question
of a causal link between the two incidents was focused on more intensively. In
a number of potentially explosive stories reported during the March standoff,
the Independent of London reported that the original targets in the
U.S.-Iranian detainment in January had been government officials with far
higher credentials than the low-level officials who were actually detained in
U.S. operations.
The United States, the Independent reported, had attempted to
capture “two senior Iranian officers…Mohammed Jafari, the powerful deputy head
of the Iranian National Security Council, and General Minojahar Frouzanda, the
Chief of Intelligence of the Iranian Revolutionary Guard.” The source of these
charges came from Kurdish officials, who explained that Jafari and Frouzanda
“were in Kurdistan on an official visit during which they met with Iraqi
President Jalal Talabani and later saw Massoud Barzani, the President of the
Kurdistan Regional Government (KRG).”
The significance of the failed capture of these officials was
presented lucidly by Patrick Cockburn of the Independent: “The attempt by the
US to seize the two high-ranking Iranian security officers openly meeting with
Iraqi leaders is somewhat as if Iran had tried to kidnap the heads of the CIA
and MI6 while they were on an official visit to a country neighboring Iran,
such as Pakistan or Afghanistan. There is no doubt that Iran believes that Mr
Jafari and Mr Frouzanda were targeted by the Americans.”
In a number of reports, Cockburn suggested a direct cause-and-effect
link between the original U.S. detainment and the following British-Iranian
standoff (“The Botched U.S. Raid that Led to the Hostage Crisis,” and “American
Raid and Arrests Set Scene for Capture of Marines”). He argued that “Better
understanding of the seriousness of the US action in Irbil – and the angry
Iranian response to it – should have led Downing Street and the Ministry of
Defence to realize that Iran was likely to retaliate against American or
British forces such as highly vulnerable Navy search parties in the Gulf…The
attempt by the U.S. to seize the two high-ranking Iranian security officers”
was “a far more serious and aggressive act. It was not carried out by proxies
but by US forces directly.”
While the Independent’s reports were subsequently picked up by other
mainstream British media sources, neither the story, nor its charges, appear to
have received any headline coverage in the major American print media. There
was no coherent or systematic effort in the American press to report charges
that the two abductions were directly related. This decontextualization is best
seen in a breakdown of the 19 stories (out of the total 49 major stories on the
British-Iranian “standoff”) in the New York Times, Los Angeles Times, and
Washington Post that did mention the U.S. January abduction in their reporting.
Out of those 19 stories, only 5 (all from the Washington Post)
suggested that there might be a causal relationship between the U.S. and
Iranian detainments; 14 stories either suggested no link or explicitly refuted
suggestions of one. Only one story (from the Los Angeles Times) directly
referenced the Independent story, although the reference was not in the
headline, but buried deep within the article. Importantly, none of the 49
stories on the British-Iranian “standoff” discussed the charge that Iran’s
detainment of British personnel might have been motivated by the failed U.S.
attempt to seize senior Iranian officials a few months earlier.
Whether it is in the over-reliance on British and American official
sources over non-official ones, the systematic marginalization of comparable
news coverage implicating both U.S. “enemies” and the U.S. in aggression or
violation of international law, or the suppression of explosive charges against
the United States for provoking a hostage crisis, the American press has
revealed itself as extraordinarily subservient to the agendas of the American
foreign policy elite.
Official “enemies” are vilified (at times for good reason), while
the questionable actions of American leaders are largely left unchallenged, as
professional norms of “objectivity” do not allow for the challenge of official
statements. As the propaganda model suggests, American reporters have
faithfully taken to the role of an unofficial propaganda arm for the state,
most blatantly during times when the United States rules in favor of allies and
client regimes against powers deemed antagonistic to U.S. interests.
10
New Media and New Technologies
New media: do we know
what they are?
This book is a contribution to answering the question, ‘What
is new about “new media”?’ It also offers ways of thinking about that question,
ways of seeking answers. Here, at the outset, we ask two prior questions.
First, ‘What are media anyway?’. When you place the prefix ‘new’ in front of
something it is a good idea to know what you are talking about and ‘media’ has
long been a slippery term (we will also have a lot to say about that in various
parts of the book). Second, what, at face value and before we even begin to
interrogate them, do we include as ‘new media’?
Media studies
For some sixty years the word ‘media’, the plural of
‘medium’, has been used as a singular collective term, as in ‘the media’
(Williams 1976: 169). When we have studied the media we usually, and fairly
safely, have had in mind ‘communication media’ and the specialised and separate
institutions and organisations in which people worked: print media and the
press, photography, advertising, cinema, broadcasting (radio and television),
publishing, and so on. The term also referred to the cultural and material
products of those institutions (the distinct forms and genres of news, road
movies, soap operas which took the material forms of newspapers, paperback
books, films, tapes, discs: When
systematically studied (whether by the media institutions themselves as part of
their market research or by media academics inquiring critically into their
social and cultural significance) we paid attention to more than the point of
media production which took place within these institutions. We also investigated
the wider processes through which information and representations (the
‘content’) of ‘the media’ were distributed, received and consumed by audiences
and were regulated and controlled by the state or the market.
We do, of course, still do this, just as some of us still
watch 90-minute films, in the dark, at the cinema, or gather as families to
watch in a fairly linear way an evening’s scheduled ‘broadcast’ television. But
many do not consume their ‘media’ in such ways. These are old habits or
practices, residual options among many other newer ones. So, we may sometimes
continue to think about media in the ways we described above, but we do so
within a changing context which, at the very least, challenges some of the
assumed categories that description includes.
For example, in an age of trans-mediality we now see the
migration of content and intellectual property across media forms, forcing all
media producers to be aware of and collaborate with others. We are seeing the
fragmentation of television, the blurring of boundaries (as in the rise of the ‘citizen journalist’);
we have seen a shift from ‘audiences’ to ‘users’, and from ‘consumers’ to
‘producers’. The screens that we watch have become both tiny and mobile, and
vast and immersive. It is argued that we now have a media economics where
networks of many small, minority and niche markets replace the old ‘mass
audience’ (see The Long Tail 3.13). Does the term ‘audience’ mean the same as
it did in the twentieth century? Are media genres and media production skills
as distinct as they used to be? Is the ‘point of production’ as squarely based
in formal media institutions (large specialist corporations) as it used to be?
Is the state as able to control and regulate media output as it once was? Is
the photographic (lens based) image any longer distinct from (or usefully
contrasted to) digital and computer generated imagery?
However, we should note right now (because it will be a
recurring theme in this book), that even this very brief indication of changes
in the forms, production, distribution, and consumption of media is more
complex than the implied division into the ‘old’ and the ‘new’ suggests. This is
because many of these very shifts also have their precedents, their history.
There have long been minority audiences, media that escape easy regulation,
hybrid genres and ‘inter-texts’ etc. In this way, we are already returned to
the question ‘What is “new” about “new media”?’ What is continuity, what is
radical change? What is truly new, what is only apparently so?
Despite the contemporary challenges to its assumptions, the
importance of our brief description of ‘media studies’ above is that it
understands media as fully social institutions which are not reducible to their
technologies. We still cannot say that about ‘new media’, which, even after
almost thirty years, continues to suggest something less settled and known. At
the very least, we face, on the one hand, a rapid and ongoing set of
technological experiments and entrepreneurial initiatives; on the other, a
complex set of interactions between the new technological possibilities and
established media forms. Despite this the singular term ‘new media’ is applied
unproblematically. Why? Here we suggest three answers. First, new media are
thought of as epochal; whether as cause or effect, they are part of larger,
even global, historical change. Second, there is a powerful utopian and
positive ideological charge to the concept ‘new’. Third, it is a useful and
inclusive ‘portmanteau’ term which avoids reducing ‘new media’ to technical or
more specialist (and controversial) terms.
The intensity of
change
The term ‘new media’ emerged to capture a sense that quite
rapidly from the late 1980s on, the world of media and communications began to
look quite different and this difference was not restricted to any one sector
or element of that world, although the actual timing of change may have been
different from medium to medium. This was the case from printing and photography,
through television, to telecommunications. Of course, such media had
continually been in a state of technological, institutional and cultural change
or development; they never stood still. Yet, even within this state of constant
flux, it seemed that the nature of change that was experienced warranted an
absolute marking off from what went before. This experience of change was not,
of course, confined only to the media in this period. Other, wider kinds of
social and cultural change were being identified and described and had been, to
varying degrees, from the 1960s onwards. The following are indicative of wider
kinds of social, economic and cultural change with which new media are
associated:
•
A shift from modernity to postmodernity: a
contested, but widely subscribed attempt to characterise deep and structural
changes in societies and economies from the 1960s onwards, with correlative
cultural changes. In terms of their aesthetics and economies new media are
usually seen as a key marker of such change (see e.g. Harvey 1989).
•
Intensifying processes of globalisation: a
dissolving of national states and boundaries in terms of trade, corporate
organisation, customs and cultures, identities and beliefs, in which new media
have been seen as a contributory element (see e.g. Featherstone 1990).
•
A replacement, in the West, of an industrial age
of manufacturing by a ‘post-industrial’ information age: a shift in employment,
skill, investment and profit from the production of material goods to service
and information ‘industries’, which many uses of new media are seen to epitomise
(see e.g. Castells 2000).
•
A decentring of established and centralised
geopolitical orders: the weakening of mechanisms of power and control from
Western colonial centres, facilitated by the dispersed, boundary-transgressing,
networks of new communication media.
New media were caught up with and seen as part of these
other kinds of change (as both cause and effect), and the sense of ‘new times’
and ‘new eras’ which followed in their wake. In this sense, the emergence of
‘new media’ as some kind of epoch-making phenomenon was, and still is, seen as
part of a much larger landscape of social, technological and cultural change;
in short, as part of a new technoculture.
The ideological
connotations of the new
There is a strong sense in which the ‘new’ in new media
carries the ideological force of ‘new equals better’ and it also carries with
it a cluster of glamorous and exciting meanings. The ‘new’ is ‘the cutting
edge’, the ‘avant-garde’, the place for forward-thinking people to be (whether
they be producers, consumers, or, indeed, media academics). These connotations
of ‘the new’ are derived from a modernist belief in social progress as
delivered by technology. Such long-standing beliefs (they existed throughout
the twentieth century and have roots in the nineteenth century and even
earlier) are clearly reinscribed in new media as we invest in them. New media
appear, as they have before, with claims and hopes attached; they will deliver
increased productivity and educational opportunity and open up new creative and
communicative horizons. Calling a range of developments ‘new’, which may or
may not be new or even similar, is part of a powerful
ideological movement and a narrative about progress in Western societies.
This narrative is subscribed to not only by the
entrepreneurs, corporations who produce the media hardware and software in
question, but also by whole sections of media commentators and journalists,
artists, intellectuals, technologists and administrators, educationalists and
cultural activists. This apparently innocent enthusiasm for the ‘latest thing’
is rarely if ever ideologically neutral. The celebration and incessant
promotion of new media and ICTs in both state and corporate sectors cannot be
dissociated from the globalising neo-liberal forms of production and
distribution which have been characteristic of the past twenty years.
Non-technical and
inclusive
‘New media’ has gained currency as a term because of its
useful inclusiveness. It avoids, at the expense of its generality and its
ideological overtones, the reductions of some of its alternatives. It avoids
the emphasis on purely technical and formal definition, as in ‘digital’ or
‘electronic’ media; the stress on a single, ill-defined and contentious quality
as in ‘interactive media’, or the limitation to one set of machines and
practices as in ‘computer-mediated communication’ (CMC).
What is new about
interactivity?
So, while a person using the term ‘new media’ may have one
thing in mind (the Internet), others may mean something else (digital TV, new
ways of imaging the body, a virtual environment, a computer game, or a blog).
All use the same term to refer to a range of phenomena. In doing so they each
claim the status of ‘medium’ for what they have in mind and they all borrow the
glamorous connotations of ‘newness’. It is a term with broad cultural resonance
rather than a narrow technicist or specialist application.
There is, then, some kind of sense, as well as a powerful
ideological charge, in the singular use of the term. It is a term that offers
to recognise some big changes, technological, ideological and experiential,
which actually underpin a range of different phenomena. It is, however, very
general and abstract.
We might, at this point, ask whether we could readily
identify some kind of fundamental change which underpins all new media –
something more tangible or more scientific than the motives and contexts we
have so far discussed. This is where the term ‘digital media’ is preferable for
some, as it draws attention to a specific means (and its implications) of the
registration, storage, and distribution of information in the form of digital
binary code. However, even here, although digital media is accurate as a formal
description, it presupposes an absolute break (between analogue and digital)
where we will see that none in fact exists. Many digital new media are reworked
and expanded versions of ‘old’ analogue media.
Distinguishing
between kinds of new media
The reasons for the adoption of the abstraction ‘new media’
such as we have briefly discussed above are important. We will have cause to
revisit them in other sections of this part of the book as we think further
about the historical and ideological dimensions of ‘newness’ and ‘media’. It is
also very important to move beyond the abstraction and generality of the term;
there is a need to regain and use the term in its plural sense. We need to ask
what the new media are in their variety and plurality. As we do this we can see
that beneath the general sense of change we need to talk about a range of
different kinds of change. We also need to see that the changes in question are
ones in which the ratios between the old and the new vary.
Below, as an initial step in getting clearer about this, we
provide a schema that breaks down the global term ‘new media’ into some more
manageable constituent parts. Bearing in mind the question marks that we have
already placed over the ‘new’, we take ‘new media’ to refer to the following:
•
New textual experiences: new kinds of genre and
textual form, entertainment, pleasure and patterns of media consumption
(computer games, simulations, special effects cinema).
•
New ways of representing the world: media which,
in ways that are not always clearly defined, offer new representational
possibilities and experiences (immersive virtual environments, screen-based
interactive multimedia).
•
New relationships between subjects (users and
consumers) and media technologies: changes in the use and reception of image
and communication media in everyday life and in the meanings that are invested
in media technologies.
•
New experiences of the relationship between
embodiment, identity and com-munity: shifts in the personal and social experience
of time, space, and place (on both local and global scales) which have
implications for the ways in which we experience ourselves and our place in the
world.
•
New conceptions of the biological body’s
relationship to technological media: challenges to received distinctions
between the human and the artificial, nature and technology, body and (media
as) technological prostheses, the real and the virtual.
•
New patterns of organisation and production:
wider realignments and integrations in media culture, industry, economy,
access, ownership, control and regulation.
If we were to set out to investigate any one of
the above, we would quickly find ourselves encountering a whole array of
rapidly developing fields of technologically mediated production (user-generated
content), and even a history of such fields, as the site for our research.
These would include:
•
Computer-mediated communications: email, chat
rooms, avatar-based communication forums, voice image transmissions, the World
Wide Web, blogs etc., social networking sites, and mobile telephony.
•
New ways of distributing and consuming media
texts characterised by interactivity and hypertextual formats – the World Wide
Web, CD, DVD, Podcasts and the various platforms for computer games.
•
Virtual ‘realities’: simulated environments and
immersive representational spaces.
•
A whole range of transformations and
dislocations of established media (in, for example, photography, animation,
television, journalism, film and cinema).
The characteristics
of new media: some defining concepts
In the previous section we noted that the unifying term ‘new
media’ actually refers to a wide range of changes in media production,
distribution and use. These are changes that are technological, textual,
conventional and cultural. Bearing this in mind, we nevertheless recognise that
since the mid-1980s at least (and with some changes over the period) a number
of concepts have come to the fore which offer to define the key characteristics
of the field of new media as a whole. We consider these here as some of the
main terms in discourses about new media. These are: digital, interactive,
hypertextual, virtual, networked, and simulated.
Before we proceed with this, we should note some important
methodological points that arise when we define the characteristics of a medium
or a media technology. What we are calling ‘characteristics’ here (digital,
interactive, hypertextual, etc.) can easily be taken to mean the ‘essential
qualities’ of the medium or technology in question. When this happens being
‘digital’, for example, ceases to mean a source of possibilities, to be used,
directed, and exploited. It becomes, instead, a totalising or overarching
concept which wholly subsumes the medium in question. There is then a danger
that we end up saying, ‘Because a technology is like “this” (electronic,
composed of circuits and pulses which transform colour, sound, mass or volume
into binary digital code) it necessarily results in “that” (networked, fleeting
and immaterial products)’. To make this move risks the accusation of
‘essentialism’ (an ‘essentialist’ being someone who argues that a thing is what
it is because it possesses an unchanging and separable essence).
[Figure: One of the complete human-headed lions from the entrance to the
throne room of Ashurnasirpal II, now in the British Museum. The head of a
corresponding sculpture can be seen in the foreground. These two figures were
recorded using a NUB 3D Triple White light scanning system and milled at a
resolution of 400 microns.]
With regard to ‘digitality’ an instructive example is
offered by the work carried out by the artists and technicians of
‘Factum–Arte’, a group who use digital technology to reproduce ancient
artefacts such as sculptures, monuments, bas-reliefs and paintings. These are
not virtual, screen based replicas of the original works but material
facsimiles (‘stunning second originals’) achieved by computers and digital
technology driving and guiding powerful 3-D scanners, printers and drills.
Here, the ‘digital’ produces hefty material objects rather than networked,
fleeting and immaterial things. This may be a rare case of digital technology
being directly connected to the production of physically massive artefacts
rather than flickering images on screens (the ‘virtual’) but it nevertheless
warns against the kind of ‘this therefore that’ (digital) essentialism we
warned of above.
On the other hand, while traditional media studies is wary
of doing so, we also argue that it is very important to pay attention to the
physical and material constitution of a technology (a digital media-technology
no less than a heavy industrial manufacturing technology), not just its
cultural meanings and social applications. This is because there is a real
sense in which the physical nature and constitution of a technology encourages
and constrains its uses and operation. To put this very basically, some
technologies are tiny things, some are large and hefty. In terms of media
technologies, compare an iPod to a 1980s ‘ghetto-blaster’, or a 1940s
‘radiogram’ and consider the influence that their sheer size has on how they
are used, where and by whom, quite apart from matters such as the lifestyles
and cultural meanings that may be attached to these objects.
Such physical properties of technologies are real. They change
the environments and ecologies, natural and social, in which they exist. They
seriously constrain the range of purposes to which they can be put and
powerfully encourage others. Hence, recognising what a technology is – really
and physically – is a crucial, if a partial and qualified aspect of a media
technology’s definition. This does not mean that we should reduce technology to
its physical features because in doing that we would become essentialist about
technological objects; we would arrive at a technological essentialism.
Let us take a final example from ‘old’ media: broadcast
television (or radio). It is common (especially when contrasted to digital
networked media) to think of television as a centralised medium – broadcasting
out from a centre to a mass audience. This is not because the technology of
television inevitably leads to centralisation (just as Factum-Arte’s digitality
doesn’t inevitably lead to virtuality) but it does lend itself to such a use;
it readily facilitates centralisation. Of course, alternative uses of broadcast
media existed as in ‘ham’ and CB radio, in local television initiatives in many
parts of the world, or even the use of the television receiver as a sculptural
light-emitting object in the video installations of the artist Nam June Paik.
Nevertheless television came to be developed and put to use dominantly in a
centralising direction. That is, television came to be organised in this way
within a social structure which needed to communicate from centres of power to the
periphery (the viewer/listener). Recognising that a single media technology can
be put to a multiplicity of uses, some becoming dominant and others marginal
for reasons that can be cultural, social, economic or political as well as
technological, is one important way of understanding what a medium is.
So, our approach here, in identifying new media’s
‘characteristics’, is not meant to lead to or endorse essentialism but to take
seriously the physical constitution and operation of technologies as well as the
directions in which they have been developed. Being ‘digital’ is a real state
and it has effects and potentialities. On the other hand, this does not mean
that ‘being digital’ is a full description or wholly adequate concept of
something. There is, then, a difference between assuming or asserting that we
have detected the essence of something and recognising the opportunities or
constraints that the nature of a media technology places before us. A useful
term here, taken from design theory, is ‘affordance’, which refers to ‘the
perceived and actual properties of a thing, primarily those fundamental
properties that determine just how the thing could possibly be used . . . A
chair affords (“is for”) support, and, therefore, affords sitting. A chair can
also be carried. Glass is for seeing through, and for breaking’.
‘Affordance’ draws our attention to the actions that the
nature of a thing ‘invites’ us to perform. It is in this spirit that we now
discuss the defining characteristics of new media.
Digital
We need first of all to think about why new media are
described as digital in the first place – what does ‘digital’ actually mean in
this context? In addressing this question we will have cause to define digital
media against a very long history of analogue media. This will bring us to a
second question. What does the shift from analogue to digital signify for
producers, audiences and theorists of new media?
In a digital media process all input data are converted into
numbers. In terms of communication and representational media this ‘data’
usually takes the form of qualities such as light or sound or represented space
which have already been coded into a ‘cultural form’ (actually ‘analogues’),
such as written text, graphs and diagrams, photographs, recorded moving images,
etc. These are then processed and stored as numbers and can be output in that
form from online sources, digital disks, or memory drives to be decoded and
received as screen displays, dispatched again through telecommunications
networks, or output as ‘hard copy’. This is in marked contrast to analogue
media where all input data is converted into another physical object.
‘Analogue’ refers to the way that the input data (reflected light from a
textured surface, the live sound of someone singing, the inscribed marks of
someone’s handwriting) and the coded media product (the grooves on a vinyl disc
or the distribution of magnetic particles on a tape) stand in an analogous
relation to one another.
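The conversion described above can be sketched in a few lines of code. This is a hypothetical illustration of sampling and quantisation, not drawn from any system discussed here: a continuous tone stands in for ‘the live sound of someone singing’, and the sample rate, bit depth and frequency are invented for the example.

```python
import math

# A hypothetical sketch of analogue-to-digital conversion: a continuous
# tone is sampled at fixed intervals and each sample is quantised to a
# whole number. All values below are illustrative assumptions.

SAMPLE_RATE = 8000   # samples taken per second
BIT_DEPTH = 8        # each sample stored as one of 2**8 = 256 levels
FREQ = 440.0         # frequency of the illustrative tone, in Hz

def digitise(duration_s):
    """Turn a continuous sine wave into a list of integers."""
    levels = 2 ** BIT_DEPTH
    samples = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        t = n / SAMPLE_RATE                           # moment of this sample
        amplitude = math.sin(2 * math.pi * FREQ * t)  # continuous value in [-1, 1]
        # Quantise: map the continuous value onto one of 256 discrete levels
        samples.append(round((amplitude + 1) / 2 * (levels - 1)))
    return samples

data = digitise(0.001)   # one millisecond of 'sound'
print(data)              # the medium is now nothing but numbers
```

Once in this numerical form, the data can be stored, copied and transmitted without the generational loss that each analogue transcription introduces.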
Analogues
‘Analogue’ refers to processes in which one set of physical
properties can be stored in another ‘analogous’ physical form. The latter is
then subjected to technological and cultural coding that allows the original
properties to be, as it were, reconstituted for the audience. They use their
skills at e.g. watching movies to ‘see’ the ‘reality’ through the analogies.
Analogos was the Greek term which described an equality of ratio or proportion
in mathematics, a transferable similarity that by linguistic extension comes to
mean a comparable arrangement of parts, a similar ratio or pattern, available
to a reader through a series of transcriptions. Each of these transcriptions
involves the creation of a new object that is determined by the laws of physics
and chemistry.
Analogue and digital
type
Consider how this book would have been produced by the
analogue print process which used discrete, movable pieces of metal type; the
way of producing books in the 500 years between Gutenberg’s
mid-fifteenth-century invention of the printing press and the effective
introduction of digital printing methods in the 1980s. Handwritten or typed
notes would have been transcribed by a typesetter who would have set the pages
up using lead type to design the page. This type would then have been used with
ink to make a physical imprint of the words onto a second artefact – the book
proofs. After correction these would have been transcribed once more by the
printer to make a second layout, which would again have been made into a
photographic plate that the presses would have used to print the page. Between
the notebook and the printed page there would have been several analogous
stages before you could read the original notes. If, on the other hand, we
write direct into word processing software every letter is immediately represented
by a numerical value as an electronic response to touching a key on the
keyboard rather than being a direct mechanical impression in paper caused by
the weight and shape of a typewriter ‘hammer’ (see Hayles 1999: 26, 31).
Layout, design and correction can all be carried out within a digital domain
without recourse to the painstaking physical work of type manipulation.
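The contrast can be made concrete with a trivial, hypothetical example: in a digital system each letter simply is a number (here a Unicode code point), and the mapping runs losslessly in both directions.

```python
# A minimal sketch of the point above: in word processing every keystroke
# is stored as a number, not as a physical impression on paper.
# (Unicode code points are used here; the encoding is an implementation detail.)

note = "Gutenberg"
codes = [ord(ch) for ch in note]   # each letter becomes an integer
print(codes)                       # [71, 117, 116, 101, 110, 98, 101, 114, 103]

# The mapping is reversible: the numbers decode back into the text
restored = "".join(chr(c) for c in codes)
print(restored)                    # Gutenberg
```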
Analogue media, mass production and broadcasting
The major media of the nineteenth and early twentieth
centuries (prints, photographs, films and newspapers) were the products not
only of analogue processes but also of technologies of mass production. For
this reason, these traditional mass media took the form of industrially
mass-produced physical artefacts which circulated the world as copies and
commodities.
With the development of broadcast media, the distribution
and circulation of such media as physical objects began to diminish. In
broadcast media the physical analogue properties of image and sound media are
converted into further analogues. These are wave forms of differing lengths and
intensities which are encoded as the variable voltage of transmission signals.
In live broadcast media such as pre-video television or radio there was a
direct conversion of events and scenes into such electronic analogues.
This electronic conversion and transmission (broadcast) of
media like film, which is a physical analogue, suggests that digital media
technologies do not represent a complete break with traditional analogue media.
Rather, they can be seen as a continuation and extension of a principle or
technique that was already in place; that is to say, the principle of
conversion from physical artefact to signal. However, the scale and nature of
this extension are so significant that we might well experience it not as a
continuation but as a complete break. We now look at why this is so.
Digital media
In a digital media process the physical properties of the
input data, light and sound waves, are not converted into another object but
into numbers; that is, into abstract symbols rather than analogous objects and
physical surfaces. Hence, media processes are brought into the symbolic realm
of mathematics rather than physics or chemistry. Once coded numerically, the
input data in a digital media production can immediately be subjected to the
mathematical processes of addition, subtraction, multiplication and division
through algorithms contained within software.
It is often mistakenly assumed that ‘digital’ means the
conversion of physical data into binary information. In fact, digital merely
signifies the assignation of numerical values to phenomena. The numerical
values could be in the decimal (0–9) system; each component in the system would
then have to recognise ten values or states (0–9). If, however, these numerical
values are converted to binary numbers (0 and 1) then each component only has
to recognise two states: on or off, current or no current, zero or one. Hence
all input values are converted to binary numbers, because this makes the design
and use of the pulse-recognition components that make up the computer much
easier and cheaper.
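The two stages described above, numerical assignation first and binary conversion second, can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the source text; the function name is ours:

```python
def digitise(text):
    """Assign each character a decimal value, then convert to binary."""
    decimal_values = [ord(ch) for ch in text]                    # numerical assignation
    binary_values = [format(v, "08b") for v in decimal_values]   # two-state (on/off) form
    return decimal_values, binary_values

# 'H' is assigned the decimal value 72, stored as the on/off pattern 01001000
decimals, binaries = digitise("Hi")
```

Note that nothing about digitisation as such requires binary: the decimal values alone are already ‘digital’. The binary step exists only because two-state components are easier and cheaper to engineer.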
This principle of converting all data into enormous strings
of on/off pulses itself has a history. It is traced by some commentators from
the late seventeenth-century philosopher Leibniz, through the
nineteenth-century mathematician and inventor, Charles Babbage, to be
formulated seminally by Alan Turing in the late 1930s (Mayer 1999: 4–21). The
principle of binary digitality was long foreseen and sought out for a variety
of different reasons. However, without the rapid developments in electronic
engineering begun during the Second World War it would have remained a
mathematical principle – an idea. Once the twin engineering goals of
miniaturisation and data compression had combined with the principle of
encoding data in digital form, massive amounts of data could be stored and
manipulated.
In the last decades of the twentieth century the digital
encoding of data moved out from the laboratories of scientific, military and
corporate establishments (during the mainframe years) to be applied to
communications and entertainment media. As specialist software, accessible
machines and memory-intensive hardware became available, first text and then
sound, graphics and images became encodable. The process swiftly spread
throughout the analogue domain, allowing the conversion of analogue media texts
to digital bit streams.
The principle and practice of digitisation is important
since it allows us to understand how the multiple operations involved in the
production of media texts are released from existing only in the material realm
of physics, chemistry and engineering and shift into a symbolic computational
realm. The fundamental consequences of this shift are that:
• Media texts are ‘dematerialised’ in the sense that they are separated from their physical form as photographic print, book, roll of film, etc. (However, see the section ‘Digital processes and the material world’ for an account of why this does not mean that digital media are ‘immaterial’.)
• Data can be compressed into very small spaces.
• Data can be accessed at very high speeds and in non-linear ways.
• Data can be manipulated far more easily than analogue forms.
The scale of this
quantitative shift in data storage, access and manipulation is such that it has
been experienced as a qualitative change in the production, form, reception and
use of media.
Fixity and flux
Analogue media tend towards being fixed, where digital media
tend towards a permanent state of flux. Analogue media exist as fixed physical
objects in the world, their production being dependent upon transcriptions from
one physical state to another. Digital media may exist as analogue hard copy,
but when the content of an image or text is in digital form it is available as
a mutable string of binary numbers stored in a computer’s memory.
The essential creative process of editing is primarily
associated with film and video production, but in some form it is a part of
most media processes. Photographers edit contact strips, music producers edit
‘tapes’; and of course written texts of all kinds are edited. We can use the
process of editing to think further about the implications of ‘digitality’ for
media.
Changing or editing a piece of analogue media involved dealing with the entire
physical object. For instance, imagine we wanted to change the levels of red on
a piece of film as an analogue process. This would
involve having to ‘strike’ new prints from the negative in which the chemical
relationship between the film stock and the developing fluid was changed. This
would entail remaking the entire print. If the original and inadequate print is
stored digitally every pixel in every frame has its own data address. This
enables us to isolate only the precise shots and even the parts of the frame
that need to be changed, and issue instructions to these addresses to intensify
or tone down the level of red. The film as a digital document exists near to a
state of permanent flux until the final distribution print is struck and it
returns to the analogue world of cinematic exhibition. (This too is changing as
films get played out from servers rather than projectors in both on-demand
digital TV and movie theatres.)
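The contrast can be made concrete with a small sketch, assuming a frame stored as rows of (R, G, B) pixel values. The function, frame and region here are invented purely for illustration:

```python
def tone_down_red(frame, region, factor):
    """Scale the red value of pixels inside region ((row0, row1), (col0, col1))."""
    (r0, r1), (c0, c1) = region
    for row in range(r0, r1):
        for col in range(c0, c1):
            r, g, b = frame[row][col]              # address one pixel directly
            frame[row][col] = (int(r * factor), g, b)
    return frame

# a 4x4 'frame' of identical reddish pixels
frame = [[(200, 50, 50) for _ in range(4)] for _ in range(4)]

# only the top-left 2x2 block is changed; the rest of the frame is untouched,
# which is the point: no need to remake the entire print
tone_down_red(frame, region=((0, 2), (0, 2)), factor=0.5)
```

Because each pixel has its own address, the instruction reaches exactly the region specified and nothing else, in contrast to the chemical reprint, which necessarily remade every frame at once.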
Any part of a text can be given its own data address that
renders it susceptible to interactive input and change via software. This state
of permanent flux is further maintained if the text in question never has to
exist as hard copy, if it is located only in computer memories and accessible
via the Internet or the web. Texts of this kind exist in a permanent state of
flux in that, freed from authorial and physical limitation, any net user can
interact with them, turning them into new texts, altering their circulation and
distribution, editing them and sending them, and so on. This fundamental
condition of digitality is well summarised by Pierre Lévy:
The established differences between author and reader,
performer and spectator, creator and interpreter become blurred and give way to
a reading writing continuum that extends from the designers of the technology
and networks to the final recipient, each one contributing to the activity of
the other – the disappearance of the signature.
Digital processes and the material world
So digitisation creates the conditions for inputting very
high quantities of data, very fast access to that data and very high rates of
change of that data. However, we would not want to argue that this represents a
complete transcendence of the physical world, as much digital rhetoric does.
The limits of the physical sciences’ ability to miniaturise the silicon chip
may already have been reached, although current research on nano-circuits
promises to reduce their current size many times over.
Although wireless connections between computers and servers
and to networks are becoming increasingly common, many connections continue to
rely upon cables and telephone lines, which have to be physically dug into the
earth. On a more day-to-day level, the constant negotiations that any
computer-based media producer has to make between memory and compression are
also testament to the continuing interface with the physical world.
Consider the banal case of email. The instantly available reply button invites
an immediate emotional response. ‘I-Way’ (Internet) thought has been described
as modular, non-linear, malleable and co-operative. Many participants prefer
internet writing to book writing as it is conversational, frank and
communicative rather than precise and overwritten.
However, the responses prompted by the instantaneous
availability of the reply button are not always so positive – hence the
Internet-based practice of ‘flaming’ – argumentative, hostile and insulting
exchanges which can accelerate rapidly in a spiral of mutual recrimination. It
is precisely the absence of face-to-face exchange that allows such
communication to become dangerous. The carefully crafted, diplomatically
composed memo gives way to the collectively composed, often acrimonious, email
debate.
With this kind of history in mind we can see how a
consideration of even the banal case of email might give rise to a number of
central critical questions:
1 Where does control over authorship lie when the email text can be multiply amended and forwarded?
2 What kind of authority should we accord the electronic letter? Why do we still insist on hard copy for contractual or legal purposes?
3 What are the possible consequences of an interpersonal communication system based increasingly not on face-to-face interaction but on anonymous, instant interaction?
In attempting to answer such questions we might have
recourse to different kinds of analytic context. First of all, an understanding
of the cultural history and form of the letter itself. Second, an understanding
of the convergence of discrete media forms through the process of digitisation.
Third, an attempt to assess those shifts through already existing analyses of
culture – in this case theories of authorship and reading. Finally, the
questions above would have to be answered with reference to the study of CMC
(Computer Mediated Communications) in which the problem of the disappearance of
face-to-face communication has been central.
Interactivity
Since the early 1990s, the term ‘interactivity’ has been
much debated and has undergone frequent redefinition. Most commentators have
agreed that it is a concept that requires further definition if it is to have
any analytical purchase. At the ideological level, interactivity has been one
of the key ‘value-added’ characteristics of new media. Where ‘old’ media
offered passive consumption, new media offer interactivity. Generally, the term
stands for a more powerful sense of user engagement with media texts, a more
independent relation to sources of knowledge, individualised media use, and
greater user choice. Such ideas about the value of ‘interactivity’ have clearly
drawn upon the popular discourse of neo-liberalism which treats the user as,
above all, a consumer. Neo-liberal societies aim to commodify all kinds of
experience and offer more and more finely tuned degrees of choice to the
consumer. People are seen as being able to make individualised lifestyle
choices from a never-ending array of possibilities offered by the market.
For full discussions of the problems of defining interactivity see Jens F.
Jensen, ‘Interactivity – tracking a new concept in media and communication
studies’, in Paul Mayer (ed.) Computer Media and Communication, Oxford: Oxford
University Press (1999), which offers a comprehensive review of theoretical
approaches; E. Downes and S. McMillan, ‘Defining Interactivity’, New Media and
Society 2.2 (2000): 157–179, for a qualitative ethnographic account of the
difficulties of applying theoretical definitions in practice; and Lisbet
Klastrup, ‘Paradigms of interaction: conceptions and misconceptions of the
field today’ (2003).
This ideological context then feeds into the way we think about the idea of
interactivity in digital media. It is seen as a method for maximising consumer
choice in relation to media texts.
However, in this section we are mainly concerned with the
instrumental level of meanings carried by the term ‘interactive’. In this
context, being interactive signifies the users’ (the individual members of the
new media ‘audience’) ability to directly intervene in and change the images
and texts that they access. So the audience for new media becomes a ‘user’
rather than the ‘viewer’ of visual culture, film and TV or a ‘reader’ of
literature. In interactive multimedia texts there is a sense in which it is
necessary for the user to actively intervene: to act, as well as to view or
read, in order to produce meaning. This intervention actually subsumes other
modes of engagement such as ‘playing’, ‘experimenting’, and ‘exploring’ under
the idea of interaction. Hinting at the connection between instrumental
definitions and ideological meanings, Rosanne Allucquere Stone suggests that
the wide field of possibility suggested by the idea of interactivity has been
‘electronically instantiated . . . in a form most suitable for commercial
development – the user moves the cursor to the appropriate place and clicks the
mouse, which causes something to happen’ (Stone 1995: 8). We can break down
this pragmatic account of interactivity further.
Hypertextual navigation
Here the user must use the computer apparatus and software
to make reading choices in a database. (We are using the term ‘database’ in a
general rather than specifically technical sense – a database is any collection
of memory-stored information: text, image, sound, etc.) In principle, this
database could be anything from the entire World Wide Web to a particular
learning package, an adventure game, or the hard drive on your own PC. The end
result of such interactions is that users construct for themselves an
individualised text made up of all the segments of text which they call up
through their navigation process. The larger the database, the greater the
chance that each user will experience a unique text.
Immersive navigation
In the early 1990s Peter Lunenfeld (1993) usefully
distinguished between two paradigms of interaction, which he called the
‘extractive’ and the ‘immersive’. Hypertextual navigation (above) is
‘extractive’. However, when we move from seeking to gain access to data and
information to navigating representations of space or simulated 3D worlds we
move into ‘immersive’ interaction. In some sense both kinds of interaction rely
upon the same technological fact – the existence of a very large database
which the user is called upon to experience. At one level, a more or less
realistically rendered 3D space like the game world of ‘Halo 3’ or ‘Grand Theft
Auto IV’ is just as much a big database as Microsoft’s ‘Encarta’ encyclopaedia.
We might say that the navigation of immersive media environments is similar to
hypertextual navigation, but with additional qualities.
When interacting in immersive environments the user’s goals
and the representational qualities of the media text are different. Immersive
interaction occurs on a spectrum from 3D worlds represented on single screens
through to the 3D spaces and simulations of virtual reality technologies.
Although the point-and-click interactivity of hypertextual navigation may well
be encountered in such texts, immersive interaction will also include the
potential to explore and navigate in visually represented screen spaces. Here
the purpose of interaction is likely to be different from the extractive
paradigm. Instead of a text-based experience aimed at finding and connecting
bits of information, the goals of the immersed user will include the visual and
sensory pleasures of spatial exploration.
Registrational interactivity
Registrational interactivity refers to the opportunities
that new media texts afford their users to ‘write back into’ the text; that is
to say, to add to the text by registering their own messages. The base line of
this kind of interactivity is the simple activity of registration (i.e. sending
off details of contact information to a website, answering questions prompted
in online transactions, or typing in a credit card number). However, it extends
to any opportunity that the user has to input to a text. The original Internet
bulletin boards and newsgroups were a good example – not interactive in the
sense of face-to-face communication, yet clearly built up by successive inputs
of users’ comments. This ‘input’ or ‘writing back’ then becomes part of the
text and may be made available to other users of the database.
Interactive communications
As we have seen in our case study of email,
computer-mediated communications (CMC) have offered unprecedented opportunities
for making connections between individuals, within organisations, and between
individuals and organisations.
Much of this connectivity will be of the registrational
interactivity mode (defined above) where individuals add to, change, or
synthesise the texts received from others. However, when email and chat sites
are considered from the point of view of human communication, ideas about the
degree of reciprocity between participants in an exchange are brought into
play. So, from a Communication Studies point of view, degrees of interactivity
are further broken down on the basis of the kinds of communication that occur
within CMC. Communicative behaviours are classified according to their
similarity to, or difference from, face-to-face dialogue, which is frequently
taken as the exemplary communicative situation which all forms of ‘mediated’
communication have to emulate. On this basis, the question and response pattern
of a bulletin board or online forum, for instance, would be seen as less
interactive than the free-flowing conversation of a chat site. This inflects
the whole idea of interactivity by lending it a context of person-to-person
connection.
Interactivity and problems of textual interpretation
Interactivity multiplies the traditional problems about how
texts are interpreted by their readers. By the problem of interpretation we
refer to the idea that the meaning of any given text is not securely encoded
for all audiences to decode in the same way. This is based upon the recognition
that the meanings of a text will vary according to the nature of its audiences
and circumstances of reception. We all already have highly active
interpretative relationships with the analogue (or linear) texts we encounter,
such as books and movies. Under conditions of interactivity this problem does
not disappear but is multiplied exponentially. This is because the producer of
an interactive text or navigable database never knows for certain which of the
many versions of the text their reader will encounter. For critics this has
raised the essential question of how to evaluate or even conceptualise a ‘text’
that never reads the same way twice. For producers it raises essential problems
of control and authorship. How do they make a text for a reader knowing that
they have very many possible pathways through it?
What is the interactive text?
Established ways of thinking about how meaning is produced
between readers and texts assumed a stability of the text but a fluidity of
interpretation. Under conditions of interactivity this traditional stability of
the text has also become fluid. Hence as critics we find ourselves having to
reconceptualise the status of our own interpretations of the interactive text.
From a theoretical point of view, the traditional semiotic tools used for the
analysis of texts become difficult to apply to an object that is no longer
stable.
Problems for producers
If new media products pose new questions about textuality
they also demand different relationships between producers and users. How do
you design an interface that offers navigational choice but at the same time
delivers a coherent experience? These problems will of course vary from one
text to another. For instance, a website with many embedded links to other
sites will offer users many opportunities to take different pathways. The
reader/user is quite likely to click onto another site whilst only halfway
through your own. On the other hand, within a downloaded interactive learning
package, or one that runs off a discrete memory drive (i.e. CD-ROM/DVD) where
there is a finite database, the user can be far more easily ‘guided’ in their
navigation of pathways that the producers are able to pre-structure. This has
meant that producers of interactive texts have gradually come to understand
that they need to have a collaborative and co-creative relationship with their
audiences. The digital media text (e.g. website, game, social network) is an
environment supporting a range of user activities that emerge within the
parameters of the software. Producers therefore need, in Woolgar’s terms, to
‘configure’ the user, to have some idea of the kinds of behaviours that they
want their environment to afford, whilst simultaneously understanding that they
can neither wholly predict nor control what users will do within it. These rich
forms of interaction therefore have a number of consequences for producers:
• They create the possibility for traditional media producers to collaborate with audiences by finding ways to incorporate ‘user-generated content’ in their corporate projects, e.g. newspapers ‘crowdsourcing’ stories.
• They also redefine the producer not as author but as ‘experience designer’. Authors produced texts that readers interpreted. Interactive media designers are increasingly experience designers, creating open media spaces within which users find their own pathways (e.g. The Sims or Second Life).
• Audiences’ expectations of an interactive experience with a mediated world create the conditions for transmedial production, in which, for instance, a TV programme can be repurposed across a range of platforms: a website with chat/forum capability, a box-set DVD with additional material, a computer game, etc.
Hypertextual
There are clear links between the navigational, explorative,
and configurative aspects of interactivity and hypertextuality. Also, like
interactivity, hypertextuality has ideological overtones and is another key
term that has been used to mark off the novelty of new media from analogue
media. Apart from its reference to non-sequential connections between all kinds
of data facilitated by the computer, in the early 1990s the pursuit of literary
hypertexts as novels and forms of non-linear fiction was much in evidence,
becoming something of an artistic movement. Such literary hypertexts also
attracted much attention from critics and theorists. This work now looks
something like a transitional moment produced by the meeting between literary
studies and new media potential. However, hypertext and hypertextuality remain
an important part of the history of computing, particularly in the way they
address ideas about the relationship of computer operating systems, software
and databases, to the operation of the human mind, cognitive processes and
learning.
Histories
The prefix ‘hyper’ is derived from the Greek ‘above, beyond,
or outside’. Hence, hypertext has come to describe a text which provides a
network of links to other texts that are ‘outside, above and beyond’ itself. Hypertext,
both as a practice and an object of study, has a dual history.
One history ties the term into academic literary and
representational theory. Here there has long been an interest in the way any
particular literary work (or image) draws upon or refers out to the content of
others, the process referred to as intertextuality. This places any text as
comprehensible only within a web of association that is at once ‘above, beyond
or outside’ the text itself. At another level, the conventional means of footnoting,
indexing, and providing glossaries and bibliographies – in other words the
navigational apparatus of the book – can be seen as antecedents of hypertexts,
again guiding the reader beyond the immediate text to necessary contextualising
information.
The other history is derived from the language of the
computer development industry. Here, any verbal, visual or audio data that has,
within itself, links to other data might be referred to as a hypertext. In this
sense the strict term ‘hypertext’ frequently becomes confused with the idea and
rhetoric of hypermedia (with its connotations of a kind of super medium which
is ‘above, beyond, or outside’ all other media connecting them all together in
a web of convergence).
Defining hypertexts
We may define a hypertext as a work which is made up from
discrete units of material, each of which carries a number of pathways to other
units. The work is a web of connection which the user explores using the
navigational aids of the interface design. Each discrete ‘node’ in the web has
a number of entrances and exits or links.
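This definition can be given a minimal sketch in Python, with node names invented for illustration: each discrete unit carries its own exits, and different readers, making different choices, assemble different paths through the web.

```python
# a hypertext as discrete units ('nodes'), each carrying links to other units
hypertext = {
    "home":    {"text": "Welcome.",         "links": ["history", "theory"]},
    "history": {"text": "Bush and Nelson.", "links": ["home", "theory"]},
    "theory":  {"text": "Nodes and links.", "links": ["home"]},
}

def navigate(web, start, choices):
    """Follow a reader's sequence of link choices, returning the path taken."""
    path, node = [start], start
    for choice in choices:
        if choice in web[node]["links"]:   # each node offers its own exits
            node = choice
            path.append(node)
    return path

# two readers make different choices and so construct different 'texts'
reader_a = navigate(hypertext, "home", ["history", "theory"])
reader_b = navigate(hypertext, "home", ["theory", "home"])
```

Even in this toy web of three nodes the two readers traverse different sequences; in a large database, as the text notes, each user's path is likely to be unique.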
As we have seen, in a digitally encoded text any part can be
accessed as easily as any other so that we can say that every part of the text
can be equidistant from the reader. In an analogue system like traditional
video, arriving at a particular frame ten minutes into a tape involved having
to spool past every intervening frame. When this information came to be stored
digitally this access became more or less instantaneous. Such technology offers
the idea that any data location might have a number of instantly accessible
links to other locations built into it. Equally the many interventions and
manipulations enabled by this facility create the qualities of interactivity.
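The contrast between spooling and direct access can be sketched as follows, assuming for simplicity one frame per second; the function names are ours, and the counts of frames passed make the difference explicit:

```python
frames = [f"frame_{i}" for i in range(600)]  # ten minutes of 'tape' at one frame per second

def spool_to(tape, target):
    """Sequential (analogue-style) access: pass every intervening frame."""
    passed = 0
    for index in range(len(tape)):
        if index == target:
            break
        passed += 1
    return tape[target], passed

def seek(store, target):
    """Direct (digital) access: every address is equidistant from the reader."""
    return store[target], 0

late_frame, spooled = spool_to(frames, 599)   # had to pass 599 frames first
same_frame, skipped = seek(frames, 599)       # immediate
```

The same frame is reached either way; what changes is that in the digital store no intervening material need be traversed, which is what makes every part of the text equally and instantly linkable.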
Hypertext and a model of the mind
Vannevar Bush’s 1945 essay ‘As We May Think’ is often seen
as a seminal contribution to the idea of hypertext. Bush was motivated by the
problem of information overload; the problem of the sheer volume of knowledge
that specialists, even in the late 1940s, had to access and manipulate. Bush
proposed that science and technology might be applied to the management of
knowledge in such a way as to produce novel methods for its storage and
retrieval. He conceptualised a machine, the ‘Memex’, in which data could be
stored and retrieved by association rather than by the alphabetical and
numerical systems of library indices. Bush argued that:
The human mind operates by association. With one item in its
grasp, it snaps instantly to the next that is suggested by the association of
thoughts, in accordance with some intricate web of trails carried by the cells
of the brain.
It [the Memex] affords an immediate step . . . to
associative indexing, the basic idea of which is a provision whereby any item
may be caused at will to select immediately and automatically another . . . The
process of tying two items together is the important thing. (Bush in Mayer
1999: 34)
Bush’s argument from 1945 carries within it many of the
important ideas that have subsequently informed the technology and practice of
hypertext. In particular his position rests upon the assertion that associative
linkage of data is a more ‘natural’ model of information management than the
conventional linear alphabetical methods of bibliography such as the Dewey
library system. Associative linkage, argues Bush, replicates more accurately
the way the mind works. The continuing appeal of hypertext as both information
storage and creative methodology has been that it appears to offer a better
model of consciousness than linear storage systems. We can observe this appeal
continuing in speculation about the development of a global ‘neural net’ that
follows on from Nelson’s arguments below. These ideas also resurface in a
different form in the arguments of Pierre Lévy calling for a global ‘collective
intelligence’ and in the daily practice of using a site like Wikipedia. Such an
enterprise appears in many ways to conform to the idea that knowledge can be
produced through associative rather than linear linkage and that, moreover,
this knowledge can be collectively authored.
Hypertext as non-sequential writing
The microfiche technologies of the postwar period were
unable to realise Bush’s vision. However, twenty years later, as digital
computing began to be more widespread, his ideas were revived, most notably by
Ted Nelson. His 1982 paper ‘A New Home for the Mind’ argues for the wholesale
reorganisation of knowledge along hypertextual lines:
This simple facility – call it the jump-link capability –
leads immediately to all sorts of new text forms: for scholarship, for
teaching, for fiction, for poetry . . . The link facility gives us much more
than the attachment of mere odds and ends. It permits fully non sequential
writing. Writings have been sequential because pages have been sequential. What
is the alternative? Why hypertext – non sequential writing.
However, Nelson does not stop at the idea of non-sequential
writing, he also foresees, ten years before browser software made Internet
navigation a non-specialist activity, a medium very close to contemporary
website forms of the Internet. In this medium ‘documents window and link freely
to one another’, ‘every quotation may be traced instantly’, and ‘minority
interpretations and commentary may be found everywhere’. He envisages a
hyperworld – a new realm of published text and graphics, all available
instantly; a grand library that anybody can store anything in – and get a
royalty for – with links, alternate visions, and backtrack available as options
to anyone who wishes to publish them.
So, the postwar challenge of managing information overload,
a model of the mind as a web of trails and associations, and a concept of
non-linear writing then extended to a freely accessible ‘grand library’ of all
kinds of media, finally lead us to the concept of hypermedia. Nelson’s vision
of the potential of hypertext opens out to encompass an emancipatory
configuration of human knowledge based in accessibility and manipulation
through associative links.
Hypermediacy
More recently the very specific application of hypertext as
an information management principle expanded to suggest all kinds of
non-linear, networked paradigms. Here the term began to overlap with the idea
of hypermediacy. The ideological investment in the idea of hypertext spills
over into use of the term ‘hypermedia’ to describe the effects of hypertextual
methods of organisation on all mediated forms. By the end of the 1990s,
hypermediacy emerged as an important term in a theory of new media:
the logic of hypermediacy acknowledges multiple acts of
representation and makes them visible. Where immediacy suggests a unified
visual space, contemporary hypermediacy offers a heterogeneous space, in which
representation is conceived of not as a window on the world, but rather as ‘windowed’
itself – with windows that open on to other representations or other media. The
logic of hypermediacy multiplies the signs of mediation and in this way tries
to reproduce the rich sensorium of human experience.
Reproducing the ‘rich sensorium of human experience’ is the
kind of claim that recalls Marshall McLuhan’s view that media should be
understood as extensions of the human body (1.6.2). As we have seen, it is a
claim that was present in the original formulations of ideas of
hypertextuality – the assumptions about cognition in Vannevar Bush and Ted
Nelson here become a principle in which hypermedia are valorised as somehow
representing the ultimate augmentation of human consciousness.
From the library to
Google – critical questions in hypertext
Much of the debate arising from the application of hypertext
overlapped with discussions about the consequences of interactivity. However,
debates about the issues and questions arising from hypertext practices have
been conducted with reference to literary theory, while questions of
interactivity have tended to reference human–computer interface studies and
communication studies.
Clearly, considerations of interactivity and hypertext share
a concern with the status and nature of the text itself. What happens when
conventional ways of thinking about the text derived from literature or media
studies are applied to texts that, allegedly, work in entirely new ways? If the
existing structures of knowledge are built upon the book, what happens when the
book is replaced by the computer memory and hypertextual linking?
Since the Middle Ages human knowledge and culture have been
written, recorded and in some sense produced by the form of the book (see, for
example, Ong 2002; Chartier 1994). The printed word has established an entire
taxonomy and classification system for the management and production of
knowledge (e.g. contents, indices, reference systems, library systems, citation
methods, etc.). It is argued that this literary apparatus of knowledge is
defined around sequential reading and writing. When we write, we order our
material into a linear sequence in which one item leads into another within
recognised rhetorical terms of, for example, argument, narrative or
observation. Similarly the reader follows, by and large, the sequencing
established by the author. Now, it was argued, hypertext offered the
possibility of non-sequential reading and writing. There is no single order in
which a text must be encountered.
Each ‘node’ of text carries within it variable numbers of
links that take the reader to different successive nodes, and so on. Thus the
reader is offered a ‘non-linear’ or, perhaps more accurately, a ‘multilinear’
experience. (Following a link is a linear process; however, the variable number
of links on offer in any given text produces high numbers of possible pathways.)
The primary literature and debates arising are by now
extensive, and have become one of the most important points of contact between
European critical theory and American cyberculture studies. This section offers
a brief introductory overview of the key questions. For further study see, for
example, Jay David Bolter, Writing Space: The Computer, Hypertext and the
History of Writing, New York: Erlbaum (1991); George Landow and Paul Delaney
(eds), Hypermedia and Literary Studies, Cambridge, Mass.: MIT Press (1991);
George Landow, Hypertext: The Convergence of Contemporary Literary Theory and
Technology, Baltimore and London: Johns Hopkins University Press (1992)
(especially pp. 1-34); George Landow (ed.) Hyper/Text/Theory, Baltimore and
London and Baltimore: Johns Hopkins University Press (1994); and Mark Poster, The Mode of
Information.
Knowledge that is constructed as multilinear rather than monolinear,
it is argued, threatens to overturn the organisation and management of
knowledge as we have known it to date, since all existing knowledge systems are
founded upon the principle of monolinearity.
Thus the very status of the text itself is challenged. The
book which you hold in your hand is dissolved into a network of association –
within the book itself numerous crosslinkages are made available which
facilitate many different reading pathways; and the book itself becomes
permeable to other texts. Its references and citations can be made instantly
available, and other related arguments or converse viewpoints made available
for immediate comparison. In short, the integrity of the book and of book-based
knowledge systems is superseded by network knowledge systems. The
superstructure of knowledge storage that formed library systems (Dewey
classification, indices, paper based catalogues) is replaced by the design of
the search engine with its associated systems of metadata, tagging and
user-generated taxonomies of knowledge.
Hypertext scholarship
We can identify two trajectories in the first wave of
hypertext scholarship that sought to understand the significance of
these developments.
The first was the return to previously marginal works in the
history of literature which had themselves sought to challenge the linearity of
text – these often experimental works are then constructed as
‘proto-hypertexts’. So, for instance, works as diverse as the I Ching, Sterne’s
Tristram Shandy, Joyce’s Ulysses, stories by Borges, Calvino, and Robert Coover
and literary experiments with the material form of the book by Raymond Queneau
and Marc Saporta are all cited as evidence that hypertextual modes of
apprehension and composition have always existed as a limit point and challenge
to ‘conventional’ literature. For students of other media we might begin to add
the montage cinema of Vertov and Eisenstein, experiments with point of view in
films like Kurosawa’s Rashomon and time in a film like Groundhog Day (see, for
example, Aarseth 1997: 41–54 and Murray 1997: 27–64). Equally, the montage of
Dada, Surrealism and their echoes in the contemporary collage of screen-based
visual culture might also be seen as ‘hypermediated’ in Bolter and Grusin’s
sense. Here then is another important point at which the history of culture is
reformulated by the development of new media forms.
Networked
During the late 1970s and throughout the 1980s, capitalist
economies experienced recurring crises, caused by the rigidity of their
centralised production systems. These were crises in the profitability of the
mass production of homogeneous commodities for mass consumer markets. In his
detailed analysis of a shift from the ‘modern’ to the ‘postmodern’ mode of
production, the Marxist cultural geographer David Harvey traced the manner in
which these rigidities of centralised ‘fordist’ economies were addressed.
Writing in 1989, he noted,
what is most interesting about the current situation
is the way that capitalism is becoming ever more tightly organized through
dispersal, geographical mobility, and flexible responses in labour markets,
labour processes and consumer markets, all accompanied by hefty doses of
institutional, product, and technological innovation [our emphases]
These changes were felt in the organisation of media
production. In 1985, Françoise Sabbah observed the tendency of the then
emerging ‘new media’ toward decentralisation of production, differentiation of
products, and segmentation of consumption or reception: the new media determine
a segmented, differentiated audience that, although massive in terms of
numbers, is no longer a mass audience in terms of simultaneity and uniformity
of the message it receives. The new media are no longer mass media . . .
sending a limited number of messages to a homogeneous mass audience. Because of
the multiplicity of messages and sources, the audience itself becomes more
selective. The targeted audience tends to choose its messages, so deepening its
segmentation . . . (Sabbah 1985: 219; quoted in Castells 1996: 339)
Now, in the first decade of the twenty-first century, these
have become key aspects of our networked and dispersed mediasphere. Over the
last twenty-five years or so, the development of decentralised networks has
transformed media and communication processes. Indeed, some commentators now
argue, we have recently entered a new phase in which these characteristics
become even more pronounced. Here, not only are the markets and audiences for
media of all kinds demassified, increasingly specialist and segmented, and
involving a blurring of producer and consumer, but whole sectors of the new
media industries are learning to see their role as providing the means and
opportunities for ‘users’ to generate their own content. Simultaneously, a new
media economics is being recognised, one that does not aim to address large
single audiences but instead seeks out the myriad of minority interests and
niche markets that the net is able to support.
The World Wide Web, corporate intranets, Virtual Learning
Environments, MMORPGs, ‘persistent worlds’, Social Network Sites, blog
networks, online forums of all kinds, and humble email distribution lists, are
all networks of various scales and complexities that nestle within or weave
their way selectively through others. All are ultimately connected in a vast, dense
and (almost) global network (the Internet itself) within which an individual
may roam, if policed and limited by firewalls, passwords, access rights,
available bandwidths and the efficiency of their equipment. This is a network
that is no longer necessarily accessed at fixed desktop workstations plugged
into terrestrial phone lines or cables, but also wirelessly and on the move,
via laptops, PDAs, GPS devices, and mobile phones.
There are intricacies, unforeseen contradictions and social,
political, economic and cultural questions that arise with these developments.
These issues are more fully discussed in Part 3 of this book. For the moment
our task is to see how, in recent history, there has been a shift from media
centralisation to dispersal and networking.
Consumption
From our present position we can see that from the 1980s on,
our consumption of media texts has been marked by a shift from a limited number
of standardised texts, accessed from a few dedicated and fixed positions, to a
very large number of highly differentiated texts accessed in multifarious ways.
The media audience has fragmented and differentiated as the number of media
texts available to us has proliferated. For instance, from an era with a
limited number of broadcast TV stations, no time-shifting VCRs or DVD players,
very limited use of computers as communication devices and no mobile media at
all, we now find ourselves confronted by an unprecedented penetration of media
texts into everyday life. ‘National’ newspapers are produced as geographically
specific editions; they can be interactively accessed and archived online, and
we can receive ‘alerts’ to specific contents.
Network and terrestrial TV stations are now joined by independent satellite and
cable channels. Alongside real-time broadcasts we have TV ‘on demand’, time
shifted, downloaded and interactive. The networked PC in the home offers a vast
array of communication and media consumption opportunities; mobile telephony
and mobile computing have begun to offer a future in which there are no media
free zones, at least in the lives of the populations of the ‘developed’ world.
Technologists are currently conceptualising what a ‘pervasive’ media
environment will be, when all media is available on a variety of wireless platforms
and devices.
The ‘mass media’, which were transformed in this way, were
the products of the communication needs of the first half of the twentieth
century in the industrialised world and as such they had certain
characteristics. They were centralised: content was produced in highly
capitalised industrial locations such as newspaper printworks or Hollywood film
studios. In broadcast media, press and cinema, distribution was tied to
production: film studios owned cinema chains, newspapers owned fleets of
distribution vans, and the BBC and other national ‘broadcasters’ owned their own
transmission stations and masts. Consumption was characterised by uniformity:
cinema audiences all over the world saw the same movie, all readers read the
same text in a national newspaper, we all heard the same radio programme. And
we did these things at the same scheduled times. Twentieth-century mass media
were characterised by standardisation of content, distribution and production
process. These tendencies toward centralisation and standardisation in turn
reflected and created the possibility for control and regulation of media
systems, for professionalisation of communicative and creative processes, for
very clear distinctions between consumers and producers, and relatively easy
protection of intellectual property.
The centre of a
circle
A useful way to conceptualise the difference between
centralised and dispersed media distribution systems is to think about the
differences between radio and television broadcast transmissions and computer
media networks. The technology at the heart of the original radio and TV
broadcast systems is radio wave transmission; here transmission suites required
high investment in capital, plant, buildings, masts, etc. Airwave transmission
was supplemented by systems of coaxial cable transmission, where massive
investments throughout the twentieth century led to the establishment of a
global network of cable systems crossing whole continents and oceans. At the
core of this technology of transmission there was a central idea, that of
transmission from ‘one to many’: one input signal was relayed to many points of
consumption. The radio transmitter, then, works (for social and technological
reasons) on a centralised model.
Nodes in a web
In contrast, the computer server is the technology at the
heart of the dispersed systems of new media. A server, unlike a
transmission mast, is a multiple input/output device, capable of receiving
large amounts of data as input as well as making equally large quantities
available for downloading to a PC. The server is a networked device. It has
many input connections and many output connections, and exists as a node in a
web rather than as the centre of a circle. A radio transmitter capable of
handling broadcast radio and TV signals is an expensive capital investment way
beyond the reach of most enterprises or individuals. The server, on the other
hand, is relatively cheap, being commonplace in medium or large enterprises of
all kinds. Access to server space is commonly domestically available as part of
online subscription packages.
However, this simple opposition between the centralised and
the networked prompts questions. Most interestingly, it points up how there is
no radical and complete break between ‘old’ and ‘new’ media. This is because
networked media distribution could not exist without the technological spine
provided by existing media routes of transmission, from telephone networks to
radio transmission and satellite communications. ‘Old’ media systems of distribution
are not about to disappear, although they become less visible, because they are
the essential archaeological infrastructure of new media.
New media networks have been able to reconfigure themselves
around this ‘old’ core to facilitate new kinds of distribution that are not
necessarily centrally controlled and directed but are subject to a radically
higher degree of audience differentiation and discrimination. Many different
users can access many different kinds of media at many different times around
the globe using network-based distribution. Consumers and users are
increasingly able to customise their own media use to design individualised
menus that serve their particular and specific needs.
This market segmentation and fragmentation should not be
confused with a general democratisation of the media. As Steemers, Robins and
Castells have argued, the multiplication of possible media choices has been
accompanied by an intensification of merger activities among media
corporations: ‘we are not living in a global village, but in customised
cottages globally produced and locally distributed’.
Production
This increased flexibility and informality of our
interaction with media texts of all kinds is equally present in the field of
media production. Here, too, we have seen the development of production
technologies and processes that have challenged the older centralised methods
of industrial organisation and mass media production sectors. These changes can
be perceived within the professional audiovisual industries as well as within
our everyday domestic spheres.
Today, media industries are facing the fact that the
conjunction of computer-based communications and existing broadcast
technologies has created a wholly new and fluid area of media production. The
traditional boundaries and definitions between different media processes are
broken down and reconfigured. The specialist craft skills of twentieth-century
media production have become more generally dispersed throughout the population
as a whole, in the form of a widening baseline of ‘computer literacy’,
information technology skills, and especially the availability of software that
increasingly affords the production of ‘user-generated content’.
Across the period, the range of sites for the production of
media content has expanded – production has been dispersing itself more
thoroughly into the general economy, now frequently dubbed ‘the knowledge
economy’ or the ‘information society’. This dispersal of production can also be
observed from the perspective of the everyday worlds of work and domesticity.
Consider the proximity of media production processes to a twentieth-century
citizen. In the UK during the 1970s, for instance, the nineteenth-century media
processes of print and photography would probably have been the only kind of
media production processes that might be used or discussed in everyday life as
part of civic, commercial, cultural or political activity. Broadcasting and
publishing systems (the ‘press’) were mostly very distant from the lives of
ordinary people. However, by the end of the century, print production was
easier than ever through digitised desktop publishing, and editorial and design
technologies were all available in domestic software packages.
An extraordinary but little-noticed and eccentric example of
this is the use of a subterranean system of conduits designed to provide
hydraulically (water-powered) generated electricity to London households in the
1890s. The conduits were designed to hold water under pressure which powered
generators placed at the threshold of each subscribing home. This system, owned
until the 1970s by the long defunct ‘London Hydraulic Power Company’, was
purchased by Mercury Telecommunications in 1992. Under Mercury’s ownership
these conduits, originally designed to carry water, were used as a means to
deliver Internet cable services to those same homes (Gershuny 1992).
Digital cameras, post-production processes, and distribution through file
compression and networks have transformed domestic photography (see Rubinstein
and Sluis 2008). Television production has moved much closer to the viewer in the sense
that very many of us ‘shoot’ digital video which can now be distributed online
by, for example, YouTube (see 3.23). There may be limitations to this self
production of media images, although new conventions and forms are also
emerging to which the once mainstream media respond reflexively, but, as
Castells recognised, it has also modified the older ‘one way flow’ of images
and has ‘reintegrated life experience and the screen’ (1996: 338).
The integration of media process into everyday life is not
confined to the domestic sphere. As work has increasingly moved towards service
rather than production economies all kinds of non-media workers find themselves
called upon to be familiar with various kinds of media production processes
from web design to Powerpoint presentation and computer-mediated communication
software. Both at home and at work media production processes are far closer to
the rhythms of everyday life. While we certainly would not wish to
over-emphasise the degree of this proximity by echoing claims of cyber pioneers
for the total collapse of the distinction between consumption and production,
it is certainly the case that the distance between the elite process of media
production and everyday life is smaller now than at any time in the age of mass
media.
Consumption meets
production
Across a range of media we have seen the development of a
market for ‘prosumer’ technologies; that is, technologies that are aimed at
neither the professional nor the (amateur) consumer market but both –
technologies that enable the user to be both consumer and producer. This is
true in two senses: the purchaser of a £2,000 digital video camera is clearly a
consumer (of the camera), and may use it to record home movies, the traditional
domain of the hobbyist consumer. However, they may equally use it to record
material of a broadcast quality for a Reality TV show, or to produce an
activist anti-capitalist video that could have global distribution or
pornographic material that could equally go into its own circuit of
distribution. Until the 1990s the technological separation between what was
acceptable for public distribution and what was ‘only’ suitable for domestic
exhibition was rigid. The breakdown of the professional/amateur category is a
matter ultimately of cost. The rigid distinction between professional and
amateur technologies defined by engineering quality and cost has now broken
down into an almost infinite continuum from the video captured on a mobile
phone to the high-definition camera commanding six-figure prices.
The impact of these developments has been most clearly seen
in the music industry. Digital technologies have made possible a dispersal and
diffusion of music production that has fundamentally changed the nature of the
popular music market. The apparatus of analogue music production, orchestral
studios, 20-foot sound desks and 2-inch rolls of tape can all now be collapsed
into a sampling keyboard, a couple of effects units, and a computer. The
bedroom studio was clearly one of the myths of ‘making it’ in the 1990s;
however, it is not without material foundation. The popular success of dance
music in all its myriad global forms is in part the consequence of digital
technologies making music production more accessible to a wider range of
producers than at any time previously.
The PC itself is in many ways the ultimate figure of media
‘prosumer’ technology. It is a technology of distribution, of consumption, as
well as a technology of production. We use it to look at and listen to other
people’s media products, as well as to produce our own, from ripping CD
compilations to editing videotape, mixing music or publishing websites. This
overlap between consumption and production is producing a new networked zone of
media exhibition that is neither ‘professionalised’ mainstream nor amateur
hobbyist. Jenkins argues that new media technologies have
profoundly altered the relations between media producers and consumers. Both
culture jammers and fans have gained greater visibility as they have deployed
the web for community building, intellectual exchange, cultural distribution,
and media activism. Some sectors of the media industries have embraced active
audiences as an extension of their marketing power, have sought greater
feedback from their fans, and have incorporated viewer generated content into
their design processes. Other sectors have sought to contain or silence the
emerging knowledge culture. The new technologies broke down old barriers
between media consumption and media production. The old rhetoric of opposition
and cooptation assumed a world where consumers had little direct power to shape
media content and where there were enormous barriers to entry into the
marketplace, whereas the new digital environment expands their power to
archive, annotate, appropriate, and recirculate media products.
In the media industries the craft bases and apprenticeship
systems that maintained quality and protected jobs have broken down more or
less completely, so that the question of how anyone becomes ‘qualified’ to be a
media producer is more a matter of creating a track record and portfolio for
yourself than following any pre-established routes. This crisis is also
reflected in media education. Here, some argue for a pressing need for a new
vocationalism aimed at producing graduates skilled in networking and the
production of intellectual and creative properties. Others argue that, in the
light of the new developments outlined above, media studies should be seen as a
central component of a new humanities, in which media interpretation and
production are a core skill set for all kinds of professional employment. Yet
others argue for a ‘Media Studies 2.0’ which would break with the traditional
media studies emphasis on ‘old’ broadcasting models and would embrace the new
skills and creativity of a ‘YouTube’ generation.
In summary, new media are networked in comparison to mass
media – networked at the level of consumption where we have seen a
multiplication, segmentation and resultant individuation of media use;
dispersed at the level of production where we have witnessed the multiplication
of the sites for production of media texts and a greater diffusion within the
economy as a whole than was previously the case. Finally, new media can be seen
as networked rather than mass for the way in which consumers can now more
easily extend their participation in media from active interpretation to actual
production.
Virtual
Virtual worlds, spaces, objects, environments, realities,
selves and identities, abound in discourses about new media. Indeed, in many of
their applications, new media technologies produce virtualities. While the term
‘virtual’ (especially ‘virtual reality’) is readily and frequently used with
respect to our experience of new digital media it is a difficult and complex
term. In this section we make some initial sense of the term as a
characteristic feature of new media.
First, throughout the 1990s, the popular icon of ‘virtual
reality’ was not an image of such a reality itself but of a person experiencing
it and the apparatus that produced it. This is the image of a head-set wearing,
crouching and contorted figure perceiving a computer-generated ‘world’ while
their body, augmented by helmets carrying stereoscopic LCD screens, a device
that monitors the direction of their gaze, and wired gloves or body suits
providing tactile and positioning feedback, moves in physical space.
Equally powerful have been a series of movies, cinematic
representations of virtual reality, from the early 1980s onwards, in which the
action and narrative takes place in a simulated, computer generated world.
The ‘virtual reality’ experienced by the wearer of the
apparatus is produced by immersion in an environment constructed with computer
graphics and digital video with which the ‘user’ has some degree of
interaction. The movies imagine a condition where human subjects inhabit a
virtual world which is mistaken for, or has replaced, a ‘real’ and physical
one.
Second, alongside these immersive and spectacular forms of
virtual reality, another influential use of the term refers to the space where
participants in forms of online communication feel themselves to be. This is a
space famously described as ‘where you are when you’re talking on the
telephone’ (Rucker et al. 1993: 78). Or, more carefully, as a space which
‘comes into being when you are on the phone: not exactly where you happen to be
sitting, nor where the other person is, but somewhere in between’ (Mirzoeff
1999: 91).
As well as these uses, the ‘virtual’ is frequently cited as
a feature of postmodern cultures and technologically advanced societies in
which so many aspects of everyday experience are technologically simulated.
This is an argument about the state of media culture, postmodern identity, art,
entertainment, consumer and visual culture; a world in which we visit virtual
shops and banks, hold virtual meetings, have virtual sex, and where
screen-based 3D worlds are explored or navigated by videogame players,
technicians, pilots, surgeons etc.
Increasingly we also find the term being used
retrospectively. We have already noted the case of the telephone, but also the
experience of watching film and television, reading books and texts, or
contemplating photographs and paintings are being retrospectively described as
virtual realities. These retrospective uses of the term can be understood in
two ways: either as a case of the emergence of new phenomena casting older ones
in a new light (Chesher 1997: 91) or that, once it is looked for, experience of
the ‘virtual’ is found to have a long history (Mirzoeff 1999: 91 and Shields
2003).
As Shields has pointed out (2003: 46), in the digital era the
meaning of ‘virtual’ has changed. Where, in everyday usage, it once meant a
state that was ‘almost’ or ‘as good as’ reality, it has now come to mean or be
synonymous with ‘simulated’. In this sense, rather than meaning an ‘incomplete
form of reality’ it now suggests an alternative to the real and, maybe, ‘better
than the real’. However, some older meanings of ‘virtual’ still find echoes in
modern usage. One of these is the connection between the virtual and the
‘liminal’ in an anthropological sense, where the liminal is a borderline or
threshold between different states such as the carnivals or coming of age
rituals held in traditional societies. Such rituals are usually marked by a
period in which the normal social order is suspended for the subject who is
passing from one status or position to another. The more recent interest in
virtual spaces as spaces of identity performance or places where different
roles can be played out appears continuous with older liminal zones (Shields
2003: 12).
The rise of the digital virtual (the virtual as simulation
and as an alternative reality) has also led to interest in philosophical
accounts of the virtual. Here, particularly in the thought of the philosopher
Gilles Deleuze, we are urged to see that the virtual is not the opposite of the
real but is itself a kind of reality and is properly opposed to what is
‘actually’ real. This is an important argument as, in a world in which so much
is virtual, we are saved from concluding that this is tantamount to living in
some kind of un-real and immaterial fantasy world. In networked,
technologically intensive societies we increasingly pass between actual and
virtual realities; in such societies we deal seamlessly with these differing
modes of reality.
There is a common quality to the two kinds of virtual reality
with which we started above (that produced by technological immersion and
computer generated imagery and that imagined space generated by online
communications). This is the way that they give rise to puzzling relationships
between new media technologies and our experiences and conceptions of space, of
embodiment (literally: of having and being conscious of having bodies) and
identity. The generic concept which has subsumed both kinds of virtual reality
has been ‘cyberspace’. It is now arguable that the widespread and deep
integration of new technologies into everyday life and work means that the
concept of ‘cyberspace’ (as another space to ‘real’ physical space) is losing
its force and usefulness. Nevertheless, the promise of a fusion of these two kinds
of virtual reality – the sensory plenitude of immersive VR and the connectivity
of online communication – has been an important theme in the new media
imaginary because, in such a scenario, full sensory immersion would be combined
with extreme bodily remoteness.
The middle term, the ground for anticipating such a fusion
of the two VRs, is the digital simulation of ‘high resolution images of the
human body in cyberspace’. The empirical grounds for venturing such a claim are
seen in the form of virtual actors or synthespians (computer simulations of
actors) that appear in cinema, TV, and videogames. However, the computing
power and the telecommunications bandwidth necessary to produce, transmit and
refresh simulations of human beings and their environments, let alone the
programming that would enable them to interact with one another in real time,
remains a technological challenge. Instead we find the body digitally
represented in a host of different ways. In popular culture for instance we see
increasing hybridisation of the human body in performance as real actors create
the data for a performance which is finally realised in CGI form through
various techniques of motion capture. In the realm of MMORPGs we see the body
of the user represented through avatars that are the subject of intense and
intricate work by their users.
If we were to understand these digitisations of the body as
partial realisations of the fully immersive 3-D Avatar, interesting questions
arise. Where does the desire for such developments lie? And, what goals or
purposes might attract the financial investment necessary for such
technological developments? In thinking about these developments, their
desirability and purpose, we have to take into account the technological
imaginary which so powerfully shapes thinking about new media of all kinds. We
are also reminded of the part played by science fiction in providing us with
ideas and images with which to think about cyberspace and the virtual. Writing
in the mid-1990s, Stone (1994: 84) suggested that when the first ‘virtual
reality’ environments came online they would be realisations of William
Gibson’s famous definition of cyberspace, in his novel Neuromancer, as a
‘consensual hallucination’. The current examples of persistent online worlds
such as ‘Second Life’ or games like World of Warcraft mark the current stage of
this vision and project.
The technological imaginary
William Gibson, in Neuromancer (1986: 52), describes
cyberspace as ‘a consensual hallucination experienced daily by billions of
legitimate operators in every nation ... a graphic representation of data
abstracted from the banks of every computer in every human system. Unthinkable
complexity. Lines of light ranged in the nonspace of the mind, clusters and
constellations of data. Like city lights receding.’ This has become the
standard science fictional basis for imagining cyberspace as an architectural
(Cartesian) space, in which ‘a man may be seen, and perhaps touched as a woman
and vice versa - or as anything else. There is talk of renting prepackaged body
forms complete with voice and touch . . . multiple personality as commodity
fetish!’ (Stone 1994: 85)
Simulated
We saw in the previous section that uses of the concept
‘virtual’ have, in a digital culture, close relationships with ‘simulation’.
Simulation is a widely and loosely used concept in the new media literature,
but is seldom defined. It often simply takes the place of more established
concepts such as ‘imitation’ or ‘representation’. However where the concept is
paid more attention, it has a dramatic effect on how we theorise cultural
technologies such as VR (2.1–2.6) and cinema (2.7). For the moment, it is
important to set out how the term has been used in order to make the concept of
simulation, and how we will subsequently use it, clear.
Looser current uses of the term are immediately evident,
even in new media studies, where it tends to carry more general connotations of
the illusory, the false, the artificial, so that a simulation is cast as an
insubstantial or hollow copy of something original or authentic. It is
important to invert these assumptions. A simulation is certainly artificial,
synthetic and fabricated, but it is not ‘false’ or ‘illusory’. Processes of
fabrication, synthesis and artifice are real and all produce new real objects.
A videogame world does not necessarily imitate an original space or existing
creatures, but it exists. Since not all simulations are imitations, it becomes
much easier to see simulations as things, rather than as representations of
things. The content of simulations may of course (and frequently does) derive
from ‘representations’. This is what lies at the core of Umberto Eco’s analysis
of Disneyland for instance: the houses in Disneyland’s version of an ideal
American Main Street are fakes, deceits, they look something like real houses
yet are something quite different (in this case supermarkets or gift shops)
(Eco 1986: 43). But noticing a gap between the representational content of a
simulation (shops, space invaders) and its architectural or mechanical workings
should not lead us to discount and ignore the latter. The simulation exists
regardless of whether we are fooled by its content or not. Thus the problem to
which simulation draws our attention is not that of the difference between
‘simulated’ and ‘real’ content, but rather that of the material and real
existence of simulations as part of the furniture of the same real world that
has been so thoroughly ‘represented’ throughout the history of the arts and
media. In other words a simulation is real before it imitates or represents
anything.
For the present, however, as things stand in new media
studies, not only is there no agreement that simulation does in fact differ
from representation or imitation, but the simple profusion of answers to the
question of what simulation really is and how, or if it differs at all from
representation or imitation, has led many commentators to give up seeking any
specificity to the concept and to concede that ‘[t]he distinction between simulation
and imitation is a difficult and not altogether clear one. Nevertheless, it is
vitally important. It lies at the heart of virtual reality’.
Yet if the concept is, as Woolley here notes, ‘vitally
important’, it surely becomes all the more important to seek some clarity. We
should then examine the ways in which the term is in use with regard to the
analysis of new media. There are three very broad such ways, which we will call
Postmodernist, Computer, and Game simulation.
Postmodernist simulation
Here the term is drawn principally from Jean Baudrillard’s
identification of simulation with hyperreality (Baudrillard 1997). According to
Baudrillard, simulacra are signs that cannot be exchanged with ‘real’ elements
outside a given system of other signs, but only with other signs within it.
Crucially, these sign-for-sign exchanges assume the functionality and
effectiveness of ‘real’ objects, which is why Baudrillard calls this regime of
signs hyperreal. When, under these conditions, reality is supplanted by hyperreality,
any reality innocent of signs disappears into a network of simulation.
In postmodernist debates over the past few decades claims
that simulation is superseding representation have raised fundamental questions
of the future of human political and cultural agency. Baudrillard himself,
however, is no fan of postmodernist theory: ‘The postmodern is the first truly
universal conceptual conduit, like jeans or coca-cola . . . It is a world-wide
verbal fornication’ (Baudrillard 1996a: 70). This is in stark contrast to those
who use Baudrillard’s theorising as the exemplification of postmodern thought.
Douglas Kellner, for instance, considers Baudrillard as resignedly telling the
story of the death of the real without taking political responsibility for this
story. Others consider him the media pessimist par excellence, who argues that
the total coverage of the real with signs is equivalent to its absolute
disappearance. Still others celebrate Baudrillard as an elegant ‘so what?’ in
the face of the collapse of all values. All, however, omit the central point
regarding his theory of simulation: that it functions and has effects – it is
operational – and is therefore hyper-real rather than hyper-fictional. The
grounds of this operativity are always, for Baudrillard, technological: ‘Only
technology perhaps gathers together the scattered fragments of the real’
(Baudrillard 1996b: 4). ‘Perhaps’, he adds, ‘through technology, the world is
toying with us, the object is seducing us by giving us the illusion of power
over it’ (1996b: 5).
Baudrillard, who published an early (1967) and positive
review of McLuhan’s Understanding Media, makes it clear that the ground of
hyperrealism is technology as a complex social actor over which we maintain an
illusion of control. To cite a typically contentious Baudrillardian example,
electoral systems in developed democratic states do not empower an electorate,
but rather determine the exercise of democracy in cybernetic terms: voting for
party X rather than party Y consolidates the governance of binary coding over
political systems. This constitutes a ‘simulation’ of democracy not in the
sense that there are really and in fact more complex political issues
underlying this sham democracy; but rather in the sense that real and effective
politics is now conducted in precisely this new scenario. Choice has become the
only reality that matters, and it is precisely quantifiable. Thus the
simulation, or transposition of democracy onto another scene, concerned
exclusively with a hypertrophied ‘choice’, is the only political reality there
is. It is for this reason that simulations constitute, for Baudrillard, the
hyperreality of cybernetic governance. The ‘perfect crime’ to which the title
of one of Baudrillard’s works alludes is not the destruction of reality itself,
but the destruction of an illusory reality beyond the technologies that make it
work (Baudrillard 1996b). The effect is not a loss of reality, but the
consolidation of a reality without an alternative.
Where commentators on contemporary cultural change have
seized upon the concept of simulation is in noting a shift from
‘representation’ to simulation as the dominant mode of the organisation of
cultural objects and their signifying relationships to the world. According to
such scholars ‘representation’ was conceived to be a cultural act, an artefact
of negotiated meanings, pointing, however unsuccessfully or incompletely, to a
real world beyond it. ‘Simulation’, they assert, supplants these negotiated
relationships between social and cultural agents and reality, replacing them
with relationships that operate only within culture and its mediations:
‘The theory of simulation is a theory of how our images, our
communications and our media have usurped the role of reality, and a history of
how reality fades.’ Such critical approaches draw on theories that identify
profound cultural, economic and political shifts taking place in the developed
world in recent decades. A defining moment in the development of this approach
is Guy Debord’s Society of the Spectacle (1967), which argues that the
saturation of social space with mass media has generated a society defined by
spectacular rather than real relations. Although there are various approaches
and positions within this broad trend, they generally share the assumption that
the emergence in the postwar period of a consumption-led economy has driven a
culture which is dominated and colonised by the mass media and commodification.
The rise of this commercialised, mediated culture brings with it profound
anxieties about how people might know, and act in, the world. The sheer
proliferation of television screens, computer networks, theme parks and
shopping centres, and the saturation of everyday life by spectacular images so
thoroughly mediated and processed that any connection with a ‘real world’ seems
lost, adds up to a simulated world: a hyperreality where the artificial is
experienced as real. Representation, the relationship (however mediated)
between the real world and its referents in the images and narratives of
popular media and art, withers away. The simulations that take its place also
replace reality with spectacular fictions whose lures we must resist. In broad
outlines, this remains the standard view of Baudrillard’s theses.
Accordingly, Baudrillard’s controversial and often
poorly understood versions of simulation and simulacra have proved very
influential on theories and analysis of postwar popular and visual culture. The
nature of the ascendency of this order of simulation over that of representation
has been posited as being of fundamental importance to questions of the future
of human political and cultural agency. Cultural and critical theory, when
faced with the manufactured, the commodified and the artificial in modern
culture, has identified the simulational and simulacral character of postwar
culture in the developed world – a culture, it is claimed, that is increasingly
derealised by the screens of the mass media, the seductions and veilings of
commodification, and (more recently) the virtualisations of digital culture.
For instance, Fredric Jameson describes the contemporary world as one in which
all zones of culture and everyday life are subsumed by the commodifying reach
of consumer capitalism and its spectacular media: a whole historically original
consumers’ appetite for a world transformed into sheer images of itself and for
pseudo-events and ‘spectacles’ . . . It is for such objects that we reserve
Plato’s concept of the ‘simulacrum’, the identical copy for which no original
has ever existed. Appropriately enough, the culture of the simulacrum comes to
life in a society where exchange value has been generalized to the point at
which the very memory of use value is effaced, a society of which Guy Debord
has observed, in an extraordinary phrase, that in it ‘the image has become the
final form of commodity reification . . .’. (Jameson 1991: 18)
Similarly, for Cubitt, as reality fades, the materiality of
the world around us becomes unsteady, ‘the objects of consumption are unreal:
they are meanings and appearances, style and fashion, the unnecessary and the
highly processed’ (Cubitt 2001: 5).
What is at stake for these theorists is that any sense of
political agency or progressive knowledge is lost in this seductive,
consumerist apocalypse. The relationship between the real and the mediated, the
artificial and the natural, implodes. It is also clear how the technological
sophistication, seductive/immersive and commercial nature of videogames might
be seen as a particularly vivid symptom of this postmodernist condition (Darley
2000). It is equally clear, however, that these critics’ conceptions of
Baudrillard in general and simulation in particular are at best partial, and at
worst wholly misleading. For this reason it is appropriate to refer
to such a constellation of theories as ‘postmodernist’, just as it is to argue that
Baudrillard’s simulation is not postmodernist. Far from providing any
specificity to the concept of simulation, the postmodernist approach
generalises it to the point where it becomes an entire theory of culture.
Computer simulation
The second use of the concept reflects a more specific
concern with simulation as a particular form of computer media. Whereas a
confusion of imitation, representation or mimesis with simulation arises in
postmodernist uses, critical approaches to computer simulation tend to take a
more nuanced attitude to the mimetic elements sometimes (but not always)
present in simulation. The principal difference is, in this case, that
simulation is not a dissembling, illusory distraction from the real world (like
Eco’s Disneyland) but rather a model of the world (or of some aspect of it).
This context presents a more specific and differentiated use of simulation than
that of the postmodernists. For some (writers, engineers, social scientists,
military planners, etc.) the computer simulation models complex and dynamic
systems over time in ways impossible in other media.
Marc Prensky, in a book that espouses the use of computer
games in education and training, offers three definitions of simulation:
• any synthetic or counterfeit creation
• creation of an artificial world that approximates the real one
• a mathematical or algorithmic model, combined with a set of initial conditions, that allows prediction and visualisation as time unfolds (Prensky 2001: 211)
The first and second of these definitions recall the
confusion of some aspects of simulation with imitation. That a simulation is a
‘counterfeit’ (definition 1) suggests it may be smuggled in, unnoticed, to stand
in for ‘the real thing’. That it is ‘synthetic’, by contrast, suggests only
that it has been manufactured. Just as it would be false to say that any
manufactured product, by virtue of being manufactured, counterfeits a reality
on which it is based (what does a car counterfeit?), so it would be equally
false to argue that all simulations ‘counterfeit’ a reality. In short, if
manufacturing goods adds new elements to reality, so too, surely, does
manufacturing simulations.
Definition 2 repeats this error: an artificial world does
not necessarily approximate the real one. Consider, for example, the work of
exobiologists – biologists who research the possible forms life on other worlds
might take. An exobiologist, for instance, might simulate a world with denser
gravity than ours; this would entail that, if life evolved on such a world, it
would take a different form, with creatures perhaps more horizontally than
vertically based, replacing legs with other means of locomotion, and so forth.
Undoubtedly such a world is simulated, but it precisely does not approximate
ours. In a more familiar sense, this is what we encounter in videogame-worlds,
and the rules governing the motion of characters, the impact and consequence of
collisions, and so on. In particular, the issue of ‘virtual gravity’ (generally
weaker than the terrestrial variety with which we are familiar) demonstrates
the extent to which such simulations owe their contribution to reality to their
differences from, rather than approximations of, our own.
In computer game culture the term ‘simulation games’ refers
to a specific genre in which the modelling of a dynamic system (such as a city
in SimCity or a household in The Sims) provides the main motive of the game’s
structure and gameplay experience. Histories of automata quite specifically differentiate
between automata proper and simulacra – in brief, not all automata are
simulacra, insofar as they do not necessarily approximate the human form. These
examples alone ought to make us wary of suggesting any equivalence between
imitation and simulation.
For the task in hand – the identification of analytical
concepts and approaches in the study of computer simulation in the context of a
general account of new media studies – Prensky’s third definition of simulations
as material (and mathematical) technologies and media is very useful. It
recalls, for instance, both the temporal aspects of simulation (see below) and
the Baudrillardian sense, reflecting on the notion of simulation as productive
of reality, neither a ‘counterfeit’ nor necessarily an approximation of a real
world beyond them. This is helpful in that such an account makes more obvious
sense of those simulations used in many different contexts, for example by
economists to predict market fluctuations, and by geographers to analyse
demographic change. Unlike the postmodernist use of the term, this gain in
applicability does not come at the cost of specificity. The processes of simulation
are also foregrounded in gaming, since all digital games are simulations to
some extent. Prensky cites Will Wright (the creator of SimCity, The Sims, and
numerous other simulation games) discussing simulations as models quite
different from, for example, balsa wood models. The simulation is temporal,
modelling processes such as decay, growth, population shifts, not physical
structures. The model, we might say in more familiar terms, really does precede
the reality it produces.
Simulation games
In recent years, game studies has adopted analytical, formal
and descriptive approaches to the specificity of computer simulation software.
‘Simulation’ here refers to the particular character and operations of games,
particularly computer and videogames, as processual, algorithmic media.
Distinctions are made between simulation as a media form that models dynamic,
spatio-temporal and complex relationships and systems (for example, of urban
development and economics in SimCity) and the narrative or representational
basis of other, longer-established, media (literature, film, television, etc.).
For Gonzalo Frasca,
simulations are media objects that model complex systems. They are not limited
to computer media (pre-digital machines and toys can simulate) but come into
their own with the processing affordances of computing. This emphasis on the
simulational character of computer and videogames has proven to be productive
in the task of establishing the distinctiveness of the videogame as a hybrid
cultural form, emphasising features, structures and operations inherited from
both its computer science and board game forebears over other sides of its
family – notably its media ancestors (literature, cinema, television).
What distinguishes the computer simulation is precisely what
video games remind us of: it is a dynamic real-time experience of intervening
with sets of algorithms that model any environment or process (not just
imitating existing ones) – playing with parameters and variables.
So simulation in a videogame could be analysed thus:
• Productive of reality – in Doom, Tomb Raider, or Grand Theft Auto the game is representational on one level – tunnels, city streets, human figures, monsters and vehicles – part of the universe of popular media culture – but the experience of playing the game is one of interacting with a profoundly different kind of environment. These maps are not maps of any territory, but interfaces to a database and the algorithms of the computer simulation;
• This ‘reality’ is mathematically structured and determined. As Prensky points out, The Sims adds a fun interface to a cultural form rooted in science and mathematics, one traditionally presented only as numbers on the screen. Games such as SimCity incorporated a variety of ways of modelling dynamic systems – including linear equations (like a spreadsheet), differential equations (dynamic system-based simulations like Stella) and cellular automata, where the behaviour of certain objects comes from their own properties and from rules for how those properties interact with neighbours, rather than from overall controlling equations;
• As we have seen, exobiology and some videogames clearly indicate that simulations can function without simulating or representing already existing phenomena and systems. The mimetic elements of Tetris, Minesweeper and Donkey Kong are residual at best, yet each of these games is a dynamic simulated world with its own spatial and temporal dimensions and dynamic relationships of virtual forces and effects. They simulate only themselves;
• Thinking of videogames as simulations also returns us to the assertion that the player’s experience of cyberspace is one not only of exploration but of realising or bringing the gameworld into being in a semiotic and cybernetic circuit: the distinguishing quality of the virtual world is that the system lets the participant observer play an active role, where he or she can test the system and discover its rules and structural qualities in the process.
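The cellular-automaton style of modelling mentioned above – where behaviour emerges from each cell’s own state and local rules rather than from any overall controlling equation – can be sketched in a few lines. The particular rule used (Wolfram’s elementary Rule 90) is an illustrative assumption, not one cited in the discussion:

```python
# A minimal sketch of a cellular automaton: each cell's next state is set
# by a local rule over its own state and its neighbours' states, with no
# overall controlling equation. Wolfram's elementary Rule 90 is used here
# as an illustrative choice, not a rule cited in the discussion above.

RULE = 90  # under Rule 90, a cell's next state is the XOR of its two neighbours

def step(cells):
    """Apply the rule to every cell at once (cells beyond the edge count as 0)."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# A single live cell unfolds, step by step, into a Sierpinski-like pattern:
world = [0] * 7 + [1] + [0] * 7
for _ in range(4):
    world = step(world)
```

The pattern that emerges is generated, not stored anywhere in advance – a small instance of the point that a simulation produces its ‘reality’ rather than representing one.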
Summary
Ostensibly, these three positions have quite different
objects of concern: the computer simulation of interest to game studies is not
postmodernist simulation. Game studies is more modest – keen to establish the
difference of games and simulations from narrative or representational media
forms, rather than claiming simulation as an overarching model of contemporary
culture. To analyse a videogame as a computer simulation is to understand it as
an instance in everyday life, rather than as an all-encompassing hyperreality.
Moreover, the screen metaphors of the postmodernist simulation carry little
sense of the dynamic and procedural characteristics of computer simulation.
Studied as such, computer simulations can be seen not only as the visual
presentation of artificial realities (as, again, the screens of hyperreality
suggest) but as the generation of dynamic systems and economies, often with
(and always in videogames) an assumption of interactive engagement written into
the models and processes.
The three broad
concepts of simulation outlined above overlap, however. Postmodernist
simulation, though formulated before the rise of computer media to their
current predominance and predicated on – crudely speaking – the electronic
media and consumer culture, is now widely applied to the Internet, Virtual
Reality and other new media forms. Discussions of the nature of computer simulations
often also entail a consideration of the relationships (or lack of) between the
computer simulation and the real world. Both make a distinction between
‘simulation’ (where a ‘reality’ is experienced that does not correspond to any
actually existing thing), and ‘representation’ (or ‘mimesis’, the attempt at an
accurate imitation or representation of some real thing that lies outside of
the image or picture) – though often with very different implications and
intentions.
To sum up: within all of these approaches to simulation
there is a tendency to miss a key point: simulations are real, they exist, and
are experienced within the real world which they augment. Since, as Donkey Kong
and the alien creatures of exobiology teach us, not all simulations are imitations,
it becomes much easier to see simulations as things in their own right, rather
than as mere representations of other (‘realer’) things.
Conclusion
The characteristics which we have discussed above should be
seen as part of a matrix of qualities that we argue is what makes new media
different. Not all of these qualities will be present in all examples of new
media – they will be present in differing degrees and in different mixes. These
qualities are not wholly functions of technology – they are all imbricated into
the organisation of culture, work and leisure with all the economic and social
determinations that involves. To speak of new media as networked, for instance,
is not just to speak of the difference between server technology and broadcast
transmitters but also to talk about the deregulation of media markets. To talk
about the concept of the virtual is not just to speak of head-mounted display
systems but also to have to take into account the ways in which experiences of
self and of identity are mediated in a ‘virtual’ space. Digitality,
Interactivity, Hypertextuality, Virtuality, Networked Media and Simulation are
offered as the beginnings of a critical map. This discussion of the
‘characteristics’ of new media has merely established the grounds upon which we
might now begin substantially to address the questions that they raise.
11
Change and Continuity
From this section to the end we change tack. So far
we have considered, as promised at the outset, what it is that we take to be ‘new
media’ and we have gone as far as to suggest some defining characteristics. We
now take up the question of what is involved in considering their ‘newness’.
Enthusiastic students of media technologies might wonder why this is a
necessary question. Why do we not simply attempt to describe and analyse the
exciting world of media innovation that surrounds us? Writing in this manner
would be at the mercy of what we referred to in the introduction as permanent
‘upgrade culture’ – no sooner published than out of date because it failed to
offer any critical purchase on the field. There are plenty of existing sites
for readers to catch up on latest developments most of which are designed to
facilitate the reader’s consumption. Our purpose is to facilitate critical
thinking. In order to do that we need to get beyond the banal pleasures of
novelty to reveal how the ‘new’ is constructed. Our aim here is to enable a
clarity of thought often disabled by the shiny dazzle of novelty. We hope to
show that this centrally involves knowing something about the history of media,
the history of newness, and the history of our responses to media and
technological change. But there is more to it than that.
Introduction
Media theorists, and other commentators, tend to be polarised
over the degree of new media’s newness. While the various camps seldom engage
in debate with each other, the argument is between those who see a media
revolution and those who claim that, on the contrary, behind the hype we
largely have ‘business as usual’. To some extent this argument hinges upon the
disciplinary frameworks and discourses within which proponents of either side
of the argument work. What premises do they proceed from? What questions do
they ask? What methods do they apply? What ideas do they bring to their
investigations and thinking?
In this section we simply recognise that while the view is
widely held that new media are ‘revolutionary’ – that they are profoundly or
radically new in kind – throughout the now extensive literature on new media
there are also frequent recognitions that any attempt to understand new media
requires a historical perspective. Many reasons for taking this view will be
met throughout the book as part of its detailed case studies and arguments. In
this section we look at the general case for the importance of history in the
study of new media.
Measuring ‘newness’
The most obvious question that needs to be asked is: ‘How do
we know that something is new or in what way it is new if we have not carefully
compared it with what already exists or has gone before?’ We cannot know with
any certainty and detail how new or how large changes are without giving our
thinking a historical dimension. We need to establish from what previous states
things have changed. As Brian Winston observes, the concept of a
‘revolution’ is implicitly historical: how can one know ‘that a situation has
changed – has revolved – without knowing its previous state or position?’
(Winston 1998: 2). In another context, Kevin Robins (1996: 152) remarks that,
‘Whatever might be “new” about digital technologies, there is something old in
the imaginary signification of “image revolution”.’ Revolutions then, when they
take place, are historically relative and the idea itself has a history. It is
quite possible to take the view that these questions are superfluous and only
divert us from the main business. This certainly seems to be the case for many
new media enthusiasts who are (somewhat arrogantly, we may suggest) secure in
their conviction that the new is new, and that how it got to be that way is of
far less interest than what comes next.
However, if asked, this basic question can help us guard
against missing at least three possibilities:
1
Something may appear to be new, in the sense
that it looks or feels unfamiliar or because it is aggressively presented as
new, but on closer inspection such newness may be revealed as only superficial.
It may be that something is new only in the sense that it turns out to be a new
version or configuration of something that, substantially, already exists,
rather than being a completely new category or kind of thing. Alternatively,
how can we know that a medium is new, rather than a hybrid of two or more older
media or an old one in a new context which in some ways transforms it?
2
Conversely, as the newness of new media becomes
familiar in everyday use or consumption we may lose our curiosity and
vigilance, ceasing to ask questions about exactly what they do and how they are
being used to change our worlds in subtle as well as dramatic ways.
3
A final possibility that this simple question
can uncover is that on close inspection and reflection, initial estimates of
novelty can turn out not to be as they seem. We find that some kinds and
degrees of novelty exist but not in the ways that they were initially thought
to. The history of what is meant by the new media buzzword ‘interactivity’ is a
prime example of the way a much-lauded quality of new media has been repeatedly
qualified and revised through critical examination.
The overall point is that the ‘critical’ in the critical
study of new media means not taking things for granted. As little as possible is
assumed about the object of study; instead, it is illuminated by asking, and
attempting to answer, questions about it. An important way of doing this – of approaching something
critically – is to ask what its history is or, in other words, how it came to
be as it is. Lastly, in this review of reasons to be historical in our approach
to new media, we need to recall how extensive and heterogeneous are the range
of changes, developments, and innovations that get subsumed under the term ‘new
media’. This is so much the case that without some attempt to break the term or
category down into more manageable parts we risk such a level of abstraction
and generalisation in our discussions that they will never take us very far in
the effort to understand one or another of these changes. A better approach is
to look for the different ratios of the old and the new across the field of new
media. One way of doing this is, precisely, historical. It is to survey the
field of new media in terms of the degree to which any particular development
is genuinely and radically new or is better understood as simply an element of
change in the nature of an already established medium.
Old media in new times?
For instance, it can be argued that ‘digital television’ is
not a new medium but is best understood as a change in the form of delivering
the contents of the TV medium, which has a history of some fifty years or more.
This would be a case of what Mackay and O’Sullivan describe as an ‘old’ medium
‘in new times’ as distinct from a ‘new medium’ (1999: 4–5). On the other hand,
immersive virtual reality or massively multi-player online gaming look to be,
at least at first sight, mediums of a radically and profoundly new kind. This,
however, still leaves us with the problem of defining what is truly new about
them.
Before we accept this ‘new/old’ axis as a principle for
distinguishing between kinds of new media, we have to recognise immediately
that the terms can, to some extent, be reversed. For instance, it can be argued
that some of the outcomes of producing and transmitting TV digitally have had
quite profound effects upon its programming and modes of use and consumption
such that the medium of TV has significantly changed. It could also be claimed
that the increased image size, high definition, programmes on demand,
interactive choice, etc., of contemporary television effectively transform the
medium. Whether we would want to go as far as saying that it has become an
entirely new medium, however, seems doubtful. On the other hand,
the apparently unprecedented experiences offered by the technologies of
immersive VR or online, interactive, multimedia can be shown to have histories
and antecedents, both of a technological and a cultural kind, upon which they
draw and depend. Whether, in these cases, however, we would want to go as far
as saying that therefore VR is adequately defined by tracing and describing its
many practical and ideological antecedents is another matter.
The idea of ‘remediation’
A third possibility is that put forward by Jay Bolter and
Richard Grusin (1999) who, following an insight of Marshall McLuhan,
effectively tie new media to old media as a structural condition of all media.
They propose and argue at some length that what is ‘new’ about new media is
the manner in which the digital technologies they employ ‘refashion older
media’, and in which these older media then ‘refashion themselves to answer to
the challenges of new media’. It seems to us that there is an unassailable truth in
this formulation. This is that new media are not born in a vacuum and, as
media, would have no resources to draw upon if they were not in touch and
negotiating with the long traditions of process, purpose, and signification
that older media possess. Yet, having said this, many questions about the
nature and extent of the transformations taking place remain.
What is new about interactivity?
From the 1990s onward, ‘interactivity’ became a key buzzword
in the world of new media. The promise and quality of interactivity has been
conceived in a number of ways.
The creative management of information
This concept of interactivity has roots in the ideas of
early computer visionaries dating back as far as the 1940s, such as Vannevar
Bush (1945) and Alan Kay and Adele Goldberg (1977) (both in Mayer 1999). These
are visions of interactive computer databases liberating and extending our
intellects. Such concepts, conceived in the years after the Second World War,
were in part responses to the perceived threat of information overload in the
modern world. Searchable databases that facilitated a convergence of existing
print and visual media and the information they contained were seen as a new
way for the individual to access, organise, and think with information.
Interactivity as consumer choice technologically embodied
We saw in our discussion of the concept in 1.2 how it has
been central to the marketing of personal computers by linking it to
contemporary ideas about consumer choice. On this view, being interactive means
that we are no longer the passive consumers of identical ranges of
mass-produced goods, whether intellectual or material. Interactivity is
promoted as a quality of computers that offers us active choices and
personalised commodities, whether of knowledge, news, entertainment, banking,
shopping, or other services.
The death of the author
During the 1990s, cybertheorists were keen to understand
interactivity as a means of placing traditional authorship in the hands of the
‘reader’ or consumer (Landow 1992). Here, the idea is that interactive media
are a technological realisation of a theory, first worked out mainly in
relation to literature, known as ‘post-structuralism’. We had, it was
suggested, witnessed the ‘death of the author’, the central, fixed and god-like
voice of the author behind the text (see, for example, Landow 1992).
Interactivity meant that users of new media would be able to navigate their way
across uncharted seas of potential knowledge, making their own sense of a body
of material, each user following new pathways through the matrix of data each
time they set out on their journeys of discovery.
A related idea is that the key property of interactivity is
a major shift in the traditional relationship between the production and
reception of media. This resides in the power that computers give the
reader/user to ‘write back’ into a text. Information, whether in the form of
text, image, or sound, is received within software applications that allow the
receiver to change – delete, add, reconfigure – what they receive. It has not
been lost on many thinkers that this practice, while enabled by electronic
digital technology, resembles the medieval practice of annotating and adding
extensive marginalia to manuscripts and books so that they became palimpsests.
These are surfaces upon which generations of additions and commentaries are
overwritten on texts, one on the other. While the parallel is real, however,
it holds only in a limited sense. There is, after all, a tremendous difference
between the operation of the Internet and the highly selective access of the
privileged class of medieval monks to sacred texts.
More recently, in the face of exaggerated claims for the almost magical powers
of interactivity, and on the basis of practice-based critical reflection, more
sober estimations have been made. As the artist
Sarah Roberts has put it:
the illusion that goes along with [interactivity] is of a
kind of democracy . . . that the artist is sharing the power of choice with the
viewer, when actually the artist has planned every option that can happen . . .
it’s a great deal more complex than if you [the user] hadn’t had a sort of
choice, but it’s all planned. (Penny 1995: 64)
These concepts of
interactivity are less descriptions of particular technical, textual, or
experiential properties and more claims or propositions rooted in the inspired
founding visions, imaginative marketing strategies, and the sophisticated
analogies of academic theorists about new, real or imagined, possibilities of
human empowerment. However, whatever merits these ideas have, whether visionary
or opportunistic, they have been subjected to methodical enquiry from within a
number of disciplines which we need to attend to if we are to get beyond these
broad characterisations of interactivity.
Human–computer interaction: intervention and control
A technical idea of interactivity has taken shape most
strongly within the discipline of human–computer interaction (HCI). This is a
scientific and industrial field which studies and attempts to improve the
interface between computers and users.
An ‘interactive mode’ of computer use was first posited
during the years of mainframe computers when large amounts of data were fed
into the machine to be processed. At first, once the data was entered, the
machine was left to get on with the processing (batch processing). Gradually
however, as the machines became more sophisticated, it became possible to
intervene into the process whilst it was still running through the use of
dialogue boxes or menus. This was known as operating the computer in an
‘interactive’ mode (Jensen 1999: 168). This ability to intervene in the
computing process and see the results of your intervention in real time was
essentially a control function. It was a one-way command communication from the
operator to the machine. This is a very different idea of interaction from the
popularised senses of hypertextual freedom described above (Huhtamo 2000).
This idea of interaction as control continued to develop
through the discipline of HCI and was led by the ideas of technologists like
Licklider and Engelbart (Licklider and Taylor 1999 [orig: 1968]; Engelbart 1999
[orig: 1963]). If the kind of symbiosis between operator and machine that they
envisaged was to take place then this interactive mode had to be extended and
made available outside of the small groups who understood the specialised
programming languages. To this end, during the early 1970s, researchers at the
Xerox Palo Alto Research Center developed the GUI, the graphical user
interface, which would work within the simultaneously developed standard format
for the PC: keyboard, processor, screen and mouse. In what has become one of
the famous moments in the history of Xerox, they failed to exploit their
remarkable breakthroughs. Later, Apple were able to use the GUI to launch their
range of PCs in the early 1980s: first the Apple Lisa, then in 1984 the
celebrated Apple Mac. These GUI systems were then widely imitated by Microsoft.
Communication studies and the ‘face-to-face’ paradigm
However, this idea of interaction as control, as interface
manipulation, is somewhat at odds with the idea of interactivity as a mutually
reciprocal communication process, whether between user and machine/database or
between user and user. Here we encounter an understanding of the term derived
from sociology and communications studies. This tradition has attempted to
describe and analyse interactivity and computers in relation to interactivity
in face-to-face human communication. In this research interaction is identified
as a core human behaviour, the foundation of culture and community. For
communications theorists, interaction is present in varying degrees as a
quality of communication. So a question-and-answer pattern of communication
is somewhat ‘less’ interactive than an open-ended dialogue (see, for example,
Shutz 2000; Jensen 1999). Similarly the modes of interactivity described earlier
would here be classified on a scale of least to most interactive, with the
various kinds of CMC ‘most’ interactive and the navigational choices ‘least’
interactive.
Various commentators (for example, Stone 1995: 10; Aarseth
1997: 49) quote Andy Lippman’s definition of interactivity generated at MIT in
the 1980s as an ‘ideal’. For Lippman interactivity was ‘mutual and simultaneous
activity on the part of both participants, usually working toward some goal,
but not necessarily’. This state needed to be achieved through a number of
conditions:
· mutual interruptibility
· limited look-ahead (so that none of the partners in the interaction can
foresee the future shape of the interaction)
· no default (there is no pre-programmed route to follow)
· the impression of an infinite database (from the participants’ point of
view).
This sounds like a pretty good description of conversation,
but a very poor description of using a point-and-click interface to ‘interact’
with a computer.
The study of artificial intelligence
There seem to us to be some real problems with the
application of communications theories based in speech to technologically
mediated communications. Unresolved, these problems lead to impossible
expectations of computers, expectations that open up a gap between what we
experience in computer-based interaction and what we might desire. Often this
gap gets filled by predictions drawn from yet another methodological field –
that of artificial intelligence (AI). The argument usually goes something like
this. Ideal human–computer interaction would approach as close as possible to
face-to-face communication; however, computers obviously can’t do that yet since
they are (still) unable to pass as human for any length of time. Futuristic
scenarios (scientific and science fictional) propose that this difficulty will
be resolved as chips get cheaper and computing enters into its ubiquitous phase
(see ubiquitous computing and pervasive media). In the meantime we have to make
do with various degrees along the way to ‘true’ (i.e. conversational)
interaction. In this construction interactivity is always a failure awaiting
rescue by the next development on an ever-shifting technological event horizon.
Media studies
Understandings of interactivity not only draw on HCI,
communications studies, and AI research but often call up debates around the
nature of media audiences and their interpretations of meanings that have been
generated within media studies. Influential strands within media studies teach
that audiences are ‘active’ and make multiple and variable interpretative acts
in response to media texts: the meaning of the text must be thought of in terms
of which set of discourses it encounters in any particular set of
circumstances, and how this encounter may restructure both the meaning of the
text and the discourses which it meets.
This reading of audience behaviour is sometimes referred to
as an ‘interactive’ activity. It is argued that, even before the emergence of
computer media, we as readers already had ‘interactive’ relationships with
(traditional analogue) texts. This position is then extended to argue that not
only do we have complex interpretative relationships with texts but active
material relationships with texts; we have long written marginalia, stopped and
rewound the videotape, dubbed music from CD to tape, physically cut and pasted
images and text from print media into new arrangements and juxtapositions. In
this reading, interactivity comes to be understood as, again, a kind of
technological correlative for theories of textuality already established and an
extension of material practices that we already have. So, for instance, even
though we might not all share the same experience of a website we may construct
a version of ‘the text’ through our talk and discussion about the site;
similarly it is argued we will not all share the same experience of watching a
soap opera. Indeed, over a period of weeks we will almost certainly not see the
same ‘text’ as other family members or friends, but we can construct a common
‘text’ through our responses and talk about the programme. The text and the
meanings which it produces already only exist in the spaces of our varied interpretations
and responses.
In other words there is a perspective on interactivity,
based in literary studies and media studies, that argues that nothing much has
changed in principle. We are just offered more opportunities for more complex
relationships with texts but these relationships are essentially the same
(Aarseth 1997: 2). However, we would argue that the distinction between
interaction and interpretation is even more important now than previously. This
is because the problems which face us in understanding the processes of
mediation are multiplied by new media: the acts of multiple interpretation of
traditional media are not made irrelevant by digital and technological forms of
interactivity but are actually made more numerous and complex by them. The
more textual choices available to the reader, the greater the range of
possible interpretative responses. The very necessity of intervention in the text, of manipulation of
the text’s forms of interaction, requires a more acute understanding of the act
of interpretation.
Grassroots democratic exchange
Beyond the particular ways of understanding interactivity
that flow from the four methodologies we have discussed, there lies another,
more diffuse yet extremely powerful, discourse about interactivity that is so
pervasive as to have become taken for granted. Within this usage ‘interactive’
equals automatically better – better than passive, and better than just
‘active’ by virtue of some implied reciprocity. This diffuse sense of the
virtue of interactivity also has a social and cultural history, dating from the
late 1960s and early 1970s. In this history, democratising challenges to
established power systems were led by constant calls for dialogue and increased
lateral, rather than vertical and hierarchical, communications as a way of
supporting social progress. This ideological attack on one-way information
flows in favour of lateral or interactive social communications lay behind much
of the radical alternative rhetorics of the period. A community arts and media
group active in London through the 1970s and 1980s, under the name of
‘Interaction’, is characteristic of the period in its analysis:
The problems of a pluralist urban society (and an over
populated one dependent on machines as well) are very complex. Answers, if
there are any, lie in the ability to relate, to inform, to listen – in short
the abilities of creative people.
The abilities to ‘relate’ and to ‘listen’ are the skills of
face-to-face dialogue and social interaction recast as a progressive force.
This valorisation of social dialogue was ‘in the air’ in the early 1970s. It
informed a radical critique of mainstream media which took root not only in the
burgeoning of alternative and community media practices of the period but also
in early ideas about computer networking. As was pointed out by Resource One, a
community computing facility based in the Bay area of San Francisco:
Both the quantity and content of available information is
set by centralised institutions – the press, TV, radio, news services, think
tanks, government agencies, schools and universities – which are controlled by
the same interests which control the rest of the economy. By keeping
information flowing from the top down, they keep us isolated from each other.
Computer technology has thus far been used . . . mainly by the government and
those it represents to store and quickly retrieve vast amounts of information
about huge numbers of people. . . . It is this pattern that convinces us that
control over the flow of information is so crucial.
This support for ‘democratic media’ is a kind of popular and
latter-day mobilisation of ideas derived from the Frankfurt School, with its
criticisms of the role of mass media in the production of a docile population
seduced by the pleasures of consumption and celebrity. In this reading
‘interactive’ media are constructed as a potential improvement on passive media
in that they appear to hold out the opportunity for social and political
communications to function in a more open and democratic fashion which more
closely approaches the ideal conditions of the public sphere.
The return of the Frankfurt School critique in the popularisation of new media
We are now in a position to see that the idea of
interactivity, as one of the primary ‘new’ qualities of new media, comes to us
as an automatic asset with a rich history. Yet, as we have also seen, it is a
term that carries the weight of a number of different, and contradictory,
histories. It may be possible to argue that it is precisely this lack of
definition which makes it such a suitable site for our investment in the idea
of ‘the new’.
What kind of history?
‘“I Love Lucy” and “Dallas”, FORTRAN and fax, computer
networks, comsats, and mobile telephones. The transformations in our psyches
triggered by the electronic media thus far may have been preparation for bigger
things to come’ (Rheingold 1991: 387).
In the previous chapter we posed a number of basic questions
that need to be asked if critical studies of new media are to proceed without
being based upon too many assumptions about what we are dealing with. We
strongly suggested that asking these questions requires us to take an interest
in the available histories of older media. There is, however, another important
reason why the student of new media may need to pay attention to history. This
is because, from their very inception, new media have been provided with
histories, some of which can be misleading.
From the outset, the importance of new media, and the kind
of futures they would deliver, has frequently been conceived as part of a
historical unfolding of long-glimpsed possibilities. As the quote above
suggests, such accounts imply that history may only have been a preparation for
the media technologies and products of our time. In other words, a historical
imagination came into play at the moment we began to strive to get the measure
of new media technologies. These historical perspectives are often strongly
marked by paradoxically old-fashioned ideas about history as a progressive process.
Such ideas rapidly became popular and influential. There is little exaggeration
in saying that, subsequently, a good deal of research and argument in the early
years of ‘new media studies’ has been concerned with criticising these
‘histories’ and outlining alternative ways of understanding media change.
This section
While this book is not the place to study theories of
history in any depth, a body of historical issues now attaches itself to the
study of new media. Some examples, and an idea of the critical issues they
raise, are therefore necessary. In this section we first consider what are
known as teleological accounts of new media. The meaning of this term will
become clearer through the following discussion of some examples but, broadly,
it refers to the idea that new media are a direct culmination of historical
processes. Then, by taking an example of work on the history of new media, we
seek to show that there can be no single, linear historical narrative that
would be adequate to our understanding of all that ‘new media’ embraces. Instead,
we are clearly faced with a large number of intersecting histories. These are
unlikely to fall into a pattern of tributaries all feeding regularly and
incrementally into a main stream. We would be hard put to think, let alone
prove, that all of the developments, contexts, agents and forces that are
involved in these histories had anything like a shared goal or purpose. We then
outline the approaches of some theorists of new media who, rejecting the idea
that new media can simply be understood as the utopian end point of progressive
historical development, seek alternative ways of thinking about the differences
and the complex connections between old and new media. In doing this we will
consider how Michel Foucault’s influential ‘genealogical’ theory of history has
found a place in studies of new media.
Lastly, we consider a view derived from modernist
aesthetics, which argues that for a medium to be genuinely new its unique
essence has to be discovered in order for it to break itself free from the past
and older media. In questioning this idea we introduce a number of examples in
which new media are seen to recall the past, rather than break with it.
Teleological accounts
of new media
From cave paintings to mobile phones
In a once popular and influential history of ‘virtual
reality’, Howard Rheingold takes us to the Upper Palaeolithic cave paintings of
Lascaux, where 30,000 years ago, ‘primitive but effective cyberspaces may have
been instrumental in setting us on the road to computerized world building in
the first place’ (Rheingold 1991: 379). He breathlessly takes his reader on a
journey which has its destination in immersive virtual environments. En route
we visit the origins of Dionysian drama in ancient Greece, the initiation rites
of the Hopi, Navajo, and Pueblo tribes ‘in the oldest continuously inhabited
human settlements in North America’, the virtual worlds of TV soap operas like
I Love Lucy and Dallas, arriving at last to meet the interactive computing
pioneers of Silicon Valley, major US universities and Japanese corporations. In
Rheingold’s sweeping historical scheme, the cave painting appears to hold the
seeds of the fax machine, the computer network, the communications satellite
and the mobile phone (Rheingold 1991: 387)!
Few examples of this way of understanding how we came to
have a new medium are as mind-boggling in their Olympian sweep as Rheingold’s.
But, as we shall see, other theorists and commentators, often with more limited
ambitions, share with him the project to understand new media as the
culmination or present stage of development of all human media over time. When
this is done, new media are placed at the end of a chronological list that
begins with oral communication, writing, printing, drawing and painting, and
then stretches and weaves its way through the image and communication media of
the nineteenth and twentieth centuries, photography, film, TV, video and
semaphore, telegraphy, telephony and radio. In such historical schemas there is
often an underlying assumption or implication – which may or may not be openly
stated – that new media represent a stage of development that was already
present as a potential in other, earlier, media forms. A further example will
help us see how such views are constructed and the problems associated with
them.
From photography to telematics: extracting some sense from teleologies
Peter Weibel, a theorist of art and technology, former
director of Ars Electronica and now director of a leading centre for new media
art (ZKM, the Zentrum für Kunst und Medientechnologie, in Karlsruhe, Germany),
offers an eight-stage historical model of the progressive development of
technologies of image production and transmission which, having photography as
its first stage, spans 160 years (1996: 338–339).
Weibel notes that in 1839 the invention of photography meant
that image making was freed for the first time from a dependence upon the hand
(this is Stage 1). Images were then further unfixed from their locations in
space by electronic scanning and telegraphy (Stage 2). In these developments
Weibel sees ‘the birth of new visual worlds and telematic culture’ (1996: 338).
Then, in Stages 3–5, these developments were ‘followed by’
film which further transformed the image from something that occupied space to
one that existed in time. Next, the discovery of the electron, the invention of
the cathode ray tube, and magnetic recording brought about the possibility of a
combination of film, radio, and television – and video was born. At this stage,
Weibel observes, ‘the basic conditions for electronic image production and
transfer were established’ (1996: 338).
In Stage 6, transistors, integrated circuits and silicon
chips enter the scene. All previous developments are now revolutionised as the
sum of the historical possibilities of machine-aided image generation are at
last united in the multimedia, interactive computer. This newly interactive
machine, and the convergence of all other technological media within it, then
join with telecommunications networks and there is a further liberation as
‘matterless signs’ spread like waves in global space (Stage 7). A new era
(first glimpsed at Stage 2) now dawns: that of post-industrial, telematic
civilisation.
So, Stage 7, Weibel’s penultimate stage, is that of
interactive telematic culture, more or less where we may be now at the end of
the first decade of the twenty-first century. His final Stage 8 tips us into
the future, a stage ‘until now banished to the domain of science fiction’ but ‘already
beginning to become a reality’ (1996: 339). This is the sphere of advanced
sensory technologies in which he sees the brain as directly linked to ‘the
digital realm’ (ibid.).
Weibel clearly sees this history as progressive, one in
which ‘Over the last 150 years the mediatisation and mechanisation of the
image, from the camera to the computer have advanced greatly’ (1996: 338).
There is a direction, then, advancing toward the present and continuing into
the future, which is revealed by the changing character of our media over time.
As we look back over Weibel’s eight stages we see that the
‘advances’ all concern the increasing dematerialisation of images and visual
signs, their separation from the material vehicle which carries them. The
final, culminating stage in this dynamic is then glimpsed: neurological
engineering which is about to usher in a direct interfacing of the brain with
the world – a world where no media, material or immaterial, exist. We have the
end of media or, as his title states, The World as Interface.
What kind of history is being told here?
• Each of Weibel’s stages points to real technological developments in image
media production and transmission. These technologies and inventions did
happen, did and do exist.
• Moving out from the facts, he then offers brief assessments of what these
developments have meant for human communication and visual culture. In these
assessments, the insights of other media theorists show through.
• Overall, Weibel organises his observations chronologically; the stages
follow each other in time, each one appearing to be born out of the previous
one.
• There is an ultimate point of origin – photography. The birth of this image
technology is placed as a founding moment out of which the whole process
unfolds.
• He finds a logic or a plot for his unfolding story – his sequential
narrative of progress. This is the story of the increasing automation of
production and increasing separation of signs (and images) from any physical
vehicle that carries them.
This story is not without sense. But it is important to see
that it is, in actuality, an argument. It is an organisation and integration of
facts and ways of thinking about those facts. Facts? Photography and then
telecommunications were invented. Hard to contest. Ways of thinking about the
significance of those facts? Photography and telecommunications converged to
mean that reality (real, material, physically tangible space) disappeared. A
dramatic pronouncement that, at the very least, we may want to debate.
By selectively giving each fact a particular kind of
significance (there are many others that he could have found), Weibel is making
a case. Although it is more focused than the example we took from Rheingold’s
‘history’ of VR, it is basically similar in that an argument is made in the
form of a historical narrative. Within Weibel’s ‘history’ he foregrounds and
makes us think about some very important factors. Good, perceptive and
well-researched stories have always done this.
However, at the same time, there are some big problems with Weibel’s account
if we take it as credible history without asking further questions about its
implications. This is because he does not tell us
why and how the apparent unfolding of events takes place. What drives this march
of media from machine-aided production of material images (photography) to the
simulation of ‘artificial and natural worlds’, and even the coming simulation
of the ‘brain itself’? What, in this pattern of seamless evolution, has he
detected? How was the bloom of interactive ‘telematic civilisation’ always
contained in the seed of photography?
Historical narratives of the kind that Rheingold and Weibel
tell are forms of teleological argument. These are arguments in which the
nature of the past is explained as a preparation for the present. The present
is understood as being prefigured in the past and is the culmination of it.
Such arguments seek to explain how things are in terms of their ‘ends’ (their
outcomes or the purposes, aims and intentions that we feel they embody) rather than in terms of prior causes. There have been many versions of such teleological
historical explanation, beginning with those that saw the world as the outcome
of God’s design, through various kinds of secular versions of grand design, of
cosmic forces, the unfolding of a world soul, through to dialectical
explanation in which the present state of things is traceable to a long
historical interplay of opposites and contradictions which inevitably move on
toward a resolution. Related, if slightly less deterministically teleological,
versions of historical explanation think in terms of history as a process of
problem solving. Often this takes the form of a relay race of great geniuses, in which each one takes up the questions left by their predecessors; in each case it is implied that the project is somehow communicated across, and carried on over, centuries of time as the final answer is sought.
Such attempts to find a (teleo)logic in history were strong
in the nineteenth century, particularly in Western Europe and North America.
Here, a dominant sense of optimism and faith in the progress of industry and
science encouraged the view that history (as the growth, evolution and maturing
of human societies) was drawing to a close.
Operating over very different timescales, both Rheingold and
Weibel continue to tell stories about the rise of new media by adopting a kind
of historical perspective which is as old as the hills. There is something of a
paradox in the way in which new media have rapidly been provided with histories
of a rather naive and uncritical (we are tempted to say old-fashioned) kind.
While we have stressed the importance of historical
knowledge and research to understanding the contemporary field of new media, it
does not, in our view, readily include these kinds of teleology which can be
highly misleading in their grand sweep and the way in which they place new
media, far too simply, as the end point of a long process of historical
development.
Seeing the limits of new media teleologies
We now look at a third and recent contribution to the
history of new media. This is a historical overview, in which Paul Mayer
identifies the ‘seminal ideas and technical developments’ that lead to the
development of computer media and communication. He traces the key concepts
which lead from an abstract system of logic, through the development of
calculating machines, to the computer as a ‘medium’ which can ‘extend new
possibilities for expressions, communication, and interaction in everyday life’
(Mayer 1999: 321).
The important point for our present discussion is that as
Mayer’s thorough historical outline of ‘pivotal conceptual insights’ proceeds,
we can also see how other histories that are quite distinct from that of the
conceptual and technical development of computing itself are entwined with the
one he traces. At various points in his history, doors are opened through which
we glimpse other factors. These factors do not contribute directly to the
development of computer media, but they indicate how quite other spheres of
activity, taking place for other reasons, have played an essential but
contingent part in the history of new media. We will take two examples.
In the first section of his history Mayer traces the
conceptual and practical leaps which led to the building of the first mainframe
computers in the 1940s. He begins his history with the project of the
late-seventeenth-century philosopher, Leibniz, to formulate a way of reasoning
logically by matching concepts with numbers, and his efforts to devise a ‘universal
logic machine’ (Mayer 1999: 4). He then points to a whole range of other
philosophical, mathematical, mechanical, and electronic achievements occurring
in the 300-year period between the 1660s and the 1940s. The history leads us to
the ideas and practical experiments in hypermedia carried out by Vannevar Bush
and Ted Nelson in the mid-twentieth century. It is a history which focuses on
that part of technological development that involves envisioning: the capacity
to think and imagine possibilities from given resources.
Clearly, many of these achievements, especially the earlier
ones, were not directed at developing the computer as a medium as we would
understand it. Such a use of the computer was not part of the eighteenth- and
nineteenth-century frame of reference: it was not a conceivable or imaginable
project. As Mayer points out, Leibniz had the intellectual and philosophical
ambitions of his period (the late seventeenth and early eighteenth centuries)
as one of the ‘thinkers who advanced comprehensive philosophical systems during
the Age of Reason’ with its interest in devising logical scientific systems of
thought which had universal validity (Mayer 1999: 4). Neither were our modern
ideas about the interpersonal communications and visual-representational
possibilities of the computer in view during the nineteenth-century phase of
the Industrial Revolution. At this time the interest in computing was rooted in
the need for calculation, ‘in navigation, engineering, astronomy, physics’ as
the demands of these activities threatened to overwhelm the human capacity to
calculate. (This last factor is an interesting reversal of the need that Vannevar Bush saw some 100 years later, in the 1950s, for a machine and a system that would augment the human capacity to cope with an overload of data and information.)
Hence, as we follow Mayer’s historical account of key
figures and ideas in the history of computing, we also see how the conceptual
development of the modern computer as medium took place for quite other
reasons. At the very least these include the projects of eighteenth-century
philosophers, nineteenth-century industrialisation, trade and colonisation, and
an early twentieth-century need to manage statistics for the governance and
control of complex societies. As Mayer identifies, it is only in the 1930s
when, alongside Turing’s concept of ‘the universal machine’ which would
automatically process any kind of symbol and not just numbers, the moment
arrives in which, ‘the right combination of concepts, technology and political
will colluded to launch the construction of machines recognisable today as
computers in the modern sense’ (1999: 9). In short, while Mayer traces a set of
chronological connections between ‘pivotal concepts’ in the history of computing,
we are also led to see:
1 That the preconditions were being established for something that was not yet conceived or foreseen: the computer as a medium.
2 That even the conceptual history of computing, formally presented as a sequence of ideas and experiments, implies that other histories impact upon that development.
To sum up, we are led to see that a major factor in the
development of computer media is the eventual impact of one set of technologies
and practices – those of computing numbers – on other sets: these being social
and personal practices of communication and aural, textual and visual forms of
representation. In short, a set of technological and conceptual developments
which were undertaken for one set of reasons (and even these, as we have seen,
were not stable and sustained, as the philosophical gave way to the industrial
and the commercial, and then the informational) have eventually come to
transform a range of image and communication media. It is also apparent that
this happened in ways that were completely unlooked for. New image and
communications media were not anticipated by the thinkers, researchers,
technologists and the wider societies to which they belonged, during the period
between the eighteenth and the mid-twentieth century in which digital computing
develops (Mayer 1999).
If this first example begins to show how teleological
accounts obscure and distort the real historical contingency of computer media,
our second example returns us to the greater historical complexity of what are
now called new media. Mayer’s focus is on the computer as a medium itself: the
symbol-manipulating, networked machine through which we communicate with
others, play games, explore databases and produce texts. Returning to our
initial breakdown of the range of phenomena that new media refers to, we must
remind ourselves that this is not all that new media has come to stand for.
Computer-mediated communication, Mayer’s specific interest, is only one key
element within a broader media landscape that includes convergences,
hybridisations, transformations, and displacements within and between all forms
of older media. These media, such as print, telecommunications, photography,
film, television and radio, have, of course, their own, and in some cases long,
histories. In the last decades of the twentieth century these histories of
older media become precisely the kinds of factors that began to play a crucial
role in the development of computer media, just as the demands of navigators or
astronomers for more efficient means of calculating did in the nineteenth century.
This is a vital point as Mayer’s historical sketch of the conceptual development of the computer ends with Alan Kay and Adele Goldberg’s
1977 prototype for an early personal computer named the ‘Dynabook’. He observes
that the ‘Dynabook’ was conceived by its designers as ‘a metamedium, or a
technology with the broadest capabilities to simulate and expand the
functionality and power of other forms of mediated expression’ (Mayer 1999:
20). Kay and Goldberg themselves make the point somewhat more directly when
they write that ‘the computer, viewed as a medium itself, can be all other
media’. In the late 1970s, Kay and Goldberg’s vision of the media that the
Dynabook would ‘metamediate’ was restricted to text, painting and drawing,
animation and music. (Subsequently, of course, with increased memory capacity
and software developments, the ‘other media’ forms which the computer ‘can be’
would include photography, film, video and TV.)
On the face of it, this seems simple enough. What Kay and
Goldberg are saying is that the computer as a ‘medium’ is able to simulate
other media. However, both they and Mayer, in his history, seem to assume that
this is unproblematic. As Mayer puts it, one of the great things about the Dynabook
as a prototype computer medium, is that it is an ‘inspiring realisation of
Leibniz’s generality of symbolic representation’ (1999: 21) due to its ability
to reduce all signs and languages – textual, visual, aural – to a binary code.
It does a great deal more besides, of course: it ‘expand[s] upon the
functionality and power of other forms of mediated expression’ (1999: 20).
However, this convergence and interaction of many previously separate media
actually makes the picture far more complicated. We have to remind ourselves that the range of ‘old’ media that the computer carries and simulates have in turn their own histories, ones which parallel, and in some cases are far older than, that of the computer.
The media which the computer ‘simulates and expands’ are
also the result of conceptual and technical, as well as cultural and economic,
histories which have shaped them in certain ways. In an expanded version of
Mayer’s history, space would need to be made for the ways in which these
traditional media forms contributed to thinking about the Dynabook concept
itself. For, if we are to understand the complex forms of new media it is not
enough to think only in terms of what the computer might have offered to do for
‘other forms of mediated expression’ but also to ask how these other media
forms shaped the kind of ‘metamediating’ that Goldberg and Kay envisaged. The
universal symbol-manipulating capacity of the computer could not, by itself,
determine the forms and aesthetics of the computer medium. This is because the
very media that the computer (as medium) incorporates (or metamediates) are not
neutral elements: they are social and signifying practices. We would want to
know, for instance, what the outcomes of other histories – the conventions of
drawing, the genres of animation, the trust in photographic realism, the
narrative forms of text and video, and the languages of typography and graphic
design, etc. – brought to this new metamedium. These are, in fact, the very
issues which have come to exercise practitioners and theorists of new media,
and which the various parts of this book discuss.
Foucault and genealogies of new media
A widely read theorist of new media, Mark Poster, has
suggested:
The question of the new requires a historical problematic, a
temporal and spatial framework in which there are risks of setting up the new
as a culmination, telos or fulfillment of the old, as the onset of utopia or
dystopia. The conceptual problem is to enable a historical differentiation of
old and new without initialising a totalising narrative. Foucault’s proposal of
a genealogy, taken over from Nietzsche, offers the most satisfactory
resolution. (Poster 1999: 12)
In this way, Poster sums up the problems we have been
discussing. How do we envisage the relationship of new and old media over time,
sequentially, and in space (what kind of coexistence or relationship with each
other and where?) without assuming that new media bring old media to some kind
of concluding state for good or bad? How do we differentiate between them
without such sweeping, universalising schemas as we met above? Foucault’s
concept of genealogy is his answer.
Jay Bolter and Richard Grusin introduce their book on new
media, entitled Remediation, with an explicit acknowledgement of their debt to
Foucault’s method:
The two logics of remediation have a long history, for their
interplay defines a genealogy that dates back at least to the Renaissance and
the invention of linear perspective. Note 1: Our notion of genealogy is
indebted to Foucault’s, for we too are looking for historical affiliations or
resonances, and not for origins. Foucault . . . characterised genealogy as ‘an examination of descent’, which ‘permits the discovery, under the unique aspect of a trait or a concept, of the myriad events through which – thanks to which,
against which – they were formed’. (Bolter and Grusin 1999: 21)
How does an idea or a practice, which for Bolter and Grusin
is the concept and practice of remediation (the way that one medium absorbs and
transforms another), reach us (descend)? What multiple factors have played a
part in shaping that process?
We should note that Poster is particularly keen to avoid
thinking of history as a process with a ‘culmination’ and end point. Bolter and
Grusin, like Foucault, are not interested in the origins of things. They are
not interested in where things began or where they finished. They are
interested in ‘affiliations’ (the attachments and connections between things)
and ‘resonances’ (the sympathetic vibrations between things). They want to know
about the ‘through’ and ‘against’ of things. Instead of images of linear
sequences and chains of events we need to think in terms of webs, clusters,
boundaries, territories, and overlapping spheres as our images of historical
process.
[Figure (Artificial Life): a simple model of the complex of histories ‘through’ and ‘against’ which new media emerge.]
Theorists of new media seeking alternative ways of thinking
about the differences and the complex connections between old and new media
have drawn upon the influential ‘genealogical’ theory of history, as argued and
put into practice in a number of major works of cultural history by the
philosopher-historian Michel Foucault. It is a historical method which offers
the possibility of thinking through new media’s relationship to the past while
avoiding some of the problems we have met above. In doing this, theorists of
new media are following in the footsteps of other historians of photography,
film, cinema and visual culture such as John Tagg (1998), Jonathan Crary (1993)
and Geoffrey Batchen (1997) who have used what has become known as a
‘Foucauldian’ perspective.
New media and the modernist concept of progress
the full aesthetic potential of this medium will be realised
only when computer artists come to the instrument from art rather than computer
science . . . Today the kind of simulation envisioned . . . requires a $10
million Cray-1 supercomputer, the most powerful computer in the world . . .
[T]he manufacturers of the Cray-1 believe that by the early 1990s computers
with three-fourths of its power will sell for approximately $20,000, less than
the cost of a portapak and editing system today . . . [F]inally accessible to
autonomous individuals, the full aesthetic potential of computer simulation will
be revealed, and the future of cinematic languages . . . will be rescued from
the tyranny of perceptual imperialists and placed in the hands of artists and
amateurs. (Youngblood 1999: 48)
In the name of ‘progress’ our official culture is striving
to force the new media to do the work of the old. (McLuhan and Fiore 1967a: 81)
In order to conceive a properly genealogical account of new
media histories we need not only to take account of the particular teleologies
of technohistory above but also the deeply embedded experience of modernism
within aesthetics.
Commentators on new media, like Gene Youngblood, frequently
refer to a future point in time when their promise will be realised. Thought
about new media is replete with a sense of a deferred future. We are repeatedly
encouraged to await the further development of the technologies which they
utilise. At times this takes the simple form of the ‘when we have the computing
power’ type of argument. Here, the present state of technological
(under)development is said to constrain what is possible and explains the gap
between the potential and actual performance (see, for example, our discussion of virtual reality).
Related to views of this kind, there are some which embody a
particular kind of theory about historical change. It is not technological
underdevelopment per se that is blamed for the failure of a new medium to
deliver its promise; rather, the culprit is seen to be ingrained cultural
resistance. Here, the proposal is that in their early phases new media are bound
to be used and understood according to older, existing practices and ideas, and
that it is largely such ideological and cultural factors that limit the
potential of new media. The central premise here is that each medium has its
own kind of essence; that is, some unique and defining characteristic or
characteristics which will, given time and exploration, be clearly revealed. As
they are revealed the medium comes into its own. This kind of argument adds
ideas about the nature of media and culture to the simpler argument about
technological underdevelopment.
Such a view has quite a long history itself, as will be seen
in the example from the pioneering writer on ‘expanded’ cinema, Gene
Youngblood, quoted at the beginning of this section. Writing in 1984, in an
essay on the then emerging possibilities of digital video and cinema (in
Druckery 1999), he looks forward to the 1990s when he foresees affordable
computers coming to possess the kind of power that, at his time of writing, was
only to be found in the $10 million Cray-1 mainframe supercomputer. Then, in a
clear example of the modernist argument that we have outlined, he adds that we
must also look forward to the time when the ‘full aesthetic potential of the
computer simulation will be revealed’, as it is rescued from ‘the tyranny of
perceptual imperialists’ (in Druckery 1999: 48). Such imperialists being, we
can assume, those scientists, artists and producers who impose their old habits
of vision and perception upon the new media.
In a more recent example, Steve Holzmann (1997: 15) also
takes the view that most existing uses of new media fail to ‘exploit those
special qualities that are unique to digital worlds’. Again, this is because he
sees them as having as yet failed to break free of the limits of ‘existing
paradigms’ or historical forms and habits. He, too, looks forward to a time
when new media transcend the stage when they are used to fulfill old purposes
and when digital media’s ‘unique qualities’ come to ‘define entirely new
languages of expression’.
As Bolter and Grusin have argued (1999: 49–50), Holzmann
(and Youngblood before him in our other example) represents the modernist
viewpoint. They believe that for a medium to be significantly new it has to
make a radical break with the past.
A major source of such ideas is to be found in one of the
seminal texts of artistic modernism: the 1961 essay ‘Modernist Painting’ by art
critic and theorist Clement Greenberg. Although the new, digital media are
commonly understood as belonging to a postmodern period, in which the cultural
projects of modernism are thought to have been superseded, Greenbergian ideas
have continued to have a considerable pull on thinking about new media.
Clearly, the point of connection is between the sense that new media are at the
cutting edge of culture, that there is an opening up of new horizons and a need
for experimentation, and the ideology of the earlier twentieth-century artistic
avant-garde movements in painting, photography, sculpture, film and video.
We meet these modernist ideas whenever we hear talk of the
need for new media to break clear of old habits and attitudes, the gravity
field of history and its old thought patterns and practices. It is also present
when we hear talk about the essential characteristics of new media; when the
talk is of the distinctive essence of ‘digitality’ as against the
‘photographic’, the ‘filmic’ or the ‘televisual’.
Greenberg himself did not think that modern art media should
or could break with the past in any simple sense. But he did think they should
engage in a process of clarifying and refining their nature by not attempting
to do what was not proper to them. This process of refinement included ditching
old historical functions that a medium might have served in the past. Painting
was the medium that interested him in particular, and his efforts were part of
his search to identify the importance of the painting in an age of mechanical
reproduction – the age of the then relatively ‘new’ media of photography and
film. He argued that painting should rid itself of its old illustrative or
narrative functions to concentrate on its formal patterning of colour and
surface. Photography was better suited to illustrative work and showed how it
was not, after all, appropriate to painting. Painting could now realise its
true nature.
Greenberg also made his arguments in the
mid-twentieth-century context of a critique of the alienating effects of
capitalism on cultural experience. He shared with other critics the view that
the heightened experiences that art had traditionally provided were being
eroded and displaced by a levelling down to mere ‘entertainment’ and popular
kitsch. He argued that the arts could save their higher purpose from this fate
‘by demonstrating that the kind of experience they provided was valuable in its
own right and not obtained from any other kind of activity’ (Greenberg 1961, in
Harrison and Wood 1992: 755). He urged that this could be done by each art
determining, ‘through the operations peculiar to itself, the effects peculiar
and exclusive to itself’ (ibid.). By these means each art would exhibit and
make explicit ‘that which was unique and irreducible’ to it (ibid.). The task
of artists, then, was to search for the fundamental essence of their medium,
stripping away all extraneous factors and borrowings from other media. It is
often thought that this task now falls to new media artists and forward-looking
experimental producers.
However, the manner in which a new medium necessarily
adopts, in its early years, the conventions and ‘languages’ of established
media is well known. There is the case of the early photographers known as the
Pictorialists, who strove to emulate the aesthetic qualities of painting,
seeing these as the standards against which photography as a medium had to be
judged. In Youngblood’s terms they would be examples of ‘perceptual
imperialists’ who acted as a brake on the exploration of the radical
representational possibilities afforded by photography as a new medium.
Similarly, it is well known that early cinema adopted the conventions of the
theatre and vaudeville, and that television looked for its forms to theatre,
vaudeville, the format of the newspaper, and cinema itself.
As we have seen, Bolter and Grusin’s theory of ‘remediation’
(1999) deploys a Foucauldian historical perspective to argue against the
‘comfortable modernist rhetoric’ of authentic media ‘essences’ and ‘breaks with
the past’ that we have discussed here. They follow McLuhan’s insight that ‘the
content of a medium is always another medium’ (1999: 45). They propose that the
history of media is a complex process in which all media, including new media,
depend upon older media and are in a constant dialectic with them (1999: 50).
Digital media are in the process of representing older media in a whole range
of ways, some more direct and ‘transparent’ than others. At the same time,
older media are refashioning themselves by absorbing, repurposing, and
incorporating digital technologies. Such a process is also implied in the view
held by Raymond Williams, whose theory of media change we discuss fully later.
Williams argues that there is nothing inherent in the nature of a media
technology that is responsible for the way a society uses it. It does not, and
cannot, have an ‘essence’ that would inevitably create ‘effects peculiar and
exclusive to itself’. In a closely argued theory of the manner in which
television developed, he observes that some 20 years passed before, ‘new kinds
of programme were being made for television and there were important advances
in the productive use of the medium, including . . . some kinds of original
work’ (Williams 1974: 30). Productive uses of a new medium and original work in
them are not precluded, therefore, by recognising their long-term interplay
with older media.
We need, then, to ask a number of questions of the modernist
and avant-garde calls for new media to define themselves as radically novel. Do
media proceed by a process of ruptures or decisive breaks with the past? Can a
medium transcend its historical contexts to deliver an ‘entirely new language’?
Do, indeed, media have irreducible and unique essences (which is not quite the
same as having distinguishing characteristics which encourage or constrain the
kind of thing we do with them)? These seem to be especially important questions
to ask of new digital media which, in large part, rely upon hybrids,
convergences and transformations of older media.
The return of the Middle Ages and other media archaeologies
This section looks at yet another historicising approach to
new media studies; here, however, insights from our encounters with new media
are drawn upon to rethink existing media histories. Such revisions imply a view of history that is far from teleological and is not based in a belief in inevitable ‘progress’. Unlike the previous examples, we turn here to a kind of historical thinking that neither looks at new media as the fulfilment of the recent past nor assumes a future time in which new media will inevitably transcend the old. Rather, it is suggested that certain uses and aesthetic
forms of new media significantly recall residual or suppressed intellectual and
representational practices of relatively, and in some cases extremely, remote
historical periods. In the context of his own argument against ‘sequential narratives’ of change in image culture, Kevin Robins observes that:
It is
notable that much of the most interesting discussion of images now concerns not
digital futures but, actually, what seemed until recently to be antique and
forgotten media (the panorama, the camera obscura, the stereoscope): from our
postphotographic vantage point these have suddenly acquired new meanings, and
their reevaluation now seems crucial to understanding the significance of
digital culture. (Robins 1996: 165)
The ludic: cinema and games
A major example of this renewed interest in ‘antique’ media
is in the early cinema of circa 1900–1920 and its prehistory in mechanical
spectacles such as the panorama. Its source is in the way the structures,
aesthetics and pleasures of computer games are being seen to represent a
revival of qualities found in that earlier medium. It is argued that this
‘cinema of attractions’ was overtaken and suppressed by what became the
dominant form of narrative cinema, exemplified by classical Hollywood in the
1930s–1950s. Now, at the beginning of the twenty-first century, changes in
media production and in the pleasures sought in media consumption, exemplified
in the form of the computer game and its crossovers with special effects
‘blockbuster’ cinema, indicate a return of the possibilities present in early
cinema. These ideas and the research that supports them are discussed in more
detail later. What is significant in the context of this section is the way
that noticing things about new media has led some of its theorists to find
remarkable historical parallels which cannot be contained within a methodology
of technological progress, but rather of loss, suppression or marginalisation,
and then return.
Rhetoric and spatialised memory
Benjamin Woolley, writing about Nicholas Negroponte’s
concept of ‘spatial data management’, exemplified in computer media’s
metaphorical desktops, and simulated 3D working environments, draws a parallel
with the memorising strategies of ancient preliterate, oral cultures. He sees
the icons and spaces of the computer screen recalling the ‘mnemonic’ traditions
of classical and medieval Europe. Mnemonics is the art of using imaginary
spaces or ‘memory palaces’ (spatial arrangements, buildings, objects, or
painted representations of them) as aids to remembering long stories and
complex arguments (Woolley 1992: 138–149). Similarly, with a focus on computer
games, Nickianne Moody (1995) traces a related set of connections between the
forms and aesthetics of role play games, interactive computer games and the
allegorical narratives of the Middle Ages.
Edutainment and the eighteenth-century Enlightenment
Barbara Maria Stafford observes that with the increasingly
widespread use of interactive computer graphics and educational software
packages we are returning to a kind of ‘oral-visual culture’ which was at the
centre of European education and scientific experiment in the early eighteenth
century (1994: xxv). Stafford argues that during the later eighteenth century,
and across the nineteenth, written texts and mass literacy came to be the only
respectable and trustworthy media of knowledge and education. Practical and visual modes of enquiry, experiment, demonstration and learning fell into
disrepute as seductive and unreliable. Now, with computer animation and
modelling, virtual reality, and even email (as a form of discussion), Stafford
sees the emergence of a ‘new vision and visionary art-science’, a form of
visual education similar to that which arose in the early eighteenth century,
‘on the boundaries between art and technology, game and experiment, image and
speech’ (ibid.). However, she argues, in order for our culture to guide itself
through this ‘electronic upheaval’ (ibid.) we will need ‘to go backward in
order to go forward’, in order to ‘unearth a past material world that had once
occupied the centre of a communications network but was then steadily pushed to
the periphery’ (ibid.: 3).
Stafford’s case is more than a formal comparison between two
periods when the oral, visual and practical dominate over the literary and
textual. She also argues that the use of images and practical experiments,
objects and apparatuses, that characterised early Enlightenment education
coincided with the birth of middle-class leisure and early forms of consumer
culture (1994: xxi). Stafford also suggests that our late twentieth- and early
twenty-first-century anxieties about ‘dumbing down’ and ‘edutainment’ are
echoed in eighteenth-century concerns to distinguish authentic forms of learning
and scientific demonstration from quackery and charlatanism. Her argument,
overall, is that the graphic materials of eighteenth-century education and
scientific experiment were the ‘ancestors of today’s home- and place-based
software and interactive technology’ (ibid.: xxiii).
In each of these cases, history is not seen simply as a
matter of linear chronology or unilinear progress in which the present is
understood mainly as the superior development of the immediate past; rather,
short-circuits and loops in historical time are conceived. Indeed, this chimes
with the postmodern view that history (certainly social and cultural history),
understood as a continuous process of progressive development, has ceased. Instead, the
past has become a vast reservoir of styles and possibilities that are
permanently available for reconstruction and revival. The most cursory glance
at contemporary architecture, interior design and fashion will show this
process of retroactive cultural recycling in action.
We can also make sense of this relation between
chronologically remote times and the present through the idea that a culture
contains dominant, residual, and emergent elements (Williams 1977: 121–127).
Using these concepts, Williams argues that elements in a culture that were once
dominant may become residual but do not necessarily disappear. They become
unimportant and peripheral to a culture’s major concerns but are still
available as resources which can be used to challenge and resist dominant
cultural practices and values at another time. We might note, in this
connection, how cyber-fiction and fantasy repeatedly dress up their visions of
the future in medieval imagery. The future is imagined in terms of the past. As
Moody puts it:

Much fantasy fiction shares a clearly defined quasi-medieval diegesis. One that
fits snugly into Umberto Eco's categorisation of the 'new middle ages' . . .
For Eco it would be entirely logical that the 'high tech' personal computer is
used to play dark and labyrinthine games with a medieval diegesis.
(Moody 1995: 61)
For Robins, the significance of these renewed interests in
the past, driven by current reflections on new media, is that they allow us to
think in non-teleological ways about the past and to recognise what ‘modern
culture has repressed and disavowed’ (1996: 161) in its overriding and often
exclusive or blind concern for technological rationalism. The discovery of the
kinds of historical precedents for new media that our examples stand for may,
in his terms, offer opportunities for grasping that new media are not best thought
of as the narrow pinnacle of technological progress. Rather, they are evidence
of a more complex and richer coexistence of cultural practices that the diverse
possibilities of new media throw into fresh relief.
A sense of déjà vu
The utopian, as well as dystopian, terms in which new media
have been received have caused several media historians to record a sense of
déjà vu, the feeling that we have been here before. In particular, the quite
remarkable utopian claims made for earlier new media technologies such as
photography and cinema have been used to contextualise the widespread
technophilia of the last fifteen or so years (e.g. Dovey 1995: 111). So, the
history in question this time is not that of the material forerunners of new
image and communication media themselves but of the terms in which societies
responded to and discussed earlier ‘media revolutions’. This is discussed more
fully later.
Two kinds of historical enquiry are relevant here. The first
is to be found in the existing body of media history, such as: literacy (Ong
2002), the printing press (Eisenstein 1979), the book (Chartier 1994),
photography (Tagg 1988), film and television (Williams 1974). These
long-standing topics of historical research provide us with detailed empirical
knowledge of what we broadly refer to as earlier ‘media revolutions’. They also
represent sustained efforts to grasp the various patterns of determination, and
the surprising outcomes of the introductions, over the long term, of new media
into particular societies, cultures and economies. While it is not possible to
transfer our understanding of the ‘coming of the book’ or of ‘the birth of
photography’ directly and wholesale to a study of the cultural impact of the
computer, because the wider social context in which each occurs is different,
such studies provide us with indispensable methods and frameworks to guide us
in working out how new technologies become media, and with what outcomes.
Second, a more recent development has been historical and
ethnographic research into our imaginative investment in new technologies, the
manner in which we respond to their appearance in our lives, and the ways in
which the members of a culture repurpose and subvert media in everyday use
(regardless of the purposes which their inventors and developers saw for them).
This is also discussed more fully later, where we deal with the concept of the
‘technological imaginary’.
Conclusion
Paradoxically, then, it is precisely our sense of the ‘new’
in new media which makes history so important – in the way that something so
current, rapidly changing and running toward the future also calls us back to
the past. This analytic position somewhat challenges the idea that new media
are ‘postmodern’ media; that is, media that arise from, and then contribute to,
a set of socio-cultural developments which are thought to mark a significant
break with history, with the ‘modern’ industrial period and its forerunner in
the eighteenth-century age of Enlightenment. We have seen that thinking in
terms of a simple separation of the present and the recent past (the
postmodern) from the ‘modern’ period may obscure as much as it reveals about
new media. We have argued instead for a history that allows for the
continuation of certain media traditions through ‘remediation’, as well as the
revisiting and revival of suppressed or disregarded historical moments in order
to understand contemporary developments. Our review of (new) media histories is
based in the need to distinguish between what may be new about our contemporary
media and what they share with other media, and between what they can do and
what is ideological in our reception of new media. In order to be able to
disregard what Langdon Winner (1989) has called ‘mythinformation’ we have argued
that history has never been so important for the student of media.
Who was dissatisfied with old media?
The question
The question that forms the title of this section is asked
in order to raise a critical issue – what were the problems to which new communications
media are the solutions? We might, of course, say that there were none. ‘New’
media were simply that – ‘new’ – in themselves and have no relation to any
limits, shortcomings, or problems that might have been associated with ‘old’
media. But the two quotes above, one referring to television and the other to
photography, can stand for many other views and comments that strongly suggest
that they do.
In thinking about such a question we will find ourselves
considering the discursive frameworks that establish the conditions of
possibility for new media. This in turn will allow us to look at some of the
ways in which previously ‘new’ media have been considered in order to
understand the discursive formations present in our contemporary moment of novelty.
In the rumours and early literature about the coming of
multimedia and virtual reality, and as soon as new media forms themselves began
to appear, they were celebrated as overcoming, or at least as having the
promise to overcome, the negative limits and even the oppressive features of
established and culturally dominant analogue media. As the above statements
about television and photography imply, in the reception of new media there
was, and still is, an implication that we needed them in order to overcome the
limits of the old.
On this basis it could seem reasonable to ask whether media
were in such bad odour in pre-digital days that a mass of criticism and
dissatisfaction formed a body of pressure such that something better was
sought. Or, alternatively, we might ask whether ideas about the superiority of
new media are merely retrospective projections or post-hoc rationalisations of
change; simply a case of wanting to believe that what we have is better than
what went before.
However, these questions are too reductive to arrive at an
understanding of how our perceptions and experiences of new media are framed.
In order to arrive at a better explanation, this section considers how the
development and reception of new media have been shaped by two sets of ideas.
First, the socio-psychological workings of the ‘technological imaginary’;
second, earlier twentieth-century traditions of media critique aimed at the
‘mass’ broadcast media and their perceived social effects. We will be
interested in these traditions to the extent that they are picked up and used
in the evaluation of new media.
The technological imaginary
The phrase 'technological imaginary', first used in
critical thought about cinema (De Lauretis et al. 1980) and now applied to new
media technologies, has roots in psychoanalytic theory. It has migrated
from that location to be more generally used in the study of culture and
technology. In some versions it has been recast in more sociological language
and is met as a ‘popular’ or ‘collective’ imagination about technologies
(Flichy 1999). Here, tendencies that may have been originally posited (in
psychoanalytical theory) as belonging to individuals are also observed to be
present at the level of social groups and collectivities. However, some of the
specific charge that the word has in psychoanalytic theory needs to be retained
to see its usefulness. The French adjective imaginaire became a noun, a name
for a substantive order of experience, the imaginaire, alongside two others – the
‘real’ and the ‘symbolic’ – in the psychoanalytic theories of Jacques Lacan.
After Lacan, imaginaire or the English ‘imaginary’ does not refer, as it does
in everyday use, to a kind of poetic mental faculty or the activity of
fantasising (Ragland-Sullivan 1992: 173–176). Rather, in psychoanalytic theory,
it refers to a realm of images, representations, ideas and intuitions of
fulfilment, of wholeness and completeness that human beings, in their
fragmented and incomplete selves, desire to become. These are images of an
‘other’ – an other self, another race, gender, or significant other person,
another state of being. Technologies are then cast in the role of such an
‘other’. When applied to technology, or media technologies in particular, the
concept of a technological imaginary draws attention to the way that
(frequently gendered) dissatisfactions with social reality and desires for a
better society are projected onto technologies as capable of delivering a
potential realm of completeness.
This can seem a very abstract notion. The Case studies in
this section show how, in different ways, new media are catalysts or vehicles
for the expression of ideas about human existence and social life. We can begin
to do this by reminding ourselves of some typical responses to the advent of
new media and by considering the recurring sense of optimism and anxiety that
each wave of new media calls up.
As a new medium becomes socially available it is necessarily
placed in relation to a culture’s older media forms and the way that these are
already valued and understood. This is seen in expressions of a sense of
anxiety at the loss of the forms that are displaced. Well-known examples of
this include the purist fears about the impact of photography on painting in
the 1840s, and of television and then video on cinema in the 1970s. More
recently, regret has been expressed about the impact of digital imaging on
photography (Ritchin 1990) and graphics software on drawing and design as they
moved from the traditional craft spaces of the darkroom and the drawing board
to the computer screen. In terms of communication media this sense of loss is
usually expressed in social, rather than aesthetic or craft terms. For
instance, during the last quarter of the nineteenth century it was feared that
the telephone would invade the domestic privacy of the family or that it would
break through important settled social hierarchies, allowing the lower classes
to speak (inappropriately) to their ‘betters’ in ways that were not permitted
in traditional face-to-face encounters (Marvin 1988). Since the early 1990s, we
have seen a more recent example in the widespread shift that has taken place
between terrestrial mail and email. Here anxieties are expressed, by some,
about the way that email has eradicated the time for reflection that was
involved in traditional letter writing and sending, leading to notorious email
'flaming' and intemperate exchanges.
Conversely, during the period in which the cultural
reception of a new medium is being worked out, it is also favourably positioned
in relation to existing media. The euphoric celebration of a new medium and the
often feverish speculation about its potential is achieved, at least in part,
by its favourable contrast with older forms. In their attempts to persuade us
to invest in the technology advertisers often use older media as an ‘other’
against which the ‘new’ is given an identity as good, as socially and
aesthetically progressive. This kind of comparison draws upon more than the
hopes that a culture has for its new media, it also involves its existing
feelings about the old (Robins 1996).
Traditional chemical photography has played such a role in
recent celebrations of digital imaging (see Lister 1995; Robins 1995), as has
television in the talking-up of interactive media. Before the emergence and
application of digital technologies, TV, for instance, was widely perceived as
a ‘bad object’ and this ascription has been important as a foil to celebrations
of interactive media’s superiority over broadcast television (Boddy 1994; see
also Case study 1.5). Television is associated with passivity, encapsulated in
the image of the TV viewer as an inert ‘couch potato’ subject to its ‘effects’,
while the interactive media ‘user’ (already a name which connotes a more active
relation to media than does ‘viewer’) conjures up an image of someone occupying
an ergonomically designed, hi-tech swivel chair, alert and skilled as they
‘navigate’ and make active choices via their screen-based interface. Artists,
novelists, and technologists entice us with the prospect of creating and living
in virtual worlds of our own making rather than being anonymous and passive
members of the ‘mass’ audience of popular television. As a broadcast medium, TV
is seen as an agent for the transmission of centralised (read authoritarian or
incontestable) messages to mass audiences. This is then readily compared to the
new possibilities of the one-to-one, two-way, decentralised transmissions of
the Internet or the new possibilities for narrowcasting and interactive TV.
Similar kinds of contrast have been made between non-linear, hot-linked,
hypertext and the traditional form of the book which, in this new comparison,
becomes ‘the big book’ (like this one), a fixed, dogmatic text which is the
prescriptive voice of authority.
So, a part of understanding the conditions in which new
media are received and evaluated involves (1) seeing what values a culture has
already invested in old media, and this may involve considering whose values
these were, and (2) understanding how the concrete objects (books, TV sets,
computers) and the products (novels, soap operas, games) of particular media
come to have good or bad cultural connotations in the first place. In order to
do this we first consider how apparent the technological imaginary is in the
ways we talk and write about media.
The discursive construction of new media
It is essential to realise that a theory does not find its
object sitting waiting for it in the world: theories constitute their own
objects in the process of their evolution. ‘Water’ is not the same theoretical
object in chemistry as it is in hydraulics – an observation which in no way
denies that chemists and engineers alike drink, and shower in, the same
substance.
(Burgin 1982: 9)
Victor Burgin offers this example of the way that the nature
of a common object of concern – water – will be differently understood
according to the specific set of concepts which are used to study it. A key
argument of post-structuralist theory is that language does not merely describe
a pre-given reality (words are matched to things) but that reality is only
known through language (the words or concepts we possess lead us to perceive
and conceive the world in their terms). Language, in this sense, can be thought
of as operating as microscopes, telescopes and cameras do: it produces
certain kinds of images of the world; it constructs ways of seeing and
understanding. Elaborated systems of language (conversations, theories,
arguments, descriptions) which are built up or evolved as part of particular
social projects (expressing emotion, writing legal contracts, analysing social
behaviour, etc.) are called discourses. Discourses, like the words and concepts
they employ, can then be said to construct their objects. It is in this sense
that we now turn to the discursive construction of new media as it feeds
(frames, provides the resources for) the technological imaginary.
On meeting the many claims and predictions made for new
media, media historians have expressed a sense of déjà vu – of having ‘seen
this’ or ‘been here’ before (Gunning 1991). This is more than a matter of
history repeating itself. This would amount to saying that the emergence and
development of each new medium occurs and proceeds technologically and socio-economically
in the same way, and that the same patterns of response are evident in the
members of the culture who receive, use and consume it. There are, indeed, some
marked similarities of this kind, but it would be too simple to leave the
matter there. To do this would simply hasten us to the ‘business as usual’
conclusion which we have rejected as conservative and inadequate. More
importantly, it would be wrong. For, even if there are patterns that recur in
the technological emergence and development of new media technologies, we have
to recognise that they occur in widely different historical and social
contexts. Furthermore, the technologies in question have different capacities
and characteristics.
For example, similarities are frequently pointed out between
the emergence of film technology and the search for cinematic form at the end
of the nineteenth century and that of multimedia and VR at the end of the
twentieth century. However, film and cinema entered a world of handmade images
and early kinds of still photographic image (at that time, a difficult craft),
of venue-based, mechanically produced theatrical spectacles in which the
‘movement’ and special effects on offer were experienced as absolutely novel
and would seem primitive by today’s standards. There was no broadcasting, and
even the telephone was a novel apparatus. And, of course, much wider factors
could be pointed to: the state of development of mass industrial production and
consumer culture, of general education, etc. The world into which our new media
have emerged is very different; it has seen a hundred years of increasingly
pervasive and sophisticated technological visual culture (Darley 1991).
It is a world in which images, still and moving, in print
and on screens, are layered so thick, are so intertextual, that a sense of what
is real has become problematic, buried under the thick sediment of its visual
representations. New media technologies which emerge into this context enter an
enormously complex moving image culture of developed genres, signifying
conventions, audiences with highly developed and ‘knowing’ pleasures and ways
of ‘reading’ images, and a major industry and entertainment economy which is
very different from, even if it has antecedents in, that of the late nineteenth
century.
What then gives rise to the sense of déjà vu mentioned
above? It is likely that it does not concern the actual historical repetition
of technologies or mediums themselves – rather, it is a matter of the
repetition of deeply ingrained ways in which we think, talk, and write about
new image and communication technologies. In short, their discursive
construction. Whatever the actual and detailed paths taken by a new media
technology in its particular historical context of complex determinations (the
telephone, the radio, TV, etc.), it is a striking matter of record that the
responses of contemporaries (professionals in their journals, journalists,
academic and other commentators) are cast in uncannily similar terms (Marvin
1988; Spigel 1992; Boddy 1994).
In noticing these things, the experience of loss with the
displacement of the old, the simultaneous judgement of the old as limited, and
a sense of repetition in how media and technological change is talked and
written about, we are ready to consider some more detailed examples of the
‘technological imaginary’ at work.
The examples above argue that the processes that determine
the kind of media we actually get are neither solely economic nor solely
technological, but that all orders of decision in the development process occur
within a discursive framework powerfully shaped by the technological imaginary.
The evidence for the existence of such a framework can be tracked back through
the introduction of numerous technologies and goods throughout the modern
period.
The return of the Frankfurt School critique in the popularisation of new media
We now return to a broader consideration of the points
raised concerning the allegedly ‘democratic’ potential of interactivity. Here,
however, we point out how a tradition of criticism of mass media finds itself
reappropriated as another discursive framework that shapes our ideas about what
new media are or could be.
This tradition of media critique expressed profound
dissatisfaction with the uses and the cultural and political implications of
broadcast media throughout the early and mid-twentieth century. Such critics of
the effects of twentieth-century mass media did not normally think that there
was a technological solution to the problems they identified. They did not
suggest that new and different media technologies would overcome the social and
cultural problems they associated with the media they were familiar with. To
the extent that they could conceive of change in their situation they saw hope
lying in social action, whether through political revolution or a conservative
defence of threatened values. In another tradition it was more imaginative and
democratic uses of existing media that were seen as the answer. Nevertheless,
the critique of mass media has become, in the hands of new media enthusiasts, a
set of terms against which new media are celebrated. The positions and theories
represented by these media critics have been frequently rehearsed and continue
to be influential in some areas of media studies and theory. Because of this
they need not be dealt with at great length here as many accessible and
adequate accounts already exist (Strinati 1995; Stevenson 1995; Lury 1992).
The ‘culture
industry’, the end of democratic participation and critical distance
From the 1920s until the present day the mass media
(especially the popular press and the broadcast media of radio and television)
have been the object of sustained criticism from intellectuals, artists,
educationalists, feminists and left-wing activists. It is a (contentious)
aspect of this critique, which sees mass culture as disempowering,
homogenising, and impositional in nature, that is of relevance in this
context. Strinati sums up such a view:

[there] is a specific conception of the
audience of mass culture, the mass or the public which consumes mass produced
cultural products. The audience is conceived of as a mass of passive consumers
. . . supine before the false pleasures of mass consumption . . . The picture
is of a mass which almost without thinking, without reflecting, abandoning all
critical hope, buys into mass culture and mass consumption. Due to the
emergence of mass society and mass culture it lacks the intellectual and moral
resources to do otherwise. It cannot think of, or in terms of, alternatives.
(Strinati 1995: 12)
Such a conception and evaluation of the ‘mass’ and its
culture was argued by intellectuals who were steeped in the values of a
literary culture. Alan Meek has described well a dominant kind of relationship
which such intellectuals and artists had to the mass media in the early and
mid-twentieth century:
The modern Western intellectual appeared as a figure within
the public sphere whose technological media was print and whose institutions
were defined by the nation state. The ideals of democratic participation and
critical literacy which the intellectual espoused have often been seen to be
undermined by the emerging apparatus of electronic media, ‘mass culture’, or
the entertainment industry.
(Meek 2000: 88)

Mass society critics feared four things:
• The debasement and displacement of an authentic organic folk culture;
• The erosion of high cultural traditions, those of art and literature;
• Loss of the ability of these cultural traditions (as the classical 'public sphere') to comment critically on society's values;
• The indoctrination and manipulation of the 'masses' by either totalitarian politics or market forces.
The context within which these fears were articulated was
the rise of mass, urban society. Nineteenth- and early twentieth-century
industrialisation and urbanisation in Western Europe and America had weakened
or destroyed organic, closely knit, agrarian communities. The sense of
identity, community membership and oral, face-to-face communication fostered
and mediated by institutions like the extended family, the village, and the
Church were seen to be replaced by a collection of atomised individuals in the
new industrial cities and workplaces. At the same time the production of
culture itself became subject to the processes of industrialisation and the
marketplace. The evolving Hollywood mode of film production, popular ‘pulp’
fiction, and popular music were particular objects of criticism. Seen as
generic and formulaic, catering to the lowest common denominators of taste,
they were assembly line models of cultural production. Radio, and later
television, were viewed as centralised impositions from above. Either as a
means of trivialising the content of communication, or as a means of political
indoctrination, they were seen as threats to democracy and the informed
critical participation of the masses in cultural and social life. How, feared
the intellectuals, given the burgeoning of mass electronic media, could people
take a part in a democratic system of government in which all citizens are
active, through their elected representatives, in the decisions a society
makes?
With the erosion of folk wisdom and morality, and the
trivialisation, commercialisation and centralisation of culture and
communications, how could citizens be informed about issues and able, through
their educated ability, to think independently and form views on social and
political issues? Critical participation demanded an ability and energy to take
issue with how things are, to ask questions about the nature or order of
things, and a capacity to envision and conceive of better states as a guide to
action. In the eyes of theorists such as those of the Frankfurt School, such
ideals were terminally threatened by the mass media and mass culture.
Further, such developments took place in the context of twin
evils. First, the twin realities of Fascism and Stalinism which demonstrated
the power of mass media harnessed to totalitarianism. Second, the tyranny of
market forces to generate false needs and desires within the populations of
capitalist societies where active citizens were being transformed into ‘mere’
consumers.
This ‘mass society theory’, and its related critiques of the
mass media, has been much debated, challenged and qualified within media
sociology, ethnography, and in the light of postmodern media theory in recent
years. Despite the existence of more nuanced accounts of the mass media which
offer a more complex view of their social significance, it has now become clear
that some of the main proponents of the twenty-first century’s new
communications media are actually celebrating their potential to restore
society to a state where the damage perceived to be wrought by mass media will
be undone. In some versions there is an active looking back to a pre-mass
culture golden age of authentic exchange and community. We can especially note
the following:
• The recovery of community and a sphere of public debate. In this formulation the Internet is seen as providing a vibrant counter public sphere. In addition, shared online spaces allegedly provide a sense of 'cyber community' against the alienations of contemporary life.
• The removal of information and communication from central authority, control and censorship.
• The 'fourth estate' function of mass media, seen here to be revived with the rise of the 'citizen journalist' as alternative sources of news and information circulate freely through 'blogs', online publishing, camera-phone photography etc.
• The creative exploration of new forms of identity and relationship within virtual communities and social networking sites.
Online communication is here seen as productive not of
‘passive’ supine subjects but of an active process of identity construction and
exchange. These arguments all in some way echo and answer ways in which
conventional mass media have been problematised by intellectuals and critics.
The Brechtian avant-garde and lost opportunities
These ‘answers’ to a widespread pessimism about mass media
can be seen in the light of another tradition in which the emancipatory power
of radio, cinema, and television (also the mass press) lay in the way that they
promised to involve the workers of industrial society in creative production,
self-education and political expression. A major representative of this view is
the socialist playwright Bertolt Brecht. Brecht castigated the form that radio
was taking in the 1930s as he saw its potentials being limited to ‘prettifying
public life’ and to ‘bringing back cosiness to the home and making family life
bearable’. His alternative, however, was not the male hobby, as described by
Boddy above, but a radical practice of exchange and networking. It is
interesting to listen to his vision of radio conceived as a ‘vast network’ in
1932:

radio is one-sided when it should be two. It is purely an apparatus for
distribution, for mere sharing out. So here is a positive suggestion: change
this apparatus over from distribution to communication. The radio would be the
finest possible communication apparatus in public life, a vast network of
pipes. That is to say, it would be if it knew how to receive as well as transmit,
how to let the listener speak as well as hear, how to bring him into a
relationship instead of isolating him.
(Brecht 1936, in Hanhardt 1986: 53)
Brecht’s cultural politics have lain behind radical
movements in theatre, photography, television and video production from the
1930s to the 1980s. In a final or latest resurgence they now inform politicised
ideas about the uses of new media. Here it is argued that new media can be used
as essentially two-way channels of communication that lie outside of official
control. Combined with mobile telephony and digital video, anti-capitalist
demonstrators are now able to webcast near-live information from their actions,
beating news crews to the action and the transmission.
Finally, it is necessary to mention the influential ideas of
a peripheral member of the Frankfurt School, Walter Benjamin. He took issue, in
some of his writing, with the cultural pessimism of his colleagues. In ‘The Work
of Art in the Age of Mechanical Reproduction’, and ‘The Author As Producer’, he
argues that photography, film, and the modern newspaper, as media of mass
reproduction, have revolutionary potential. Benjamin roots his argument in
noticing some of the distinctive characteristics of these media, and the
implications that he draws from them can be heard to echo today in the more
sanguine estimations of the potential of new (digital) media. However, Benjamin
sees that whether or not this potential will be realised is finally a matter of
politics and not technology.
Conclusion
This section has served to illustrate how the debates about
new media, what it is, what it might be, what we would like it to be, rehearse
many positions that have already been established within media studies and
critical theory. Though the debates above are largely framed in terms of the
amazing novelty of the possibilities that are opening up, they in fact revisit
ground already well trodden. The disavowal of the history of new media thus
appears as an ideological sleight of hand that recruits us to their essential
value but fails to help us understand what is happening around us.
12
New Media: Determining or Determined?
In previous sections of this book we have been looking at what kinds
of histories, definitions and discourses shape the way we think about new
media. We begin this final section by turning to examine two apparently
competing paradigms, or two distinct approaches to the study of media, both of
which underlie different parts of what will follow in this volume.
At the centre of each of these paradigms is a very different
understanding of the power media and technology have to determine culture and
society. The long-standing question of whether or not a media technology has
the power to transform a culture has been given a very high profile with the
development of new media, and it will repay the close attention we give it
here. In this section we will investigate this issue and the debates that
surround it by turning back to the writings of two key but very different
theorists of media: Marshall McLuhan and Raymond Williams. It is their views
and arguments about the issue, filtered through very different routes, that
now echo in the debate, pointed to earlier, between those who see new media as
revolutionary or as ‘business as usual’.
Although both authors more or less ceased writing at the
point where the PC was about to ‘take off’, their analysis of the relationships
between technology, culture and media continues to resonate in contemporary
thought. As media theorists, both were interested in new media. It was precisely
McLuhan’s interest to identify and ‘probe’ what he saw as big cultural shifts
brought about by changes in media technologies. Williams, too, speaks of ‘new
media’ and is interested in the conditions of their emergence and their
subsequent use and control. While McLuhan was wholly concerned with identifying
the major cultural effects that he saw new technological forms (in history and in
his present) bringing about, Williams sought to show that there is nothing in a
particular technology which guarantees the cultural or social outcomes it will
have (Williams 1983: 130). McLuhan’s arguments are at the core of claims that
‘new media change everything’. If, as McLuhan argued, media determine
consciousness then clearly we are living through times of profound change. On
the other hand, albeit in a somewhat reduced way, the ‘business as usual’ camp
is deeply indebted to Williams for the way in which they argue that media can
only take effect through already present social processes and structures and
will therefore reproduce existing patterns of use and basically sustain
existing power relations.
The status of McLuhan and Williams
In the mainstream of media studies and much cultural studies
the part played by the technological element that any medium has is always
strongly qualified. Any idea that a medium can be reduced to a technology, or
that the technological element which is admitted to be a part of any media
process should be central to its study, is strongly resisted. The grounds for
this view are to be found in a number of seminal essays by Raymond Williams
(1974: 9–31; 1977: 158–164; 1983: 128–153), which, at least in part, responded
critically to the ‘potent observations’ (Hall 1975: 81) of the Canadian
literary and media theorist Marshall McLuhan. Williams’s arguments against
McLuhan subsequently became touchstones for media studies’ rejection of any
kind of technological determinism.
Yet, and here we meet one of the main sources of the present
clash of discourses around the significance of new media, McLuhan’s ideas have
undergone a renaissance – literally a rebirth or rediscovery – in the hands of
contemporary commentators, both popular and academic, on new media. The
McLuhanite insistence on the need for new non-linear (‘mosaic’ is his term)
ways of thinking about new media, which escape the intellectual protocols,
procedures and habits of a linear print culture, has been taken up as something
of a war cry against the academic media analyst. The charge that the
neo-McLuhan cybertheorists make against media studies is made at this
fundamental, epistemological level: that it simply fails to realise that its
viewpoints (something, in fact, that McLuhan would claim we can no longer have)
and methodologies have been hopelessly outstripped by events. As an early
critic of McLuhan realised, to disagree with McLuhanite thinking is likely to
be seen as the product of ‘an outmoded insistence on the logical, ABCD minded,
causality mad, one-thing-at-a-time method that the electronic age and its
prophet have rendered obsolete’ (Duffy 1969: 31).
Both Williams and McLuhan carried out their influential work
in the 1960s and 1970s. Williams was one of the founding figures of British
media and cultural studies. His rich, if at times abstract, historical and
sociological formulations about cultural production and society provided some
of the master templates for what has become mainstream media studies. Countless
detailed studies of all kinds of media are guided and informed by his careful
and penetrating outlines for a theory of media as a form of cultural
production. His work is so deeply assimilated within the media studies
discipline that he is seldom explicitly cited; he has become an invisible
presence. Wherever we consider, in this book, new media as subject to control
and direction by human institutions, skill, creativity and intention, we are
building upon such a Williamsite emphasis.
On the other hand, McLuhan, the provoking, contentious
figure who gained almost pop status in the 1960s, was discredited for his
untenable pronouncements and was swatted away like an irritating fly by the
critiques of Williams and others (see Miller 1971). However, as Williams
foresaw (1974: 128), McLuhan has found highly influential followers. Many of
his ideas have been taken up and developed by a whole range of theorists with
an interest in new media: Baudrillard, Virilio, Poster, Kroker, De Kerckhove.
The work of McLuhan and his followers has great appeal for those who see new
media as bringing about radical cultural change or have some special interest
in celebrating its potential. For the electronic counterculture he is an
oppositional figure and for corporate business a source of propaganda – his
aphorisms, ‘the global village’ and ‘the medium is the message’, ‘function as
globally recognised jingles’ for multinational trade in digital commodities
(Genosko 1998). The magazine Wired has adopted him as its ‘patron saint’ (Wired,
January 1996).
Williams’s insights, embedded in a grounded and systematic
theory, have been a major, shaping contribution to the constitution of an
academic discipline. McLuhan’s elliptical, unsystematic, contradictory and
playful insights have fired the thought, the distinctive stance, and the
methodological strategies of diverse but influential theorists of new media. We
might say that Williams’s thought is structured into media studies while, with
respect to this discipline, McLuhan and those who have developed his ideas
stalk its margins, sniping and provoking in ways that ensure they are
frequently, if sometimes begrudgingly, referenced. Even cautious media
academics allow McLuhan a little nowadays. He is seen as a theoretically
unsubtle and inconsistent thinker who provokes others to think (Silverstone
1999: 21). It matters little if he is wrong. One or another of his insights is often
the jumping-off point for a contemporary study.
McLuhan’s major publications appeared in the 1960s, some two
decades before the effective emergence of the PC as a technology for
communications and media production. It is a shift from a 500-year-old print
culture to one of ‘electric’ media, by which he mainly means radio and
television, that McLuhan considers. He only knew computers in the form of the
mainframe computers of his day, yet they formed part of his bigger concept of
the ‘electric environment’, and he was sharp enough to see the practice of
timesharing on these machines as the early signs of their social availability.
By the 1990s, for some, McLuhan’s ideas, when applied to developments in new
media, had come to seem not only potent but extraordinarily prescient as well.
It is quite easy to imagine a student at work in some future time, who, failing
to take note of McLuhan’s dates, is convinced that he is a 1990s writer on
cyber culture, a contemporary of Jean Baudrillard or William Gibson. While this
may owe something to the way that his ideas have been taken up in the
postmodern context of the last two decades of the twentieth century by writers
such as Baudrillard, Virilio, De Kerckhove, Kroker, Kelly, and Toffler, this
hardly undermines the challenging and deliberately perverse originality of his
thought.
The debate between the Williams and McLuhan positions, and
Williams’s apparent victory in this debate, left media studies with a legacy.
It has had the effect of putting paid to any ‘good-sense’ cultural or media
theorist raising the spectre of the technological determinism associated with
the thought of McLuhan. It has also had the effect of foreclosing aspects of
the way in which cultural and media studies deals with technology, by implicitly
arguing that technology on its own is incapable of producing change. On this
view, whatever is going on around us in terms of rapid technological change,
there are rational and manipulative interests at work driving the technology in
particular directions, and it is to these that we should primarily direct our
attention. Such is the dismissal of the role of technology in cultural
change that, should we wish to confront this situation, we are inevitably faced
with our views being reduced to apparent absurdity: ‘What!? Are you suggesting
that machines can and do act, cause things to happen on their own? – that a
machine caused space flight, rather than the superpowers’ ideological struggle
for achievement?’
However, there are good reasons to believe that technology
cannot be adequately analysed only within the humanist frame Williams
bequeathed cultural and media theorists. Arguments about what causes
technological change may not be so straightforward as culturalist accusations
of political or theoretical naivety seem to suggest. In this section,
therefore, we review Williams’s and McLuhan’s arguments about media and technology.
We then examine the limits of the humanist account of technology that Williams
so influentially offered and ask whether he was correct in his dismissal of
McLuhan as a crude technological determinist. Finally, we explore other
important nonhumanist accounts of technology that are frequently excluded from
the contemporary study of media technologies.
Humanism
‘Humanism’ is a term applied to a long and recurring
tendency in Western thought. It appears to have its origins in the fifteenth-
and sixteenth-century Italian Renaissance where a number of scholars (Bruno,
Erasmus, Valla, and Pico della Mirandola) worked to recover elements of
classical learning and natural science lost in the ‘dark ages’ of the medieval
Christian world. Their emphasis on explaining the world through the human
capacity for rational thought rather than a reliance on Christian theology
fostered the ‘[b]elief that individual human beings are the fundamental source
of all value and have the ability to understand - and perhaps even to control -
the natural world by careful application of their own rational faculties’
(Oxford Companion to Philosophy). This impetus was added to and modified many
times in following centuries. Of note is the seventeenth-century Cartesian idea
of the human subject, ‘I think, therefore I am. I have intentions, purposes,
goals, therefore I am the sole source and free agent of my actions’ (Sarup
1988: 84). There is a specifically ‘Marxist humanism’ in the sense that it is
believed that self-aware, thinking and acting individuals will build a rational
socialist society. For our purposes here it is important to stress that a
humanist theory tends only to recognise human individuals as having agency (and
power and responsibility) over the social forms and the technologies they
create and, even, through rational science, the power to control and shape
nature.
Mapping Marshall McLuhan
Many of McLuhan’s more important ideas arise within a kind
of narrative of redemption. There is little doubt that much of McLuhan’s appeal
to new media and cyber enthusiasts lies in the way that he sees the arrival of
an ‘electronic culture’ as a rescue or recovery from the fragmenting effects of
400 years of print culture. McLuhan has, indeed, provided a range of
ideological resources for the technological imaginary of the new millennium.
Here, we outline McLuhan’s grand schema of four cultures,
determined by their media forms, as it is the context in which some important
ideas arise; ideas which are, arguably, far more important and useful than his
quasi-historical and extremely sweeping narrative. We then concentrate on three
key ideas. First, ‘remediation’, a concept that is currently much in vogue and
finds its roots in McLuhan’s view that ‘the content of any medium is always
another medium’ (1968: 15–16). Second, his idea that media and technologies are
extensions of the human body and its senses. Third, his famous (or notorious)
view that ‘the medium is the message’. This section is the basis for a further
discussion, in 1.6.4, of three ‘theses’ to be found in McLuhan’s work: his
extension thesis, his environmental thesis, and his anti-content thesis.
A narrative of redemption
McLuhan’s view of media as technological extensions of the
body is his basis for conceiving of four media cultures which are brought about
by shifts from oral to written communication, from script to print, and from
print to electronic media. These four cultures are: (1) a primitive culture of
oral communication, (2) a literate culture using the phonetic alphabet and
handwritten script which co-existed with the oral, (3) the age of
mass-produced, mechanical printing (The Gutenberg Galaxy), and (4) the culture
of ‘electric media’: radio, television, and computers.
‘PRIMITIVE’ ORAL/AURAL CULTURE
In pre-literate ‘primitive’ cultures there was a greater
dominance of the sense of hearing than in literate cultures when, following the
invention of the phonetic alphabet (a visual encoding of speech), the ratio of
the eye and the ear was in a better state of equilibrium. Pre-literate people
lived in an environment totally dominated by the sense of hearing. Oral and
aural communication were central. Speaking and hearing speech was the
‘ear-man’s’ main form of communication (while also, no doubt, staying alert to
the sound of a breaking twig!). McLuhan is not enthusiastic about this kind of
culture. For him it was not a state of ‘noble savagery’ (Duffy 1969: 26).
Primitive man lived in a much more tyrannical cosmic machine
than Western literate man has ever invented. The world of the ear is more
embracing and inclusive than that of the eye can ever be. The ear is
hypersensitive. The eye is cool and detached. The ear turns man over to
universal panic while the eye, extended by literacy and mechanical time, leaves
some gaps and some islands free from the unremitting acoustic pressure and
reverberation.
THE CULTURE OF LITERACY
McLuhan says that he is not interested in making judgements
but only in identifying the configurations of different societies (1968: 94).
However, as is implied in the above passage, for McLuhan the second culture,
the culture of literacy, was an improvement on pre-literate, oral culture. For
here, via the alphabet and writing, as extensions of the eye, and, in its later
stages, the clock, ‘the visual and uniform fragmentation of time became
possible’ (1968: 159). This released ‘man’ from the panic of ‘primitive’
conditions while maintaining a balance between the aural and the visual. In the
literate, scribal culture of the Middle Ages McLuhan sees a situation where
oral traditions coexisted alongside writing: manuscripts were individually
produced and annotated by hand as if in a continual dialogue, writers and
readers were hardly separable, words were read aloud to ‘audiences’, and the
mass reproduction of uniform texts by printing presses had not led to a
narrowing dominance and authority of sight over hearing and speaking. Writing
augmented this culture in specialised ways without wholly alienating its
members from humankind’s original, participatory, audio-tactile universe (Theal
1995: 81).
PRINT CULTURE
For McLuhan, the real villain of the piece is print culture
– the Gutenberg Galaxy with its ‘typographic man’, where the sensory alienation
which was avoided in literate culture occurs. Here we meet the now familiar
story of how the mass reproduction of writing by the printing press, the
development of perspectival images, the emerging scientific methods of
observation and measurement, and the seeking of linear chains of cause and
effect came to dominate modern, rationalist print culture. In this process its
members lost their tactile and auditory relation with the world, their rich
sensory lives were fragmented and impoverished as the visual sense dominated.
In McLuhan’s terms this is a culture in which the ‘stepping up of the visual
component in experience . . . filled the field of attention’ (1962: 17). The
culture was hypnotised by vision (mainly through its extensions as typography
and print) and the ‘interplay of all the senses in haptic harmony’ dies. Fixed points
of view and measured, separating distances come to structure the human
subject’s relation to the world. With this ‘instressed concern with one sense
only, the mechanical principle of abstraction and repetition emerges’, which
means ‘the spelling out of one thing at a time, one sense at a time, one mental
or physical operation at a time’ (1962: 18). If the primitive pre-literate
culture was tyrannised by the ear, Gutenberg culture is hypnotised by its eye.
McLuhan’s ideas about television received very short shrift from British
cultural and media studies, even in its formative period (see Hall 1975).
The fourth culture, electronic culture, is ‘paradise
regained’ (Duffy 1969). Developing from the invention of telegraphy to
television and the computer, this culture promises to short-circuit that of
mechanical print and we regain the conditions of an oral culture in acoustic
space. We return to a state of sensory grace; to a culture marked by qualities
of simultaneity, indivisibility and sensory plenitude. The haptic or tactile
senses again come into play, and McLuhan strives hard to show how television is
a tactile medium.
The terms in which McLuhan described this electric age as a
new kind of primitivism, with tribal-like participation in the ‘global village’,
resonate with certain strands of New Age media culture. McLuhan’s
all-at-onceness or simultaneity, the involvement of everyone with everyone,
electronic media’s supposedly connecting and unifying characteristics, are easy
to recognise in (indeed, in some cases have led to) many of the terms now used
to characterise new media – connectivity, convergence, the network society,
wired culture, and interaction.
First, and most uncontentiously because it was an idea that
McLuhan and Williams shared, is the idea that all new media ‘remediate’ the
content of previous media. This notion, as developed by McLuhan in the 1960s,
has become a key idea, extensively worked out in a recent book on new media. In
Remediation: Understanding New Media (1999), Jay David Bolter and Richard
Grusin briefly revisit the clash between Williams and McLuhan as they set out
their own approach to the study of new media. They define a medium as ‘that
which remediates’. That is, a new medium ‘appropriates the techniques, forms,
and social significance of other media and attempts to rival or refashion them
in the name of the real’ (ibid.: 65). The inventors, users, and economic
backers of a new medium present it as able to represent the world in more
realistic and authentic ways than previous media forms, and in the process what
is real and authentic is redefined (ibid.). This idea owes something to
McLuhan, for whom ‘the “content” of any medium is always another medium’ (1968:
15–16).
Bolter and Grusin have something interesting to say about
Williams and McLuhan which bears directly upon our attempt to get beyond the
polarised debates about new media. They agree with Williams’s criticism that
McLuhan is a technological determinist who single-mindedly took the view that
media technologies act directly to change a society and a culture, but they
argue that it is possible to put McLuhan’s ‘determinism’ aside in order to
appreciate ‘his analysis of the remediating power of various media’. Bolter and
Grusin encourage us to see value in the way that McLuhan ‘notices intricate
correspondences involving media and cultural artefacts’ (1999: 76), and they
urge us to recognise that his view of media as ‘extensions of the human
sensorium’ has been highly influential, prefiguring the concept of the cyborg
in late twentieth-century thought on media and cyberculture or technoculture.
It is precisely this ground, and the question of the relationship between human
agency and technology in the age of cybernetic culture, which the
neo-McLuhanites attempt to map.
Extending the sensorium
McLuhan reminds us of the technological dimension of media.
He does so by refusing any distinction between a medium and a technology. For
him, there is no issue. It is not accidental that he makes his basic case for a
medium being ‘any extension of ourselves’ (1968: 15) by using as key examples
the electric light (ibid.) and the wheel (ibid.: 52) – respectively a system
and an artefact which we would ordinarily think of as technologies rather than
media. Basically, this is no more than the commonplace idea that a ‘tool’ (a
name for a simple technology) is a bodily extension: a hammer is an extension
of the arm or a screwdriver is an extension of the hand and wrist.
In The Medium is the Massage (McLuhan and Fiore 1967a)
McLuhan drives this point home. We again meet the wheel as ‘an extension of the
foot’, while the book is ‘an extension of the eye’, clothing is an extension of
the skin, and electric circuitry is an ‘extension of the central nervous
system’. In other places he speaks of money (1968: 142) or gunpowder (ibid.:
21) as a medium. In each case, then, an artefact is seen as extending a part of
the body, a limb or the nervous system. And, as far as McLuhan is concerned,
these are ‘media’.
McLuhan conflates technologies and mediums in this way
because he views both as part of a larger class of things; as extensions of the
human senses: sight, hearing, touch, and smell. Wheels for instance, especially
when driven by automotive power, radically changed the experience of travel and
speed, the body’s relationship to its physical environment, and to time and
space. The difference between the view we have of the world when slowly
walking, open on all sides to a multisensory environment, or when glimpsed as
rapid and continuous change through the hermetically sealed and framing window
of a high-speed train, is a change in sensory experience which did and
continues to have cultural significance. (See, for instance, Schivelbusch
1977.) It is this broadening of the concept of a medium to all kinds of
technologies that enabled McLuhan to make one of his central claims: that the
‘medium is the message’. In understanding media, it matters not, he would
claim, why we are taking a train journey, or where we are going on the train.
These are irrelevant side issues which only divert us from noticing the train’s
real cultural significance. Its real significance (the message of the medium
itself) is the way it changes our perception of the world.
McLuhan also asserts (he doesn’t ‘argue’) that such
extensions of our bodies, placed in the context of the body’s whole range of
senses (the sensorium), change the ‘natural’ relationships between the sensing
parts of the body, and affect ‘the whole psychic and social complex’ (1968:
11). In short, he is claiming that such technological extensions of our bodies
affect both our minds and our societies. In The Gutenberg Galaxy (1962: 24) he
expresses the idea of technological extension more carefully when he says,
‘Sense ratios change when any one sense or bodily or mental function is
externalised in technological form.’ So, for McLuhan, the importance of a
medium (seen as a bodily extension) is not just a matter of a limb or
anatomical system being physically extended (as in the hammer as ‘tool’ sense).
It is also a matter of altering the ‘ratio’ between the range of human senses
(sight, hearing, touch, smell) and this has implications for our ‘mental
functions’ (having ideas, perceptions, emotions, experiences, etc.).
Media, then, change the relationship of the human body and
its sensorium to its environment. Media generally alter the human being’s
sensory relationship to the world, and the specific characteristics of any one
medium change that relationship in different ways. This is McLuhan’s broad and
incontestable premiss upon which he spins all manner of theses – some far more
acceptable than others. It is not hard to see how such a premiss or idea has
become important at a time of new media technologies and emergent new media
forms.
The medium is the message
As we saw above, in what has been widely condemned as an
insupportable overstatement, McLuhan concludes from his idea of media as
extensions of man that ‘understanding media’ has nothing to do with attending
to their content. In fact he maintains that understanding is blocked by any
preoccupation with media content and the specific intentions of media
producers. He views the ‘conventional response to all media, namely that it is
how they are used that counts’, as ‘the numb stance of the technological idiot.
For the “content” of a medium is like the juicy piece of meat carried by the
burglar to distract the watchdog of the mind’ (1968: 26).
McLuhan will have no truck with questions of intention
whether on the part of producers or consumers of media. In a seldom-cited
but telling passage in Understanding Media (1968: 62) he makes it clear that
‘It is the peculiar bias of those who operate the media for the owners that
they be concerned about program content.’ The owners themselves ‘are more
concerned about the media as such’. They know that the power of media ‘has
little to do with “content”’. He implies that the owner’s preoccupation with
the formula ‘what the public wants’ is a thin disguise for their knowing lack
of interest in specific contents and their strong sense of where the media’s
power lies.
Hence his deliberately provocative slogan ‘The medium is the
message’. This is where his use of the electric light as a ‘medium’ pays off.
It becomes the exemplary case of a ‘medium without a message’ (1968: 15).
McLuhan asserts that neither the (apparent and irrelevant) messages that it
carries (the words and meanings of an illuminated sign) nor its uses
(illuminating baseball matches or operating theatres) are what is important
about electric light as a medium. Rather, like electricity itself, its real
message is the way that it extends and speeds up forms of ‘human association
and action’, whatever they are (1968: 16). What is important about electric
light for McLuhan is the way that it ended any strict distinction between night
and day, indoors and outdoors and how it then changed the meanings (remediated)
of already existing technologies and the kinds of human organisation built
around them: cars can travel and sports events can take place at night, factories
can operate efficiently around the clock, and buildings no longer require
windows (1968: 62). For McLuhan, the real ‘“message” of any medium or
technology is the change of scale or pace or pattern that it introduces into
human affairs’ (1968: 16). Driving his point home, and again moving from
technology to communication media, he writes:
The message of the electric light is like the message of
electric power in industry. Totally radical, pervasive, and decentralised. For
the electric light and power are separate from their uses, yet they eliminate
time and space factors in human association exactly as do radio, telegraph,
telephone and TV, creating involvement in depth. (McLuhan 1968: 17)
Also, like the effects of the electric light on the
automobile, McLuhan claims that the content of any medium is another medium
which it picks up and works over (remediation).
McLuhan’s absolute insistence on the irrelevance of content
to understanding media needs to be seen as a strategy. He adopts it in order to
focus his readers upon:
1 the power of media technologies to structure
social arrangements and relationships, and
2 the mediating aesthetic properties of a media
technology. These properties mediate our relations to one another and to the
world (electronic broadcasting as against one-to-one oral communication or
point-to-point telegraphic communication, for instance). Aesthetically, they
claim our senses in different ways: the multidirectional simultaneity of sound
as against the exclusively focused attention of a ‘line’ of sight, the fixed,
segmenting linearity of printed language, the high resolution of film or the
low resolution of TV, etc.
We should now be in a better position to see what McLuhan
offers us in our efforts to ‘understand new media’, and why his work has been
seen to be newly important in the context of new media technologies:
• McLuhan stresses the physicality of technology,
its power to structure or restructure how human beings pursue their activities,
and the manner in which extensive technological systems form an environment in
which human beings live and act. Conventional wisdom says that technology is
nothing until it is given cultural meaning, and that it is what we do with
technologies rather than what they do to us that is important and has a bearing
on social and cultural change. However, McLuhan’s project is to force us to
reconsider this conventional wisdom by recognising that technology also has an
agency and effects that cannot be reduced to its social uses.
• In his conception of media as technological extensions of the body and its senses, as ‘outerings’ of what the body itself once enclosed, he anticipates the networked, converging, cybernetic media technologies of the late twentieth/early twenty-first centuries. He also distinguishes them from earlier technologies as being more environmental. In his words, ‘With the arrival of electric technology, man extended, or set outside himself, a live model of the central nervous system itself’ (1968: 53). This is qualitatively different from previous kinds of sensory extension where ‘our extended senses, tools, and technologies’ had been ‘closed systems incapable of interplay or collective awareness’. However, ‘Now, in the electric age, the very instantaneous nature of co-existence among our technological instruments has created a crisis quite new in human history’ (1962: 5). McLuhan’s sweeping hyperbolic style is much in evidence in that last statement. However, the evolution of networked communication systems and present anticipations of a fully functioning, global neural net are here prefigured in McLuhan’s observations of broadcast culture in the 1960s.
• McLuhan’s ideas have been seen as the starting point for explanation and understanding of the widely predicted conditions in which cybernetic systems have increasingly determining effects upon our lives. At a point in human history where for significant numbers of people ‘couplings’ with machines are increasingly frequent and intimate, and where our subjectivity is challenged by this new interweaving of technology into our everyday lives, he forces us to reconsider the centrality of human agency in our dealings with machines and to entertain a less one-sided view.
It is McLuhan’s view that these mediating factors are qualities of the media technologies themselves, rather than outcomes of the way they are used, and it is this view that is criticised by Williams and many others in media studies.
Williams and the social shaping of technology
We noted at the outset of this section that media studies
has by and large come to ignore or reject the views of Marshall McLuhan in
favour of Raymond Williams’s analysis of similar terrain. In this section we
draw out the major differences in their approaches to the question of
technology’s relation to culture and society.
Human agency versus technological determination
Williams clearly has McLuhan’s concept of the ‘extensions of
man’ in mind when he writes that ‘A technology, when it has been achieved, can
be seen as a general human property, an extension of a general human capacity’
(1974: 129; our italics). McLuhan is seldom interested in why a technology is
‘achieved’, but this is a question that is important for Williams. For him ‘all
technologies have been developed and improved to help with known human
practices or with foreseen and desired practices’ (ibid.). So, for Williams,
technologies involve precisely what McLuhan dismisses. First, they cannot be
separated from questions of ‘practice’ (which are questions about how they are
used and about their content). Second, they arise from human intention and
agency. Such intentions arise within social groups to meet some desire or
interest that they have, and these interests are historically and culturally
specific.
McLuhan holds that new technologies radically change the
physical and mental functions of a generalised ‘mankind’. Williams argues that
new technologies take forward existing practices that particular social groups
already see as important or necessary. McLuhan’s ideas about why new
technologies emerge are psychological and biological. Humans react to stress in
their environment by ‘numbing’ the part of the body under stress. They then
produce a medium or a technology (what is now frequently called a prosthesis)
which extends and externalises the ‘stressed out’ sense or bodily function.
Williams’s argument for the development of new technologies is sociological. It
arises from the development and reconfiguration of a culture’s existing
technological resources in order to pursue socially conceived ends.
McLuhan insists that the importance of a medium is not a
particular use but the structural way that it changes the ‘pace and scale’ of
human affairs. For Williams, it is the power that specific social groups have
that is important in determining the ‘pace and scale’ of the intended
technological development – indeed, whether or not any particular technology is
developed (see Winston 1998). Williams’s emphasis called for an examination of
(1) the reasons for which technologies are developed, (2) the complex of
social, cultural, and economic factors which shape them, and (3) the ways that
technologies are mobilised for certain ends (rather than the properties of the
achieved technologies themselves). This is the direction which the mainstream
of media studies came to take.
The plural possibilities and uses of a technology
Where, for the most part, McLuhan sees only one broad and
structuring set of effects as flowing from a technology, Williams recognises
plural outcomes or possibilities. Because he focuses on the issue of intention,
he recognises that whatever the original intention to develop a technology
might be, subsequently other social groups, with different interests or needs,
adapt, modify or subvert the uses to which any particular technology is put.
Where, for McLuhan, the social adoption of a media technology has determinate
outcomes, for Williams this is not guaranteed. It is a matter of competition
and struggle between social groups. For Williams, the route between need,
invention, development, and final use or ‘effect’ is not straightforward. He
also points out that technologies have uses and effects which were unforeseen
by their conceivers and developers. (A point with which McLuhan would agree.)
Overall, Williams’s critique of McLuhan adds up to the premiss that there is
nothing in a particular technology which guarantees or causes its mode of use,
and hence its social effects. By viewing media the way he does, he arrives at
the opposite conclusion to McLuhan: what a culture is like does not directly
follow from the nature of its media.
Concepts of technology
We have noted how broadly, following a basic
(nineteenth-century) anthropological concept of ‘man’ as a tool user, McLuhan
defines a technology and how he subsumes media within this definition without
further discussion. Williams does not. First, he distinguishes between various
stages or elements in a fully achieved technology. The outcome of this process
is subject to already existing social forces, needs and power relations.
In line with the ‘social shaping of technology’ school of
thought (Mackenzie and Wajcman 1999), Williams is not content to understand
technologies only as artefacts. In fact the term ‘technology’ makes no
reference to artefacts at all, being a compound of the two Greek roots techne,
meaning art, craft or skill, and logos, meaning word or knowledge (Mackenzie
and Wajcman 1999: 26). In short, technology in its original form means
something like ‘knowledge about skilful practices’ and makes no reference at
all to the products of such knowledge as tools and machines. So, for Williams,
the knowledges and acquired skills necessary to use a tool or machine are an
integral part of any full concept of what a technology is. McLuhan is largely
silent on this, his attention being fully centred upon the ways in which
technologies ‘cause’ different kinds of sensory experience and knowledge
ordering procedures.
The social nature of a media technology
Williams takes the technology of writing, which was so
important in McLuhan’s scheme of things, as an example (Williams 1981: 108). He
differentiates between:
• Technical inventions and techniques upon which a technology depends: the alphabet, appropriate tools or machines for making marks, and suitable surfaces for accurately retaining marks;
• The substantive technology which, in terms of writing, is a distribution technology (it distributes language), and this requires a means or form – scrolls of papyrus, portable manuscripts, mass-produced printed books, letters, or emails and other kinds of electronic text;
• The technology in social use. This includes (a) the specialised practice of writing, which was initially restricted to ‘official’ minorities and then opened up, through education, to larger sections of society. But always, each time this happened, it was on the basis of some kind of argued need (the needs of merchants, of industrial workers, etc.); and (b) the social part of the distribution of the technologically reproduced language (reading), which again was only extended in response to perceived social needs (efficient distribution of information, participation in democratic processes, constituting a market of individuals with the ability to consume ‘literature’, etc.).
As Williams points out, at the time of his writing in 1981,
after some thousands of years of writing and 500 years of mass reproduction in
print, only 40 per cent of the world’s population were able to read and hence
had access to written texts. In this way, Williams argues that having noted the
strictly technical and formal aspects of a technology we are still crucially
short of a full grasp of what is involved. For these basic techniques and forms
to be effective as a technology within a society, we also have to add the
ability to read and to be constituted as part of a readership or market by publishers.
Simply put, writing cannot be understood as a communications technology unless
there are readers. The ability to read, and the control of, access to, and
arrangements for learning to read, are part of the distributive function of the
technology of writing. In this sense, Williams argues, a full description of a
technology, both its development and its uses, is always social as well as
technical and it is not simply a matter of the ‘social’ following the
technological as a matter of ‘effects’. Clearly this is an argument that can be extended to new media, as policy debates about the growing ‘digital divide’ illustrate. The extent to which a technology can have transformative ‘effects’ depends largely on pre-existing patterns of wealth and power.
The concept of a medium
While McLuhan uses the term ‘medium’ unproblematically and
is quite happy to see it as a kind of technology, Williams finds the term
problematic and he shares with some other theorists (Maynard 1997) an uneasiness
about conflating ‘media’ and ‘technology’. It is often implicit for Williams
that a medium is a particular use of a technology; a harnessing of a technology
to an intention or purpose to communicate or express.
When is a technology a medium?
Here we might take the much-considered case of photography.
Clearly there is a photographic technology; one in which optical and mechanical
systems direct light onto chemically treated surfaces which then become marked
in relation to the way that configurations of light fall on that surface. This,
however, is not a medium. The manufacture of silicon chips, a technical process
upon which the manufacture of computers now depends, uses this photographic
technology. It is used to etch the circuits on the microscopic chips. This is a
technological process – a technology at work. However, another use of the
photographic technology is to make pictures – to depict persons or events in
the world. This may also be a technology at work. However, when it is said that
these pictures or images provide us with information, represent an idea,
express a view, or in some way invite us to exercise our imaginations in
respect to the contents and forms of the image, then we may say that
photography is being used as a medium. Or, more accurately, the technology of
photography is being used as a medium of communication, expression,
representation or imaginative projection. On this line of argument, a medium is
something that we do with a technology. Clearly, what we do needs to be of an order
that the technology can facilitate or support but it does not necessarily arise
from the technology itself. Having an intention for a technology is not
synonymous with the technology per se. A technology becomes a medium through
many complex social transformations and transitions; it is, in Williams’s
reading, profoundly the product of culture and not a given consequence of
technology.
A problem with binary definitions
Williams is also wary
about the theoretical implications that the term ‘medium’ has come to carry.
First, he criticises and virtually dismisses it as always being a misleading
reification of a social process. Second, he sees that it is also a term that is
used to recognise the part that materials play in a practice or process of
production, as in artistic processes where the very nature of paint, ink, or a
certain kind of camera will play a part in shaping the nature of an artistic
product (1977: 159).
Medium as a reification of a social process
When he thinks about the sense in which a medium is a
reification, McLuhan can be seen as very much in the centre of Williams’s line
of fire. Williams uses the following seventeenth-century statement about the
nature of vision to demonstrate what he sees to be the major difficulty, still
present in contemporary thought, with the concept of a ‘medium’: ‘to the sight
three things are required, the Object, the Organ and the Medium’ (1977: 158).
The problem, he argues, is that such a formulation contains
an inherent duality. A ‘medium’ is given the status of an autonomous object (or
the process of mediation is given the status of a process that is separate from
what it deals with) which stands between and connects two other separate
entities: that which is mediated (an object) and that which receives the
results of the mediating process (the eye). With language as his example,
Williams points out that when this concept of a medium is being used, ‘Words
are seen as objects, things, which men [sic] take up and arrange into
particular forms to express or communicate information which, before this work
in the “medium” they already possess’ (1977: 159).
Williams argued against this position – for him the process
of mediation is itself constitutive of reality; it contributes to the making of
our realities. Communication and interaction are what we do as a species. The
‘medium’ is not a pre-given set of formal characteristics whose effects can be
read off – it is a process that itself constitutes that experience or that
reality. So for Williams to argue that ‘the medium is the message’ is to
mistake and to reify an essentially social process taking place between human
agents and their interests as if it were a technological object outside of
human agency. As a theoretical conception which structures thought, it necessarily
leaves us with sets of binary terms: the self and the world, subject and
object, language and reality, ideology and truth, the conscious and
unconscious, the economic base and the cultural superstructure, etc.
Medium as material
One way of avoiding this problem is to narrow the definition
of a medium. This is the other direction which Williams’s thought on the
subject takes. He recognises that a ‘medium’ can also be understood as ‘the
specific material with which a particular kind of artist worked’, and ‘to
understand this “medium” was obviously a condition of professional skill and
practice’ (Williams 1977: 159). The problem here, writes Williams, is that even
this down-to-earth sense of a medium is often extended until it stands in for
the whole of a practice, which he famously defines as ‘work on a material for a
specific purpose within certain necessary social conditions’ (1977: 160). Once
again we see that Williams wants to stress that a medium is only part of a
wider practice, a material that is worked upon to achieve human purposes
pursued in determining social contexts; a means to an end.