The Dark Side of Cyberspace1
Internet Content Regulation and Child Protection
David Oswell
Abstract This paper considers the question of how to protect our children in conditions of information and communication technology convergence. It focuses on three traditional centres of action - government, industry and parent - and in the context of the European Union. Firstly, it argues how the European Commission, in tackling the issue of internet content regulation and child protection, draws on an 'advanced liberal' form of governance. Drawing on a particular example from the UK, it secondly considers how industry has been constructed as a responsible actor and thus how the internet industry in the UK has aligned itself with the programmatic objectives of the European Commission. However, industry has also come up with regulatory solutions which are more ambivalent with regard to the European Commission objectives and the traditional principles of media and communication content regulation. Thirdly, it argues that the European Commission objective of making parents responsible for children's internet use in the home faces considerable resistance, inasmuch as the agency and geography of domestic supervision are both changing and highly contested.
Introduction
Imagine a scene. Alice, a seven-year-old girl, has been watching a television documentary on tigers in India. An advertisement interrupts the flow: 'Web TV. Two clicks away from a world of adventure'. The programme returns. A presenter, public service voice and colonial dress, sits in the jungle waiting for the tigers to appear. These animals are scarce. Over the last 100 years their population has declined by 95 per cent. The jungle is eaten into by building constructors, engineers, local villagers and European television crews. Alice, fascinated by the fate of this beautiful creature, has ignored the calls, from her mother downstairs, to turn the television off and come down to eat her dinner. Her father is watching the news in the living room. The programme ends and Alice clicks on her recently acquired digital web TV to her web browser and goes in search of the Asian tiger. She types in 'Asian Tiger' and the search is on. Her excitement is evident. Her colour printer is switched on to capture, on paper, those exquisite stripes. Top ten of 374,806 matches. Of those ten, two refer to tigers, one to all things Korean, two to the Asian economic crisis, one Sunday Times article by Rajpal Abeynajake and four Asian sex links: 'Free XXX hardcore pics of young asian backdoor girls...'2 The picture is clear.
In this paper I consider the fiercely debated question of how to protect our children in conditions of information and communication technology convergence. However, despite the ostensible simplicity of the question, I argue that the central terms of this question are themselves unstable. What does 'convergence' mean? What do we mean by 'childhood'? Who constitutes this community of concerned actors and what is their role in defining the problem and providing the solutions? What, for example, is the role of government, industry or parents? However, my intention in this paper is not to make a postmodern argument in which everything is assumed to be in flux, a constant instability. Instead I show how the stability of these terms is dependent on their effective mobilisation and translation within a field of government. In particular, I focus my argument on the context of the European Union and the UK and I structure my argument according to three traditional centres of action: government, industry and parents. This rather crude initial formulation of the problem allows me to make visible some of the significant changes in content regulation and child protection.
In the first section of the paper I show how the European Commission, in tackling the issue of internet content regulation and child protection, draws on a form of governance that we can call 'advanced liberal'.3 In doing so, the differentiation of content into illegal and harmful content and the form of governance adopted have been designed to facilitate regulation across legal and cultural differences within the European Union. A central mechanism in this particular form of advanced liberal governance involved making both the industry and the parent responsible for content regulation. In doing so there is a clear shift from traditional forms of Government (involving statutory mechanisms) to governance (whereby authority is both devolved and aligned).
In the second section of the paper, I show, drawing on a particular example from the UK, how industry has been constructed as a responsible actor and thus how the internet industry in the UK has aligned itself with the programmatic objectives of the European Commission. However, the industry, in the USA, UK and elsewhere, has also come up with regulatory solutions which are more ambivalent with regard to the European Commission objectives and the traditional principles of media and communication content regulation. I focus in particular on the PICS protocol and argue that, although such 'technologies' facilitate regulation at the level of reception (and hence they are in alignment with the European Commission objectives), they also, paradoxically, raise serious questions concerning the agency of regulation (inasmuch as PICS involves hybrid human and non-human agency) and concerning the normative model of society upon which content regulation has been historically based. In the third section, I argue that the European Commission objective of making parents responsible for children's internet use in the home faces considerable resistance. I argue that, although the expertise that has been important in configuring parents as responsible vis-à-vis their children's television and video consumption is now being deployed as a resource for parents vis-à-vis their children's internet use, the agency and geography of domestic supervision are both changing and highly contested. Finally, I conclude by raising some questions concerning policy and the place of the child. The research is ongoing and is based on the analysis of policy documentation, press articles, and interviews with those involved in policy making and implementation.
Government
Government and industry are interested in producing, and reproducing, a cyber-literate generation which, it is said, will increase national and regional competitiveness. For example, a report by the High-Level Expert Group of the European Commission states in its introductory chapter that '[t]he debate on the technological challenges posed by the digital convergence of ICTs [information and communication technologies] follows a long tradition of concern that Europe is lagging behind in major fields of leading-edge technology such as semiconductors, microelectronics and other ICTs considered crucial for its overall competitiveness'. The report continues:
In our interim report we set out a vision which recognised the tremendous opportunities new ICTs could offer, such as the potential for substantial productivity increases and for the emergence of many new and improved products and services. At the same time, we warned that converting this potential into actual gains in productivity, living standards and quality of life would require a lengthy process of learning and institutional change.4
The home, school and library have become key sites through which the
information society can be construed as a ’learning society’ which in
turn, it is argued, fosters and provides the ground for a competitive
information economy. Unsurprisingly, these sites, and the construction
of an information society therein, have become intensely contested.
There is considerable concern, expressed by government, parents,
teachers and others, that children’s use of the World Wide Web, e-mail,
internet relay chat and other forms of internet communications should
not be detrimental to their well-being. Children should be able to
explore the internet in a safe and secure environment. In the USA
legislative initiatives have been pursued (such as the Communications
Decency Act, 1996, declared unconstitutional in 1997) and in a recent
example Bob Franks and John McCain have put forward the Children’s
Internet Protection Act which would amend the Communications Act of
1934 and require schools and libraries to register with the Federal
Communications Commission that they have installed filtering or
blocking software so as to protect minors from material deemed harmful. The American Library Association (ALA), which has a long history of protecting First Amendment freedoms, has been contesting the Act on the grounds that it would restrict children's access to legitimate internet sites which display, for example, sexually graphic material, such as HIV/AIDS education sites. In the UK, and more widely in Europe, children's internet access at home has been problematised and there has been a clear hesitancy in resolving the problem through legislative measures. It is this particular context that I will explore in more detail. In this section of my paper I discuss how regulatory initiatives in the UK and Europe have developed, how the problem has been defined and how regulatory actors have been both constructed and mobilised.
For the European Commission the question of how to regulate the internet in order to protect children poses a problem of how to define problematic content (and acceptable use) and how to regulate with regard to such matters across legal and cultural differences between the member states. Even if we stay with widely held definitions of child protection in the UK and types of content which might constitute harm to the child, there are a number of possibilities seemingly overlooked by the European Commission: content which is harmful to children when consumed by them; content which is harmful to members of the child population when consumed by others (ie incitement); harm resulting from children abused in the process of producing the content (ie child pornography as abuse and violation); content which is harmful because of the very nature of the content irrespective of its 'effect' (ie discrimination); and content which is harmful to the child neither in the context of production nor consumption, but which constitutes a violation of the image of the child (eg morphing of images of children in sexually explicit situations).
explicit situations). These complex and different forms of relations
between image and context of production, distribution and consumption
are ones that could have easily provided the basis for alternative forms
of governmental thinking. In addition, earlier press reporting of child
protection and internet content concerns rarely presented any consistent
distinction between different types of content or constituency of user nor
questioned exactly where harm might occur. The press tended to
construe the issue in terms of a general problem of internet
pornography. In an early example, even The Guardian, a left liberal
UK newspaper, carried the following copy after the James Bulger case:
’Forget the video nasty: the latest moral panic is computer porn’.5
To all intents and purposes, a rather prosaic formulation has been constructed by the European Commission. Two important documents in this respect are the Communication from the European Commission on Illegal and Harmful Content on the Internet and the European Commission Green Paper on the Protection of Minors and Human Dignity in Audiovisual and Information Services. Both documents were published in November 1996. The European Commission has identified two different types of content: illegal material, which is prohibited to everyone regardless of age; and harmful material, which might affect the physical and mental development of children, but which can be consumed by adults. Whereas the former category would constitute 'a general category of material that violates human dignity, primarily consisting of child pornography, extreme gratuitous violence and incitement to racial and other hatred, discrimination, and violence', the latter constitutes a much broader range of material of a more 'adult' nature.
In the context of the European Union, settling on the distinction between harmful and illegal content frames a subtle set of negotiations and mobilisations between the European Commission, member states, industry and non-governmental actors and offers a clear direction for policy. Moreover, given legal and cultural differences across and within European member states, the degree of agreement among policy actors
at the level of the European Commission is perhaps surprising with
regard to the distinction between ’illegal’ and ’harmful’ content and the
different regimes of governance that such a distinction frames. Despite
differences in the legal age of sexual consent across different member
states, and despite the different constructions of childhood that such
legal differences rest upon, the distinction between illegal and harmful
content is both a point of agreement and a means of mobilising
different institutional machineries. In relation to illegal material, member
states are advised, for example, in the Communication from the
Commission on Illegal and Harmful Content on the Internet (1996), ’to
ensure the application of existing laws’ and also to launch ’concrete
measures to reinforce co-operation’ with other member states ’in the
context of Justice and Home Affairs’. By contrast, in relation to harmful
material, cultural difference is seen as a significant issue to be taken
into account, but not necessarily overcome: ’It is therefore indispensable
that international initiatives take into account different ethical standards
in different countries in order to explore appropriate rules to protect
people against offensive material whilst ensuring freedom of
expression.’6
Moreover, the child is construed differently in relation to each of the two problems. In relation to illegal content, the child is constituted as both image and victim of abuse. In relation to harmful content, the child is constituted as in danger from consuming particular forms of content. 'It would be dangerous to amalgamate separate issues such as children accessing pornographic content for adults, and adults accessing pornography about children.'7 For the European Commission the dominant concern is for the image of the child (for the adult), not the image (of the adult) for the child: 'Priorities should clearly be set and resources mobilised to tackle the most important issues, that is the fight against criminal content - such as clamping down on child pornography, or use of the Internet as a new technology for criminals.'8
While the former provides the basis of consensus and a winnable battle
(especially given wider public concern, as presented in the press, about
child abuse, paedophiles and sex tourists), the latter is more
problematic and is seen to need more localised mechanisms of
governance.
The institution of these distinctions and their mobilisation across member states needs to be seen in the context of the systems of European governance deployed in order to establish both 'harmonisation' and 'subsidiarity'.9 Instead of the application of common, and harmonised,
regulatory standards across the member states, we see an attempt to
facilitate mutual recognition between these states and the principle of
subsidiarity (which states that action should only be taken at the level of the European Union in policy areas best dealt with at that level and not at a national level) deployed. Regulatory measures at a European level take the form of an advanced liberal governance, such that the governance of a safe and secure internet, vis-à-vis child protection and internet
content, is conducted through the establishment of responsible economic
and social actors at national and sub-national levels. In this sense, in
using the term 'advanced liberal governance', I am not simply arguing
that a model of regulation has been taken from telecommunications
(with the focus on liberalisation of markets, economic competition
regulation and industry self-regulation), but suggesting that the form of
regulation adopted draws on deeper, social and historical shifts in the
nature of governance. Following the work of Michel Foucault, Jacques
Donzelot and Bruno Latour, Nikolas Rose has argued that advanced
liberal governance works not through mechanisms of direct control, but
through ’action at a distance’. Rose argues that increasingly society
and the economy are governed by rendering responsible economic and
social actors who work in alignment with the programmatic thinking of
government. The role of experts and the construction of knowledge are
together an important factor in this alignment inasmuch as localised
actors are seen to know how to act appropriately on the basis of freely
chosen and consulted expertise. Industry and parents make choices on
the basis of available knowledge offered by experts and in relation to
concerns as to what is, for example, normal or pathological, healthy or
unhealthy. In this sense, advanced liberalism makes ’freedom’ not the
antithesis of government, but its basis. Moreover, substantial research
in governmentality studies has argued that the ’child’, particularly with
respect to the family and private life more generally, has become a
central mechanism through which individuals and populations are now
regulated.10
For the European Commission, these advanced liberal forms of
governance provide significant resources for thinking about content regulation because, on the one hand, they allow the possibility of some
form of harmonisation across member states, whilst avoiding lengthy
and difficult negotiations vis-à-vis the construction of trans-European legislation and, on the other hand, they do not tie industry or consumers to legislation which might prohibit, or forestall, the intended advance toward an 'information society'. European governance gets worked out
in the technical detail, not in the grand schemes of philosophers. Thus
the Green Paper on the Protection of Minors and Human Dignity states
that ’[w]hether regulation or self-regulation is the solution, the
Commission must ensure that the measures adopted are not
discriminatory and that their application is in proportion to the objective
pursued'.11 Furthermore, the Green Paper on the Convergence of the Telecommunications, Media and Information Technology Sectors and the Implications for Regulation: Towards an Information Society Approach is
much clearer in its support for self-regulation: ’The global nature of the
platform and the difficulty of exercising control within a given Member
State are leading to solutions which draw on self-regulatory practices by
industry rather than formal regulation, accompanied by technological
solutions to ensure that parents take greater responsibility.'12 In doing
so, the sovereignty of individual national government is not subsumed
into an overarching European super-state, but rather acts as a crucial
relay between supra-, sub- and national levels.13 As Paul Hirst and
Grahame Thompson, in their work on globalisation, have argued: ’The
nation state is central to this process of "suturing": the policies and practices of states in distributing power upwards to the international level and downwards to sub-national agencies are the sutures that will hold the system of governance together.'14
Industry
However, the European Commission's solution to child protection has
faced difficulties at the level of national member states and, although
these difficulties have to a large extent been overcome, it looks set to
face further problems as the different mechanisms of governance
become fully operationalised. The first problem concerns industry
representation and the second concerns the deeper restructuring of the
principles of content regulation.
Let me deal with the first problem. As stated above, the European
Commission has been keen for industry to become a responsible
economic and social actor. The European Commission wants to
delegate responsibility and authority downwards. It wants industry to
regulate itself and to provide solutions to the problems raised by global
internet communications. In order to do this, industry needs to be able
to represent itself. That is, it needs to construct spokespersons able to
represent the interests of industry in policy discussions at national,
European and international levels and to ’discipline’ its constituents
according to those perceived interests.I will focus my analysis on the
UK.
Whereas established industrial sectors have established representatives,
the fledgling internet industry, in the early to mid-1990s, had no such
spokespersons. However, in 1996, when concerns about child
protection and internet content were coming to a head, there were no
clearly identified industrial actors who might represent internet service
providers (ISPs) and content providers (although there were competing
groupings). Nor was there an existing industry regulator to take on the
role of ensuring self-regulation (although the existing media and
communications regulators - the Office of Telecommunications (Oftel),
the British Board of Film Classification (BBFC) and the Independent
Television Commission (ITC) - were waiting in the wings). At the time,
industrial actors showed no signs of responsibility (as construed by
government). Industry was vehemently protecting its interests and its
definition of the internet as the land of the wild frontier, a land of
freedom and libertarian values.
An important factor which enabled, in part, the European Commission to be so confident in its reports in November 1996 was the public change of face by the assemblage of companies that constituted the UK internet industry and its recognition that some form of content regulation was necessary. This sea change of opinion took place in discussions over the summer of 1996 and came to fruition in the autumn. I will
focus on one point of discussion. On 25 August 1996 The Observer
splashed on its front pages the news that Demon Internet Service
Provider had facilitated the distribution of child pornography. This
public display of the problem did not yet constitute a ’moral panic’, as
it was far from clear at this point whose interests, apart from some
universal idea of the child in need of protection, were being
represented. Nevertheless, in response to the press article, a loose
affiliation of Internet Content Providers, Internet Service Providers (ISPs)
and consultants (under the heading of the Internet Developers
Association) called a meeting in Kensington, London. At the meeting
were various internet people (content creators, ISPs), journalists, the
police, regulators (BBFC, ITC and Oftel), libertarian pressure groups
and a couple of academics including myself. Various actors within this
scenario made various claims - one after the other - about children,
about the public and about the internet. These constituencies (child,
public, internet) were in both a political and aesthetic sense represented
in their absence and without their consent. Of particular importance
was the address given by a Superintendent of the Metropolitan Police
Clubs and Vice Squad. After he had presented a series of extreme
images of child sexual abuse found on the internet and suggested the
possibility of draconian policing measures, the focus of the assembled
audience was visibly rearticulated in terms of a set of problems
concerning child protection. At the same meeting Peter Dawe, founder
of the ISP Pipex, announced that he was putting money (via the Dawe
Charitable Trust) into a project that would lead to the establishment of
the internet monitoring body Internet Watch Foundation (IWF). Industry
representation and self-regulation had begun to emerge.
This meeting was but one of a series of meetings through which a policy community was formed and clearly defined spokespersons constructed. What is clear is that industry self-representation and self-regulation with regard to the internet was not a significant issue in the
UK until this moment. Until this moment there were no strong claims to
represent the internet industry or to articulate its interests.15 In order to
sit at the table, the industry needed to represent itself with a single
voice. The Metropolitan Police were perfectly clear on this point.
Policy actors, then, do not simply assemble into policy communities who
then make decisions. On the contrary, authoritative actors are
constructed in the mobilisation of a group of decision makers. In
addition, there are substantial differences between the issue networks
which provide a wider constituency of opinion. Child protection groups
such as Childnet International and NCH - Action for Children are more
likely to have the ear of policy makers than libertarian groups, such as
the Campaign for Internet Freedom and Feminists Against Censorship.
It is important, then, that certain groups are stitched out of the policy
process. In this respect, it is significant that civil liberties groups have
been, to a large extent, excluded from the major policy discussions in
this area.16 This, in turn, helps to explain why child protection issues vis-à-vis the internet take a particular form in the UK.
The second problem is far more serious and might lead potentially, I
would argue, to a restructuring of the principles of content regulation.
As yet, the European Commission has failed to recognise the far-
reaching consequences of its endorsement of regulatory protocols
developed by industry. With respect to the problem of illegal material,
the mechanisms developed by industry have been relatively
straightforward. The IWF has provided an exemplary model (itself
partly based on the Dutch Meldpunt model) for initiatives concerned
with monitoring illegal and potentially illegal internet content and with
providing a relay between internet user, ISP, enforcement agencies and
national jurisdiction. A user could come across content that they
considered illegal and report the content to the IWF. The IWF would
then, depending on their judgement as to the nature of the content,
follow a clearly defined procedure in terms of reporting the content provider to the ISP and the police and removing the offending material.17 In the UK, for example, there is existing legislation which
can be used to prohibit the
consumption and distribution of certain
types of content. This legislation, concerning obscenity, indecency and
the protection of children, could be used (and has been used) against
those who distribute child pornography on the internet. Such legislation
in the UK includes: the Obscene Publications Acts (1959 and 1964), the Protection of Children Act (1978), the Indecent Displays Act (1981), the
UK Telecommunications Act (1984) and the Criminal Justice and Public Order Act (1994).18
However, with respect to harmful content, the regulatory protocols
developed predominantly in the USA, but used and applied
internationally, have serious ramifications for the principles of content
regulation. The innovation of filtering and blocking software and of PICS protocols significantly challenges the traditional principles of media and communications content regulation. In this sense, although the
European Commission has endorsed such initiatives and welcomes their
development and application, it has failed to recognise how these
innovations raise important questions concerning the agency of
regulation (vis-à-vis questions about human and non-human agency) and
concerning the model of ’society’ underpinning content regulation. In
my view the Green Paper on Convergence was rather too sanguine
when in December 1997 it stated that: ’the difficulty of enforcing
safeguards in the context of harmful and illegal content on the Internet
provides another example of how convergence is challenging
traditional regulatory approaches to implementation, whilst not
invalidating the principle that rules are seeking to protect.'19
Earlier media and communication content regulation was based on an
assumed technological difference between different industries and
delivery systems (eg radio, television, video and telecommunications).
Convergence in general and the internet in particular have thrown into
question the principles of content regulation.20 Film, television and
video regulation in the UK, as in other countries, has focused on
centralised institutional control through bodies such as the British Board
of Film Classification, the Broadcasting Standards Commission (BSC),
the Independent Television Commission and the BBC Board of
Governors. These governmental bodies have relied, and continue to
rely, on uniform notions of classification standards and values which
can be applied to the social body irrespective of the diversity and
heterogeneity of that body; and on particular types of person, such as
’responsible parents’, who will be able to enforce these standards
within the private context of the home.
For example, the classification of video according to age criteria (Universal, Parental Guidance, 12 years and over, 15 years and over and 18 years and over) assumes a unitary social body upon which
such classificatory divisions can be applied. These classifications are
derived from particular normative (and normalising) discourses of the
child and family as constructed by, to use Donzelot’s term, ’psy’
expertise and knowledge (ie psychology, psychiatry and paediatrics).
Despite the fact that these classification systems rest upon dividing
practices which differentiate the population into normal and
pathological, they are applied to a bounded social space, namely
society (or its equivalences, the 'nation' or 'population'). It seems good common sense that one cannot have one system of classification (and its
embedded value system) applied to one social group and a different
system applied to another social group. Similarly, the uniform systems
of classification, as constituted by content regulatory bodies, are
policed through a unitary notion of the ’responsible’ supervisor. Again,
in the case of video content classification, the parent is construed as
responsible by way of a singular vision of ’good parenting’,
’appropriate children’s conduct’ and so on. Content regulatory
agencies cannot approve one form of responsibility for one localised
domestic context and an alternative form for another domestic context.
Thus, irrespective of how such classificatory systems might actually be
used within the home, there is one common standard and a unitary
notion of society (and an equally singular conception of its well-being).
Such thinking has appeared common sense, inasmuch as it would
appear that different constituencies within the social body could not,
and should not be able to, adopt different criteria of classification and
regulatory standards. However, much thinking within these traditional
regulatory bodies has failed to take account of the radical implications
of research (albeit limited) within media and cultural studies on how
parents and young people interpret and use classification systems in a
localised domestic context. For example, Julian Wood shows how
video classifications might actually attract children to the content (ie as
’forbidden fruit’) and David Buckingham has shown how children and
young people contest the explicit labelling of particular videos.
Buckingham also points to the way in which labelling systems can
function as important boundary markers, such that a '15' rated video
might act both as a warning that the material might be upsetting to
younger children and also as an index of the video’s stronger and more
exciting content.21 This research suggests that classificatory systems are
not applied wholesale in the context of use (eg the home) nor are they
simply ’used’ differently in different contexts, but that classification
labels take on a significance which is both contextualised and localised
(ie they become different standards).
The internet industry’s attempt to regulate itself,
to align itself with the
programmes of government, and to make itself responsible has involved
the innovation, development and adoption of regulatory protocols which
both establish a new contract with parents and, paradoxically, severely
question the advanced liberal contract with government. A central issue
here, as I stated above, concerns the agency of regulation and the
model of society underpinning regulatory principles. Recent regulatory innovations concerning internet content foreground the inventiveness and variety of governance, not its singularity. Regulation of internet
content is being thought in terms of more dispersed, individualised and
localised forms of governance which do not assume that there is a
social totality (ie society) upon which, and for which, common
standards can be applied. The new regulatory thinking tends to be
framed, I would argue misleadingly so, in terms of the invention of
’technological solutions’ to ’social problems’. The so-called
’technological solutions’ include filtering and blocking software, such as
Net Nanny, Cyber Patrol, CYBERsitter and SurfWatch, and labelling protocols such as Platform for Internet Content Selection (PICS). I will briefly focus on PICS, which is fast becoming an industry standard, has large-scale support across industry and government and is the most innovative in terms of the issues I am focusing on in this article.
PICS is a set of technical specifications that help software and rating
services to work together. It is claimed that PICS makes classification
and restriction possible for the internet. In relation to internet content,
the thinking behind the PICS system is that ’labelling’ and ’rating’ are
conceptualised as two distinct functions. Internet sites are tagged with
’neutral labels’ (indicating, for example, nudity, sexual content and
violence) which might then be rated by the user or by a third party
rating agency. Underlying such regulatory thinking is the idea that
there is a transparency between the label and what is labelled (ie the
content). However, it is claimed that PICS does not, in itself, pre-determine the criteria for labelling systems nor the use of such systems:
With its recent explosive growth, the Internet now faces a problem
inherent in all media that serve diverse audiences: not all materials
are appropriate for every audience ... we can meet diverse needs
by controlling reception rather than distribution. In the TV industry,
this realisation has led to the V-chip, a system for blocking reception
based on labels embedded in the broadcast stream ... PICS...
establishes Internet conventions for label formats and distribution
methods, while dictating neither a labeling vocabulary nor who
should pay attention to which labels.22
PICS has agency. It ’establishes’, but does not ’dictate’. It is a timid
creature which is spoken for by others. The anti-censorship lobby claim
that PICS works alongside the image of childhood innocence and that it
assists a more insidious form of censorship. The child protectionists and
some within industry claim that far from working for the repressive state,
PICS works for anybody. They claim that the technology allows parents
to restrict their children’s access to sexual or violent images, businesses
to prevent employees from visiting recreational sites during work time,
and government to prohibit access to illegal material. PICS blocks inappropriate material, but it founds such 'appropriateness' on 'neither an objective nor universal measure'. For example, Paul Resnick and
James Miller argue that appropriateness depends on at least three
factors: the supervisor (’parenting styles differ, as do philosophies of
management and government’), the recipient (’what’s appropriate for
one 15-year-old may not be for an eight-year-old, or even all 15-year-
olds’) and the context (’a game or chat room that is appropriate to
access at home may be inappropriate at work or school’). They
conclude:
Computer software can implement access controls that take account
of all these factors.... The software checks the labels to determine
whether to permit access to particular materials. It may permit
access for some users but not others, or at some times but not
others.23
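To make the logic of such reception-side control concrete, the following sketch (in Python) is purely illustrative: it is not drawn from the PICS specification or from any actual filtering product, and the label format, category names, thresholds and site address are all hypothetical. It simply models the separation the paper describes between a descriptive label attached to content and a policy, chosen by a supervisor, that decides access according to recipient and context.
```python
# Illustrative sketch only: a toy model of reception-side filtering in the
# spirit of PICS-style labelling. The label format, category names and
# policy structure are invented for this example and do not reproduce the
# actual PICS specification or any real rating service.

# A 'label' describes content; it carries no judgement about who may see it.
label = {
    "url": "http://example.org/tiger-habitats",   # hypothetical site
    "rating_service": "example-rating-agency",    # hypothetical third party
    "categories": {"nudity": 0, "violence": 1, "language": 0},
}

# A 'policy' is chosen by the supervisor (parent, employer, school) and makes
# the judgement: which category levels are acceptable, for whom, and where.
policy = {
    "max_levels": {"nudity": 0, "violence": 2, "language": 1},
    "min_age": 0,                    # minimum recipient age for this profile
    "contexts": {"home", "school"},  # contexts in which this profile applies
}

def permit_access(label: dict, policy: dict, recipient_age: int, context: str) -> bool:
    """Return True if the labelled content may be shown to this recipient
    in this context under the supervisor's policy."""
    if context not in policy["contexts"]:
        return False
    if recipient_age < policy["min_age"]:
        return False
    # Block if any labelled category exceeds the threshold set by the supervisor.
    for category, level in label["categories"].items():
        if level > policy["max_levels"].get(category, 0):
            return False
    return True

print(permit_access(label, policy, recipient_age=7, context="home"))   # True
print(permit_access(label, policy, recipient_age=7, context="work"))   # False
```
The point of the sketch is only that the label itself decides nothing: the same label yields different outcomes depending on the supervisor's policy, the recipient and the context, which is precisely where the dispersal of regulatory agency discussed below is located.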
There are two difficult issues here. Firstly, in traditional content regulatory bodies the process of classification has been seen as a predominantly human endeavour, such that complex issues regarding interpretation and context can be discussed, even if the final decisions sometimes seem unduly simplistic. In contrast, in relation to the PICS
protocol and the classification systems designed to operationalise it,
agency is dispersed within complex networks of human and non-human.
A significant fear is that access to internet content can be denied to
particular constituencies of user and that any transparency and
accountability in the decision-making process will be sealed within the
black box of an expert system. Secondly, although PICS has no say in
how the content is labelled, it does the work for the third party rating
agencies. The rating of sites is pluralised inasmuch as any number of
bodies can act as rating agencies. In this sense, there is potential for
the localisation of classification and labelling criteria, such that a parent
could choose from any number of third party rating agencies adopting
different criteria for classification. The notion of a common set of values
as applied to a unitary social body is clearly foregone. In practice,
there is a hegemonic struggle for dominance of labelling and
classification criteria. Currently a consortium of groups including
Childnet International, the Electronic Commerce Forum, the Australian
Broadcasting Authority and the Recreational Software Advisory Council
(RSAC), under the collective heading of the International Working
Group on Content Rating, are seeking to devise a suitable system of
content description and rating which could be international in its
adoption.
Parents
The European Commission's objective of making parents responsible for
children’s use of the internet is, similarly, met with difficulties at a local
level. The concern to make parents responsible for the governance of
their children’s access to media and communications content is not in
itself a new phenomenon. The history of domestic ICT reception
provides evidence of how normalising discourses at both expert and
popular levels construct the responsibility of the parent.24 Those
concerned with supervision of children’s access to the internet have
drawn on, and attempted to mobilise, a familiar set of experts and
discourses to assist anxious parents. On the fly-sheet of a recent book
(widely publicised in the UK and USA press), entitled Caught in the
Net: How to Recognise the Signs of Internet Addiction and a Winning
Strategy for Recovery, its author, Dr Kimberly S. Young, is described as
’committed to expanding the body of knowledge about Internet use and
to helping people seek help for their
Internet-dependence problems’.
Young is fighting the ’dark side’. She is on a mission:
For Internet addicts as well as their parents, spouses, friends, and
employers, Caught in the Net offers guidance on where and how to
seek help from counselors, therapists, and other professionals who
take this affliction seriously. For mental health professionals, this
book provides insights into the nature and causes of Internet
addiction and encourages counselors and therapists to expand their
addiction recovery programs to address the specific problems of
Internet addicts.25
The book is addressed on the one hand to a lay readership of parents,
and other relays of governance, and on the other to experts who can
proffer advice and provide techniques to counter the problem.
However, the internet poses a different set of issues for regulation. The
assemblage of actors which provide the conditions for supervision and
responsible parenting have increasingly come to include the non-human
as well as the human. For example, in the pages of the National
Enquirer of 23 April 1999 there is an image of a suited man, left
arm on a desk with a computer and a CD-ROM balanced on the
keyboard. The caption for the image reads, ’Det. Dietl Battling Evil’.
The article, entitled ’Parents can protect their kids from Internet porn!’,
quotes the retired detective of New York City’s finest:
There are thousands upon thousands of porno sites on the Internet
that display hard-core sex photos, movies and stories.... Even more
dangerous are the child molesters who use the Internet. Perverts
are not just hanging around the parks. They’re coming into your
homes on your kid’s computers.
We are then told that Detective Dietl has been ’battling evil his whole
life’. The article tells us he ’helped solve two of New York City’s most
horrendous crimes’: the rape of a nun in 1981 and the murder of 10
people, mainly children, in 1984. The man’s credentials spread well
beyond the city’s police force. He has been the subject of a book, One
Tough Cop, and a movie. ’Now Dietl wants to wipe out evil on the
Internet.’ These assembled simulacra come to bear on our perception,
not simply of the internet and the crusade of one brave man, but of the
way in which internet content regulation is construed in the evocative
reservoir of popular images of the police and police work. However,
Dietl is not offering his services simply as a policeman; more
importantly he is construed as one part in an expert system which can
search through a computer’s memory for key words, images and
computer codes and retrieve evidence (even that which has been
deleted) of the pornographic sites and paedophile-ridden chat room
exchanges that might have kept the attention of our vulnerable young.
Although it is clear that Detective Dietl is potentially on to a good thing (using his name and reputation to sell a computer system), there is an
interesting tension between the well-established generic conventions of
the public persona of the police detective (established through television series, film, detective fiction, articles in the press and crime magazines)
and routinised expert systems that provide the ground upon, and
against, which the detective makes his reputation through intuition and
personality.26 Detective Dietl’s computer program One Tough Computer
COP (which is available for $19.99 by post or at stores including Toys
’R’ Us and Marshalls) is but one of many initiatives designed to tackle
children’s relation to the internet. It is also but one of many ways
through which the ’cybercop’ announces not simply new mechanisms of
policing, but novel means for articulating technology and governance.27
It offers the possibility of new forms of supervision.
What is suggested in this anecdotal evidence is that supervision can no
longer simply involve the parent, for example, watching the child
watching television or allowing the child to watch television on its own
within the context of specific viewing conditions. In relation to broadcast
television, the ’responsible parent’, we have often been told, would know
the programmes available and they would know how programmes are
scheduled. To put this schematically, the conditions for responsible
supervision have involved ’natural’ perception, clearly labelled and
scheduled content and clearly defined and enclosed spaces of reception.
In relation to the internet there are just too many web sites, discussion lists,
internet relay chat sites and so on. In this new context, responsibility has
involved the construction of hybrid (human and non-human) forms of
supervision, whereby a large element of trust is placed by the parent in
software, such as that provided by Detective Dietl. The parent must trust
the software to stop their child’s access to certain sites, but not others.
Responsibility is thus constituted as a facet of neither the parent nor the software alone, but across both.28
Traditional forms of parental supervision are also seriously questioned
with the rise of new internet technologies because the geography of
domestic supervision seems to have shifted and is consequently
contested.29 Traditionally, the living room has been constructed as a
space in which ICTs are both used by the family and supervised therein.
As Jon Courtenay Grimwood, writing in the parents’ pages in The
Guardian, suggests:
Don’t take the computer away.... Make the computer visible.
Move the PC out of your child’s bedroom. You don’t need to stare
over the kid’s shoulder but he or she does need to know
you’re
there.... Make the child keep a log of all the time spent online....
Encourage other [off-line] activities.30
These localised techniques are aligned with others such as labelling,
filtering and blocking systems and with legislation, monitoring and
policing. They provide mechanisms, for example, through which a
relation between parent and child can be constructed as a relation of
supervision. In doing so, supervision is construed as a relation of
power between see-er and seen: a panopticon of the living room.
However, Sonia Livingstone and a team of researchers from across
Europe, in a project that has been referred to as the ’new Himmelweit
research’, have argued that the social relations of young people’s ICT
use is shifting from the living room to the bedroom and from the family
to friendship networks. They talk of the emergence of new ’bedroom
cultures'.31 For Livingstone and Bovill, the shift in domestic ICT
consumption from a family-centred sitting room to a child and teen-
centred bedroom suggests an innovation in the complex negotiations of
the home. Livingstone and her colleagues are keen to stress that the
amount of time children and young people spend at home with their
ICTs is a consequence of ’a lack of things to do in the area where they
live, their parents’ fears for their safety outside the home and the easy
attractions of an increasingly personalised media environment inside the
home’. And yet press coverage in the UK has, in some cases, added a
more conventional spin.32 For example, The Express, a middlebrow
national newspaper, in an editorial entitled ’Electronic thrills rob
children of a real life’, stated that:
Far more worrying is the survey’s finding that parents are colluding
in the transformation of their children’s bedrooms into multimedia
centres because they see the use of computers, television and
stereos as a safe alternative to the streets. This must isolate children
from their parents as well as the wider environment which teenagers
say they find ’boring’.33
For The Express, domestic space is articulated through the image of the
bedroom as prison cell. Sitting above an article entitled ’Climate of
fear keeps children prisoners in their own bedrooms’, an image is
presented of three boys, visible through chiaroscuro lighting, in a sparse
bedroom furnished with a computer, television, video game and mobile
phone. On the television screen the words ’GAME OVER’ sit silently.
On top of the television set, three videos are perched. One of them
bears the title, ’SLASHERS’. This is an image, the article tells us, of
’THE INMATES OF BEDROOM WING’.34 Bedroom cultures are re-
construed here within an older, more insidious set of discourses which pathologise children and young people's domestic ICT consumption.35
The pathologised image of the solitary child sitting in the darkened
sitting room watching family television provides some of the conditions
for a new set of fears about the ’dark side of cyberspace’.
Recent research on the domestic regulation of children’s ICT use
provides evidence that regulation in the home is not simply imposed by
the parent on the child. Regulations are negotiated and resisted, but
also accepted. For example, Livingstone and Bovill show how parents
adopt a range of techniques for regulating their children’s ICT use.
These include ’positive’ strategies, such as parents talking with their
children about a variety of media, and ’negative’ strategies, such as
limiting or controlling viewing.36 Buckingham, in his research on children's television use, importantly shows how children and parents negotiate the 'rules of viewing': 'the attempt to find a balance between
shielding your children from undesirable realities and preparing them
for the complex demands of adult life is a dilemma that is incapable of
easy resolution’.37 The space of supervision is also a space of freedom.
It is a space that relies upon and is constructed in order to facilitate the
child’s autonomy. It would be a mistake though to simply conceive of
supervision, in this broad sense, as a question of ’the application of
fixed rules or prescriptions’ nor even as the negotiations between
’parent’ and ’child’.38 Supervision relies on a wider set of associations
(eg ’psy’ experts, regulators and media institutions) which are
mobilised and bear on domestic regulation, and the space of
supervision is itself contested and shifting.
Questions concerning policy and childhood
I want to conclude with three comments. Firstly, questions concerning internet content regulation and child protection make visible how governance is a socio-technical process and that it involves the
assembling of localised practices. In this sense Latour’s notion of the
’oligopticon’, as presented in his recent work on virtuality, is fruitful for
imagining future possibilities of child protection and governance of
internet content. The neologism 'oligopticon' combines opticos (vision) with oligos (small). The oligopticon 'is not what sees everything, but what sees a little bit'.39 In his discussion of virtual Paris, Latour
argues that:
if you multiply all the sites inside a City like Paris ... you end up re-
localising this notion of Society that has escaped us for so long,
before the computer. In other words, when we didn’t have the
computerisation, we couldn’t follow, literally we couldn’t trace the
localisation of Society as a whole. So we were forced to imagine
that there was a whole Society, invisible but structuring us. Now
through the numeration and digitalisation we have the possibility of
following the whole loci, all the parts of where Paris is produced.40
Perhaps not Paris, but Brussels. And even though the regimen of
advanced liberalism makes intelligible the construction and alignment of
actors made responsible in the task of
making the internet a safe and
secure space for our children, the
space (society) upon which they act
becomes increasingly unstable. However, this is not to imply that such
a space has become
completely fragmented or that actors cannot still
be mobilised in its name.
Secondly, it seems significant that in regulatory thinking the agency of
the child has remained invisible. The child is spoken for, and
represented, but rarely, if ever, given a voice. In policy calculations
concerning internet content regulation, the child is clearly the object,
rather than the agent, of representation. If the child is constituted
outside of itself, in the field of discourse and government, then agency
cannot simply be assumed. The agency of the child needs to be
mobilised in relation to specific policy networks in order for
democratisation to ensue. Existing actors need to be enlisted and their
interests translated. It might seem far-fetched to talk of children as
policy actors, but if we limit our analysis to the case at hand, then such
a proposition might appear more realistic. In areas where children are
the possible victims of harm and crime, it seems reasonable to suggest
that they should also be able to participate in the decision-making
process. Moreover, any conception of children’s agency, rights or
citizenship must involve an understanding that as children become more
implicated in the decision-making process and become spokespersons,
the terms and conditions of childhood will, in all likelihood, be similarly
reinvented.
Thirdly, I think that it is important, theoretically and empirically, to
disaggregate the practices of regulation and not to reduce them to the
image of the panopticon, such that a single point of perception and a
singular line of sight is enabled through the alignment and aggregation
of different, but connected, localised practices. In their own terms, the
parent, the IWF, the police, filtering software and so on see very little.
They are necessarily myopic. In addition, agencies, such as those
construed within the PICS system, see violence, sex and naked flesh
without the advantage of a singular perspective. This activity constitutes
what Virilio has called ’sightless vision’, and yet its regulative effectivity
is possible only inasmuch as it can be mobilised within the perspective
of the parent or other supervisory actors.41 The localised perspective of
the supervisory actor, in this case the parent, attempts to create ’optical
consistency'. It attempts to construct a regular avenue through different
spaces and in doing so to represent the localised activities of one series
of actors in their absence: ’This presence/absence is possible through
the two-way connection established by these many contrivances -
perspective, projection, map, logbook, etc - that allow translation
without corruption’.42 Opening up the black box and revealing the
potential misalignments between practices is not for the purpose of,
dare I say, ’childish’ forms of resistance, but in order to embed
mechanisms of accountability and democratic participation more
deeply, if they are there at all, within the regulatory process. But the
question remains: in whose name is this to be done?
Notes
1 I borrow this phrase from Kimberly Young, Caught in the Net: How to Recognise the Signs of Internet Addiction and a Winning Strategy for Recovery (New York: John Wiley & Sons, 1998). However, the phrase is used widely in discussion of
internet content regulation. For example, the Benton Communications Policy
Mailing List notes, under the heading of ’Police warn parents of the Web’s dark
side’, that the Sheriff of Loudoun County, Virginia, USA, has started on-line
internet safety classes for parents and children (Benton Communication-Related
Headlines). Available at: http://www.benton.org/news/extra/ (1 June 1999).
Similarly, C. Skelton uses the phrase in the title of his discussion of race-hate
material on the internet. C. Skelton, ’Network of Hate: the Dark Side of
Cyberspace’, id Magazine, 3, no. 2 (1994), pp. 9-12.
2 These results were from a search using Excite on 30 April 1999.
3 The phrase is from Nikolas Rose. See Nikolas Rose, ’Government, authority and
expertise in advanced liberalism’, Economy and Society, 22, no. 3 (1993), pp.
283-299.
4 High-Level Expert Group, Employment and Social Affairs, Building the European
Information Society for Us All (Brussels: European Commission, 1997), pp. 14-15.
5 The Guardian, 16 April 1994, p. 15.
6 Commission of the European Communities, Illegal and Harmful Content on the
Internet (Brussels: COM(96)487, October 1996), p. 4 and p. 11.
7 Ibid, p. 10, italics in original.
8 Ibid.
9 Andrew Barry, ’The European Community and European government:
harmonization, mobility and space’, Economy and Society, 22, no. 3 (1993),
pp. 314-326. See also, Andrew Barry, ’The European network’, New
Formations, 26 (1996), pp. 26-37.
10 Jacques Donzelot, The Policing of Families (London: Macmillan, 1979); Michel
Foucault, History of Sexuality, Vol. 1 (London: Allen Lane, 1978) and
’Governmentality’, Ideology and Consciousness, 6 (1979), pp. 5-21; Nikolas
Rose, Governing the Soul: The Shaping of the Private Self (London: Routledge,
1989); and Valerie Walkerdine and Helen Lucey, Democracy in the Kitchen:
Regulating Mothers and Socialising Daughters (London: Virago, 1989).
11 Commission of the European Communities, Green Paper on the Protection of
Minors and Human Dignity in Audiovisual and Information Services (Brussels:
COM(96) 483, October 1996), p. 22. The principle of ’proportionality’ is a
central resource in European advanced liberal governmental thinking.
12 Commission of the European Communities, Green Paper on the Convergence of
the Telecommunications, Media and Information Technology Sectors and the
Implications for Regulation: Towards an Information Society Approach (Brussels:
COM(97)623, 3 December 1997), p. 30.
13 See Svein S. Andersen and Kjell A. Eliassen (eds.), Making Policy in Europe: The
Europeification of National Policy-Making (London: Sage, 1993) and Simon J.
Bulmer, ’The governance of the European Union: a new institutionalist
approach’, Journal of Public Policy, 13, no. 4 (1994), pp. 351-380.
14 Paul Hirst and Grahame Thompson, Globalisation in Question: The International
Economy and the Possibilities of Governance (Cambridge: Polity Press, 1996),
p. 184.
15 I discuss this in more detail in David Oswell, ’The
place of "childhood" in
Internet content regulation: a case study of policy in the UK’, International
Journal of Cultural Studies, 1, no. 2, 1998, pp. 271-291. With regard to the
discussion of the mobilisation of ’interests’ see Michel Callon and John Law, ’On
interests and their transformation: enrolment and counter enrolment’, Social
Studies of Science, 12 (1982), pp. 615-625, Barry Hindess, '"Interests" in
political analysis’ in Power, Action and Belief: A New Sociology of Knowledge?,
ed. John Law (London: Routledge, 1986), pp. 112-131, and Steve Woolgar,
’Interests and explanations in the social study of science’, Social Studies of
Science, 11(1981), pp. 365-94.
16 Yaman Akdeniz, ’Who watches the watchmen: Internet content rating systems
and privatised censorship’, Cyber-Rights and Cyber-Liberties (UK). At
<http://www.leeds.ac.uk/law/pgs/yaman/watchmen.htm> (November 1997).
17 Given the size of the IWF, there has been a problem with marketing itself as an
internet monitoring body for fear that too many reports might be forthcoming
from the public.
18 See Ingrid Standen, "’Porning" Privacy in Cyberspace’, paper presented to
CRICT (London: Brunel University, London, 1996) and Christina Murroni and
Nick Irvine, Access Matters (London: IPPR, 1998).
19 Commission of the European Communities, Green Paper on Convergence, p.
30, my italics.
20 See, for example, Jill Hills and Maria Michalis, 'Is convergence a purely European
obsession?’, Paper presented to the CCIS/Euricom Colloquium on The Political
Economy of Convergence (London: University of Westminster, September 1999).
21 See Julian Wood, ’Repeatable pleasures: notes on young people’s use of video’
in Reading Audiences: Young People and the Media, ed. David Buckingham
(Manchester: Manchester University Press, 1993), pp. 184-201, and David
Buckingham, Moving Images: Understanding Children’s Emotional Responses to
Television (Manchester: Manchester University Press, 1996).
22 Paul Resnick and James Miller, PICS: Internet Access Without Controls, available
at <http://www.w3.org> (1996).
23 Ibid.
24 See David Oswell, ’And what might our children become?’, Screen, 40, no. 1
(1999), pp. 66-87.
25 Young, Caught in the Net, flysheet.
26 Geoffrey Hurd argues, in a study of the British television series The Sweeney, that
the oppositions between authority/bureaucracy and intuition/technology are
central to the structural characteristics of the television police series. Geoffrey
Hurd, ’The television presentation of the police’ in Popular Television and Film,
ed. Tony Bennett et al (London: Open University Press, 1981), pp. 53-70.
27 The notion of the ’cybercop’ appears more readily in discussions of encryption
than of content regulation. See Puay Tang, ’Multimedia information products
and services: a need for "cybercops"?’ in The Governance of Cyberspace, ed.
Brian Loader (London: Routledge, 1997), pp. 190-208.
28 Although it is not possible to make the argument here, I am not suggesting that
these hybrids are necessarily new, but that they become clearly visible in relation
to problems raised by the internet.
29 Although it has not been argued that the internet is the cause of changes in the
geography of domestic supervision.
30 Jon Courtenay Grimwood, 'Net results', The Guardian, 1 April 1998, p. 9.
31 Sonia Livingstone, ’Children’s bedroom culture’, paper presented to the Second
World Summit on Television for Children, London, UK, March 1998 and Sonia
Livingstone and Moira Bovill, Young People, New Media (London: London
School of Economics and Political Science, 1999). Hilde Himmelweit and her
colleagues at the London School of Economics conducted the first major research
on children and television in the UK, published as Himmelweit et al,
Television
and the Child: An Empirical Study into the Effects of Television on the Young
(Oxford: Oxford University Press, 1958).
32 Sonia Livingstone and Moira Bovill, Young People, New Media: Summary
(London: London School of Economics and Political Science, 1999), p. 5.
33 The Express, 19 March 1999, p. 5.
34 That the press reporting of the Livingstone and Bovill report was concerned with
the governance of young people is without doubt. The Independent, for
example, argued that the Labour cabinet should read the report and spend time
discussing its policy implications (The Independent, 19 March 1999).
35 See David Oswell, ’And what might our children become?’
36 Livingstone and Bovill, Young People, New Media: Summary, p. 38.
37 Buckingham, Moving Images, p. 298.
38 Ibid.
39 Bruno Latour, ’Thought experiments in social science: from the social contract to
the virtual society’, First Virtual Society? Annual Public Lecture (London: Brunel
University, April 1998), p. 6.
40 Ibid, p. 7.
41 See Paul Virilio, The Vision Machine (London: British Film Institute, 1994).
42 Bruno Latour, ’Drawing things together’ in Representation in Scientific Practice,
eds. Michael Lynch and Steve Woolgar (Cambridge, Massachusetts: MIT Press,
1990), p. 28.