
Int. J. Product Development, Vol. 2, Nos. 1/2, 2005

Product lifecycle management: the new paradigm for enterprises

Michael W. Grieves
Industry Research, MIS Department, Eller School of Business, University of Arizona
and Principal, CoreStrategies Consulting, LLC, USA
E-mail: mgrieves@attglobal.net

Abstract: Product Lifecycle Management (PLM) is a developing paradigm. One way to develop an understanding of PLM’s characteristics and boundaries is to propose models that help us conceptualise both holistic and component views in compact packages. Models give us a rich way of thinking about overall concepts and can identify areas where we need to explore the issues that such models raise. In this paper, the author proposes and discusses two such related models, the Product Lifecycle Management Model (PLM Model) and the Mirrored Spaces Model (MSM), and investigates the conceptual and technical issues raised by these models.

Keywords: PLM; PDM; product lifecycle management; enterprise systems; virtual space; product data management.

Reference to this paper should be made as follows: Grieves, M.W. (2005) ‘Product lifecycle management: the new paradigm for enterprises’, Int. J. Product Development, Vol. 2, Nos. 1/2, pp.71–84.

Biographical notes: Dr. Grieves is a Founder of the PLM Development Consortium at the University of Michigan, Director of Industry Research in the MIS Department of the Eller School of Business at the University of Arizona, and Chairman of the Board of Visitors of the School of Business at Oakland University. Dr. Grieves has over 35 years’ experience as an executive in the information technology field and is a board member of a number of public and private technology companies. His doctorate is from Case Western Reserve University.

1 Introduction

Product lifecycle management, or PLM, is a paradigm in its early stages. Manufacturing organisations, especially those in the automotive and aerospace industries, are devoting a great deal of time, effort, and money to investigating, planning, and even implementing solutions under the auspices of PLM projects. Software solution providers with significant PLM offerings are forming relationships with academic institutions in order to develop PLM into something more than an acronym that groups previously developed point solutions. While coverage is still sparse, the industry media have begun reporting on such efforts (Gould, 2002; Sussman, 2002) and running feature stories (Stackpole, 2003).


The information systems (IS) sector has historically taken descriptive phrases and made them serve a more conceptual and complete role. Examples of this are computer-aided design (CAD), computer-aided manufacturing (CAM), enterprise resource planning (ERP), supply chain management (SCM), and customer relationship management (CRM). The campaign to conceptualise these paradigms has often been driven by software vendors who have developed software solutions that claim to embody the concepts and processes of the respective paradigms, setting the definitions in a somewhat teleological fashion – the software’s functionality defined the paradigm.
The interest in PLM by practitioners is especially notable at this point in time. The days of unquestioned adoption of information technology appear to have ended with the bursting of the internet bubble. In spite of assertions drawing linkages between IS expenditures and productivity increases (Brynjolfsson and Hitt, 1998), technology users, especially those in functional areas not reporting into IS management, complain that they have not received value from previous information technology projects (Ross and Weill, 2002). Research also appears to support the perception that the value of IS expenditures resides in lower-level component systems rather than in higher-order systems, such as ERP (Ragowsky et al., 2000).
However, there appears to be serious interest in PLM by manufacturing-based firms, especially those in the automotive and aerospace industries, by professional groups, such as the Society of Manufacturing Engineers (SME), by academic institutions, and by software solution providers. In spite of, or maybe because of, the newness of the concepts and the lack of definitional agreement, let alone standards, leading chief information officers
(CIOs) and engineering and manufacturing executives at a number of the large
automotive manufacturers and suppliers have embarked on projects in PLM, some to the
extent of forming specific PLM departments.
While not unusual in the early stages of technology formation, there is much
confusion, uncertainty, and ambiguity surrounding the conceptual understanding of PLM
(Bijker, 1995). As explained below, it clearly is too early to develop hard and fast
definitions of PLM and its components. However, it is appropriate to develop models that
may provide a conceptual mapping of PLM. These models act as generative metaphors
(Schon, 1993) and can serve to give an understanding of what PLM could be. These
models can also serve as a high-order map pointing the way to further investigation of the processes, structures, and implementing technologies and mechanisms that will be needed to realise these concepts. This paper proposes models intended to do just that.

2 Defining PLM?

As stated above, while it is too early to develop a black-letter definition of PLM, it is never too early to propose and ‘try on’ definitions to see how they fit. Arriving at a definition is usually an iterative, and often dialectical, process in which numerous definitions are proposed, examined, and discussed before a consensus of ‘experts’ agrees on a definition that reflects the characteristics they view as the most salient and distinctive.
For PLM, some definitions in play are:
“A strategic business approach for the effective management and use of
corporate intellectual capital” – National Institute of Standards and Technology
(Fenves et al., 2003).

“PLM stands for Product Lifecycle Management, which is a blanket term for a
group of software applications used by engineering, purchasing, marketing,
manufacturing, R&D, and others that work on NPD&I” – AMR Research
“A strategic business approach that applies a consistent set of business
solutions in support of the collaborative creation, management, dissemination,
and use of product definition information across the extended enterprise from
concept to end of life – integrating people, processes, business systems, and
information.” – CIMdata
“Product lifecycle management is an integrated, information-driven approach
to all aspects of a product’s life, from its design through manufacture,
deployment and maintenance – culminating in the product’s removal from
service and final disposal. PLM software suites enable accessing, updating,
manipulating and reasoning about product information that is being produced in
a fragmented and distributed environment. Another definition of PLM is the
integration of business systems to manage a product’s life cycle.”
(Stackpole, 2003)1
As one can see, there is some commonality among the definitions, but probably an equal amount of variance. While two of the definitions explicitly refer to the beginning and end of a product’s life, one of them (AMR Research) seems to limit PLM to only new product development and integration (NPD&I). While this would seem to change PLM to product launch management rather than product lifecycle management, this is a tacit perspective of some practitioners, especially those involved in the early stages of developing PLM implementations. It is even possible that PLM may not make it out of the R&D and engineering departments. However, limiting PLM to NPD&I seems unnecessarily constraining at this early stage, and contradictory to a literal interpretation of the term PLM.
The other issue here is that, even if there were a shared definition, definitions are as much constraining as defining. Early-stage paradigms ought not to be unnecessarily constrained. While definitions of established concepts allow their users to fill in or edge out the definition based on familiarity with the concept, early-stage paradigms do not have that characteristic. As a result, they do not require definitions; rather, they require ‘thick descriptions’ of the kind an ethnographer would use in describing a practice in a culture foreign to ours (Geertz, 1973).

3 PLM visual model

‘Thick descriptions’, even if they were possible for concepts that are not fully developed
and therefore not fully understood, are not the replacement for definitions. What is more
useful at this stage of development is to provide visual models. Visual models give shape
to concepts, yet provide an element of unconstraining ambiguity. Like Schon’s (1993)
generative metaphors, they allow the model users (and critics) to fill out aspects of the
visual model and test them against their vision of the concept.
Visual models give outline and substance to concepts without necessarily overly constraining them. They allow their users to fill in, focus, interpret, and generate their own understanding of concepts. If theory building should be ‘disciplined imagination’, as Weick (1989) claims, then visual models provide tools that enable this. The requirements for visual models are that they be interesting (Davis, 1971) and relevant, and be neither absurd nor obvious (Weick, 1989).

While this may be a flaw of the terminology, visual models come in all levels of
detail, from general paradigm-setting ones as described in this paper to detailed ones
that describe data structures and relationships (Fenves et al., 2003). While detailed visual models, such as schemas, mappings, and structures, are critical for implementation,
high-order visual models are useful for defining overall scale and scope. It may be that
gaps exist between what the ideal high-order model proposes and what is implementable.
However, these visual models do provide a conceptual framework both to those who will
be called upon to perform detailed design and implementation and to those who manage
and approve the projects, so that they have some feel for what the overall scale and scope
entails.
As seen in Figure 1, the product lifecycle management model (PLM model) provides
an overall conceptual view of PLM. Rather than using limited word definitions, the PLM
model provides a visual description of the concepts and relationships embodied in PLM.
As seen in this visual model, PLM begins at the requirements and analysis stage and
continues until the product is removed from service and disposed of. Even though the product lifecycle spans many diverse functional areas, there is an informational core to which those functional areas provide information and from which they receive it. For example,
product designers provide information on material composition and toxicity that is used
by the recyclers at product disposal.

Figure 1 PLM model

Before a technology can be said to exist, the participants need to stabilise and standardise
it and come to closure on its definition (Bijker, 1995). PLM is not yet in that position. However, from many discussions with industry participants, four common themes appear in discussions of PLM. These four themes are reflected in this
model and are:
• PLM is a continuation of functional integration
• PLM centres on an autonomous, common informational basis for this
cross-functionality
• PLM also concerns itself with processes
• PLM substitutes information from other functional areas for the inefficient use of
material, energy, and time.

In one respect, PLM is a continuation of the trend to integrate functions within more
comprehensive systems. Early computer applications were standalone applications that
did their function, but provided little cross-functionality. The inventory system tracked items for inventory control; the payables system bought items; and the billing system invoiced customers for items. But the items bought did not update the items controlled in inventory, nor did the items sold. The same item bought, counted, or sold had no connection across the isolated systems. Each system had its own construct of that informational item, and these informational constructs existed in isolation from each other.
The next step, integrating these systems, did provide for this commonality of
constructs. Manufacturing resource planning (MRP) and enterprise resource planning
(ERP) extended this integration, such that production requirements began an integrated
process that drove scheduling, ordering, receiving, inventorying, vouchering, and billing.
The items and the parts that made up those items could be controlled until they left the
factory door. The multiple representations of the same items became a singular
representation. However, these systems are primarily production-based systems.
The practitioners’ view of PLM is that it attempts to extend that integration of
functionality back into the design phase and forward into the sales, service, and disposal
phases. As reflected in the PLM model, functions are not isolated areas, but are part of an
overall cycle, and products have continuity throughout the lifecycle. PLM views its objects, i.e., items, parts, and components, as autonomous objects existing in their own right. Autonomy means that these informational objects exist independently of any system that might operate on them and requires that they be defined independently of any particular system. They are part of the informational core that all functions use.
The current view within existing IS systems is that the function of that particular
system dictates an informational object’s attributes. Accounting systems have attributes
of items that primarily relate to their value and differentiation, such as quantity, price, and
cost. Weight is tracked primarily for valuing shipping and handling. Colour is an example
of a differentiator.
Engineering and manufacturing systems have attributes that deal with an item’s
physical characteristics, such as size, shape, surface, composition, and weight.
Its dimensions dictate how other items fit with it or how machines that work with it must
be set up and sequenced. Of the two datasets defining the same item, the engineering and
manufacturing one is much more complex and richer than the accounting one, making the
problems of moving between different functional systems much more complicated and
demanding, and the problems of multiple representations of the same item much more
confusing.
PLM is an attempt to consolidate these different views and functional uses of an item.
PLM attempts to create as complete an informational-only representation of the item as possible, regardless of use. Using Figure 1 as a usage guide, the consolidated
informational representation of an item would allow design engineers to calculate surface
areas, manufacturing engineers to plot tooling radii, accountants to calculate costs, and
recyclers to separate materials for disposal. The system-independent informational
representation should allow all these functional uses to be done without loss of
information, e.g., the item that has recyclable parts designed into it is the same item
available to the recycler at item disposal.
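To make the idea of a single, system-independent representation more concrete, the following sketch shows one hypothetical shape such a consolidated item record might take, with attribute groups serving the different functions described above. The class and field names are assumptions made for illustration only, not part of any PLM standard or product.

    # Illustrative sketch only: one hypothetical shape for a consolidated,
    # system-independent item record. Names and fields are assumptions made
    # for this example, not drawn from any PLM standard or vendor product.
    from dataclasses import dataclass


    @dataclass
    class ItemRecord:
        item_id: str
        # Geometry and composition used by design and manufacturing engineers
        dimensions_mm: tuple = (0.0, 0.0, 0.0)     # length, width, height
        material: str = ""
        weight_kg: float = 0.0
        # Commercial attributes used by accounting and purchasing
        unit_cost: float = 0.0
        quantity_on_hand: int = 0
        # End-of-life attributes used by recyclers
        recyclable: bool = False
        disposal_process: str = ""                 # defined at design time

        def surface_area_mm2(self) -> float:
            """Design-engineering use: surface area of the bounding box."""
            l, w, h = self.dimensions_mm
            return 2 * (l * w + l * h + w * h)

        def inventory_value(self) -> float:
            """Accounting use: same record, different attributes."""
            return self.unit_cost * self.quantity_on_hand


    # The same record serves every function without re-keying the data.
    bracket = ItemRecord("BRK-001", (120.0, 40.0, 5.0), "aluminium", 0.13,
                         unit_cost=2.40, quantity_on_hand=500,
                         recyclable=True, disposal_process="shred and smelt")
    print(bracket.surface_area_mm2(), bracket.inventory_value(),
          bracket.disposal_process)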

PLM is intended to be the informational equivalent of being in physical possession of an item and having the ability to examine it; a physical item has its information as an inseparable and ontological part of itself. Under this premise, anyone with the right instruments can
measure, determine composition, determine manufacturing processes, and note the count
and differentiating elements.
However, as this relates to the third theme, PLM should have the additional capability
of containing functional processes within it. This is what differentiates PLM from PDM
(product data management). As organisations are separated through functional
differentiation (design, engineering, manufacturing, etc.), there is a natural tendency to
organise information along these functional lines. This creates informational silos, where
the information of the different functional areas is isolated from each other. The idea
behind PLM is that it de-silos the functional use of these isolated, functionally oriented
systems. In a PLM environment, the design engineer should be able to define processes
for recycling that will be available to the recycler at product end-of-life. Importantly, he
or she can also develop processes, such that the next design engineer looking for specific
functionality can reuse this already defined part, rather than designing a new one. It is as
if the items discussed above were not only available for inspection, but came with a full
usage manual.
The fourth theme, the substitutability of information for time, energy, and/or material,
is the economic driver of PLM. While information, time, energy, and material do not
have comparable measurements, they can be converted to a cost and compared on that
basis. As illustrated in Figure 2, this substitutability is only used as it pertains to the
inefficient use of time, energy and material.2 While information cannot be substituted for
the time, energy, and/or material in designing a part, drilling a hole, or moving a part
from one machine to another, it can be substituted for designing a part that already has
been designed, drilling a hole in the wrong place and having to scrap the part, and routing
parts to machines in an inefficient manner.
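As a back-of-the-envelope illustration of that cost comparison, consider the choice between redesigning a part that already exists and retrieving the existing design from the informational core. The figures below are invented purely to show the arithmetic; they are not drawn from the paper or from any real project.

    # Invented numbers, purely to illustrate the cost-comparison logic.
    engineer_rate_per_hour = 85.0     # assumed labour cost
    redesign_hours = 40.0             # time wasted re-designing an existing part
    material_for_prototypes = 600.0   # material wasted on duplicate prototypes

    retrieval_hours = 0.5             # time to find and reuse the existing design
    plm_cost_allocated = 50.0         # assumed share of PLM system cost per reuse

    waste_cost = engineer_rate_per_hour * redesign_hours + material_for_prototypes
    information_cost = engineer_rate_per_hour * retrieval_hours + plm_cost_allocated

    print(f"cost of wasted time/material: {waste_cost:,.2f}")
    print(f"cost of substituting information: {information_cost:,.2f}")
    print(f"saving from substitution: {waste_cost - information_cost:,.2f}")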
Information also does not substitute for all inefficient use of time, energy, and/or
material. We can be inefficient in execution, even when we have the appropriate
information available. However, correcting this inefficiency is where engineering usually
shines. If we have the appropriate information, engineering can usually design
appropriate technologies to utilise such information in eliminating inefficiencies.

Figure 2 Use of time, energy, and/or material



Although fuzzy, inexact, and incomplete, this is the current proposed model of PLM as it
continues its evolution. As with ambitious undertakings of this sort, we should expect
that first implementations would fall quite short of the ideal, and they do. With
some exceptions, PLM is currently focused on the early part of the product lifecycle and its participants there, i.e., design and manufacturing engineers. This is not in itself detrimental, in that PLM has
to start from the beginning and decisions made at this stage set a substantial amount of
the product’s overall cost (Iansiti, 1998). However, there needs to be a comprehensive
paradigm that encompasses the entire lifecycle in order to prevent PLM from
degenerating into simply a collaborative engineering project.

4 Mirrored Spaces Model

The informational core shown in the PLM Model is what enables PLM. With information kept in functional silos, unavailable or available only in limited form to other functions, PLM does not take place. But this will require a very different way of looking at data and information. To better understand this informational core, the author introduces what he refers to as the Mirrored Spaces Model (MSM), shown in Figure 3. The model consists of three elements: real space, virtual space(s), and a linking mechanism of data and information/processes connecting real space and virtual space(s).
As described below, this is a model that uses elements embedded in our very way of thinking. However, when used as a model for managing information, it changes some very fundamental ways in which data are organised and used. Specifically, it ‘de-silos’ data collection and information use, such that data are not organised by their function, but by the physical object with which they are associated.

Figure 3 Mirrored Spaces Model (MSM)

4.1 Real space


There are ontologically compelling reasons to think in terms of spaces. For real
objects, it is obvious that a space-based model is natural. As much as we attempt to
conceptualise and abstract the world around us, we are inextricably bound to our view of
the world that comes to us via our senses. It is embedded in our very language
(Lakoff and Johnson, 1980). We experience our world in terms of spaces having physical
dimensions that govern our understandings. No amount of abstraction can separate us
from this (Latour, 1999). We constantly think in spatial terms, such as up, down, here,
there, over and under, picking up and placing down (Lakoff and Johnson, 1999).
We think in spatial terms even when thinking about non-spatial concepts, sometimes
unaware that we are doing so and at other times very deliberately (Grieves, 2000).

4.2 Virtual spaces


As humans, we have been able to create virtual spaces in our minds as far back as there is
history. While imperfect and transient, we have the ability to create virtual spaces
and control what happens in those spaces. This is the process we know as imagining
(Casey, 1976). We can build representations of the real world and change these
representations either as things change in the real world, or as we postulate things could
change in the real world.
Language was and is our main mechanism to share these private virtual spaces, and
writing gives them permanence, albeit in static form. The advent of computers allowed
for dynamic spaces that could be shared on a local basis by all connected to that
particular computer system.
The use of computers to create useful virtual spaces was constrained by two
technologies, memory and processing, and enabled by one other, the internet. While
constraining is often considered the complement of enabling (Giddens, 1984), as used here constraining refers to limitations that hold a technology below a threshold of functionality. Enabling technologies, on the other hand, accelerate or enhance the usage of functional technologies.
For example, adding two additional traffic lanes may remove the constraints so as to
allow traffic to travel at a desired 45 miles per hour. Building highways to new
destinations enables travel that was previously impossible or impractical.
The limitations of earlier computer systems were such that data storage requirements restricted their representations of the real world to only a very limited amount of information about real world objects. This necessitated substantial abstraction to only the coarsest characteristics. For example, early computerised information about a part consisted of its part number and a limited set of characteristics such as dimensions, colour, weight, etc.
In addition, manipulations were limited by processing power. Even if real world
objects could be described sufficiently, the processing power to do anything useful was
unavailable. The author was involved with the Illiac IV during the early 1970s. Even
though it was the most powerful computer in the world, it took 26 hours to process
sufficient data about the real world to predict weather in the next 24 hours. While useful
for validating weather models, it was unable to perform its primary predictive function.
Thanks to Moore’s law and its storage equivalent, these limitations have reached the
threshold where they can accommodate the desired functionality with respect to both mirroring the descriptions of complex objects and manipulating them. Mathematical
descriptions of parts are such that they can directly drive manufacturing machinery to
create functional parts without human intervention. In addition, these complex objects
can be combined to form even more complex objects with the correct spatial orientations
within the context of their use. This was and is a shared space where an object could be
created and manipulated and multiple people could agree on its interpretation.
However, it was not until the advent of the internet that easily shared universally
accessible virtual spaces were made possible. While more than one individual
experienced some of the shared spaces, the ability to link multiple individuals into this
shared space, especially if separated geographically, was technically challenging and very
expensive. Connections to this shared space took planning and dedicated resources.
The internet enabled nearly anyone with access to a computer and communication line to
access these shared spaces.

This shared space implies a singularity of information. Unlike real space, it is almost
costless to produce and manipulate copies of data objects. While a powerful advantage of
virtual spaces, multiple copies can also be a source of waste and inefficiency.
In engineering design, multiple copies can lead to wasted work being done on old
versions of the object. It can also lead to independent and sometimes incompatible work
that is done on multiple copies of a virtual object having to be reconciled and integrated
into one version.
What common physical parts are to the real world – because they reduce the amount
of information that people are required to deal with – singular virtual objects are to
virtual space. As Figure 3 depicts, private virtual spaces (VS1 … VSn) exist and can be used to explore alternative designs and the lives of those designs. However, PLM requires that the shared space hold a single, shared representation.
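One way to picture the relationship between the shared singular object and the private virtual spaces VS1 … VSn is the following minimal sketch; the copy-and-reconcile workflow it shows is an assumption made for illustration rather than a prescription of the model.

    # Sketch: a singular shared virtual object plus private exploration spaces.
    # The reconcile step is deliberately naive; real PLM systems need far
    # richer merge and versioning behaviour.
    import copy


    class SharedSpace:
        """Holds the single authoritative representation of each object."""
        def __init__(self):
            self._objects = {}

        def publish(self, obj_id, data):
            self._objects[obj_id] = data

        def checkout(self, obj_id):
            # A private virtual space (VS1 ... VSn) works on a copy.
            return copy.deepcopy(self._objects[obj_id])

        def reconcile(self, obj_id, proposal):
            # Only one representation survives: the accepted proposal
            # replaces the shared singular object.
            self._objects[obj_id] = proposal
            return self._objects[obj_id]


    shared = SharedSpace()
    shared.publish("BRK-001", {"material": "steel", "weight_kg": 0.35})

    vs1 = shared.checkout("BRK-001")      # a private space explores an alternative
    vs1["material"], vs1["weight_kg"] = "aluminium", 0.13

    shared.reconcile("BRK-001", vs1)      # the shared space stays singular
    print(shared.checkout("BRK-001"))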

4.3 Linking mechanisms


Once virtual spaces can be built, accessed, and manipulated, the key to MSM and
consequently to PLM is the ability to link the virtual space to its real space counterpart.
This is what will allow PLM to have real utility. It is the ability to access the virtual
representation of an item and know that it substantially mirrors the state of the real world
object.
To make full use of virtual spaces, linking mechanisms are required in both
directions. Data must be collected and transmitted from real space to virtual space.
Information and processes must be organised in virtual space and delivered for use in real space. In both cases, this
currently is a human-intensive activity. With the exception of process control systems,
the interface between the real and virtual spaces has primarily been humans coding and
entering data and requesting data and information via tactile devices, such as keyboards,
touch screens, mice, graphical tablets, etc.
While appropriate for some activities, such as product design, this interface is often
slow, cumbersome, and subject to long lag times between the generation of the data and
its entry in the computer system. In some cases, such as the collection of data from one
computer-based system and its entry into another computer-based system, these data
suffer from being unnecessarily routed through inefficient and unreliable translation
devices, humans.
However, for MSM to reach its full potential, the linkage between the object in virtual
space and its real space counterpart will have to be robust, accurate, and timely. If the design
engineer designs a part as recyclable at the beginning of the product’s life, the recycling
centre should have that information and the recycling process for that same part at the
end of that product’s life. To use another example from a different time in a product’s
life, torque readings of a car seat installation collected at time of its manufacturing would
be available to thwart a plaintiff attorney’s argument that improper seat installation
contributed to injuries during an accident.
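A minimal sketch of the real-space-to-virtual-space half of this linkage, using the car-seat torque example, might look like the following. The identifiers and record shapes are assumptions for illustration; the point is only that time-stamped measurements accumulate on the virtual counterpart of a physical object.

    # Minimal sketch of the real-space-to-virtual-space data link.
    # Identifiers and record shapes are assumptions for illustration only.
    from datetime import datetime, timezone

    virtual_space = {}   # virtual counterparts, keyed by physical object identity


    def record_measurement(object_id, attribute, value, unit):
        """A sensor (or its gateway) pushes a reading onto the virtual object."""
        obj = virtual_space.setdefault(object_id, {"measurements": []})
        obj["measurements"].append({
            "attribute": attribute,
            "value": value,
            "unit": unit,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })


    # A torque sensor on the assembly line reports the seat-bolt installation.
    record_measurement("VIN-EXAMPLE/seat-bolt-3", "installation_torque",
                       42.5, "N·m")

    # Years later, the virtual record mirrors what happened in real space.
    print(virtual_space["VIN-EXAMPLE/seat-bolt-3"]["measurements"])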

5 Issues and implications

As with all new paradigms, there are many issues and implications to work through and ‘puzzle’ over. This paper discusses only some of them, focusing on those that are critical to the development of the paradigm. It concentrates on the more technical issues, although there are significant social issues, such as security, privacy, ownership, and education and training. These social issues are all complex, and in some ways more problematic than the technical ones. However, most of them will have to evolve and are currently gated by the technical issues.

5.1 Digital representation


PLM requires a digital representation of the object. The requirement will be to carry all
the data about the object such that any observation about the object can be acquired
unambiguously and consistently.
The issue here is that data representation has only been limited by the creativity of
programmers and has been driven by the functionality of the programs that have used the
data. This means that there are huge incompatibilities in different data representations
that will need to be resolved. These incompatibilities take two major forms:
incompatibility of data and incompatibility of structure.
Incompatibility of data refers to the issue of data representations about characteristics
being present in one representation of an object and not being present in another.
For example, one representation carries information about an object’s weight, while
another does not. Incompatibility of structure refers to the data structures used to represent the object. It may be as simple as structures carrying the same elements in different positions, or much more complicated, as in maths-based representations of geometry, where different mathematical techniques are used to represent surface geometry.
The first issue, incompatibility of data, is primarily a specification issue. Objects will
tend to get richer over time as more coordination occurs between the various users of the
objects and the information technology designers. There may be some ‘bolting on’, where
data that should have been specified at the object’s creation are added later, but those
should tend to be incorporated in an object’s core definition over time.
The second issue, incompatibility of structure, is a more technical one. For simpler structures with discrete elements, such as quantity, weight, unit of measure, etc., standards such as XML will allow objects to be created without respect to exact structure, with programs resolving structure at time of execution. The more difficult problem is maths-based structures, which are far more complicated. In the automobile
industry, suppliers have to deal with at least two maths-based object descriptions used by
the primary manufacturers in their design activities. Semantic interoperability that would
allow the suppliers to be indifferent to the system the object was designed with is a
difficult but necessary requirement for PLM.
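The simpler case, in which programs resolve structure at execution time, can be illustrated with a short sketch: two part descriptions carry the same discrete elements in different orders, and the consumer reads them by element name rather than by position. The element names are invented for the example and are not drawn from any standard.

    # Sketch: resolving simple discrete-element structures at execution time.
    # Element names are invented for the example, not taken from any standard.
    import xml.etree.ElementTree as ET

    supplier_a = "<part><weight unit='kg'>0.13</weight><quantity>500</quantity></part>"
    supplier_b = "<part><quantity>500</quantity><weight unit='kg'>0.13</weight></part>"


    def read_part(xml_text):
        root = ET.fromstring(xml_text)
        # Lookup by element name, so element order (structure) no longer matters.
        weight = root.find("weight")
        quantity = root.find("quantity")
        return {
            "weight_kg": float(weight.text),
            "weight_unit": weight.get("unit"),
            "quantity": int(quantity.text),
        }


    assert read_part(supplier_a) == read_part(supplier_b)
    print(read_part(supplier_a))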

5.2 Database organisation


Closely related to digital representation is database organisation. With a real object, all
the characteristics about it are co-resident with the object itself and are an inherent property of the object from its inception. With digital representations, this is not a requirement. Data used in a digital representation can reside in geographically dispersed locations while logically being part of the same object. With current storage technology,
even what looks to be contiguous data can be spread over different physical devices and
even different physical locations.

PLM would seem to require distributed data elements that can be logically grouped
on demand. While there is some support for ‘hub’-based systems, where the data exist in a single repository, it seems unlikely that all those responsible for the creation of PLM data would hand it over to a central source, even if there were not the hurdle of
incompatible data structures. This means that database systems will need to be able to
logically create the shared space that contains the singular object that corresponds to its
real space counterpart. Database systems will need to create the equivalent of real space’s
four-dimensional coordinates (longitude, latitude, height, and time) that allow us to
uniquely locate an object. As in real space, databases will need to ensure that no two virtual objects can occupy the same space.
Database technology will also need to be more flexible and less rigid in structure and content type. This flexibility will need to be a longitudinal trait, allowing the structure of the database to change over time without requiring manual restructuring.
As noted above, data items may obtain new characteristics over time. These need to be
‘bolted-on’, with little or no database administration. In short, the singularity requirement
of one representation of the object needs to be implemented and enforced through
database technology. Database companies, such as Oracle, are moving in that direction
(‘Face Value: Jolly Boating Weather’, 2003).
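A minimal sketch of both requirements, logically grouping distributed data on demand and ‘bolting on’ new characteristics without manual restructuring, is given below; the source systems and attribute names are hypothetical.

    # Sketch: assembling one logical object from distributed sources, with
    # schema-free "bolt-on" attributes. Sources and fields are hypothetical.

    # Each functional system keeps its own fragment of the object's data.
    design_system = {"BRK-001": {"material": "aluminium", "cad_model": "brk001.jt"}}
    erp_system = {"BRK-001": {"unit_cost": 2.40, "quantity_on_hand": 500}}
    recycling_system = {"BRK-001": {"disposal_process": "shred and smelt"}}

    SOURCES = [design_system, erp_system, recycling_system]


    def logical_object(object_id):
        """Group the distributed fragments into one singular view on demand."""
        merged = {"object_id": object_id}
        for source in SOURCES:
            merged.update(source.get(object_id, {}))
        return merged


    # Later, a new characteristic is "bolted on" without any schema migration.
    design_system["BRK-001"]["carbon_footprint_kg"] = 1.8

    print(logical_object("BRK-001"))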

5.3 Visualisation
The visual sense is arguably our most important sense. From a purely data perspective,
visual input is our highest bandwidth sense. To do without it severely limits information
flow. However, the high bandwidth required for visualisation has historically been an
issue. ‘Visualisation’ was enabled by sending low bandwidth symbols, either words or
abstracted images,3 which the recipient used to create mental images (Casey, 1976).
The general low level of capability in individuals and inconsistency between individuals
meant that the level of visualisation was rudimentary at best.
As computing power increased, visualisation technology, in the form of CAD, rapidly increased in use. However, because of technology limitations, there was a requirement to trade off between richness and reach (Evans and Wurster, 1999), with visualisation (richness) limited to users within local reach of the CAD systems. Doubling advances in both processing (Moore’s law – 18 months) and communications (fibre law – nine months) are rapidly eliminating the requirement for this trade-off (Gilder, 2000).
If we are going to make full use of virtual objects, then visualisation of those objects
is a necessity. Otherwise, eliminating our highest bandwidth sense will handicap our
usage of these virtual objects. A technology focus will be for virtual objects to eventually
come with a mechanism to visualise them. A recent announcement by UGS PLM
Solutions (formerly EDS PLM Solutions) called JT Open is an initiative in this area.4

5.4 Technology linking the spaces


PLM development will depend on technology developments that enable data to be
collected in real space and information and processes to be delivered from virtual space. If virtual
space is to be a reflection of real space, then it must be synchronised with real space.
Likewise, to be useful in more than an archival role, information and processes must be available anyplace and anytime. To use the example from above, information about recyclable parts and the processes to recycle them safely needs to be available as the part is being disassembled for disposal.
The main enabling technologies in this area appear to be sensors, indicators, RFIDs,
and wireless communications. These are all technologies that will facilitate the movement
of data from real space to virtual space and information and processes from virtual space
to real space. Sensors are the technological equivalent of our senses that allow us to keep
our mental spaces in synchronisation with real space. In the example of the car seat,
torque sensors measure the torque applied in tightening the seat bolts. While humans
have a good sense of the same measurement when hand tools are used for the same
purpose, the torque sensors are much more exact and provide a level of permanence in
PLM that is difficult to provide in a manual process.
Indicator technology will increase in use as information and processes are conveyed
from virtual space. An example of this is its use in building car dashboards. Alta
Software provides a system that takes the bill of materials for the specific dashboard to be
built and lights up the appropriate parts bins in the appropriate sequence so that
assemblers can build many different styles of dashboard without having to know in advance
what dashboard type they are building. Here indicators provide information, which parts
to use, and, as an example of process contained within PLM, the sequence in which to
install the parts.
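The indicator example can be sketched as follows: a bill of materials for a specific dashboard drives, in assembly order, which parts bin lights up. The data and function names are invented for illustration and do not describe the Alta Software product.

    # Sketch of process-driven indicators: a dashboard bill of materials lights
    # the appropriate bins in assembly order. Invented data; this does not
    # describe any vendor's actual system.

    dashboard_bom = [                       # (assembly step, part number, bin)
        (1, "GAUGE-CLUSTER-EU", "BIN-04"),
        (2, "VENT-TRIM-CHROME", "BIN-11"),
        (3, "RADIO-BEZEL-STD", "BIN-07"),
    ]


    def light_bin(bin_id, on=True):
        # Stand-in for the signal that would drive a physical indicator lamp.
        print(f"{'ON ' if on else 'OFF'} {bin_id}")


    def guide_assembly(bom):
        """Convey both information (which part) and process (which order)."""
        for step, part, bin_id in sorted(bom):
            print(f"step {step}: pick {part}")
            light_bin(bin_id, on=True)
            # ... assembler picks and installs the part ...
            light_bin(bin_id, on=False)


    guide_assembly(dashboard_bom)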
RFIDs are small electronic devices that contain data and transmit those data by radio frequency (RF). Conceptually, RFIDs are closely related to bar codes. Both contain data about the item to which they are attached. As anyone who has shopped at a supermarket can attest, bar coding has become ubiquitous, with almost every item being identified by bar code. RFIDs solve three problems of bar codes: placement space, reading efficiency, and stacking.
Placement space refers to the issue that bar codes have a physical requirement for
placement. For some physical objects, bar coding would require more physical space than
is available. Reading efficiency requires that the bar code be physically visible and
accessible to a bar code reader. This practically limits bar codes to external surfaces.
Finally, stacking refers to the ability to identify the components of an item. While it may be
feasible to have multiple bar codes, because of size requirements, it quickly becomes
impractical.
RFIDs will allow each discrete object to identify itself and update its virtual
equivalent as required. The Auto-ID Center, a non-profit organisation driving RFID
development, anticipates such a scheme with ‘quadrillions’ of man-made objects having a
unique Electronic Product Code (EPC) linking these objects to their respective information via the internet (Sarma et al., 2002). This will allow data about objects to be collected throughout their lifecycles with a minimum of human intervention. Using our example above, when the time comes to remove a manufactured part from service, the RFID can communicate its identity to the recycler, who can then recycle the appropriate parts with access to the process the designer created back when he or she designed the part.
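A highly simplified sketch of that linkage, resolving an identifier read from a tag to the object’s virtual record and its design-time recycling process, is shown below; the identifier format and the in-memory lookup are stand-ins for the EPC and its internet-based resolution services, not the actual infrastructure.

    # Sketch: resolving a tag identifier to the object's virtual counterpart.
    # The identifier and the in-memory "registry" are stand-ins for the EPC
    # and its internet-based lookup services, not the real infrastructure.

    virtual_objects = {
        "urn:example:epc:brk-001.0042": {
            "part_family": "BRK-001",
            "material": "aluminium",
            "recyclable": True,
            "disposal_process": "shred and smelt",   # defined by the designer
        },
    }


    def on_tag_read(epc):
        """Called when a reader at the recycling centre scans the tag."""
        record = virtual_objects.get(epc)
        if record is None:
            return "unknown object: divert for manual inspection"
        if record["recyclable"]:
            return f"recycle using process: {record['disposal_process']}"
        return "not recyclable: route to disposal"


    print(on_tag_read("urn:example:epc:brk-001.0042"))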
Wireless is a key technology for this linkage, because it will enable data to be
collected anywhere and information and processes to be delivered anywhere. While
wireless is not strictly required to implement PLM, the requirement to collect data and
disseminate information and processes only where there is wired access would restrict its
usefulness. Wired access generally means human intervention, if only to plug in the
device. In addition to the ‘anywhereness’ that wireless allows, it also allows for
non-mediated communication.

6 Conclusion

Product lifecycle management (PLM) is a ‘next-step’ paradigm in the making. It is an attempt to ‘de-silo’ the data of various functional areas and create autonomous objects that carry the information of their real world counterparts. The PLM Model and the Mirrored Spaces Model (MSM) are proposed ways of conceptualising this paradigm.
If we think of the information about an object being extracted from it and organised so
that it mirrors that object in virtual space, then we can use that information and processes
developed for that object throughout its life to substitute for the expenditure of time,
energy, and physical materials.
MSM is not a unique concept. It mirrors what we do as humans. We create virtual objects of real world counterparts, and we update them to reflect the changes that occur to those counterparts. However, we lack scalability. We can neither handle the requirements for
all the objects that exist nor can we share this information very well. Virtual spaces
created and controlled on computers can.
PLM is in its infancy and has many hurdles to overcome to live up to its full potential,
although sub-optimal implementations would still be useful. While the social aspects of
PLM may dwarf the technical issues and deserve their own investigation, this paper has
concentrated on some of the technical issues generated by the proposed models that
might gate the development of PLM. These issues are digital representation, database
organisation, visualisation, and space-linking technologies.

References
Bijker, W.E. (1995) Of Bicycles, Bakelites, and Bulbs, The MIT Press, Cambridge, MA.
Brynjolfsson, E. and Hitt, L.M. (1998) ‘Beyond the productivity paradox’, Communications of the
ACM, Vol. 41, No. 8, pp.49–55.
Casey, E.S. (1976) Imagining: A Phenomenological Study, Indiana University Press, Bloomington,
pp.6–22.
Davis, M.S. (1971) ‘That’s interesting!’, Philosophy of the Social Sciences, pp.309–344.
Evans, P. and Wurster, T.S. (1999) Blown to Bits: How the New Economics of Information Transforms Strategy, Harvard Business School Press, Boston.
Face value: jolly boating weather (2003) The Economist, 4–10 January, p.53.
Fenves, S.J., Sriram, R.D., Sudarsan, R. and Wang, F. (2003) ‘A product information modeling
framework for product lifecycle management’, Paper presented at the International
Symposium on Product Lifecycle Management, July 16–18, Bangalore, India, pp.1–10.
Geertz, C. (1973) The Interpretation of Cultures, Basic Books, New York.
Giddens, A. (1984) The Constitution of Society: Outline of the Theory of Structuration, University of California Press, Berkeley.
Gilder, G.F. (2000) Telecosm: How Infinite Bandwidth Will Revolutionize Our World, Free Press,
New York.
Gould, L.S. (2002) ‘PLM: Where product meets process’, Automotive Design & Production,
Vol. 114, No. 6, pp.43–45.
Grieves, M.W. (2000) Business is War: An Investigation into Metaphor Use in Internet and
Non-Internet IPOs, Unpublished EDM diss., Case Western Reserve University, Cleveland.
Iansiti, M. (1998) Technology Integration: Making Critical Choices in a Dynamic World, Harvard
Business School Press, Boston, Mass.
Lakoff, G. and Johnson, M. (1980) Metaphors We Live By, University of Chicago Press, Chicago.
Lakoff, G. and Johnson, M. (1999) Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought, Basic Books, New York.
Latour, B. (1999) Pandora’s Hope: Essays on the Reality of Science Studies, Harvard University Press, Cambridge, MA.
Ragowsky, A., Stern, M. and Adams, D.A. (2000) ‘Relating benefits from using IS to an
organization’s operating characteristics: interpreting results from two countries’, Journal of
Management Information Systems, Vol. 16, No. 4, pp.175–194.
Ross, J.W. and Weill, P. (2002) ‘Six decisions your IT people shouldn’t make’, Harvard Business
Review, Vol. 80, No. 11, pp.84–91.
Sarma, S.E., Weis, S.A. and Engels, D.W. (2002) RFID Systems and Security and Privacy
Implications, Auto-ID Center, MIT, Cambridge, MA.
Schon, D.A. (1993) ‘Generative metaphor: a perspective on problem-setting in social policy’, in
Ortony, A. (Ed.): Metaphor and Thought, 2nd ed., Cambridge University Press, New York,
pp.137–163.
Shannon, C.E. and Weaver, W. (1949) The Mathematical Theory of Communication, University of
Illinois Press, Urbana.
Stackpole, B. (2003) ‘There’s a new app in town’, CIO Magazine, May 15, pp.92–101.
Sussman, D. (2002) ‘Survey says PLM gaining momentum’, MSI, Vol. 20, No. 9, pp.27–30.
Weick, K.E. (1989) ‘Theory construction as disciplined imagination’, Academy of Management
Review, Vol. 14, No. 4, pp.516–531.

Notes
1 Stackpole attributes the bulk of this definition to the University of Michigan Product Lifecycle Management Development Consortium (PLM DC). The author is a Co-Director of the PLM DC and a developer of the definition.
2 This figure is only meant to illustrate the concept and is not an actual cost calculation for any particular task. Given the costs spent on information technology systems, the implicit belief that the cost of information is substantially less than the cost of wasted time, energy, and material is widely prevalent in practice. Also, any human task, no matter how inefficient, uses information; however, that information would be represented equally in both bars and hence can be removed from both to focus on the addition of new information substituting for wasted time, energy, and material. A much fuller treatment of this is contained in the author’s forthcoming book, PLM: Driving the Next Wave of Productivity through Product Lifecycle Management (McGraw-Hill).
3 ‘Colossal Cave’ was an example of an early computer game that utilised visualisations generated by words. ‘Pong’ was an example of an early computer game that used graphic abstraction, two lines and a dot, to visualise a ping-pong game.
4 See EDS Press Release, Wednesday 19th November, 9:02 a.m. ET, Global Manufacturers and Product Lifecycle Management (PLM) Industry Leaders Unite Behind Common Data Format for Visual Collaboration and Interoperability (contains a quote by the author).
