Ecommerce Technological Aspects
E-commerce is a massive growth area, where colossal sums of money are being made and spent
every day. This is largely due to the popularity of the Internet and on-line shopping. The Internet
is growing exponentially, and will continue to grow for some time to come. This growth, coupled with
good advertising, can provide a solid foundation from which to stake a claim in the Internet and
e-commerce boom.
Out-of-the-box is an approach that eliminates, or at least reduces, time-consuming and complex e-commerce
site building, by providing all of the various functions of an e-commerce solution
from one source. Out-of-the-box is an "instant", low-price solution requiring minimal IT
resources for implementation and operation. It has a "Lego" mentality, shielding the user
from heavily involved technical programming and scripting simply by assembling the
various software components into an established framework. This is done using a graphical user interface (GUI),
where the developer moves the software objects around, analogous to a child with Lego bricks,
snapping them into place and building a functional entity. Its features include the following:
Development Tools - Commerce Server 2000 gives developers the power to quickly build and
deploy effective e-commerce sites. Developers get a fast start with a choice of two starter
applications that provide comprehensive e-commerce functionality, including:
· Personalisation, merchandising, catalogue search, customer service, and business analytics.
· Secure user authentication and group access permissions, purchase order and requisition handling.
· Built-in, online auction capabilities.
· Code samples and in-depth documentation to speed up and simplify development efforts.
Administrative Tools - simplify and centralise administrative tasks such as site configuration,
deployment, operations and maintenance, reducing cost of ownership, and increasing application
availability.
Partners – Microsoft, in partnership with independent software vendors, ensures the highest
availability and quality possible for solutions, including credit card validation, taxation, shipping
and handling, and content management.
Profile System - helps you manage information about millions of users and groups of users. In
addition, it establishes secure authentication to ensure users only have access to authorised
areas of your site. You can also serve users custom content, such as special pricing or products,
based on the user's profile.
Targeting System - enables one-to-one marketing, determining the most appropriate content
(ads, discounts, up-sells, cross-sells, catalogue data, and more) to provide to a given user in a
given context, based on explicit user preferences.
Push-Pull Technology
Currently, one of the most fashionable technologies within the Internet is “Push” technology.
Contrary to the “Pull” world of web pages, where users request data from another program or
computer via a web browser, “Push” enables services to be targeted at the user without them
having to initiate the information-collection activity. Instead, information finds the user. In other
words, data automatically retrieved from the Internet, corporate data sources and e-commerce
web sites is delivered directly to specific user populations in a personalised manner.
“Push” technology allows you to become an integral part of your customers' daily lives by
reinforcing your brands and services directly to them every day. Key messages, personalised
information that they have requested, and critical information can be delivered to their desktop,
screen saver, any wireless device, mail account and more. “Push” amplifies and extends your
current Web presence while providing new and valuable services. Your customer is directed
back to your Web site for more in-depth information. This technology eliminates the need to wait
for customers to visit your site, instead allowing an organisation to take its business to its
customer base.
In order for companies to be able to use this technology, they require their customers to
download and install a piece of client software onto their computers. The software interacts with
the Web, and provides the interface through which context sensitive content is delivered.
E-commerce is changing the way business is done in the Information Age. To gain a
competitive edge, businesses need new ways to get ahead of the competition, new
models and a new infrastructure. To address this need, an inter-organisational electronic
commerce model is being developed.
According to this model, different users are represented by autonomous software agents
interconnected via the Internet. The agents act on behalf of their human users/organisations to
perform information-gathering tasks, such as locating and accessing information from various
sources, filtering unwanted information, and providing decision support.
Currently, existing software agents mostly help to search for product and price information and
validate a purchaser's credit, billing and accounting information. However, soon will come the
time when agents will be able to match buyers and sellers based on some criteria, find prices and
make bids on behalf of users, notify them of new books or CDs, notify them when specific products
are available at a specific price, and so on.
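As a simple illustration of this idea, the sketch below (in Python, with made-up shop names and prices standing in for real data sources) shows an agent that watches several sellers and reports when a product drops below the price its user is willing to pay.

# A toy "shopping agent": it polls a set of sellers and reports any offer
# below the price the user is willing to pay. The seller data here is
# hard-coded purely for illustration; a real agent would query live
# web services instead.

def gather_offers():
    # Hypothetical price lists from three sellers.
    return {
        "BookWorld": 12.99,
        "NetBooks": 10.49,
        "PageTurners": 11.75,
    }

def agent_watch(product, target_price):
    offers = gather_offers()
    matches = {shop: price for shop, price in offers.items() if price <= target_price}
    if matches:
        best_shop = min(matches, key=matches.get)
        print(f"Agent: '{product}' available from {best_shop} at {matches[best_shop]:.2f}")
    else:
        print(f"Agent: no seller currently offers '{product}' at or below {target_price:.2f}")

agent_watch("E-Commerce Handbook", target_price=11.00)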
A system has been devised that frees a human home owner from having to open their
refrigerator and make a shopping list of low-stock items, and then frees them from having to
leave their home to purchase the required goods!
Granted, the user has to place goods in designated areas for the system to work, e.g. milk would
always have to be in the same place, as would eggs and so on. Sensors would then be able to determine
when items needed replacing and subsequently send a message from the fridge, over the Internet,
to the local grocery store. There, the order is packaged up and delivered to the consumer's door,
thanks to the fridge! This is a somewhat contrived method of software agency, since it lacks any
real intelligence, but perhaps in the future, software agents will be able to search the databases of
several stores within a given area, and then order the stock from the cheapest provider. Why stop
at milk, indeed why stop at food? Imagine being able to pick up the telephone and dial an
international number, while a software agent interfaces on behalf of the caller to collect bids from
various carriers, assesses the bids and selects the cheapest per-minute rate!
In an e-commerce context, software agents can revolutionise the way we trade on the Web. Perhaps in
the future we will see dramatic developments that will have agents operate without direct intervention
from humans (apart from the initialisation stage), having them take control of auctions or internal state;
negotiate air fares, car prices, book prices and holiday deals etc. Who knows what the future may bring!
Backup
One of the most overlooked yet most important aspects of ensuring data integrity is backup. The
commercial world is replete with horror stories of failures where no backups existed. One company, after
losing all records in a fire, was reduced to calling clients and asking them if they owed money
and how much! Not surprisingly, the company went to the wall.
Backup, in simple terms, is storing a copy of your data so that, in the event of a disaster, the data can
be restored quickly and completely.
What to Backup
When backing up data, it makes sense to backup only the data that is being created or modified
daily by your organisation. This will include:
· Customer database
· Document templates
· Inventory system
· Presentations
· Users data
· Registry
It doesn't make sense to backup the entire contents of the hard drive; this will take too long and
will use up a substantial amount of magnetic storage, therefore costing more money. Should a
hard drive fail then the operating system and applications can be restored from the original CDs.
Only the data items (and any similar ones) from the bulleted list, should be backed up.
Frequency
How often backups are administered is entirely at the discretion of the organisation. In most
cases, it makes sense to backup data on a daily basis, however, single users or small
organisations, may backup every second day or perhaps once per week.
File Management
Managing backups is made significantly easier, if the data to be backed up is stored centrally,
i.e. on a file server. If data is scattered throughout the organisation, then it becomes much more
difficult for administrators to track the data. Granted, it is still possible, but administrators cannot
legislate for users who create new data objects without informing backup operators. So, all in all,
it is far simpler for users to be informed of designated storage areas. Should they fail to comply
with the guidelines, then loss of data shall be their own responsibility.
Devices Hardware
Like frequency of backups, there are no hard and fast rules regarding the deployment of a
backup device or backup media type. If your organisation is small, and you work with very few
files, it may be possible to save these to floppy disks. SOHO (small office home office) users
may use CD burning technology to backup files, others may use an external zip drive, like the
Iomega Zip range.
These devices connect to the parallel port or the USB port and are like floppy drives, except that
the earlier version holds 100MB and the newer version 250MB. It is also possible to
obtain internal versions of these drives, which require a spare 3.5-inch drive bay and an IDE
connector. Some users favour a removable hard disk drive for backup.
However, it is generally accepted that tape drives are the devices of choice, and they are certainly by
far the most popular, because they can store large amounts of data at relatively
low cost. There are a variety of media types; the most common are listed here.
· Digital Linear Tape (DLT) Among the newest technologies available, can store 70 gigabytes
of data on a single tape, and boasts transfer rates of 5Mbps.
· Quarter-Inch Cartridge (QIC) Among the oldest of the tape formats. Capacity ranges from
40 Mb to 5 gigabytes. QIC tapes are among the most popular tapes used for backing up personal
computers, but rarely used for backing up network servers.
· Digital Audio Tape (DAT) High-speed format most commonly seen in 4mm variety. Tapes
are slightly larger than a credit card and can hold from 2 to 24 gigabytes of data. Transfer rates of
about 2Mbps are supported.
Procedure
The backup process is a straightforward one. The administrator waits for a period of inactivity
on the network, usually after working hours when all users have gone home for the night. This is
done for at least two reasons: the network will perform sluggishly if the server is engaged in an
intensive backup operation, and files left open by users will not be backed up, so a complete
backup set will not exist.
The tape drive, installed on the server (or attached externally to the server), has a tape inserted.
Next, the administrator opens the backup utility software, which displays all files and folders on
the server hard drive, similar to Windows Explorer. The administrator then selects the various
folders, sub-folders and files to be backed up, gives the backup set a name, ensures
that the tape drive is the target device, and then clicks the "Run backup" or "OK" button. The
process is started. Once the backup is complete, the administrator removes the tape from the
drive and stores it offsite. Offsite storage is important, because the same flood or fire that
destroys the equipment will also take out the backups, nullifying the entire
process.
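As a rough sketch of the selective approach described above, the following Python script archives only a handful of nominated data folders into a dated backup set; the folder names and destination are placeholders, and a real installation would write to a tape device or dedicated backup target rather than a local file.

# Minimal selective backup: archive only the designated data folders
# into a single dated archive. Paths are illustrative placeholders.
import tarfile
from datetime import date
from pathlib import Path

FOLDERS_TO_BACK_UP = ["customers", "templates", "inventory", "users"]

def run_backup(source_root, destination_dir):
    destination = Path(destination_dir)
    destination.mkdir(parents=True, exist_ok=True)
    archive_name = destination / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive_name, "w:gz") as archive:
        for folder in FOLDERS_TO_BACK_UP:
            folder_path = Path(source_root) / folder
            if folder_path.exists():
                archive.add(folder_path, arcname=folder)
    return archive_name

if __name__ == "__main__":
    print("Backup set written to", run_backup("/srv/data", "/mnt/backups"))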
Access Rights
Without controls, it is entirely possible for all users to view any part of the server or network they choose. This
immediately raises a red flag, because it is a massive security hazard. On the whole, employees
can be trusted, however there are always the odd one or two who can't be. So, measures must be
taken against this small minority, who can cause considerable damage through file deletion
and/or theft. This is done by denying users access to sensitive parts of
the network, which can include drives, folders and individual files. Users can also be denied access
to devices such as printers that may be used to print sensitive material.
Various operating systems use different methods for granting and revoking rights.
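How rights are revoked varies by platform; as one hedged example, the Python snippet below uses Unix-style permission bits to restrict a sensitive folder so that only its owner can read or modify it. The path is a placeholder, and on a Windows NT-style server the equivalent job would be done with NTFS permissions rather than these POSIX bits.

# Restrict a sensitive directory so only its owner can enter or change it.
# The path is a placeholder; this uses POSIX permission bits, so it applies
# to Unix-like servers rather than NTFS access control lists.
import os
import stat

def restrict_to_owner(path):
    # rwx for the owner, nothing for group or other users.
    os.chmod(path, stat.S_IRWXU)

restrict_to_owner("/srv/finance/payroll")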
File Management
File management is also a very important part of maintaining security and integrity on the
network. There are several areas where this concept can be applied.
Besides protecting the overall system, each user also requires security for their files. This is an
area that ought not to be overlooked; users are just as important as any other aspect of the
organisation. When users are given a login, it is important to ensure that they are given their own
private area of the server in which to store their files, called a home directory. Permissions
should then be applied to the folder, as above, in order to protect each user's files from all other
users on the system. This will maintain privacy, and will prevent unauthorised access,
modification or deletion of users' personal and important files.
Version Control
One of the great benefits of IT is the ease of sharing files. However, if users are working from
different applications, like MS Word and Lotus Word Pro, then compatibility issues are
introduced. In this case, operations will be at best cumbersome and at worst unworkable.
Conversion between the two formats is possible; however, users with basic skills will not
possess the know-how to make such conversions, disrupting the network's smooth
operation. Also, it is important to ensure that the organisation's application packages are
consistent in terms of their version or release. For example, Word97 may be unable to open files
created in Word2000, and again integrity is compromised and disorder introduced.
Read Only
Deleting important files can bring down an entire system. Granted, this is not possible in a
client/server network, because any administrator worth his salt will have prevented access to any
portion of the hard disk drive containing system files by employing permissions and so on.
However, the suggestion is still a worthwhile one, because users may well be using a peer-to-peer
network or a standalone machine. Enabling the read-only attribute can prevent files from
being deleted and will maintain the integrity and security of the system.
To do this, right-click the file in question and select Properties from the pop-up menu. On the dialog
that appears, ensure that the General tab is selected, and then tick the Read-only checkbox.
Hidden Files
Besides making files read-only, it is also possible to hide them from view. Even if the user opens
the folder containing the files, they cannot be seen. This technique is widely used as a default
setting for important operating system files at installation. Files like msdos.sys and io.sys, which
are crucial to the system, are hidden, and thus are far less likely to be accidentally manipulated or deleted.
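The same attributes can also be set without the GUI. Below is a minimal sketch, assuming a Windows machine and an illustrative file name; FILE_ATTRIBUTE_READONLY and FILE_ATTRIBUTE_HIDDEN are the standard Win32 attribute flags.

# Mark a file read-only and hidden on Windows, mirroring the Properties
# dialog described above. The file name is purely illustrative.
import ctypes

FILE_ATTRIBUTE_READONLY = 0x01
FILE_ATTRIBUTE_HIDDEN = 0x02

def protect(path):
    attributes = FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_HIDDEN
    if not ctypes.windll.kernel32.SetFileAttributesW(path, attributes):
        raise ctypes.WinError()

protect(r"C:\data\important.doc")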
Integrity
Plug-ins
These days the Web can support many different file formats and media types, like Flash movies, midi,
MP3, QuickTime movies etc. If the Web browser accessing the site is not capable of running these
formats, then a range of problems can occur, such as the browser freezing.
One may think that these problems lie with external users accessing the site, and do not directly affect
the company. Even if this were the case, it hardly covers the organisation's reputation in glory. The idea
of a Web site is to encourage people to visit, not deter them. So it is in the organisation's interest to
ensure that the site is well designed and functions properly.
It is not just external users that may suffer, because members of the organisation may well access the
site from a computer within, therefore affecting the company directly. Testing should be carried out on
all aspects of the site, and the most up-to-date browsers installed (they are free). If the latest browsers
do not contain the required plug-in, it is best that the administrator downloads and installs it on all
machines that access the web. This shields users from any hassles involved in installing the plug-in
and, most importantly, protects them from any unpredictable and undesirable results that may cause
disruption to their system. It is unlikely that such disruption will cause permanent file loss or damage,
but a troubled machine will not permit the user to save current work, which is therefore lost.
Compatibility
There are many compatibility issues that threaten the integrity of a company website.
Different browsers can display Web pages differently. This is unavoidable. However, in spite of the slight
differences in how the pages are displayed, it is important that they can be viewed. It is well
documented that a Web site appearing perfect in one browser, often will not display in another. This is
particularly true with Netscape Communicator, because it is very unforgiving of even the slightest
mistakes in the coding. Website components and coding, like Cascading Style sheets, HTML and
JavaScript often perform differently in different browsers, sometimes not at all. In extreme cases, these
components can cause a system to hang.
Also, some sites cannot be viewed properly at monitor resolutions of 800 x 600 pixels or less. Other factors, like
how long it takes for a page to appear on the screen, are affected by the computer's modem speed; the slower
the modem, the longer it takes to load the Web page. These facts bring the integrity of the site into
question.
Web sites should be viewed and tested on as many different Browser brands as possible (e.g. Netscape
and Explorer), and also as many different versions as possible. Furthermore, testing should be carried
out on a range of different resolutions and accesses should be attempted with modems of varying
speeds. It is imperative that the site's integrity can be maintained on as many different configurations,
combinations and platforms as possible. It is important to cater for as much diversity as possible,
ensuring that the company appeals to as wide an audience as possible.
Applying these stringent tests helps to ensure the site's integrity across third-party browsers and platforms.
Since our data is at great risk, other methods besides those currently used need to be developed,
since crackers are continually finding their way through the web of security infrastructure
standing in their way.
An object, say a car that has been stolen, is easily detectable. From the instant that you leave
your house and look in the general direction of your parked car, you know immediately that it
has been stolen. However, in an IT scenario, electronically stored information does not have to
be removed but simply copied. The point in all of this is simple, stolen data (through copying)
may not be detected until the thief uses the data and the ramifications are realised.
Digital Watermarking
Digital watermarking is an adapted version of paper watermarks to the digital world. Digital
watermarking describes methods and technologies that allow us to hide information, for example
a number or text, in digital media, such as images, video and audio. The embedding takes place
by manipulating the content of the digital data. The hiding process has to be such that the
modifications of the media are indiscernible.
There are a number of possible applications for digital watermarking technologies and this
number is increasing rapidly. For example, in the field of data security, watermarks may be used
for certification, authentication, and conditional access. This means that the data could only be
used in certain situations and recognised by certain systems running a watermark detection
program attempting to validate and recognise the watermark. Say, for example, database records
containing highly sensitive material are stolen. The information could not be used, because the
data would have to be validated by the target system's detection program; since the
watermark cannot be validated on the thief's machine, the data is rendered useless.
Furthermore, digital watermarks can help prevent unauthorised users from changing the data undetected. For
integrity verification purposes, the data can be run through the watermark detection program
and any changes to the data exposed.
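As a very small illustration of the principle (not of any particular commercial scheme), the sketch below hides a short identifier in the least significant bits of an 8-bit greyscale image held as a list of pixel values; changing any watermarked pixel would alter the recovered bits, which is what makes tampering detectable.

# Toy least-significant-bit watermark: embed the bits of a short tag in
# the lowest bit of successive pixel values, then read them back.
def embed(pixels, tag):
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    marked = list(pixels)
    for index, bit in enumerate(bits):
        marked[index] = (marked[index] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract(pixels, length):
    bits = [p & 1 for p in pixels[:length * 8]]
    data = bytearray()
    for start in range(0, len(bits), 8):
        data.append(sum(bit << i for i, bit in enumerate(bits[start:start + 8])))
    return data.decode()

image = [120, 121, 119, 130, 128, 127, 125, 131] * 10  # stand-in for real pixel data
marked = embed(image, "ACME1")
print(extract(marked, 5))  # recovers the hidden tag: ACME1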
Networks
A network is basically the interconnection of related parts, grouping them together in common
functionality. It is a system of lines or channels that cross or interconnect various points, called nodes.
These nodes can be stations on a rail network, or cities connected by roads. We make use of networks
every day, the telephone network being another example.
Computer networks are the same in principle as all other networks, in that they interconnect
computers and other peripherals, as opposed to cities or stations.
LANs
Computers, when interconnected in this way, in the same geographical region, are called Local Area
Networks (LANs). These networks usually belong to a single company or organisation, and occupy the
same building or campus.
The diagram below shows a company LAN that has mini LANs in each department with their own server
machine. Each mini LAN is connected to a central hub/switch, which also facilitates the connection of
the main server. In a situation such as this, each department would have their own hardware and
software requirements controlled by their own local server. This prevents the main server, and the
network overall, from becoming congested, thus improving performance. The technique is known as
sub-netting.
WANs
Wide Area Networks (WANs) are the interconnection of multiple networks spread over a much wider
geographical region. This can be across a city, country, continent or even the entire globe. In order to
facilitate the connection of these smaller networks to each other, the use of a communication
infrastructure is sought. A telecommunications company like Mercury or BT, to mention only two,
provides the services for such connections, albeit at a price.
Communication Infrastructures
There are various technologies available, each varying in performance and cost.
PSTN
The cheapest by far is the public switched telephone network (PSTN). This is the same medium that
carries telephone conversations and, in terms of performance, though workable for the home
user, it is not a viable commercial solution. It has a bandwidth of 56Kbps using compression
techniques, however, due to various environmental factors like line quality and control features,
this capacity is theoretical. Access to this medium is acquired through the use of a device called a
modem. A modem connects the PC to the phone line and acts as an analogue to digital converter.
ISDN
ISDN (Integrated Services Digital Network) is a faster more reliable solution, that makes use of a
fibre optic transmission media (cable). The key features of ISDN are:
· All-digital interfaces - no need for analogue-to-digital conversion equipment (modems)
· Very fast call set-up time, as opposed to around 30 seconds for modem connections
ISDN2 - This solution offers 2 x 64Kbps channels (called B channels) and one 16Kbps channel
used for control purposes. Under certain circumstances it is possible to use the control channel
(called the D channel) to carry data, providing a total of 144Kbps.
ISDN30 - This solution provides 30 B channels, allowing for a total cable capacity of 2Mbps,
depending on the number of channels in use at any one time. As each channel is brought into use
the capacity increases, but so does the cost.
xDSL
The latest digital solutions come in the form of the xDSL (Digital Subscriber Line) range, and
are a more likely solution for the business user. xDSL, like ISDN, is an always-on system,
eliminating the need for dial-up. xDSL comes in several flavours. Depending on location and
requirements, one of the following should be considered:
Cable
At 512Kbps, cable boasts even greater speeds than some xDSL offerings. This type of
Internet connectivity uses coaxial cable, the same cable that carries TV pictures into the home.
The PC is connected to the cable box via a length of cable going into the PC's network card (for
home users).
When these services are enlisted, it is important to understand that although the connection
behaves as though it were a dedicated link between the two entities, in fact it is not. The data is routed
through many switching boxes and over many different cable segments before it reaches its
destination (see PPTP diagram).
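The practical difference between these options is easiest to see as a simple calculation; the sketch below compares how long a 5MB file transfer would take at each nominal connection speed (nominal line rates only, ignoring the overheads and line-quality factors mentioned above).

# Rough transfer-time comparison for a 5MB file at nominal line speeds.
# Real throughput will be lower due to protocol overhead and line quality.
FILE_SIZE_BITS = 5 * 1024 * 1024 * 8

connections = {
    "PSTN modem (56Kbps)": 56_000,
    "ISDN2, both B channels (128Kbps)": 128_000,
    "Cable (512Kbps)": 512_000,
    "ISDN30 / 3G upper limit (2Mbps)": 2_000_000,
}

for name, bits_per_second in connections.items():
    seconds = FILE_SIZE_BITS / bits_per_second
    print(f"{name}: about {seconds / 60:.1f} minutes")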
Why can't we simply do our business on a day-to-day basis on standalone machines, without
enlisting the services of networks? The answer to that question is simple: networks provide so
many benefits that they simply cannot be ignored. Granted, their design, implementation and
cost do present barriers, but these barriers are far from insurmountable. In fact, with
correctly skilled professionals in place, the barriers are really not barriers at all. The only real
obstacle is cost. However, the benefits far outweigh the constraints imposed by cost.
· Application Sharing - Groups of users can get access to the set of applications installed to the
server. This eliminates the need to install programs on multiple machines. Also, the server is able
to keep track of how many users are accessing any program, and can prohibit access to users as
licensing permits.
· Device Sharing - Groups of users are able to take advantage of printers, scanners, fax
machines and other devices that can be attached to a network. Companies can buy far fewer
devices and spend more on each one, so that better capabilities, and higher levels of service are
available. Also, it means that costly devices will be utilised more in a shared environment, thus
justifying their high costs.
· Compatibility - Since software applications are installed and maintained centrally, it means
that users will have access to a standard set of tools and will eliminate diverse formats. Upgrades
carried out by administrators need only be performed once on the server and the new software is
available to all on the network.
· Security - Each user can only gain access to a network by virtue of an account. Each machine
will display a screen asking the user for a username and password before gaining access to the
network and its resources. Also, the ability to apply permissions on shared items and data items,
prohibits users from gaining unauthorised access to sensitive materials or devices that they are
not permitted to use. Accounts can also be configured to force users to change their password at
regular time intervals, and prevent them from logging onto certain machines or at certain times.
· Internet Access - With the proper equipment and software in place, it is possible to connect a
network to the Internet either as part of the Internet, or to simply allow users to access the
Internet from their place of work. This is advantageous, since it gives users a much wider base
for acquiring information and other resources like drivers and software utilities etc.
Network Types
Networks fall into two major types: peer-to-peer and client/server (sometimes called server-
based).
Peer-to-Peer Networking
This is a simple network configuration that requires some basic know-how to set up. Each of the
interconnected machines share dual capability and responsibility on the network. That is to say,
that each machine serves a dual purpose or role, i.e. they are both clients and servers to some
extent.
The server capability of the machines is very basic. The services provided by each are no more
than the ability to share resources like files, folders, disk drives and printers. They can even
share Internet access.
However, the server functionality of these machines stops there. They cannot grant any of the
benefits mentioned previously, since these are functions provided only by a dedicated server
operating system.
Because all machines on the network have equal status, hence the term peers, there is no
centralised control over shared resources. Sharing is granted or revoked by each machine's
user. Passwords can be assigned to each individual shared resource, whether it is a file, folder,
drive or peripheral, again by the user.
Although this solution is workable on small networks, it introduces the possibility that users may
have to know and remember the passwords assigned to every resource, and then re-learn them if
the user of a particular machine decides to change them! Due to this flexibility and individual
discretion, institutionalised chaos is the norm for peer-to-peer networks.
Security can also be a major concern, because users may give passwords to other unauthorised
users, allowing them to access areas of the network that the company does not permit.
Furthermore, due to lack of centralisation, it is impossible for users to know and remember what
data lives on what machine, and there are no restrictions to prevent them from over-writing the
wrong files with older versions of the file. This of course cripples attempts to organise proper
backups.
It may appear that peer-to-peer networks are hardly worthwhile. However, they offer some
powerful incentives, particularly for smaller organisations. Networks of this type are the cheapest
and easiest to install, requiring only Windows95, a network card for each machine and some
cabling. Once connected, users can start to share information immediately and get access to
devices.
However, networks of this type are not scalable, and a limit of no more than 10 machines is the
general rule.
Network Design
A vital part of network design is to use a layered reference model. In other words, processes (like
creating a program), can be more easily managed if they are broken down into layers or modules,
where each of the layers communicate with the layer directly above and beneath itself. This
permits designers to work at any stage in the development of a project and to divide the design of
the network into more manageable chunks.
In the early/mid 1980s the International Organization for Standardization (ISO) provided such a model. The model,
called the Open Systems Interconnection (OSI) seven-layer model, aided network designers and vendors
in standardising networking protocols and equipment. The model also provides an invaluable
tool to aid students in understanding networks and how everything fits together. The
following is a very brief description of each layer.
· Application Layer
This layer interacts with the user to create the message to be sent over the network. It provides
the link between the user's application package and the communications system. Services such as
file transfer and electronic mail are supported at this layer.
· Presentation Layer
Ensures that machines with different data representations can still pass the same meaning
from one user to another. This layer also provides facilities like compression/decompression,
encryption/decryption and terminal emulation.
· Session Layer
Responsible for managing the dialogue between the two communicating stations, in particular to:
· Establish a connection
· Maintain the connection
· Release the connection when the exchange is complete
· Transport Layer
Provides end-to-end delivery of messages between the communicating stations, segmenting and
reassembling data and providing error recovery and flow control.
· Network Layer
Adds unique addressing information to packets so that they are routed to the correct receiving
station on another network. It is responsible for logical addressing and routing.
· Data Link Layer
Responsible for creating, transmitting and receiving data frames. A checksum for error detection
is added to the frame, which is then sent to layer 1 for transmission.
· Physical Layer
Concerned with moving data between the stations and the medium that connects the stations.
This layer defines the electrical (i.e. voltage) and mechanical (i.e. pin wiring) requirements for
connecting the equipment to the medium.
All networking-related concepts and devices operate at one or more of these layers,
allowing designers to categorise problems and tackle them logically.
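To make the data link layer's job a little more concrete, here is a minimal sketch of framing with an error-detecting checksum; the simple summation used here is only illustrative, since real link layers such as Ethernet use a CRC.

# Minimal framing sketch: append a simple checksum to a payload, and verify
# it on receipt. Real data link layers (e.g. Ethernet) use a CRC instead.
def checksum(data: bytes) -> int:
    return sum(data) & 0xFFFF

def build_frame(payload: bytes) -> bytes:
    return payload + checksum(payload).to_bytes(2, "big")

def check_frame(frame: bytes) -> bool:
    payload, received = frame[:-2], int.from_bytes(frame[-2:], "big")
    return checksum(payload) == received

frame = build_frame(b"ORDER: 12 cartons of milk")
print(check_frame(frame))            # True - frame arrived intact
corrupted = b"X" + frame[1:]
print(check_frame(corrupted))        # False - error detected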
Topology
A network topology is the physical layout of the network. In networking there are three main
topologies in use; Bus, Star and Ring.
Bus
The bus topology is by far the most popular method for connecting computers. All components
of the bus topology are connected via a backbone which is a single cable segment connecting all
computers in a straight line (theoretically). On bus networks, the signal transmitted by a
computer, is propagated along the entire length of the network, and is thus called a broadcast
system, because all other nodes hear the transmission.
Star
The star topology is when each network component is connected by a cable segment to a central
hub. Some confusion exists with regard to the star topology, with two descriptions being
applied. Firstly, it is stated that the signal sent from one computer to another is received by the
hub and directed to the intended node; this is known as a directed
system. Secondly, some sources say that all nodes connected to the hub hear the
transmission, with only the intended node actually accepting the packet, i.e. a broadcast
system. In fact both can be correct: a simple hub repeats the signal to every port (a broadcast
system), whereas a switch forwards the packet only to the intended node (a directed system).
Ring
Ring topology networks are created when a computer is connected directly to the next computer
in line, forming a circle of cable. As each computer receives the signal, it acts on it, regenerates
it, and passes it along. Signals travel in only one direction on the ring. This topology is used by
Token ring networks, see later.
Intranets
Many companies are now turning to intranets as a means of sharing information among
company employees. Like all larger networks, it is based on the client/server model, with a
server machine at its heart. An intranet is a network that runs principally like the Internet,
however, it remains private and is not accessible to the public.
When users access the network, they are greeted with a Web browser interface. That is to say,
rather than using the standard Windows desktop, users access files, databases, e-mail, printers
and other resources via the Web browser software, just as though they were surfing the Web.
In order for your company to set up an intranet, it will require a server machine, as mentioned,
which must be configured as a Web server. That is to say, it requires having Web server software
installed. This is not a problem for companies using Microsoft WindowsNT, or later, because
this software comes complete with Web server software free, called IIS (Internet Information
Services).
The server is configured as a Web server, and the company website is uploaded. It is also a
requirement to have TCP/IP installed. TCP/IP is a protocol suite that allows computers to
communicate and transfer data. TCP/IP is the protocol used by the Internet itself. So the
components required for an intranet are:
· A server machine, configured as a Web server
· Browser software, such as Internet Explorer
· The TCP/IP protocol suite
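For a very small intranet the Web server need not be elaborate; as a hedged sketch (using Python's built-in http.server module rather than IIS, and an assumed folder of HTML pages called intranet_pages), the few lines below are enough to publish that folder to browsers on the internal TCP/IP network.

# Serve a folder of intranet pages over HTTP on the local network.
# The directory and port are illustrative; a production intranet would
# normally sit behind a full web server such as IIS.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="intranet_pages")
server = HTTPServer(("0.0.0.0", 8080), handler)
print("Intranet pages available on port 8080")
server.serve_forever()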
Extranet
Although an intranet is a private network, and not accessible to the public since it is not attached
directly to the Internet, it is sometimes required to give outside users access to its services. These
users would be an authorised group, perhaps customers, clients, partners or mobile users. These
users would access the intranet via the normal Internet by a non-public means, and would be required
to log in with a username and password. The login associated with the user will determine what
range of access is afforded to them.
Security of this type is managed by a hardware and software combination that surrounds the
company's resources and protects the network, called a firewall. This is a dedicated machine that
intercepts all incoming traffic and filters through only traffic that is permitted. Its purpose is to
prevent unauthorised external access to the network.
Network Architecture
A network's architecture generally defines its overall structure, including its topology, physical
media, and channel access method. The following is a brief summary of the more popular
architectures used in networking today.
Ethernet
Developed by Xerox, Ethernet is the most popular network architecture today. It has many
advantages, including ease of installation and lower costs. Ethernet is generally less expensive
than most other architectures. Another reason that it is so popular is that it can support the use of
many different media types (cable). Ethernet uses a channel access protocol called CSMA/CD
(Carrier Sense Multiple Access with Collision detection). Simply put, this protocol oversees the
transmission of data across the wire. If a machine is transmitting, it is not possible for another
machine to transmit at the same time. It must wait till the medium is free. If it does transmit, the
data sets will collide, causing a garbled signal. So the role of Carrier Sense is to be able to detect
if the wire is available. Multiple Access permits multiple machines to share the wire, while
Collision Detection takes care of any collisions that do occur, and gives the machines involved
another chance to transmit their data. Ethernet networks run on a bus topology
or, more accurately, a star-bus, which is physically a star but logically operates like a bus, i.e. a
broadcast system. Ethernet operates at speeds of 10Mbps, and newer standards support 100Mbps;
these are the two categories into which Ethernet is divided, based on transmission speed and media use.
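A toy simulation makes the carrier-sense and collision behaviour easier to picture; the sketch below is a deliberately simplified model (one shared wire, one attempt per time step, random back-off) rather than the real Ethernet algorithm.

# Very simplified CSMA/CD model: stations transmit when the shared wire is
# free, and back off for a random time after a collision.
import random

def simulate(stations=3, steps=20):
    backoff = {s: 0 for s in range(stations)}
    for step in range(steps):
        ready = [s for s in range(stations) if backoff[s] == 0]
        for s in backoff:
            backoff[s] = max(0, backoff[s] - 1)
        if len(ready) == 1:
            s = ready[0]
            print(f"t={step}: station {s} transmits successfully")
            backoff[s] = random.randint(1, 3)   # wait before the next frame
        elif len(ready) > 1:
            print(f"t={step}: collision between stations {ready}")
            for s in ready:
                backoff[s] = random.randint(1, 4)   # random back-off delay

simulate()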
10 Mbps Standards
· 10Base5: Ethernet using thick coaxial cable with a maximum segment length of 500mtrs
· 10Base2: Ethernet using thin coaxial cable with a maximum segment length of 185mtrs
· 10Base-T: Ethernet over unshielded twisted-pair (UTP) cable with a maximum cable length of
100mtrs
· 10Base-F: Ethernet over fibre-optic cable with a maximum cable length of 2000mtrs
100 Mbps Standards
· 100VG-AnyLan: Emerging Architecture that is a mixture of Token Ring and Ethernet. Uses
Fiber and UTP. Cable lengths of 100, 150 and 2000 meters.
Token Ring
The Token Ring architecture was developed by IBM in the mid-1980s, providing users with
fast, reliable transport. Token Ring is so called because an empty data frame continually
circulates the network, and any node wishing to transmit, would seize the token, and put its data
onto the network with the address of the intended node. By using the token passing channel
access method, token ring networks ensure that all computers get equal time on the network.
As the frame circulates the network, each node examines the address field of the frame in order
to determine whether or not the frame is intended for it. If not, the node allows the frame to pass
to the next node. This process continues until the intended node receives the frame, at which
point the node takes a copy of the data and releases the frame back onto the network. When it
returns to the originating node, the frame is removed and a new frame is generated. The new
empty frame is left to circulate the network until a node waiting to transmit seizes it.
Unlike Ethernet, there are no collisions, so data seldom has to be re-sent. Because all computers
on the network have equal access to the token, traffic is consistent and token ring handles
increases in network size gracefully.
The newest versions of token ring operate at speeds of up to 16Mbps. Because collisions never
occur, token ring can handle larger packet sizes than Ethernet. This allows large blocks of data
to be transferred. Token ring networks run on a ring topology, or a star-ring topology, i.e. a
physical star configuration, but the hub device logically operates like a ring where the token is
circulated from port to port in an infinite loop.
FDDI (Fibre Distributed Data Interface)
This architecture is installed in high-demand networks, and is a very reliable solution. FDDI
uses the token-passing channel access method while using dual rings for fault tolerance. That is
to say, if one cable breaks, the other is used to work around the problem, thus keeping the
network alive. FDDI transmits at 100Mbps and can include up to 500 nodes over a distance of
100km (60 miles). FDDI networks are wired as a physical ring; there are no hubs, and machines are
generally connected directly to each other by means of fibre-optic cabling.
Unlike token ring, machines in this solution are not required to wait for the token to make a full
circuit before transmitting. When a computer possessing the token has more than
one data frame to send, it can send additional frames before the initial frame completes its
journey. This allows data to be transmitted around the network more quickly. Also, once the
computer has finished sending its data, it can immediately pass the token along; it need
not wait for its frames to complete their circuit around the network. FDDI networks use a ring
topology, and occasionally a star-ring.
Unlike token ring networks, FDDI permits administrators to assign priorities to certain nodes,
for example a server running time-sensitive data or video.
Remote Access
Running a network affords us another tremendous benefit, remote access. Remote access allows
company personnel to log into the network from any location, providing they have a modem and
a telephone connection. It is even possible to connect to the company LAN by using cellular
communications techniques. This requires a mobile phone with an in-built modem connected to
the users laptop. The phone-to-laptop connection can be done via a cable or infrared interface.
The Nokia 7110 is a good example. This type of connection is currently very slow, and it is
recommended for sending and receiving e-mails only.
If remote access is required from a permanent location, e.g. home users, then a faster technology
can be installed, like ISDN, ADSL etc. This will allow teleworkers (those who work from home)
to connect to the network and use its resources as though they were directly connected at an office
location on the company's premises. This technique is known as Virtual Private Networking
(VPN).
Special software, called VPN software, is required to facilitate this process. This is installed on
the user's remote system and not only manages the connection, but ensures that the connection
remains private. Even though the user is using a public communications medium (the Internet),
their transmissions are shielded from others. This is done using a protocol called PPTP (point-to-
point tunnelling protocol). This protocol gives the data security, by encapsulating the data in
encrypted packets, in effect building a tunnel that the data passes through, shielding it from other
users. This is what puts the term "Private" in VPN.
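The tunnelling idea can be sketched in a few lines; the example below (using the third-party cryptography package for the encryption step, with an invented inner message and addresses) encrypts the private data and then wraps it in an outer "envelope" carrying only the public addressing information, which is, in spirit, what the tunnelling described above achieves.

# Sketch of tunnelling: the private payload is encrypted, then carried
# inside an outer packet that only exposes the tunnel endpoints.
# Requires the third-party "cryptography" package (pip install cryptography).
import json
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()       # agreed between client and VPN server
tunnel = Fernet(shared_key)

inner_packet = b"GET /payroll/report HTTP/1.0"          # private company traffic
outer_packet = {
    "src": "203.0.113.10",               # remote worker's public address
    "dst": "198.51.100.5",               # company VPN gateway
    "payload": tunnel.encrypt(inner_packet).decode(),   # unreadable in transit
}

received = json.loads(json.dumps(outer_packet))         # what an eavesdropper sees
print(received["payload"][:40], "...")                  # ciphertext only
print(tunnel.decrypt(received["payload"].encode()))     # gateway recovers the data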
Today, there are many programs available to assist with network management. These programs
can help identify conditions that may lead to problems, prevent network failures, and
troubleshoot problems when they occur.
Monitoring Applications
One program, "Netcracker", is a design application that allows network creators to design a
simulated version of their network before putting the real thing together. Another example is
"ConfigMaker", by Cisco. This program allows the designer to configure network components
by using proper operating system syntax and then tests the implementation as a simulation.
These programs are priced in the thousands rather than the hundreds. However, this is justified
by the amount of time that can be saved by eradicating problems before network installation.
While these programs allow us to build and monitor networks, they are not a comprehensive
solution, and monitoring software should be used in order to continually check the on-going
status of the network. There are many software monitoring packages available. Sun
Microsystems have an entire range, from small LAN management to Enterprise Network
management packages. "Solstice Site Manager™ 2.3" is one example: a state-of-the-art method
for managing sites of up to 100 nodes. It simplifies management of network resources to keep
the network running at peak efficiency.
Causes
There are many factors that can inhibit the performance of a network, leading to a situation
called "bottlenecking" - a sharp and notable reduction in performance. This can be caused by
equipment that is not capable of meeting the demands being placed on the network, such as
network cards, hubs and repeaters. Also, the bandwidth of the cable may not be sufficient for
traffic demands. Users can cause slowdown by playing resource hungry games across the LAN,
or engaging in heavy Internet downloads, like MP3 files and video. The problems can also be
caused by poor LAN organisation, where all nodes populate a single segment, and therefore a
single collision domain. In other words, the network is like a small room packed with lots of
people all talking at the one time, leading to chaos.
Baselines
It is recommended that a baseline be established that will assist the network administrator in
monitoring performance. A baseline defines a point of reference against which to measure
network performance and behaviour when problems occur. In other words, it has to be
established what is "normal" for your network, before it can be determined what is "abnormal".
A baseline can be established by using performance-monitoring software. There may be no need
to buy expensive management software. Users running Windows servers are provided with
integrated management tools at no extra charge. They do not provide the same range or
capability of the higher end solutions, but they are still powerful tools. These tools allow the
administrator to view various logs that maintain error, security and system information. Other
tools can track processor, disk and memory usage and analyse protocol performance.
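A baseline can be gathered with quite modest scripting; the sketch below (using the third-party psutil package, with an arbitrary number of samples and sampling interval) records processor, memory and disk figures that can later be compared against readings taken when the network misbehaves.

# Record simple baseline readings: CPU, memory and disk utilisation.
# Requires the third-party "psutil" package (pip install psutil).
import psutil

def sample_baseline(samples=5, interval_seconds=2):
    readings = []
    for _ in range(samples):
        readings.append({
            "cpu_percent": psutil.cpu_percent(interval=interval_seconds),
            "memory_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage("/").percent,
        })
    return readings

for reading in sample_baseline():
    print(reading)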
Solutions
Trends gathered by these tools can indicate the problems previously mentioned, and can help the
administrator prescribe solutions, such as:
- Increasing memory
- Subnetting (breaking the network into smaller more manageable chunks using routers).
- Preventing users from running power hungry games or applications across the network.
Acceptable Performance
The philosophy of networking is providing the best service at the cheapest price. It is not
difficult to have a high-performance network. All that is required is the best equipment, the best
technologies, the best methodologies and the best personnel to tie it all together. However, in the
real world this is seldom, if ever the case, due to costs. Therefore, a trade-off is sought, and ideal
performance gives way to acceptable performance. As users, we demand the best; we want the
fastest access to resources and faster links to the Internet. We want our applications to run better,
and we want more bandwidth to run multimedia applications. Cost constraints prevent this from
always being possible. In fact, our requirements as users (playing power-hungry games over the
LAN) are often sacrificed to support business needs. Organisations are not willing to spend huge
amounts of money simply to keep their users happy, preferring systems that suit business needs
and get the job done.
In light of the recent tragic events of 9-11, security in all its forms (including security against
cyber intrusion and attack) is more important than ever. Strong encryption technology plays
a key role in such security, helping individuals, businesses, and governments protect
sensitive or personal information against willful or malicious theft. Not surprisingly, then,
nations have increasingly adopted policies that encourage the widespread availability of
encryption tools for consumers. At the same time, they have successfully worked to permit
law enforcement to access encrypted communications in certain critical instances, while
rejecting calls for encryption products to be undermined through the building of “back-door”
government keys.
A firewall is essentially a filter that controls access from the Internet into a computer
network, blocking the entry of communications or files that are unauthorized or potentially
harmful. By controlling Internet “traffic” in a network, firewalls protect individuals and
organizations against unwanted intrusions, without slowing down the efficiency of the
computer or network’s operations. They also limit intrusions to one part of a network from
causing damage to other parts, thereby helping to prevent large-scale system shutdowns
brought on by cyber attacks. Not surprisingly, then, firewalls have become a key component
of computer systems today, and their architecture comprises some of the most state-of-the-
art e-commerce technology available in today’s marketplace.
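Conceptually a packet filter is just a rule table consulted for every connection attempt; the sketch below, with invented addresses and rules, shows the "deny by default, permit what is explicitly allowed" logic that most firewalls apply.

# Toy packet filter: each incoming connection is checked against a rule
# table and denied unless a rule explicitly permits it.
import ipaddress

RULES = [
    {"port": 80, "source": "0.0.0.0/0", "action": "allow"},        # public web traffic
    {"port": 443, "source": "0.0.0.0/0", "action": "allow"},       # secure web traffic
    {"port": 1433, "source": "192.0.2.0/24", "action": "allow"},   # internal admin only
]

def allowed(source_ip, port):
    address = ipaddress.ip_address(source_ip)
    for rule in RULES:
        if rule["port"] == port and address in ipaddress.ip_network(rule["source"]):
            return rule["action"] == "allow"
    return False    # deny everything not explicitly permitted

print(allowed("203.0.113.7", 80))     # True  - web request permitted
print(allowed("203.0.113.7", 1433))   # False - database port blocked from outside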
But computer security, or cyber security, is more than encryption, and it requires more than
a one-time fix. It is an ongoing process requiring the adoption of strong security policies; the
deployment of proven cyber security software and appliances, such as antivirus, firewalls,
intrusion detection, public key infrastructure (PKI), vulnerability management and
encryption; and, in the case of larger organizations, the existence of trained security
professionals. These professionals, in turn, must be continually retrained in order to ensure
that they are able to address and combat the evolving nature of cyber threats.
Strong security tools alone, however, cannot protect users against threats in each and every
instance. Dedicated hackers and criminals will always seek new ways of circumventing even
the most effective security technologies. That is why it is critical that strong laws be put in
place to deter such activities. In particular, where needed, laws should make it illegal to
defeat, hack, or interfere with computer security measures, and penalties for these crimes
should be substantial.
As is the case with copyright laws, however, strong words in a statute are not enough.
Effective antihacking and computer security laws must:
Although the government should create a strong legal framework against cyber crime, it
should not intervene in the marketplace and pick e-commerce technology “winners” by
prescribing arbitrary standards in the security field. Such intervention would do little more
than freeze technological development and limit consumer choice. Instead, the development
and deployment of security tools should be determined by technological advances,
marketplace forces, and individual needs, and should be free of regulation.
M-COMMERCE
Introduction
Rarely has a new area of business been heralded with such enthusiasm as "mobile
commerce", that is the conduct of business and services over portable, wireless devices.
Due to the astronomical growth of Internet users, the maturation of Internet
technologies, realization of the Internet's capabilities, the power of electronic commerce,
and the promising advancement of wireless communication technologies and devices,
mobile commerce has rapidly attained the business forefront. An m-commerce application
can be B2B, B2C or any of the other classifications available in the e-commerce world. M-commerce,
although not fully mature, has the potential to make it more convenient for
consumers to spend money and purchase goods and services. Since wireless devices travel
with the consumer, the ability, or perhaps temptation, to purchase goods and services is
always present. This is clearly a technique that can be used to raise revenue. Also, the
successful future of m-commerce depends on the power of the underlying technology
drivers and the attractiveness of m-commerce applications.
Internet use has grown to such a level on the strength of PC networks. Due to the huge
base of installed PCs, which is predicted to grow at a faster pace in the days to come,
electronic commerce and other communication applications are bound to thrive further.
Also, as these computing systems gain greater power and storage capability and the best-ever
price-performance ratios, more powerful and sophisticated applications will likely emerge for
desktop computing and the Internet. However, there are major limitations on PCs. First,
users have to sit in front of them. Second, PCs, even portable notebook computers, have to load
software, dial into and connect with a network service provider, and wait for this initial
process to complete before launching an Internet application.
It is predicted that by 2004, the installed base of mobile phones worldwide will exceed
1 billion - more than twice the number of PCs at that time. In addition, there will be a
huge increase in other wireless portable devices, such as wireless PDAs. The advantage of
these wireless devices is that they do not need a booting process, facilitating
immediate usage. This makes them attractive for quick-hit applications.
Wireless Technologies
Just as TCP/IP and general-purpose Web browsers are the current principal drivers of
Internet growth, allowing disparate devices to connect, communicate and interoperate,
similar protocols, technologies and software will play a very important role in enabling
heterogeneous wireless devices to interoperate without complexity. In the recent past,
a common communications technology and uniform interface standard for presenting and
delivering several distinct wireless services on wireless devices - the Wireless Application
Protocol (WAP) - has emerged. The WAP specifications include a micro-browser, a scripting
language similar to JavaScript, access functions and layered communication specifications
for sessions, transport and security. These specifications enable interface-independent and
interoperable applications. Many wireless device manufacturers, service and infrastructure
providers have started to adopt the WAP standard.
The transmission rate of current access technologies (2G), such as TDMA, CDMA and
GSM, is dramatically slower (between 10 and 20Kbps) than the dial-up rates of desktop PCs
connected to the Internet. 2G technology has steadily improved, with increased bandwidth,
packet routing and the introduction of multimedia. The present state of mobile wireless
communications is often referred to as 2.5G. It is believed that by the year 2003, 3G
wireless technology will be available for use. This, in addition to higher bandwidth rates, can
take the transmission speed up to 2 Mbps. 3G is expected to facilitate: enhanced multimedia
(voice, data, video, and remote control) transmission, usability on all popular modes (cellular
telephone, e-mail, paging, fax, video-conferencing and Web browsing), routing flexibility
(repeater, satellite, LAN) and operation at approximately 2 GHz transmit and receive
frequencies.
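To put these rates in perspective, a back-of-the-envelope calculation (in Python, with an
illustrative 3 MB file, roughly one MP3 track) shows the practical difference between the
speeds quoted above.

    # Compare transfer times at the 2G and 3G rates mentioned in the text.
    FILE_BITS = 3 * 8 * 1_000_000        # a 3 MB file, expressed in bits (illustrative size)

    for label, kbps in [("2G at ~14.4 Kbps", 14.4), ("3G at ~2 Mbps", 2000)]:
        seconds = FILE_BITS / (kbps * 1000)
        print(f"{label}: about {seconds / 60:.1f} minutes ({seconds:.0f} seconds)")

At roughly 14.4 Kbps the transfer takes on the order of half an hour; at 2 Mbps it takes
about 12 seconds, which is why rich-media applications are usually tied to 3G availability.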
Wireless Application Protocol (WAP) is a technical standard for accessing information over a
mobile wireless network. A WAP browser is a web browser for mobile devices such as mobile
phones (called "cellular phones" in some countries) that uses the protocol.
Before the introduction of WAP, mobile service providers had limited opportunities to offer
interactive data services, yet interactivity is needed to support Internet and Web applications.
The Japanese i-mode system offers another major competing wireless data protocol.
The WAP standard describes a protocol suite allowing the interoperability of WAP equipment and
software with different network technologies, such as GSM and IS-95 (also known as CDMA).
+-------------------------------------------+
| Wireless Application Environment (WAE)    |
+-------------------------------------------+ \
| Wireless Session Protocol (WSP)           | |
+-------------------------------------------+ |
| Wireless Transaction Protocol (WTP)       | | WAP
+-------------------------------------------+ | protocol
| Wireless Transport Layer Security (WTLS)  | | suite
+-------------------------------------------+ |
| Wireless Datagram Protocol (WDP)          | |
+-------------------------------------------+ /
| ***  Any Wireless Data Network  ***       |
+-------------------------------------------+
The bottom-most protocol in the suite, the WAP Datagram Protocol (WDP), functions as an adaptation
layer that makes every data network look a bit like UDP to the upper layers by providing unreliable
transport of data with two 16-bit port numbers (origin and destination). All the upper layers view WDP as
one and the same protocol, which has several "technical realizations" on top of other "data bearers" such
as SMS, USSD, etc. On native IP bearers such as GPRS, UMTS packet-radio service, or PPP on top of a
circuit-switched data connection, WDP is in fact exactly UDP.
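On an IP bearer, therefore, sending a WDP datagram amounts to sending an ordinary UDP
datagram carrying an origin and a destination port. A minimal Python sketch follows; the
addresses and ports are illustrative assumptions (9200 is conventionally associated with
WAP connectionless services, but nothing here depends on that choice).

    # Sketch of the WDP/UDP equivalence on a native IP bearer.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 49152))                 # origin port (a 16-bit value)
    payload = b"GET /catalogue"                   # data handed down by the upper layers
    sock.sendto(payload, ("192.0.2.10", 9200))    # destination address and port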
WTLS, an optional layer, provides a public-key cryptography-based security mechanism similar to TLS.
WTP provides transaction support (reliable request/response) adapted to the wireless world. WTP
handles packet loss more effectively than TCP: packet loss occurs commonly in 2G wireless
technologies under most radio conditions, but is misinterpreted by TCP as network congestion.
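The following Python sketch illustrates the general idea of a transaction service over an
unreliable datagram transport: retransmit the request until a reply arrives or a retry budget
is exhausted. It is a simplified illustration only, not the WTP state machine or packet format,
and the function name and parameters are assumptions made for this example.

    # Illustrative reliable request/response over an unreliable datagram socket.
    import socket

    def transacted_request(payload, addr, retries=3, timeout=2.0):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        for _ in range(retries):
            sock.sendto(payload, addr)
            try:
                reply, _sender = sock.recvfrom(4096)   # any reply acts as the acknowledgement
                return reply
            except socket.timeout:
                continue                               # lost packet: retry rather than give up
        raise TimeoutError(f"no response after {retries} attempts")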
This protocol suite allows a terminal to transmit requests that have an HTTP or HTTPS equivalent to
a WAP gateway; the gateway translates requests into plain HTTP.
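The gateway's role can be pictured, in very reduced form, as a process that accepts a request
from the handset side and replays it as an ordinary HTTP fetch. The Python sketch below
assumes the incoming datagram carries just a URL string, which is a deliberate simplification;
a real WAP gateway speaks WSP/WTP and re-encodes the response for the micro-browser.

    # Very reduced sketch of the gateway role: datagram in, plain HTTP out.
    import socket
    from urllib.request import urlopen

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9201))                  # listening port is an illustrative choice
    while True:
        data, client = sock.recvfrom(2048)
        url = data.decode("ascii", errors="replace").strip()
        with urlopen(url) as resp:                # replay the request as ordinary HTTP
            body = resp.read()[:1400]             # truncate so the reply fits one datagram
        sock.sendto(body, client)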
M-Commerce Applications
Active Applications
M-commerce transactions centre on online shopping Web sites tailored to mobile phones
and PDAs, which are being equipped with the capabilities of browsing, selection, purchase,
payment and delivery. These sites include all the necessary shopping features, such as
online catalogues, shopping carts and back-office functions, as currently available for desktop
computers. Leading online booksellers have already started commercial activities for wireless
devices. Another important m-commerce transaction is initiating and paying for purchases and
services in real time. The highest volume of m-commerce transactions using wireless
devices in the days to come is bound to occur in micro-transactions. When
individuals reach for their e-cash-equipped mobile phones or PDAs instead of coins to settle
micro-transactions, such as subway fares, widespread use of digital cash will become a
reality.
The second important application is digital content delivery. Wireless devices can
retrieve status information, such as weather, transit schedules, flash news, sports scores,
ticket availability and market prices, instantly from the providers of information and
directory services. Digital products, such as MP3 music, software, high-resolution images
and full-motion advertising messages, can be easily downloaded to and used on wireless
devices once 3G transmission technology becomes usable. The anticipated arrival of
better display screens and higher bandwidth will surely trigger the development of innovative
video applications, helping wireless users to access, retrieve, store and display high-
resolution video content for entertainment, product demonstration and e-learning.
The last major application of m-commerce is telemetry services, which include the
monitoring of space flights, meteorological data transmission, video-conferencing, the Global
Positioning System (GPS), wildlife tracking, camera-control robotics and oceanography.
Thus, in the near future, wireless phones and appliances can be used by people to contact
and communicate with various devices from their homes, offices or anywhere else at any time.
For example, delivery drivers will ping intelligent dispensing machines, or users can transmit
messages to activate remote recording devices or service systems.
Passive Applications
The possibilities for this type of application are manifold and exciting. Instead of using
dedicated cash cards for the automatic collection of toll charges, digital cash can be used by
integrating cash cards with mobile devices. Mobile users can then easily pay for and record
payment of toll, mass-transit, fast-food and other transactions.
Nowadays mobile users can send and receive short text messages of up to 160 characters
that show up on the user's display screen. As digital convergence becomes more
commonplace, all kinds of mail, such as e-mail, fax documents and digitised voice mail, can
be received passively. It is therefore expected that in the near future there will be many novel
services for mobile users for a fixed fee. Furthermore, users may be offered some services
free of charge in return for viewing audio or video advertisements delivered to their wireless
devices. Any kind of security breach, illegal intrusion, unusual event or unacceptable condition
will trigger automatic notification to users irrespective of location. Airline companies are
testing this technology to alert frequent air passengers to seat availability and upgrades, and
to notify them of schedule changes, through wireless devices.
Passive m-commerce telemetry is the foundation of yet another form of interactive
marketing. Stores will be able to market their products and services by constantly
transmitting promotional messages and offering inducements to capture the attention of
both passers-by and remote mobile users.
Conclusion
As m-commerce applications and wireless devices evolve rapidly, each will drive the
other forward, adding innovation, versatility and power to both. There are numerous
business opportunities, and grand challenges, in bringing forth viable and robust wireless
technologies that fully realise the enormous strength of m-commerce in this Internet era,
thereby meeting both the basic requirements and the advanced expectations of mobile users
and providers.
News articles and pictures already show people ordering things over the Internet while
waiting for a bus, downloading merchant coupons onto their PDAs as they enter a store, or
bidding for the last table at a hot restaurant by digital phone in a spur-of-the-moment
auction. This represents the tip of a very big iceberg. The advent of m-commerce, as it is
widely referred to among users, has far-reaching implications, but there remain many
limitations in the underlying technologies. Once those technologies become mature, widely
available and competent, the host of portable devices will be ready to handle far larger
transactional activities, not yet envisioned, well beyond these minor ones.