
NOTES

From a hardware perspective, enterprise systems are the servers, storage and associated software
that large businesses use as the foundation for their IT infrastructure. These systems are designed
to manage large volumes of critical data and to provide high levels of transaction performance
and data security.

WHAT IS ENTERPRISE HARDWARE?

When it comes to IT Hardware, "Enterprise" generally refers to Servers, Data Storage Devices,
Networking Equipment, and other hardware used to build your IT infrastructure. One of the
biggest issues with definitions of Enterprise Hardware is that, in most cases, they leave
us with more questions than we started with.

In reality, each of those items (Servers, Storage, Networking, etc.) has a specific use
case, and simply attaching the word "Enterprise" to a product does not define its meaning. Let's
give Enterprise Hardware some meaning.

Enterprise IT, also known as enterprise-class IT, is hardware and software designed to meet the
demands of a large organization. In comparison to consumers and small companies, an enterprise
has greater requirements for:
 Availability
 Compatibility
 Reliability
 Scalability
 Performance
 Security

WHAT IS A LEGACY SYSTEM?


A legacy system is outdated computing software and/or hardware that is still in
use. The system still meets the needs it was originally designed for, but doesn’t allow
for growth. What a legacy system does now for the company is all it will ever do. A
legacy system’s older technology won’t allow it to interact with newer systems.  
As technology advances, most companies find themselves dealing with the issues
caused by an existing legacy system. Instead of offering companies the latest
capabilities and services — such as cloud computing and better data integration — a
legacy system keeps a company in a business rut. 
A legacy system, in the context of computing, refers to outdated computer
systems, programming languages or application software that are used instead of
available upgraded versions.

Legacy systems also may be associated with terminology or processes that are no
longer applicable to current contexts or content, thus creating confusion. In theory,
it would be great to have immediate access to the most advanced technology. But in
reality, most organizations have legacy systems to some extent. A legacy system may
be problematic due to compatibility issues, obsolescence or lack of security support.

A legacy system is also known as a legacy platform.


The reasons are varied as to why a company would continue to use a legacy system. 

 Investment: Although maintaining a legacy system is expensive over time, upgrading to a new system requires an up-front investment, both in dollars and manpower.
 Fear: Change is hard, and moving a whole company — or even a single department — to a new system can inspire some internal resistance.
 Difficulty: The legacy software may be built with an obsolete programming language that makes it hard to find personnel with the skills to make the migration. There may be little documentation about the system, and the original developers may have left the company. Sometimes simply planning the migration of data from a legacy system and defining the scope of requirements for a new system are overwhelming.

Problems caused by legacy systems


A legacy system can cause a myriad of problems, such as exorbitant maintenance
costs, data silos that prevent integration between systems, lack of compliance with
governmental regulations, and reduced security. These issues eventually outweigh the
convenience of continuing to use an existing legacy system.

1. Maintenance is costly (and futile)


Maintenance is to be expected with any system, but the cost of maintaining a legacy
system is extensive. Maintenance keeps the legacy system running, but at the same
time, the company is throwing good money after bad. The status quo is maintained, but
there’s never a chance for growth with the legacy system.
At some point, there won’t be any more support for a legacy system and there won’t be
any more updates. If the system fails, there’s nowhere to turn.  
Think of a weak dam with holes that you keep plugging and plugging, yet water keeps
seeping through. A legacy system continues to cost a company money for maintenance
while never providing new and innovative services. 
2. Data is stuck in silos
Data silos are a byproduct of legacy systems. Many older systems were never designed
to integrate with each other in the first place, and many legacy software solutions are
built on frameworks that can’t integrate with newer systems. This means that each
legacy system is its own data silo.
In addition to siloing the data they contain, legacy systems keep the departments that
use them out of data integration happening in the rest of the organization. If one team
maintains a legacy system while the rest of the company upgrades, that one team is
isolated from business intelligence and insights being created in integrated systems.

3. Compliance is much harder


Organizations today must abide by strict sets of compliance regulations. As these
regulations continue to evolve, a legacy system may not be equipped to meet them. 
Compliance regulations like the GDPR, for example, require a company to know (and
prove) what customer data they have, where it is, and who is accessing it. Companies
with customer data need to maintain well-governed records, which is much harder (if not
impossible) in outdated, siloed systems.

4. Security gets weaker by the day


A data breach can cost a company dearly, and legacy systems are more vulnerable to
hackers than newer systems. Legacy systems by definition have outdated data
security measures, such as hard-coded passwords. That wasn’t a problem when the
system was built, but it is now. 
A legacy system not only leaves a company behind with old technology, it can also
seriously damage a company’s reputation by putting data at risk of a breach. At some
point, a vendor no longer supports the legacy system or provides much needed
updates, opening the legacy system up to a security risk. Even if a critical update is
available, installing it can be risky and is postponed for fear of breaking the system. As
technology advances, risks increase for legacy systems.

5. New systems don’t integrate


As a company matures, adding new systems is necessary to stay competitive in today’s
world. But the older technology of a legacy system may not be able to interact with a
new system. A department still using a legacy system won’t receive all the benefits that
a new system offers. 
Developing processes to make the systems work together is cumbersome and still
leaves the company open to security risks. This stifles technological growth within
the company.
What Does Redundant Array of Independent Disks
(RAID) Mean?
Redundant array of independent disks (RAID) is a method of storing the same data
across two or more hard drives. It is used for data backup and fault tolerance, and
to improve throughput, increase storage capacity and enhance performance.

RAID is attained by combining two or more hard drives and a RAID controller
into a logical unit. The OS sees RAID as a single logical hard drive called a
RAID array. There are different levels of RAID, each distributing data across
the hard drives with their own attributes and features. Originally, there were
five levels, but RAID has advanced to several levels with numerous
nonstandard levels and nested levels. The levels are numbered RAID 0,
RAID 1, RAID 2, etc. They are standardized by the Storage Networking Industry
Association (SNIA) and are defined in the Common RAID Disk Data Format (DDF)
standard.

Techopedia Explains Redundant Array of Independent Disks (RAID)
RAID was first patented by IBM in 1978. In 1987, a team of electrical engineers and computer
science specialists from the University of California, Berkeley defined RAID levels 1 through 5.
Their work, published by the Association for Computing Machinery's Special Interest Group
on Management of Data in 1988, was called "A Case for Redundant Arrays of Inexpensive Disks
(RAID)". The objective was to combine multiple inexpensive devices into an array that offered
more storage, dependability and faster processing. Later, RAID marketers changed the term
"inexpensive" to "independent" so that consumers would not associate RAID with low cost.

RAID is mostly used for data protection, keeping two copies of the data, one on each
drive. It is often used in high-end servers and some small workstations. When RAID duplicates
data, the physical disks form a RAID array, which the OS reads as one single disk instead of
multiple disks. The objective for each disk is to provide better input/output (I/O) performance
and enhanced data reliability. RAID levels can be standard or nonstandard, as well as nested,
combining two or more basic levels of RAID.

How RAID works

RAID works by placing data on multiple disks and allowing input/output (I/O) operations
to overlap in a balanced way, improving performance. Because using multiple disks
increases the chance that any one disk will fail, data is also stored redundantly to
increase fault tolerance.

RAID arrays appear to the operating system (OS) as a single logical drive. RAID
employs the techniques of disk mirroring or disk striping. Mirroring copies identical
data onto more than one drive. Striping spreads data across multiple disk drives: each
drive's storage space is divided into units ranging from a sector (512 bytes) up to
several megabytes, and the stripes of all the disks are interleaved and addressed in
order.
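
To make striping and mirroring concrete, here is a minimal Python sketch that splits a byte string into fixed-size stripe units across simulated "disks" (RAID 0 style) and keeps full duplicate copies (RAID 1 style). It is purely illustrative: the disk count, stripe size and function names are assumptions for the example, not part of any real RAID implementation.

    # Minimal illustration of RAID-style striping and mirroring using in-memory
    # byte lists as stand-in "disks". Real RAID is done by controllers or the OS
    # at the block-device level; this only shows how the data is laid out.

    STRIPE_SIZE = 4  # bytes per stripe unit (real arrays use KB-sized units)

    def stripe(data: bytes, num_disks: int) -> list:
        """RAID 0 style: interleave fixed-size units across disks (no redundancy)."""
        disks = [[] for _ in range(num_disks)]
        units = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
        for idx, unit in enumerate(units):
            disks[idx % num_disks].append(unit)   # round-robin placement
        return disks

    def mirror(data: bytes, num_disks: int) -> list:
        """RAID 1 style: every disk holds an identical full copy of the data."""
        return [data for _ in range(num_disks)]

    def read_striped(disks: list) -> bytes:
        """Reassemble the original data by reading units back in interleaved order."""
        out = []
        for unit_index in range(max(len(d) for d in disks)):
            for disk in disks:
                if unit_index < len(disk):
                    out.append(disk[unit_index])
        return b"".join(out)

    payload = b"ENTERPRISE-STORAGE-NOTES"
    striped = stripe(payload, num_disks=3)
    assert read_striped(striped) == payload        # striping preserves the data
    mirrored = mirror(payload, num_disks=2)
    assert mirrored[0] == mirrored[1] == payload   # mirroring duplicates it

On real hardware, reading a striped layout touches several disks in parallel, which is where the performance benefit described above comes from; the mirror copies are what provide the redundancy.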

What is NAS?
A NAS device is a storage device connected to a
network that allows storage and retrieval of data from a
central location for authorized network users and varied
clients. NAS devices are flexible and scale out,
meaning that as you need additional storage, you can
add to what you have. NAS is like having a private
cloud in the office. It’s faster, less expensive and
provides all the benefits of a public cloud on site, giving
you complete control.

NAS systems are perfect for SMBs.

 Simple to operate, a dedicated IT professional is often not required
 Lower cost
 Easy data backup, so it's always accessible when you need it
 Good at centralising data storage in a safe, reliable way

With a NAS, data is continually accessible, making it easy
for employees to collaborate, respond to customers in a
timely fashion, and promptly follow up on sales or other
issues because information is in one place. Because NAS
is like a private cloud, data may be accessed remotely
using a network connection, meaning employees can work
anywhere, anytime.
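
Because a mounted NAS share behaves like an ordinary directory, applications reach it with normal file I/O. The short Python sketch below assumes the OS has already mounted a share at /mnt/nas (the mount point and file names are illustrative assumptions) and falls back to a temporary directory so it can run even without a real NAS.

    from pathlib import Path
    import tempfile

    # Assumed mount point for an NFS/SMB share exported by the NAS; adjust to
    # wherever the share is actually mounted on your system.
    NAS_MOUNT = Path("/mnt/nas")
    share = NAS_MOUNT if NAS_MOUNT.is_dir() else Path(tempfile.mkdtemp())

    # File-level access: the NAS exposes files and directories, so ordinary
    # file operations are all that is needed for shared, central storage.
    report = share / "sales" / "q1_report.txt"
    report.parent.mkdir(parents=True, exist_ok=True)
    report.write_text("Q1 figures - visible to every authorized client\n")
    print(report.read_text())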

What Is a Storage Area Network (SAN)?
A Storage Area Network (SAN) is a specialized, high-speed network that
provides block-level network access to storage. SANs are typically composed
of hosts, switches, storage elements, and storage devices that are
interconnected using a variety of technologies, topologies, and protocols.
SANs may also span multiple sites.

A SAN presents storage devices to a host such that the storage appears to be
locally attached. This simplified presentation of storage to a host is
accomplished through the use of different types of virtualization.
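
In contrast to the file-level access a NAS offers, a SAN volume shows up on the host as a raw block device. The Python sketch below illustrates block-level reads under the assumption that the SAN LUN appears as /dev/sdb (the device name, block size and offset are illustrative); it needs a POSIX system and, normally, root privileges.

    import os

    # Assumed device node for a SAN-presented LUN; the OS creates it once the
    # volume is mapped to the host (e.g. over FC or iSCSI).
    DEVICE = "/dev/sdb"
    BLOCK_SIZE = 512          # classic sector size; many modern devices use 4096

    def read_block(device: str, block_number: int, block_size: int = BLOCK_SIZE) -> bytes:
        """Read one block at an absolute offset - block-level, not file-level, access."""
        fd = os.open(device, os.O_RDONLY)
        try:
            return os.pread(fd, block_size, block_number * block_size)
        finally:
            os.close(fd)

    first_block = read_block(DEVICE, 0)   # block 0 typically holds the partition table
    print(f"read {len(first_block)} bytes from block 0 of {DEVICE}")

Because the host sees only raw blocks, it runs whatever file system it chooses on top of the LUN, which is the "locally attached" presentation described above.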

SANs are often used to:

 Improve application availability (e.g., multiple data paths)
 Enhance application performance (e.g., off-load storage functions, segregate networks, etc.)
 Increase storage utilization and effectiveness (e.g., consolidate storage resources, provide tiered storage, etc.) and improve data protection and security

SANs also typically play an important role in an organization's Business Continuity Management (BCM) activities.

SANs are commonly based on Fibre Channel (FC) technology that utilizes
the Fibre Channel Protocol (FCP) for open systems and proprietary variants
for mainframes. In addition, the use of Fibre Channel over Ethernet (FCoE)
makes it possible to move FC traffic across existing high-speed Ethernet
infrastructures and converge storage and IP protocols onto a single cable.
Other technologies can also be used, such as Internet Small Computer System
Interface (iSCSI), commonly used in small and medium-sized organizations as
a less expensive alternative to FC, and InfiniBand, commonly used in
high-performance computing environments. In addition, it is possible to use
gateways to move data between different SAN technologies.

Why storage area networks are important


Computer memory and local storage might not provide enough storage, storage
protection, multiple-user access, or speed and performance for enterprise applications.
So, most organizations employ some form of a SAN in addition to network-attached
storage (NAS) for improved efficiency and better data management.
Traditionally, only a limited number of storage devices could attach to a server, limiting
a network's storage capacity. But a SAN introduces networking flexibility enabling one
server, or many heterogeneous servers across multiple data centers, to share a
common storage utility. The SAN also eliminates the traditional dedicated connection
between a file server and storage—and the concept that the server effectively owns and
manages the storage devices—eliminating bandwidth bottlenecks.
A SAN is also optimal for disaster recovery (DR) because a network might include many
storage devices, including disk, magnetic tape and optical storage. The storage utility
might also be located far from the servers that it uses.

The SAN frees the storage device so that it isn't on a particular server bus. It attaches
storage directly to the network, so storage is externalized and functionally distributed
across the organization. The SAN also centralizes storage devices and the clustering of
servers, potentially achieving easier and less expensive centralized administration,
lowering the total cost of ownership.
Typically using block-level storage systems, SANs allow data-moving applications to
perform better by transmitting data directly from the source to the target with little server
intervention. But organizations can use any file systems appropriate for their
infrastructures. SANs also allow multiple hosts to access multiple storage devices
connected to the same network in new network architectures. A SAN can offer the
following benefits:
Improved application availability
Storage exists independently of applications, and it's accessible through multiple paths
for increased reliability, availability and serviceability.
Better application performance
SANs offload and move storage processing from servers onto separate networks.
Central and consolidated
SANs make management simpler and improve scalability, flexibility and availability.
Remote site data transfer and vaulting
SANs protect data from disaster and malicious attacks with a remote copy.
Simple centralized management
SANs simplify management by creating single images of storage media.

What Does Enterprise Storage Mean?


Enterprise storage refers to a centralized data repository that is designed for
the needs of a large organization. Enterprise storage performs the same
functions as smaller scale data storage solutions, but is more reliable and
fault tolerant. Enterprise storage can also be scaled up to serve a large user
base and heavy workloads without significantly slowing down the system.

Techopedia Explains Enterprise Storage


Although enterprise storage does provide huge amounts of storage, the
selling point for most organizations is the high availability of data and the
system's overall reliability. Enterprise storage is used for critical systems and
data that would result in a business halt if it were inaccessible or destroyed.
As with many enterprise-class solutions, there is no standard against which a storage
system can be compared in order to classify it as enterprise.

What Does Blade Server Mean?


A blade server is a compact, self-contained server that consists of core
processing components that fit into an enclosure with other blade servers. A
single blade may consist of hot-plug hard drives, memory, network cards,
input/output cards and integrated lights-out remote management. The
modular design of the blade server helps to optimize server performance
and reduce energy costs.

Techopedia Explains Blade Server


Blade servers are designed to overcome the space and energy restrictions of
a typical data center environment. The blade enclosure, also known as the
chassis, caters to the power, cooling, network connectivity and management
needs of each blade. Each blade server in an enclosure may be dedicated to
a single application. A blade server can be used for tasks such as:

 File sharing
 Database and application hosting
 SSL encryption of Web communication
 Hosting virtual server platforms
 Streaming audio and video content

The components of a blade may vary depending on the manufacturer. Blade
servers offer increased resiliency, efficiency, dynamic load handling and
scalability. A blade enclosure pools, shares and optimizes power and cooling
requirements across all the blade servers, allowing multiple blades to fit in a
typical rack space.

Some of the benefits of blade servers include:

 Reduced energy costs
 Reduced power and cooling expenses
 Space savings
 Reduced cabling
 Redundancy
 Increased storage capacity
 Reduced data center footprint
 Minimum administration
 Low total cost of ownership

Blade servers continue to evolve as a powerful computing solution, offering
improvements in terms of modularity, performance and consolidation.
What is High Availability?
When it comes to measuring availability, several factors are salient. These include
recovery time, and both scheduled and unscheduled maintenance periods.

Typically, availability as a whole is expressed as a percentage of uptime defined by
service level agreements (SLAs). A score of 100 percent characterizes a system that
never fails.
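
As a worked example of what those percentages mean in practice, the Python sketch below converts an SLA uptime target into the downtime it allows per year and per 30-day month. The specific targets are common illustrative values, not figures taken from any particular SLA.

    # Convert an SLA uptime percentage into the downtime it permits.
    MINUTES_PER_YEAR = 365 * 24 * 60
    MINUTES_PER_MONTH = 30 * 24 * 60   # using a 30-day month for simplicity

    def allowed_downtime(uptime_percent: float):
        """Return (minutes of downtime per year, per 30-day month) for an SLA target."""
        down_fraction = 1 - uptime_percent / 100
        return MINUTES_PER_YEAR * down_fraction, MINUTES_PER_MONTH * down_fraction

    for target in (99.0, 99.9, 99.99, 100.0):
        per_year, per_month = allowed_downtime(target)
        print(f"{target:6.2f}% uptime -> {per_year:8.1f} min/year, {per_month:6.1f} min/month")

    # 99.9% uptime still allows roughly 526 minutes (about 8.8 hours) of downtime
    # a year; only 100% describes a system that never fails.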

Why is High Availability Important?


To reduce interruptions and downtime, it is essential to be ready for unexpected
events that can bring down servers. At times, emergencies will bring down even the
most robust, reliable software and systems. Highly available systems minimize the
impact of these events, and can often recover automatically from component or
even server failures.

What Does High Availability (HA) Mean?


High availability refers to systems that are durable and likely to operate
continuously without failure for a long time. The term implies that parts of a
system have been fully tested and, in many cases, that there are
accommodations for failure in the form of redundant components.

Techopedia Explains High Availability (HA)


A lot of analysis of high availability in a system involves looking for the
weakest link, whether that is a specific piece of hardware, or an element of
the system, such as data storage. To enable more durable data storage,
engineers seeking high availability can use a RAID design. Servers can also
be set up to switch responsibilities to a remote server if necessary, in a
backup process known as failover.
Although good design factors into high availability, it's also important that
each piece of hardware be evaluated for durability. Specific metrics from
vendors are helpful in determining exactly how long a piece of hardware is
estimated to function in a particular system; here, metrics like mean time
between failures (MTBF) become useful to engineers.
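
One common way to turn a vendor metric like MTBF into an availability estimate is the steady-state formula availability = MTBF / (MTBF + MTTR), where MTTR is the mean time to repair. The sketch below applies it with purely illustrative numbers.

    # Steady-state availability estimate from reliability metrics.
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Fraction of time the component is expected to be operational."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Illustrative numbers: a drive with a 100,000-hour MTBF that takes
    # 8 hours to replace and rebuild when it fails.
    a = availability(mtbf_hours=100_000, mttr_hours=8)
    print(f"estimated availability: {a:.5%}")   # about 99.992%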


What Does Scalability Mean?


Scalability is an attribute that describes the ability of a process, network,
software or organization to grow and manage increased demand. A system,
business or software that is described as scalable has an advantage because
it is more adaptable to the changing needs or demands of its users or
clients.

Scalability is often a sign of stability and competitiveness, as it means the
network, system, software or organization is ready to handle the influx of
demand, increased productivity, trends, changing needs and even the presence
or introduction of new competitors.

Techopedia Explains Scalability


To further understand scalability, here are two examples. First, a basic anti-virus
program can become a premium product used by enterprises through downloading
certain add-ons or paying for a subscription. Because more resources may be added
to it, it is considered scalable. Second, more computers and servers can be added
to a network in order to increase throughput or strengthen security. This makes
the network scalable.

What Does Interoperability Mean?


Interoperability is the property that allows for the unrestricted sharing of
resources between different systems. This can refer to the ability to share
data between different components or machines, both via software and
hardware, or it can be defined as the exchange of information and resources
between different computers through local area networks (LANs) or wide
area networks (WANs). Broadly speaking, interoperability is the ability of
two or more components or systems to exchange information and to use the
information that has been exchanged.



Techopedia Explains Interoperability
There are two main types of interoperability:

1. Syntactic Interoperability: Two or more systems are able to communicate and exchange data. It allows different software components to cooperate, even if their interfaces and programming languages differ.
2. Semantic Interoperability: The data exchanged between two or more systems is understandable to each system. The information exchanged should be meaningful, since semantic interoperability requires useful results as defined by the users of the systems involved in the exchange (see the sketch below).
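
As a small illustration of the two types, the Python sketch below has two hypothetical systems exchange a JSON message: being able to parse the message at all is syntactic interoperability, while agreeing on what the fields mean (here, that "amount" is in euro cents) is what semantic interoperability adds. All names and fields are invented for the example.

    import json

    # System A serializes a record to JSON - a shared, well-defined syntax.
    message = json.dumps({"order_id": 42, "amount": 1999, "currency_unit": "EUR_cents"})

    # Syntactic interoperability: System B can parse the message because both
    # sides speak JSON, regardless of the language or platform each one uses.
    record = json.loads(message)

    # Semantic interoperability: System B also knows what the fields mean,
    # e.g. that "amount" is expressed in euro cents, so it can use the data
    # meaningfully rather than merely read it.
    if record["currency_unit"] == "EUR_cents":
        euros = record["amount"] / 100
        print(f"Order {record['order_id']}: EUR {euros:.2f}")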

