Technology Positioning Statement Report

5.1.5 Server Hardware

Description: High-capacity server CPUs and disk subsystems used for general-purpose computing and most business applications.

Category: 5 - Servers   Subcategory: 1 - General Purpose Servers
Old Category: Enterprise Network – Hardware – Servers

Vision

Retirement | Containment | Current | Tactical | Strategic

Standards

Industry Usage | SC Usage

Performance Metrics

CPU clock speed, memory bus speed, cache size, tpmC (TPC-C) performance baselines, and expandability.


Usage and Dependencies

Industry Usage: This branch of IT has advanced much more rapidly over the past year than analysts had predicted. The demonstration of operational 1.5 GHz chips, along with announced plans for 64-bit architectures, points clearly at the road ahead: the market will likely see 2 GHz to 3 GHz clock speeds within the next two years, accompanied by speed increases in other subsystems, producing very fast infrastructure hardware.

Multiprocessors vs. clusters: Distributed Shared Memory computers (DSMs) have arrived to challenge mainframes. DSMs scale to 128 processors built from 2- to 8-processor nodes. As shared memory multiprocessors (SMPs), DSMs provide a single system image and maintain a “shared everything” model. Large-scale UNIX servers using the SMP architecture challenge mainframes in legacy use and applications; these scale to as many as 64 processors and offer more uniform memory access.

In contrast, clusters both complement and compete with SMPs and DSMs, using a “shared nothing” model. Clusters built from commodity computers, switches, and operating systems scale to almost arbitrary sizes at lower cost, while trading away the SMP’s single system image. Clusters are required for high-availability applications, and the highest-performance scientific computers use the cluster (or MPP) approach. High-growth markets such as Internet servers, online transaction processing (OLTP) systems, and database systems can all use clusters.

The mainline future of DSM may be questionable because of three factors: small SMPs are not as cost-effective unless built from commodity components; large SMPs can be built without the DSM approach; and clusters are a cost-effective alternative to SMPs, including DSMs, for most applications across a wide scaling range. Nevertheless, commercial DSMs are being introduced that compete with SMPs over a broad range. -- Gordon Bell, Catharine van Ingen, Jan. 7, 2001, Microsoft.
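To make the architectural distinction concrete, the following C sketch (illustrative only; the worker counts and loop sizes are arbitrary assumptions, not a model of any specific DSM or cluster product) contrasts a “shared everything” update, in which every thread contends for one lock-protected total as on an SMP or DSM, with a “shared nothing” partitioning, in which each worker keeps a private partial sum that is combined once at the end, much as a cluster combines results across nodes.

/*
 * Illustrative sketch only: "shared everything" vs. "shared nothing."
 * Compile with: cc -o model model.c -lpthread
 */
#include <stdio.h>
#include <pthread.h>

#define WORKERS 4
#define ITEMS_PER_WORKER 1000000L

static long shared_total = 0;                        /* shared-everything state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long partial[WORKERS];                        /* shared-nothing partitions */

/* Shared everything: every addition touches the same memory and lock. */
static void *sum_shared(void *arg)
{
    long i;
    for (i = 0; i < ITEMS_PER_WORKER; i++) {
        pthread_mutex_lock(&lock);
        shared_total += 1;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Shared nothing: each worker updates only its own partition. */
static void *sum_partitioned(void *arg)
{
    long id = (long)arg, i;
    for (i = 0; i < ITEMS_PER_WORKER; i++)
        partial[id] += 1;
    return NULL;
}

int main(void)
{
    pthread_t t[WORKERS];
    long i, combined = 0;

    for (i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, sum_shared, NULL);
    for (i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);

    for (i = 0; i < WORKERS; i++)
        pthread_create(&t[i], NULL, sum_partitioned, (void *)i);
    for (i = 0; i < WORKERS; i++)
        pthread_join(t[i], NULL);
    for (i = 0; i < WORKERS; i++)
        combined += partial[i];          /* one combining step, like a cluster merge */

    printf("shared-everything total: %ld\n", shared_total);
    printf("shared-nothing total:    %ld\n", combined);
    return 0;
}

The tradeoff the prose describes shows up directly: the shared-everything version serializes on the lock and on cache traffic as processor counts grow, while the partitioned version scales almost freely but gives up the single shared image.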

64-bit processors: There must be a compelling business need to upgrade to 64-bit processors; there are tradeoffs involved. One possible business need is related to the implementation of Public Key Infrastructure (PKI) with Secure Sockets Layer (SSL) encryption. "Despite the widespread reliance on these algorithms, they have one significant drawback: They are very compute-intensive and known to have a significant impact on server performance. This is especially true in the case of short transactions, which are typical of e-Commerce. The Intel® Itanium™ processor has several features that can help to speed up security solutions such as SSL." (from Intel).

On the other hand, most applications that run on 64-bit Windows-based computers will need to be ported from the 32-bit platform in order to take full advantage of the 64-bit platform's benefits. "Although increased demand on the high-end along with technical advances is making 64-bit a reality, it is anticipated that the industry as a whole will not fully embrace the 64-bit world entirely for many more years.... The challenge for processor manufacturers is to find a way to offer customers all the advantages of 64-bit processing in a market friendly fashion while making the conversion from 32-bit efficient and inexpensive. Unfortunately, the 64-bit solutions proposed by some processor manufacturers leave customers facing a potentially disruptive and ultimately expensive transition to the new architectures." (from AMD).
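As a hedged illustration of why a straight recompile is rarely enough, the short C sketch below shows one of the most common 32-bit assumptions that breaks during a 64-bit port: storing a pointer in an int. The buffer and variable names are hypothetical, and the printed sizes depend on the platform's data model (LP64 on most 64-bit UNIX systems, LLP64 on 64-bit Windows).

/* Hypothetical sketch of a 32-bit assumption that fails on a 64-bit port. */
#include <stdio.h>

int main(void)
{
    char buffer[16];
    char *p = buffer;

    /* A common 32-bit habit: stashing a pointer in an int.  On a 64-bit
       platform the pointer no longer fits and is silently truncated, so
       code like the commented line must be found and reworked. */
    /* int handle = (int)p;     breaks once pointers are 64 bits wide */

    /* Data-model check: int usually stays 32 bits, while long and/or
       pointers grow to 64 bits depending on the platform. */
    printf("sizeof(int)    = %u\n", (unsigned)sizeof(int));
    printf("sizeof(long)   = %u\n", (unsigned)sizeof(long));
    printf("sizeof(void *) = %u\n", (unsigned)sizeof(void *));
    printf("buffer is at %p\n", (void *)p);
    return 0;
}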

SC Usage: The SC hardware foundation rests on its server infrastructure. The current environment includes several data-redundancy configurations (noted in the TPSs addressing security), built on a series of robust machines. These are typically dual-processor servers based on Compaq/Intel CPUs running at 333 MHz or faster, with 256 MB of RAM, CD-ROM drives, and high-capacity, high-reliability disk storage provided by clustering and RAID technologies. Servers connect to the network at a minimum of 100 Mbps and can be managed remotely.

SC Application Impacts: Indirect support for all applications. CPU performance is often the key driver of end-user response time, although this should be verified in testing because many other factors can also contribute to latency.
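One minimal way to perform that verification is sketched below in C, assuming a POSIX environment; handle_request() is a hypothetical stand-in for real per-request work. The idea is to compare wall-clock time against CPU time for the same operation: if wall time greatly exceeds CPU time, the latency is coming from disk, network, or contention rather than from the processor.

/* Minimal sketch: is a request CPU-bound or waiting on something else? */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <unistd.h>

/* Hypothetical stand-in for the real work done per user request. */
static void handle_request(void)
{
    volatile long i, x = 0;
    for (i = 0; i < 5000000L; i++)   /* CPU-bound portion */
        x += i;
    usleep(20000);                   /* simulated 20 ms wait on disk/network */
}

int main(void)
{
    struct timeval start, end;
    clock_t cpu_start, cpu_end;
    double wall_ms, cpu_ms;

    gettimeofday(&start, NULL);
    cpu_start = clock();

    handle_request();

    cpu_end = clock();
    gettimeofday(&end, NULL);

    wall_ms = (end.tv_sec - start.tv_sec) * 1000.0 +
              (end.tv_usec - start.tv_usec) / 1000.0;
    cpu_ms  = (cpu_end - cpu_start) * 1000.0 / CLOCKS_PER_SEC;

    printf("wall time: %.1f ms, CPU time: %.1f ms\n", wall_ms, cpu_ms);
    printf("%s\n", wall_ms > 2.0 * cpu_ms
                   ? "latency is mostly waiting, not CPU"
                   : "latency is mostly CPU-bound");
    return 0;
}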

Last Update: 3/28/2001   Valid Until: 4/28/2001

References

Previous TPS Report
64-bit Windows

