
CET2176C Lecture #2 - The Server Project and the Planning Phase

Materials:
Lecture Only
Objectives:
The student will become familiar with:
The general Phases of an IT Project,
The phases as they relate to a server project,
The Planning Phase of the Server project.
Competency:

The student will become familiar with the general organizational procedures for conducting an IT project and the specific methodology employed in a server project. The student will understand that because the server is a special-role general purpose computer and the principal node on the network, special attention should be paid to its planning, design, and implementation.

  1. Since economics play a significant role in the design and implementation of the network and its principal nodes, the servers, it stands to reason that the network project as a whole, and the server project of particular interest here, should not be treated lightly. As such, the network and the servers on it need to be treated as projects. Formally speaking, a project, like the construction of a bridge for example, undergoes phases from the occurrence of the idea to build the bridge through to the completion and opening of the bridge for public use. An IT project can be broken down into the following phases:

    • Proposal Phase - This is where the idea for the new project is first proposed. All targets for the project are laid out forming a general specification of what the project should accomplish; a wish list.
    • Planning Phase - This is where the specific details of the project are developed that will meet or beat the target criteria.
    • Prototype Phase - This is where the specifics are assembled into the first working system to test whether it can meet the minimum target criteria; a "proof of concept."
    • Pilot Phase - This is where the first prototype(s) are placed into limited and controlled operation to ensure that they will meet the minimum target criteria while in operation.
    • Production Phase - This is where the positive results of the Pilot Phase are implemented into widespread active service for the organization. Systems are brought online and made fully operational.
    • Evaluation Phase - This is where all Production Phase troubleshooting takes place. Any recommendations from users are collected and considered for implementation.
    • Modification/Correction Phase - All feedback generated in the Evaluation Phase is collected and formulated into the modifications and/or corrections that are needed. At this point these modifications and corrections are implemented.
    • Maintenance Phase - As the long-term support phase of the project begins, an ongoing cycle of evaluation and modification phases continues until the project achieves, at the very least, a satisfactory level of functionality and performance. Once this is reached, the long-term maintenance phase settles into its routine of regular maintenance procedures accompanied by periodic evaluations, which include an evaluation of the serviceability of the project, always looking toward the possibility of its upgrade and ultimate replacement by the next project.
  2. The proposal phase, especially for small to medium size networks, can be as simple as the owner of the company declaring that the organization needs a network, or needs a completely new one. The proposal phase for large organizations generally includes discussions of complex target criteria including average to maximum client load capacity, total data throughput capacity at the WAN links, etc. Overall, however, the proposal phase reflects the specific needs of the organization. Once this "wish list" has been assembled, it is the IT department's task to convert this raw set of target criteria into real world designs. While the server project is but a single component of the potentially much larger network project of the organization, and will affect that project and be affected by it, we will try to isolate the server project and treat it as a specific IT project from the proposal phase to the maintenance phase. Lecture #1 dealt with factors that will directly influence the number of, and overall expectations of, the server(s) on the network, in a way answering some of the basic proposal phase questions. Now the planning phase can begin in earnest.

  3. The Planning Phase of the server project will be based on the determinations and expectations established in the proposal phase, and can be broken down into these factors:

    1. Number Of Servers: Decide on the number and types of servers that will be implemented.
    2. Server Framework: Decide on the specific logical and physical server framework that will be implemented.
    3. Server Environment: Establish a minimum acceptable suitable location for the server(s). Factors include:
      1. Security: Computers that are physically accessible to anyone are by definition completely unsecured. Any information they hold can be taken by anyone who can gain physical access to them.
        1. Controlled Access: The server should be in an access-controlled environment, or simply put, a locked room for which all keys are accounted for.
        2. Access Monitoring: No unauthorized person should be able to gain entry to the server room without the event being recorded; measures include sign-out sheets for the server room keys and motion detector/door contact alarms activated after hours.
      2. Environmental Conditions: The server's physical environmental conditions should be kept under control at all times.
        1. Air Conditioned/Ventilated: Electronics will function longer if kept as cool and dry as possible; provisions should be in place to deal with widespread power outages.
        2. Ionization/De-ionization Atmospheric Control and/or other Anti-Static Considerations: Examples: ESD atmospheric control, anti-static floor covering and/or workbench.
        3. Removal of Carpet: The primary cause of ESD in computer environments is the scuffling of rubber-soled shoes on synthetic fiber carpets in extremely low humidity (read: air conditioned) environments.
      3. Site Ergonomics and Functionality:
        1. Ample room to access the server and its internal components
        2. Ample AC Power: Check with the facilities engineer to ensure that the wall receptacles can provide the maximum expected current load of all equipment in the server room and to add additional circuits if needed.
        3. Ample storage of all required tools/supplies: Including backup devices, media, device driver disks, documentation, PC technician's tool kit, anti-static wrist straps/mats, etc.
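A quick way to sanity-check the AC power requirement mentioned above is to total the equipment wattages and convert to amperes for comparison against the circuit rating. The sketch below uses hypothetical device wattages and the common 80% continuous-load derating; actual figures must come from the equipment nameplates and the facilities engineer.

```python
# Rough AC load estimate for a server room. Device wattages below are
# hypothetical examples; current (amps) = watts / volts.

def total_amps(watts_list, volts=120.0):
    """Total current draw in amps for a list of device wattages."""
    return sum(watts_list) / volts

devices = [800, 800, 350, 150]   # e.g. two server PSUs, a UPS charger, a monitor
load = total_amps(devices)       # 2100 W / 120 V = 17.5 A
safe_limit = 0.8 * 20.0          # 80% derating of a 20 A circuit = 16 A

print(f"Estimated load: {load:.1f} A; safe limit: {safe_limit:.1f} A")
if load > safe_limit:
    print("Additional circuits needed - consult the facilities engineer.")
```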
    4. Establish the Server Operating System: Factors to be considered include:
      1. Meets Minimum Security Requirements: No PC operating system does. However, a competent technician certified in the operating system chosen should be able to configure it to the minimum acceptable security requirements.
      2. Supports all intended roles of the server
      3. Supports all intended hardware components of the server: While most hardware has drivers for the current Microsoft products, that only means the drivers exist; it is an entirely different issue whether they are well written and will not cause problems with other devices' drivers or software. For Windows 2000, Microsoft recommends using only hardware from the Windows 2000 Hardware Compatibility List (HCL). For Windows Server 2003, only devices (and their drivers) rated "Approved" by the Windows Hardware Quality Labs (WHQL) should be used: Server 2003 Logo Certified hardware.
      4. Supports all intended additional software
      5. Meets minimum mission level (reliability/availability expectations)
      6. Supports interoperability with all other systems: including network connectivity devices (i.e. routers), other types of servers, and clients.
      7. Support staff are certified in the operating system
    5. Server Hardware
      1. Case and Power supply:
        1. Regular PC Chassis: Usable for smaller low-end servers
        2. Rackmount Server Chassis: Standard 19"-wide rackmountable server chassis (per the EIA-310 rack standard used throughout network infrastructure), available in heights from 1U to 27U and beyond. Each rack unit (1U) is 1.75" of rack height, but a 1U rackmountable chassis is built 1/32" shorter than this (1.719") so that there is enough gap between devices to prevent them from physically impeding each other during installation or removal.
        3. Cabinet Chassis: Large "doublewide" enclosures that allow for two vertical stacks of 5 ¼" internal drive bays instead of one vertical stack of such bays found typically in "regular" PC towers.

          This server enclosure or cabinet chassis is a typical doublewide
          although only half of the facing width is used for the six removable
          hard drive hotswappable module racks

        4. Power supply form factors: Aside from ATX 1.x, 2.x, and 3.x, server power supplies are also available in 1U, 2U, etc. rackmount chassis form factors.
        5. Power supply Wattage: Servers will generally need significantly more power than the average end-user PC and should be equipped to handle well above the maximum expected load to prevent load-stressing the power supply.
        6. Single or Multiple Power supplies: Redundant power supplies eliminate a single point of failure and provide the server with one of many possible high availability solutions.
        7. Other power supply features: Integrated Surge Protection, integrated load balancing and integrated fail-over technologies for coordinated functionality in multiple power supply based systems featured as availability solutions.

          Rear view of the same Dell PowerEdge Series server, the dual
          800Watt redundant system power supplies are clearly visible
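The rack-unit figures given above (1.75" per U, with the 1/32" clearance built into the chassis) lend themselves to simple arithmetic when planning how much rack space a set of servers will occupy. A small sketch:

```python
# Rack-unit arithmetic from the figures in the text: each rack unit (U)
# is 1.75" of rack height, and a 1U chassis is built 1/32" shorter
# (1.719") so adjacent devices do not bind during installation/removal.

RACK_UNIT_IN = 1.75
CLEARANCE_IN = 1.0 / 32.0

def rack_height_inches(units):
    """Rack height in inches consumed by a number of rack units."""
    return units * RACK_UNIT_IN

def chassis_height_inches(units):
    """Actual chassis height: the allotted space minus the clearance."""
    return units * RACK_UNIT_IN - CLEARANCE_IN

print(rack_height_inches(42))    # a 42U rack: 73.5 inches of mounting space
print(chassis_height_inches(1))  # a 1U chassis: 1.71875 ~ 1.719 inches
```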

      2. Motherboard
        1. Chipset: Determines the number and types of CPU's supported and their features; the types of expansion buses and the number of expansion slots available; the amount, type, channels, and other technologies of supported RAM; and ultimately the performance of the system.
        2. Sockets/Slots and Supported CPU's

          Interior of the same server cabinet showing the motherboard
          with dual Socket 8 (one occupied, one empty) support for one
          or two Pentium Pro microprocessors

        3. Memory Module Slots: Can determine the ideal initial and ultimate expansion module sizes that can be chosen in order to take advantage of the full capabilities of the motherboard chipset.
        4. Expansion Bus Slots: Determine the ultimate limit of how many internal peripherals can be attached to the system and what expansion bus interfaces they will be forced to use.
        5. BIOS Features: ACPI features such as Wake-On-LAN are made possible through the BIOS implementation; dual BIOS (redundant EEPROM); Virtualization Technology (dual independent virtual BIOS's supporting independent OS's, including rebooting one without affecting the other); etc.
        6. Integrated Peripherals: Generally undesirable in server motherboards, so that if any device fails it can be removed completely from the system and replaced, even if this forces an upgrade due to discontinuation of the original component.
      3. CPU
        1. Number of Processors/Number of Cores: The primary measure of overall system performance; how many physical processor cores are installed on the motherboard. This is determined by: chipset support for multiple CPU's and multiple core CPU's, the number of sockets/slots actually installed on the motherboard, and the number of physical cores within the CPU(s). There is also a modest processing advantage (roughly 30% on average, depending on the actual workload requirements of the application) in having multiple virtual CPU cores; Intel calls this technology, in which one core emulates two, HT - Hyper-Threading.
        2. FSB Throughput: Primary measure of individual CPU performance; how fast can the CPU transfer data to/from the core to the outside chipset/cache.
        3. Levels, Amounts, and Speeds of cache: The more cache, and the faster it is, the more time the CPU can spend executing instructions at core speed rather than executing "wait states" waiting for regular RAM to deliver the data.
        4. Core Architecture: Certain core designs are more efficient than others; i.e. the Pentium III Tualatin was, clock for clock, a noticeably faster microprocessor than the Pentium 4 Northwood (2nd generation P4) due entirely to its superior core architecture.
      4. RAM
        1. Memory Technology: Includes EDO, SDRAM, DDR, RDRAM, DDR2, and DDR3. NOTE: JEDEC - the Joint Electron Device Engineering Council - has already published the standards for GDDR5, used in high end graphics cards. These graphics technologies eventually emerge in some modified form as standards for main memory as well; DDR3 is a modification of the technology originally used in video cards under the JEDEC GDDR3 specification.
        2. Memory Amount: The maximum expansion capacity of the motherboard for RAM is a long-term life span consideration, but the total amount installed at the introduction of the server can be less depending on the circumstances as long as it is enough so that the server can run smoothly and efficiently. Generally with 32-bit operating systems, the more RAM the better.
        3. Memory Speed: Most of the modern memory technologies are available in different speeds. Given the choice however, more is better than faster and dual channel is better than either other consideration.
        4. Non-ECC or ECC: Error Correction Code capable modules can repair single-bit errors on-the-fly and greatly improve the reliability of servers in general, and especially of servers whose operating systems and software have a large RAM "footprint" (i.e. Windows and its enormous kernel, much of which is taken up by graphics code). ECC is a priority hardware technology in mission critical servers.
        5. Registered vs. Non-Registered: Registered modules buffer the address and control signals, placing less strain on the motherboard dynamic RAM controller (the north bridge of the chipset) and allowing it to access more addresses per second with less possibility of corrupting the data or forcing the insertion of "wait states" to avoid corrupting it. POST detects "registered" memory modules and identifies them as high speed, resilient modules. Some BIOS'es support more total RAM when all of it is registered than when it is not, because of these superior capabilities in cooperation with the north bridge.
        6. Fully buffered RAM: The latest evolution of the registered memory module, which supports greater reliability and speed in conjunction with north bridges that support it. Registered and fully buffered memory modules allow the system to support significantly more RAM and are an important consideration in the total maximum memory capacity of the system.
        7. Single channel vs. dual channel vs. both: Dual channel operation is a feature of the north bridge dynamic RAM controller, not of the RAM module. Such systems mount identical RAM modules in pairs of slots, and the DRAM controller draws data from both simultaneously, effectively doubling the DTR - Data Transfer Rate - of main memory. Dual channel motherboards use the same memory modules as single channel systems, but main memory is effectively twice as fast, making dual channel a preferable (more cost effective) solution to buying faster modules. Some motherboards can, through BIOS settings, switch the treatment of the installed memory modules between single channel and dual channel, making the system much more flexible with respect to RAM module upgrades.
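The doubling effect of dual channel operation can be illustrated with the standard published peak rates. The sketch below uses DDR400 (PC3200) as an example; the formula is peak transfers per second times the 8-byte (64-bit) module width times the number of channels.

```python
# Peak main-memory bandwidth sketch. A 64-bit (8-byte wide) module at
# 400 million transfers/sec (DDR400 / PC3200) moves 3200 MB/s; a dual
# channel north bridge reading two identical modules at once doubles it.
# These are published peak rates, not measured throughput.

def peak_bandwidth_mb(mega_transfers_per_sec, bus_bytes=8, channels=1):
    """Peak data transfer rate in MB/s."""
    return mega_transfers_per_sec * bus_bytes * channels

print(peak_bandwidth_mb(400))              # DDR400, single channel: 3200
print(peak_bandwidth_mb(400, channels=2))  # same modules, dual channel: 6400
```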
      5. Storage Controllers
        1. ATA vs. SATA: ATA is quickly becoming deprecated. SATA controllers are fully backwards compatible with ATA at the software level. 40-pin ATA connectors are often still included alongside integrated motherboard SATA controllers to allow the attachment of optical drives, which are not fast enough to need SATA anyway (an economically more efficient purchase than SATA optical drives at the time of this writing). SATA does not have the classic four device limitation of standard ATA controllers.
        2. (S)ATA vs. SCSI: While the (S)ATA standards are prolific, well supported, and generally less expensive than any contemporary SCSI technology, SCSI is usually a far superior storage technology and usually fulfills the storage needs of servers where (S)ATA technologies fall short. Even early 80's 8-bit SCSI was faster than the leading end-user storage technology (ST-506/412) and supported up to seven hard drives. Current standard Ultra320 controllers support 320MB/sec transfer rates while SATA-II supports 300MB/sec, but the SCSI controller also supports up to 15 devices; most SATA-II controllers do not include that many connectors for hard drives.
        3. RAID: While the (S)ATA controllers do now offer integral hardware level RAID capabilities, they may be relatively limited in RAID disk group management and performance. SCSI hardware-level RAID controllers set these standards and are the better choice for high availability systems.
      6. Storage Devices - Hard Drives
        1. Number and Capacity: In general it is more economically efficient to buy one large hard drive than it is to buy two or more small drives that total the same capacity. However, multiple drives are needed to construct a RAID which is a high availability storage solution.
        2. Performance: Measured in average seek time and read/write transfer rates, which are influenced directly by RPM, interface bandwidth, and platter utilization.
        3. Local vs. Remote: In general the OS will always support local attachment technologies including (S)ATA and SCSI far more reliably than it will remote technologies such as PXE (Network boot up) or removable storage technologies such as USB.
        4. Fixed vs. Hotswappable: Hotswappable local hard drive storage technologies come at a premium, but used in concert with high availability storage technologies such as RAID's allow the system not only to continue functioning during a hard drive failure, but to have it replaced and regenerated while up and running as well.
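Since multiple drives are needed to construct a RAID, the capacity trade-off is worth quantifying. The lecture discusses RAID generically; the level formulas below are the standard ones for arrays of identical drives and are shown only as an illustration, not as any particular controller's behavior.

```python
# Usable-capacity sketch for common RAID levels built from n identical drives.

def usable_capacity_gb(drive_gb, n_drives, level):
    """Usable space of the array at a given RAID level."""
    if level == 0:                        # striping: no redundancy
        return drive_gb * n_drives
    if level == 1:                        # mirroring: half the raw space
        return drive_gb * n_drives / 2
    if level == 5:                        # striping + parity: one drive's worth lost
        return drive_gb * (n_drives - 1)
    raise ValueError("unsupported RAID level")

print(usable_capacity_gb(500, 4, 0))  # 2000 GB, but no fault tolerance
print(usable_capacity_gb(500, 4, 1))  # 1000.0 GB, mirrored
print(usable_capacity_gb(500, 4, 5))  # 1500 GB, survives one drive failure
```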
      7. Storage Devices - Optical Drives: There are several factors that may influence installing optical drives on the server including:
        1. Security: The local optical drive is the modern day equivalent of a diskette drive; it provides an avenue by which the system can be booted to a portable bootable disc, and therefore an alternate OS, and therefore the possibility of bypassing the resident operating system's entire security system.
        2. Ease of software installation: the local optical drive provides the simplest and one of the fastest methods of delivering large amounts of data to the server including the installation of the operating system and other software.
        3. Disaster Recovery/Prevention: the local optical drive provides one of the simplest and most reliable methods for booting up the server and performing a full system restoration from backup discs. Or for booting the server and performing the backup operation as well as other critical maintenance operations like running an anti-virus scan on the entire drive(s).
        4. Ease of Installing/Copying Distributions: the local optical drive provides one of the simplest and easiest methods of introducing large amounts of data that will be moved to the server's hard drive to serve as network accessible data distributions for other servers and clients on the network.
      8. Storage Devices - Tape Backup Drives: Many factors must be weighed when considering installing a tape backup drive on the server including:
        1. Capacity: Modern backup tapes do have very large capacities which may be suitable for systems with enormous amounts of data to be backed up.
        2. Technological Life Span: Many historical tape drive technologies came and went within very short periods of time, stranding users of those transient companies' products.
        3. Proprietary system: Many tape backup drives, and even the media for them, are proprietary, forcing the user to buy new media only from the manufacturer; fortunately most of these have disappeared from the market (see Technological Life Span above).
        4. Generic system level drivers vs. proprietary backup software: Generic system level drivers allow the drive to be recognized by the operating system and allow any backup software to work with the drive. Proprietary software may suffer the same problems that proprietary hardware and media do (see Technological Life Span above).
        5. Speed: Modern large capacity backup tapes may take a very long time to fill during a full system backup. Is the organization prepared to pay at least one support technician to see it through to completion?
        6. Reliability: Backup tapes have always been, for the most part, one of the least reliable storage media used in the PC. As such, the decision to use them must consider the problem of testing the backups for viability.
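The "Speed" factor above can be framed as a backup window: how long a technician must stay with the job. A rough estimate, using a hypothetical sustained drive rate:

```python
# Back-of-the-envelope full-backup duration. The 30 MB/s sustained rate
# is a hypothetical example; real rates vary by drive and compression.

def backup_hours(data_gb, drive_mb_per_sec):
    """Hours needed to stream data_gb to tape at a sustained MB/s rate."""
    seconds = (data_gb * 1024) / drive_mb_per_sec
    return seconds / 3600

print(f"{backup_hours(400, 30):.1f} hours")  # 400 GB at 30 MB/s: ~3.8 hours
```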
      9. Network Interface Controllers: Factors involved in selecting the server's NIC(s) include:
        1. Network Type: Ethernet, Fast Ethernet, Gigabit Ethernet, 802.11 wireless, or combinations as needed. Subtypes include: 10BASE2 vs. 10BASET, or 100BASETX vs. 100BASEFX, for example.
        2. Adapter features: expansion bus interface, onboard CPU, onboard EEPROM, onboard cache and the amount, networking specific features such as autosense (of the Ethernet frame type) or 802.1q (VLAN support) to name just two.
        3. Single vs. multiple adapters: Depending on the server's intended roles, more than one NIC may be needed. Multiple NIC's may also serve as integral components in high availability systems, such as the dedicated interconnect between cluster server physical nodes, each of which also has another NIC that attaches to the client access network connectivity device(s).
      10. Video Controller: This is one of the few peripherals where the investment can be compromised on the server. Even the weakest modern video card has sufficient horsepower to handle any demands placed on it by a server. Remember that the server should never be used on a daily basis by anyone; because it is going to be locked away in a closet, it will have almost no application execution demands placed on it. The only concern with the video controller is making sure that its resolution will match that of a terminal services workstation.
      11. Sound Controller: This is another peripheral that only needs to be functional, not state-of-the-art in order to provide the server with minimal yet complete operating system and software functionality.
      12. External/Removable Peripherals: Servers have external peripherals normally not considered by end-users on their systems. Yet they are an integral part of basic server design and implementation:
        1. KVM Switch: KVM switches can be used to attach more than one server to a single keyboard, video display, and mouse combination. This is done for two primary reasons: the server is normally not used to an extent that justifies dedicating these peripherals to it, and where there are multiple servers in the same server room it makes sense to invest in a single set of these external peripherals, which also reduces the amount of physical space they occupy.
        2. Powerline Appliances: These include the surge protector, the line conditioner, and the UPS - Uninterruptible Power Supply. UPS's in particular often have a data interface with the host system so that in the event of a power failure they can signal the system that this has occurred. The UPS software can then initiate a controlled shutdown rather than suffer a sudden power loss when the batteries run down. Because the UPS can keep the system running, albeit for a relatively short period of time, it allows the system to ride through the majority of common power company interruptions and is the cornerstone of any desired level of system availability.
        3. External storage: This includes SCSI drives, eSATA drives, USB drives and FireWire drives. Any cables, hubs, terminators and adapter connectors anticipated should be stored with the server as well.
        4. Standard external I/O Peripherals: including: keyboard, mouse, and display do not have to be high-end, just reliable and serviceable.
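The controlled-shutdown role of the UPS described above depends on how long the batteries can carry the load. A rough runtime estimate follows; real runtime curves are nonlinear and are published by the UPS manufacturer, and the figures here are hypothetical.

```python
# Approximate UPS runtime: battery energy times inverter efficiency,
# divided by the load. Figures below are hypothetical examples.

def runtime_minutes(battery_wh, load_watts, inverter_efficiency=0.9):
    """Approximate minutes of runtime for a given load in watts."""
    return (battery_wh * inverter_efficiency / load_watts) * 60

print(f"{runtime_minutes(600, 400):.0f} minutes")  # 600 Wh, 400 W load: 81
```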
      13. Additional Technologies: there are many more server specific technologies too numerous to cover in detail here (they will be covered in later modules) including but certainly not limited to:
        1. Hotplug PCI: Revision of standard PCI that allows individual expansion cards to be installed/removed while the system is up and running. Hotplug PCI is therefore a high availability expansion bus solution. System board, BIOS, and operating system must all be 100% compliant for this to function properly.
        2. PCI-Express: Third generation general purpose expansion bus that is replacing standard PCI as the standard expansion bus; PCI will become deprecated over the next few years, so the server should be based on PCI-Express. The PCI-Express architecture provides extremely high data throughput for individual peripherals and for the bus as a whole, and is the preferable solution for high-end servers.
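The throughput scaling of PCI-Express comes from its lane architecture: a first-generation lane signals at 2.5 GT/s with 8b/10b encoding, yielding 250 MB/s of payload per lane per direction, and links aggregate lanes (x1, x4, x8, x16). A quick sketch:

```python
# First-generation PCI-Express throughput per link. Each lane carries
# 2.5 GT/s raw; 8b/10b encoding leaves 80% payload, i.e. 2 Gbit/s =
# 250 MB/s per lane per direction. Links scale linearly with lane count.

def pcie_gen1_mb_per_sec(lanes):
    """Peak one-direction payload throughput of a gen-1 PCIe link in MB/s."""
    per_lane = 2500 * 8 / 10 / 8  # 2500 Mbit/s raw * 8/10 payload / 8 bits per byte
    return per_lane * lanes

print(pcie_gen1_mb_per_sec(1))   # x1 link: 250.0 MB/s
print(pcie_gen1_mb_per_sec(16))  # x16 link: 4000.0 MB/s
```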
    6. Final Planning Stage:
      1. Specific role functions, OS compatibility, hardware compatibility should be researched and verified.
      2. Overall hardware/OS/software/driver compatibility should be researched and verified.
      3. Server node-to-server node and server node-to-client node compatibility and interoperability should be researched and verified.
  4. It can be seen, then, that quite a lot of information must be considered when researching the server specifications as the general proposals are being translated into specific server requirements. This module provides the student with a general outline of the server project phases; each subsequent lecture will examine the various phases and the specific details within them as the course progresses.

Review Questions
  1. List and describe the general phases of an IT project:























  2. List and describe briefly the six general steps of a server planning phase:

















  3. List and describe the factors involved in planning for the server's environment:























  4. List and describe the factors involved in selecting the server's operating system:




















  5. List and describe briefly the 13 major categories of server hardware outlined in this lecture:






































  6. List the major categories of server hardware outlined in this lecture, that specifically mention that they can offer availability solutions:











  7. Server hardware components are generally more expensive than the average end-user PC equivalents. Name all peripherals mentioned in the server hardware outline above that can actually be less expensive and explain why this is so.




















  8. List the two basic benchmarks of hard drive performance. List the three physical parameters that affect these benchmark measurements:














  9. List and describe the three powerline appliances mentioned. Which is considered an availability solution? Why?














  10. Contrast the server's external peripherals with those of the average end-user PC. Which are usually not found at all on end-user systems? Which could be about the same? Which would actually most likely be superior on end-user systems? Explain.














  11. Tape backup drives are extremely common on servers. List and describe the criteria that must be considered when selecting one:




















  12. Tape backup drives are historically the least reliable storage device on any computer system. Which device from the outline above offers a viable although smaller capacity alternative? List and describe this type of drive's advantages and disadvantages. Might a designer go with the tape drive vs. this drive based on one of these disadvantages alone?




















  13. Which RAM module feature is a high availability solution? Which RAM module technology can increase the total maximum capacity of the motherboard when these are used? Which is more important in choosing RAM: the amount or the speed?




















  14. The final stage of the planning phase is potentially the most difficult (if not impossible). Why?








Copyright © 2000-2008 Brian Robinson ALL RIGHTS RESERVED