
Using HP Virtual Connect on HP BladeSystem c-Class with VMware Infrastructure 3.5

Contents

Introduction
Virtualization in the enterprise
  HP Virtual Connect
  VMware Infrastructure
  HP BladeSystem c-Class and Emulex LightPulse Fibre Channel Mezzanine Card
  HP Virtual Connect and NPIV
  HP Virtual Connect with VMware Infrastructure 3.5
Usage scenarios
  New server deployment
  Rapid recovery
Functional testing
Summary
For more information


    Introduction

HP has worked with customers around the world, bringing together server, network and storage administrators to find a better answer to the traditional approaches to complex cable management and to SAN, LAN and server administration. What if, for every new server you added or replaced, the infrastructure was ready to go when you plugged it in and powered it up? What if you could do it all with compatible, familiar industry standards?

HP Virtual Connect is designed to enable a new way to add, move or replace server blades without impacting networks or requiring multiple experts at each step. Virtual Connect is a major innovation for customers and their businesses, providing a better way for IT organizations to work together and deliver these benefits to all parties involved without the traditional compromises.

NPIV stands for N_Port ID Virtualization, an industry-standard specification developed by the InterNational Committee for Information Technology Standards (INCITS). It is the capability for a SAN device to understand multiple N_Port IDs sharing a single physical N_Port. In practical terms, this means that multiple HBAs can share a common link to the SAN fabric, reducing the number of SAN fabric ports required for end devices. It effectively allows port aggregation outside the SAN switch.

Customers using virtualization technologies could see an immediate compounding of functionality between Virtual Connect and VMware Infrastructure. Customers using VMware Infrastructure products already realize the power of flexible infrastructures; they already realize the benefits of shedding the traditional constraints that tie an operating system to a physical server.

This white paper is a collaboration between Hewlett-Packard, VMware and Emulex to demonstrate common usage scenarios, explaining how these complementary technologies can deliver the most complete and most agile datacenter available in an industry-standard environment.

Target audience: This paper is intended for IT decision makers and administrators seeking to reduce costs and improve manageability while increasing flexibility within their datacenters. This paper assumes an understanding of the basic principles of VMware virtualization technology and products.

    This white paper describes testing performed in March/April 2008.

Virtualization in the enterprise

HP Virtual Connect

HP Virtual Connect makes IT change-ready by virtualizing physical connections, both those between the server and the SAN and those between the server and the LAN. Change may be necessitated by system failures, a physical server migration or even the simple addition of a new server. In a traditional IT environment, change takes time and coordination between server, SAN and LAN administrators; scoping the change, coordinating the change plan and implementing it may take hours, days or even weeks. Virtual Connect eliminates this barrier to change.

Virtual Connect interconnect options include a 1/10Gb Ethernet module and a 4Gb Fibre Channel module. These modules provide an abstraction layer between the server and both the LAN and SAN fabrics. During initial setup, Virtual Connect allows a server administrator to define a series of profiles that are tied to the server bays, instead of to the physical servers, within the HP BladeSystem c7000 or c3000 enclosure. Virtual Connect assigns Ethernet Media Access Control (MAC) addresses and Fibre Channel Worldwide Names (WWNs) to these bays. This allows the server administrator to provide LAN and SAN administrators with all the potential MAC addresses and WWNs before any servers are even inserted in the enclosure. LAN and SAN administrators can then configure the Ethernet and Fibre Channel access policies while still no servers have been inserted. Finally, when a server is inserted, the physical MAC addresses and WWNs of the local adapters are replaced with the Virtual Connect-assigned IDs defined in the slot profiles. The operating system, once booted, sees and utilizes these Virtual Connect-assigned IDs. These IDs remain consistent within the enclosure bay, even if a server is replaced, so no changes are required to the LAN or SAN access policies or cabling.

It is important to note that the MAC and WWN values assigned by Virtual Connect become the actual hardware-level IDs of the blade. There is no translation or hiding of the hardware ID as traffic passes through the Virtual Connect module data paths. The blade BIOS, NIC and HBA firmware utilities, the blade operating system software and all upstream devices see exactly the same values. At the hardware component level, each NIC and HBA has two registers for storing IDs: the default factory-created unique ID, and a second one that is controlled by Virtual Connect. If a Virtual Connect profile is present when the blade is inserted, Virtual Connect instructs the hardware to use only the second one.
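To make the two-register behavior concrete, here is a minimal Python sketch of the idea; the classes, the apply_profile function and the example MAC/WWN values are illustrative assumptions and do not represent HP firmware or any Virtual Connect API.

    # Conceptual sketch only: a bay profile overrides the factory MAC/WWN of
    # whatever blade is inserted into that bay. All names/values are invented.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Adapter:
        factory_id: str                        # burned-in MAC or WWN
        vc_assigned_id: Optional[str] = None   # second register, written by Virtual Connect

        @property
        def active_id(self) -> str:
            # With a profile present, the hardware presents only the
            # Virtual Connect-assigned ID to the OS and upstream devices.
            return self.vc_assigned_id or self.factory_id

    @dataclass
    class BayProfile:
        bay: int
        nic_macs: List[str]
        hba_wwns: List[str]

    def apply_profile(profile: BayProfile, nics: List[Adapter], hbas: List[Adapter]) -> None:
        """Simulate inserting a blade into a bay that has a profile defined."""
        for nic, mac in zip(nics, profile.nic_macs):
            nic.vc_assigned_id = mac
        for hba, wwn in zip(hbas, profile.hba_wwns):
            hba.vc_assigned_id = wwn

    profile = BayProfile(bay=3,
                         nic_macs=["00-17-A4-77-00-10"],
                         hba_wwns=["50:06:0b:00:00:c2:62:00"])
    nic = Adapter(factory_id="00-1B-78-AA-BB-CC")        # example burned-in MAC
    hba = Adapter(factory_id="10:00:00:00:c9:5b:12:34")  # example burned-in WWN
    apply_profile(profile, [nic], [hba])
    print(nic.active_id, hba.active_id)  # the bay's Virtual Connect-assigned IDs

A replacement blade inserted into the same bay would inherit the same IDs, which is why the LAN and SAN access policies keyed to those IDs keep working unchanged.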

Virtual Connect Ethernet functionality allows the administrator to connect any blade NIC to any Virtual Connect Network, and to connect Virtual Connect Networks to uplinks to networks outside the chassis.

A Virtual Connect Network is conceptually similar to a VLAN (virtual LAN), isolating traffic between specified ports. It can also be compared to the virtual switch functionality of VMware ESX that is implemented at the hypervisor level. A Virtual Connect Network can also be defined to have no external uplinks, creating a private, in-chassis network path for functions such as VMware VMotion and cluster heartbeat.

Virtual Connect modules are not switches; rather, the Virtual Connect modules aggregate the LAN and SAN connections running from each of the blade servers in the enclosure. External communications are then routed over a few uplinks on the Virtual Connect Ethernet and Fibre Channel modules to the first layer of switches in the managed LAN and SAN networks. This BladeSystem architecture eliminates at least one required layer of switch management and also simplifies cabling for the enclosure.

Virtual Connect has two primary functions that complement one another. The first is the ability of Virtual Connect to directly control, at the hardware level, which MAC and WWN IDs are used by a given blade in a given slot. The second is the ability to arbitrarily connect any data port to one or more uplinks.

In many blade environments, there is compromise when it comes to virtualization. Best practices suggest numerous network interface cards, while many blades are limited in the number of NICs they can provide. With HP BladeSystem c-Class and Virtual Connect, up to 12 NICs can be utilized within a single server while maintaining redundant connections to Fibre Channel. Thus, in a VMware Infrastructure environment, full redundancy across the Service Console, Virtual Machine and VMkernel networks can be achieved, as outlined in Figure 1.


Figure 1. Network mapping between VMware Infrastructure and HP Virtual Connect

In a traditional environment, this series of redundant connections per server means numerous switch ports to manage. For example, 16 rack-mount servers with 12 network interface cards each would require 192 network ports. With HP Virtual Connect, an enclosure of eight full-height servers with 12 NICs and a full complement of Virtual Connect modules becomes a single logical device to configure. This environment can be made completely redundant with as few as 8 cables linked to the first layer of managed switches. As with all efforts at cable reduction, planning must be done to ensure adequate total bandwidth for the expected I/O load. One of the most attractive features of Virtual Connect is its ability to dynamically add or remove uplinks without interrupting the flow of data. For example, if an uplink set (commonly referred to as link aggregation or EtherChannel) of two Gigabit uplinks is over-utilized, a third uplink can be added to that set with no downtime. A back-of-the-envelope check of these port counts is sketched below.
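The port-count arithmetic above can be restated in a few lines of Python; this is only a back-of-the-envelope check of the figures quoted in the paragraph, with the 8-cable count taken from the text rather than derived.

    # Back-of-the-envelope check of the port counts quoted above.
    rack_servers, nics_per_server = 16, 12
    traditional_ports = rack_servers * nics_per_server
    print(traditional_ports)   # 192 managed switch ports

    # With Virtual Connect, an enclosure of 8 full-height blades (12 NICs each)
    # is a single logical device; the paper cites as few as 8 uplink cables to
    # the first layer of managed switches for a fully redundant configuration.
    blade_servers, vc_uplinks = 8, 8
    print(blade_servers * nics_per_server, "server NICs ->", vc_uplinks, "uplink cables")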

    VMware Infrastructure

VMware Infrastructure is a state-of-the-art server virtualization and virtualization management platform. VMware Infrastructure consists of several components, including VMware DRS, VMware High Availability (HA), VMware Consolidated Backup (VCB), VMware Storage VMotion, VMware VirtualCenter, VMware VMotion, VMware Virtual SMP, VMware Update Manager and VMware ESX Server. Together, the suite delivers comprehensive virtualization management, resource optimization, application availability and operational automation capabilities. However, this paper focuses only on the components that are directly affected by the presence of Virtual Connect: questions around NPIV and Virtual Connect, the operation and configuration of the thin hypervisor operating system, VMware ESX Server, and the management interface, VMware VirtualCenter.


VMware Infrastructure 3.5 provides support for NPIV. NPIV, or N_Port ID Virtualization, enables each Fibre Channel HBA to register multiple virtual ports (VPorts), identified by Worldwide Port Names (WWPNs), with the fabric. VPorts can then be assigned to each virtual machine (VM). NPIV is part of the Fibre Channel protocol standard published by the ANSI T11 committee.

Storage administrators using VMware ESX Server 3.5, VirtualCenter and raw device mapping (RDM) with NPIV can create virtual machines that are easier to manage and maintain. Since each VM has a unique identity on the SAN, SAN administrators can follow best practices such as fabric zoning and array-level LUN masking to implement security in a VMware deployment. Additionally, common SAN management tools that utilize the WWPN can be used to perform tasks such as quality of service (QoS), bi-directional association of storage and virtual machines, and chargeback.
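As a rough illustration of why per-VM WWPNs simplify zoning and LUN masking, the sketch below models VPort assignment in plain Python; the data structures and WWPN values are invented for illustration and are not the ESX Server, VirtualCenter or Emulex interfaces.

    # Illustrative model only: each VM gets its own VPort WWPN, so SAN zoning
    # and LUN masking can target the VM rather than the physical HBA.
    physical_hba_wwpn = "50:06:0b:00:00:c2:62:00"      # example value, not a real device

    vports = {
        "vm-finance-db": "50:06:0b:00:00:c2:62:10",
        "vm-web-01":     "50:06:0b:00:00:c2:62:11",
    }

    # A SAN administrator can now zone and mask per VM identity:
    zoning = {
        "vm-finance-db": ["array-port-A1"],   # only this VM's VPort sees the database LUNs
        "vm-web-01":     ["array-port-B1"],
    }

    for vm, wwpn in vports.items():
        print(f"{vm}: VPort {wwpn} registered through HBA {physical_hba_wwpn}, "
              f"zoned to {zoning[vm]}")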

HP BladeSystem c-Class and Emulex LightPulse Fibre Channel Mezzanine Card

Depending on the model, each HP BladeSystem c-Class server blade can accommodate two or three Emulex LightPulse Fibre Channel Mezzanine Cards (HP Part Number 403621-B21). This dual-channel 4Gb/s Fibre Channel mezzanine card offers superior performance and throughput for blade systems. It has been tuned for a Virtual Connect deployment with a powerful centralized management framework featuring scalable deployment, management, and diagnostic capabilities.

    HP Virtual Connect and NPIV

SAN connectivity via the Virtual Connect Fibre Channel modules aggregates the HBA traffic through oversubscribed ports. Currently, the oversubscription can be configured (by increasing the number of physical uplinks) from 16:1 to 8:1 to 4:1. Each dual-port HBA mezzanine card physically links via the chassis midplane to two Virtual Connect Fibre Channel modules, one port per module. Each Virtual Connect Fibre Channel module can have 1, 2, or 4 physical uplinks to the SAN fabric. The precise mapping of how an HBA connects to a physical uplink is dependent on the number of uplinks but is not currently user-configurable.
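The relationship between uplink count and oversubscription ratio reduces to a one-line calculation, assuming 16 blade-facing HBA ports per Virtual Connect Fibre Channel module (an assumption consistent with the 16:1, 8:1 and 4:1 ratios quoted above, not a figure taken from this paper).

    # Oversubscription ratio per Virtual Connect Fibre Channel module,
    # assuming 16 blade-facing (downlink) HBA ports per module.
    DOWNLINKS = 16

    for uplinks in (1, 2, 4):
        print(f"{uplinks} uplink(s): {DOWNLINKS // uplinks}:1 oversubscription")
    # 1 uplink  -> 16:1
    # 2 uplinks ->  8:1
    # 4 uplinks ->  4:1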

Virtual Connect Fibre Channel modules require that the SAN fabric support NPIV functionality. Most new SAN switches support this specification, and many older SAN switches can achieve it with a firmware update. Please consult your SAN switch documentation for details.

    HP Virtual Connect with VMware Infrastructure 3.5

Virtual Connect is a natural complement to VMware Infrastructure, as it increases system flexibility and reduces downtime. A number of common scenarios are outlined below.

    Usage scenarios

    New server deployment

In a traditional environment, the addition of a new server requires the coordination of SAN, LAN and server administrators. In order to bring a host online, whether to start a new service or to expand capacity for an existing service, a server must be ordered and delivered; the WWNs of any Fibre Channel host bus adapters and the MAC addresses of any network interface cards must be inventoried and passed to the SAN and LAN administrator(s); and the administrators must prepare the datacenter infrastructure to accept the new server.

Even after the access policies are in place, adding the new server still must wait on the creation of LUNs and the proper cabling of networks. This process must be repeated for each server that is added.


    SAN LUNs were already visible to the hosts post-install without the need to modify zoning, cablingor LUN presentation

Four Virtual Connect modules were inserted into the enclosure. Switch bays 1 and 2 housed Virtual Connect Ethernet modules, while switch bays 3 and 4 housed the Virtual Connect Fibre Channel modules. The Virtual Connect Manager was used to configure a domain using Virtual Connect-assigned MAC addresses and WWNs. Server profiles were defined for bays 1-6 and 9-14. Two networks were defined. The first was called consolevmkernel and would be reserved for the Service Console and VMkernel networks. The second network was called vms and would serve as the network for virtual machine communication. Each network was mapped out of a Virtual Connect module uplink in switch bays 1 and 2. Two SAN fabrics were assigned, and a SAN administrator presented Virtual Connect-assigned WWNs to an HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) SAN prior to any servers being inserted into the enclosure. The Fibre Channel modules were connected to an external HP B-series 2/8 switch. A single cable per module was connected, and all connections from the EVA8000 were connected to the same switch.
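For reference, the test domain just described can be restated compactly as data; the following Python dictionary is a descriptive summary only, not a Virtual Connect Manager export or configuration format.

    # Descriptive restatement of the deployment-scenario domain; not a real
    # Virtual Connect Manager configuration format.
    vc_domain = {
        "enclosure": "HP BladeSystem c7000",
        "modules": {
            "bay1": "VC Ethernet", "bay2": "VC Ethernet",
            "bay3": "VC Fibre Channel", "bay4": "VC Fibre Channel",
        },
        "networks": {
            "consolevmkernel": "Service Console and VMkernel traffic",
            "vms": "virtual machine traffic",
        },
        "server_profiles": [f"bay{n}" for n in list(range(1, 7)) + list(range(9, 15))],
        "san": {
            "fabrics": 2,
            "array": "HP StorageWorks EVA8000",
            "external_switch": "HP B-series 2/8",
        },
    }
    print(len(vc_domain["server_profiles"]), "profiles defined before any blade was inserted")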

Two servers were inserted into slots 9 and 11 of the HP BladeSystem c7000 enclosure. The HP Integrated Lights-Out (iLO) feature of each blade was used to install VMware ESX Server 3.5 using iLO virtual media. Once the hosts were installed, a cluster named VirtualConnect was created in VirtualCenter and the two hosts were added to the cluster. Once licensed, networking was configured for each host and, as expected, all networks were available and no LAN administrator assistance was required.

Under Storage Configuration in VirtualCenter, the Add Storage link was selected and, after selecting to add a new LUN, all LUNs presented during the initial domain setup were visible to the hosts and were rapidly added.

In this usage scenario, the amount of time it takes to add an ESX Server host to an environment is dramatically reduced. Virtual Connect allows for the presentation of Ethernet networks and SAN fabrics to hosts before they are ever brought online. The pre-provisioning of WWNs, MAC addresses and associated connectivity ensures that, as capacity demand expands, the involvement of the SAN and LAN administrators is not needed after the initial configuration.

    Rapid recovery

The rapid recovery usage scenario refers to the replacement and reconfiguration of a server blade when an existing system is being re-provisioned or has failed. In this case, the goal is to add the replacement host and restore the physical resources by assuming the identity of the original physical host. Neither the profile nor the configuration is new; both were previously applied to the original host. This is a common usage scenario that demonstrates very clearly the power of the abstracted Virtual Connect and ESX Server interfaces.


Figure 3. Physical configuration of the systems used for the rapid recovery usage scenario

The server profile for this test was configured to present any server in the bay with a network connection to both the Service Console and Virtual Machine networks. Boot from SAN parameters were entered during profile creation using the Array Node WWN provided by the SAN administrator during initial setup. Figure 4 details the SAN parameters used in the Virtual Connect profile.

Figure 4. FC SAN connections defined in the test server profile

The specific intent of testing this usage scenario with HP Virtual Connect was to determine whether a server administrator can, in a preconfigured Virtual Connect domain, replace an ESX Server host without needing to change settings at the SAN and LAN levels, and have the replacement host assume the role and identity of the server that was replaced. The steps involved in this test were:


1. Create and apply a Virtual Connect profile to the chassis slot.
2. Invoke a simulated total system failure by removing power to the server.
3. VMware HA (High Availability) should redistribute the virtual machines from the failed host to the other host in the cluster.
4. The failed blade is replaced with an un-configured blade.
5. The system should, with no administrative involvement, boot to the Virtual Connect-assigned LUN, assume the identity and configuration of the previous host and appear in the cluster as the failed server. Once booted and rejoined to the cluster, VMware DRS should redistribute some virtual machines to the new, replacement host.

The test environment for the rapid recovery usage scenario differed slightly from the server deployment scenario. While switch bays 1 and 2 continued to house Virtual Connect Ethernet modules and switch bays 3 and 4 housed Virtual Connect Fibre Channel modules, three servers were used: one placed in server bay 3, another in server bay 11 and the third held aside as the standby replacement server. Prior to installing the servers, the Virtual Connect Manager was used to configure a domain using Virtual Connect-assigned MAC addresses and WWNs. Virtual Connect server profiles were defined for bays 3 and 11. In the profiles, two networks were defined; the first was called consolevmkernel, which would be reserved for the Service Console and VMkernel networks, and the second, called vms, would serve as the network for virtual machine communication. Each network was mapped out of a Virtual Connect module uplink in switch bays 1 and 2. Two SAN fabrics were assigned, and a SAN administrator presented Virtual Connect-assigned WWNs to an HP StorageWorks EVA8000 SAN prior to any servers being inserted into the enclosure. The Fibre Channel modules were connected to an external HP B-series 2/8 switch with NPIV active on all ports. A single cable per module was connected, and all connections from the HP StorageWorks EVA8000 were connected to the same switch.

Two servers were inserted into the HP BladeSystem c7000 enclosure, one in slot 3 and the other in slot 11; the additional, spare blade was prepared as a replacement but not inserted into the enclosure. As mentioned, the Virtual Connect profile for bay 3 configured the server for boot from SAN and presented the parameters for the host's boot LUN. After ESX Server installation, the hosts were added to VirtualCenter and a cluster named VirtualConnect was created. In the host configuration, the existing VMFS volumes were added to the storage configuration of both hosts through the Storage Configuration option in the Virtual Infrastructure Client user interface. After selecting the Add Storage link, all LUNs presented during the definition of SAN access policies were visible to both hosts. The VMFS volumes were added to the hosts, allowing both hosts to access the virtual machines stored on the volumes. Then, the two hosts and six Microsoft Windows Server 2003 Enterprise Edition-based virtual machines were added to the cluster and powered on. VMware DRS was allowed to place each of the virtual machines (VMs) between the hosts as the VMs were powered on.

After a steady state was reached, the server in bay 3 was pulled from the enclosure to simulate a catastrophic failure. The test engineer waited for VMware HA to recover the virtual machines on the remaining host. Then, the host that was removed from the c-Class enclosure was replaced with the spare, un-configured server. Since the Virtual Connect modules were managing the physical Fibre Channel and Ethernet addresses, the replacement host was able to see the boot LUN exposed to the WWN of the enclosure bay and boot with the configuration of the previous host. As the host boots, the network connectivity is already in place to allow the ESX Server configuration stored on the boot LUN to restore network connectivity to VirtualCenter and re-associate the ESX Server host with VirtualCenter and its associated VMware HA and VMware DRS clusters. After the system re-established its role within the cluster, DRS automatically distributed the load to the replacement host, as though the replacement were the original system.
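The recovery sequence can be traced with a toy simulation; the following Python sketch only mirrors the steps described above, and the class names, host names and WWN value are invented placeholders rather than VMware or HP interfaces.

    # Toy simulation of the rapid recovery flow: identity stays with the bay,
    # so a spare blade inherits the WWN, boots the same LUN and rejoins.
    from dataclasses import dataclass, field
    from typing import Dict, Optional, Set

    @dataclass
    class Bay:
        number: int
        wwn: str                               # Virtual Connect-assigned; stays with the bay
        blade_serial: Optional[str] = None     # whichever physical blade currently sits here

    @dataclass
    class Cluster:
        hosts: Set[str] = field(default_factory=set)
        vms: Dict[str, str] = field(default_factory=dict)   # VM name -> host

        def ha_failover(self, failed_host: str, surviving_host: str) -> None:
            # VMware HA restarts the failed host's VMs on the surviving host.
            for vm, host in self.vms.items():
                if host == failed_host:
                    self.vms[vm] = surviving_host
            self.hosts.discard(failed_host)

        def drs_rebalance(self) -> None:
            # Crude stand-in for DRS: spread VMs round-robin across current hosts.
            ordered_hosts = sorted(self.hosts)
            for i, vm in enumerate(sorted(self.vms)):
                self.vms[vm] = ordered_hosts[i % len(ordered_hosts)]

    bay3 = Bay(number=3, wwn="50:06:0b:00:00:c2:62:30", blade_serial="BLADE-A")
    cluster = Cluster(hosts={"esx-bay3", "esx-bay11"},
                      vms={f"vm{i}": ("esx-bay3" if i % 2 else "esx-bay11") for i in range(6)})

    # The blade in bay 3 is pulled; HA recovers its VMs on the host in bay 11.
    bay3.blade_serial = None
    cluster.ha_failover("esx-bay3", "esx-bay11")

    # A spare blade is inserted into bay 3; it inherits the bay's WWN and MACs,
    # sees the same boot LUN, boots the stored ESX configuration and rejoins.
    bay3.blade_serial = "BLADE-SPARE"
    cluster.hosts.add("esx-bay3")

    # DRS redistributes load onto the replacement as though it were the original.
    cluster.drs_rebalance()
    print(cluster.vms)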


This usage scenario is an example of the level of flexibility created in the datacenter by combining VMware Infrastructure 3 or 3.5 and HP Virtual Connect. Without any reconfiguration, test engineers were able to replace a physical system and have that system completely restore the resources of the failed host. After the initial configuration, there were no changes required in VirtualCenter, no changes in the SAN fabric, no changes in the network configuration or connectivity, and no changes required in the applications. The pre-provisioning of SAN volumes and Ethernet segments, based on Virtual Connect-managed profile addresses, combined with the hardware independence and automation of VMware Infrastructure, allowed the system to recover and restore without any administrative changes.

    Functional testing

In addition, general testing and validation of core VMware Infrastructure 3 or 3.5 functionality was conducted. This testing included ensuring the expected behavior of:

VMware VMotion
VMware Distributed Resource Scheduling (DRS)
VMware High Availability (HA)

To test these core features, a cluster was configured within VirtualCenter with two HP ProLiant BL465c servers. DRS and HA were not enabled during creation. Ten virtual machines were created on two different datastores. Five virtual machines were assigned to each server.

To test VMotion, one virtual machine was migrated from host 1 to host 2. After a brief resting period, the virtual machine was migrated back; both migrations completed successfully.

To test DRS, the engineers used a CPU load tool to create load within two VMs on ESX Server host 1 (VCtest1-2k3 and VCTest3-2k3). DRS was activated within the cluster and set to fully automatic, and the CPU load tool was launched from the command line of both virtual machines. The expected behavior would be for the scripts to generate sufficient load to trigger a DRS rebalancing of load across the hosts and for VirtualCenter to use VMotion to separate these virtual machines onto separate hosts. This behavior was confirmed with both the Virtual Connect Ethernet and Fibre Channel modules in place and mapping the ESX Server physical addresses.
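The paper does not name the CPU load tool used in this test; a generic stand-in such as the following Python script, run inside each test VM, would generate the same kind of sustained load (illustrative only, not the tool actually used).

    # Minimal CPU load generator: busy-loop on every core for a fixed period
    # so that host utilization rises enough for DRS to consider a rebalance.
    import multiprocessing
    import time

    def burn(seconds: float) -> None:
        """Spin in a tight loop for the given number of seconds."""
        end = time.time() + seconds
        while time.time() < end:
            pass

    if __name__ == "__main__":
        procs = [multiprocessing.Process(target=burn, args=(300.0,))
                 for _ in range(multiprocessing.cpu_count())]
        for p in procs:
            p.start()
        for p in procs:
            p.join()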

Last, to test VMware HA, the feature was activated at the cluster level. The iLO feature of ESX Server host 2 was used to power the server down abruptly. The virtual machines from host 2 were successfully restarted on host 1, yielding six active virtual machines, as shown in Figure 5. Both Virtual Connect modules were present and managing the physical addresses.


Figure 5. Six virtual machines powered on host 1 after the simulated failure of host 2

As an extended test of functionality, it was expected that VMware DRS would automatically redistribute the virtual machines between the two hosts when the failed host was brought back online, without interrupting service to the virtual machines. This behavior was observed and confirmed, as shown in Figure 6.


    Figure 6. Tasks showing successful VMotions carried out between host 1 and host 2

All testing of the core functionality of VMware ESX Server 3.5 and VMware Infrastructure 3 passed with servers configured in a Virtual Connect domain, without issue or specific configuration within either ESX Server or the virtual machines.

    Summary

HP Virtual Connect and VMware Infrastructure 3.5 combine to bring the ultimate in flexibility to enterprise computing. When combined, customers achieve new levels of change readiness and simplification of processes.


    For more information

    HP and VMware on HP.com, http://www.hp.com/go/vmware

    HP BladeSystem, http://www.hp.com/go/blades

HP Virtual Connect primer, http://h71028.www7.hp.com/ERC/downloads/4AA0-5821ENW.pdf

HP Virtual Connect for c-Class BladeSystem User Guide, http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00865618/c00865618.pdf

Emulex LightPulse Fibre Channel Mezzanine card, http://h18004.www1.hp.com/products/blades/components/fibrechannel/emulex/index.html

    Emulex NPIV Technology, http://www.emulex.com/solutions/virtual/virthba.jsp

    To help us improve our documents, please provide feedback at www.hp.com/solutions/feedback

© 2008 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

    4AA2-0362ENW, June 2008
