VMware Solutions
HPE ProLiant DL580 Generation9
Overview
HPE ProLiant DL580 Generation9 (Gen9)
The HPE ProLiant DL580 Gen9 Server is the Hewlett Packard Enterprise four socket (4S) enterprise standard x86 server offering commanding performance, rock-solid reliability and availability, and compelling consolidation and virtualization efficiencies.
Supporting Intel® Xeon® E7-4800/8800 v4/v3 processors, the HPE DL580 Gen9 offers enhanced processor performance, up to 6 TB of memory, greater I/O bandwidth (9 PCIe Gen3 slots), and 12 Gb/s SAS speeds. The HPE ProLiant DL580 Gen9 has security and data protection features for system resiliency that your business can depend on, making it ideal for mission-critical enterprise, business intelligence, and database applications.
Whether deployed for highly virtualized or cloud-based workloads, the intelligence and simplicity of automated management with HPE OneView and HPE iLO 4 help your business achieve agile, lower-cost infrastructure management.
Front View
1. CPU Memory Drawer Handle
2. Quick removal access panel
3. Drive bays 6-10
4. Systems Insight Display
5. Health Status LED
6. Power On/Standby button and system power LED button
7. NIC status LED
8. UID button
9. Fans 1-4
10. Drive bays 1-5
11. Discovery services connectors
12. Video connector
13. USB connectors (2)
NOTE: Drives installed in these bays require the optional SAS backplane and cables.
NOTE: Optional NVMe drives are only supported in Drive bays 6-10.
Rear View
1. Serial connector
2. Video connector
3. Power Supplies 1-4
4. PCI expansion slots 1-9
5. Dedicated iLO connector
6. FlexibleLOM ports 1-4
7. USB connectors (4)
NOTE: Port configuration is dependent on the installed FlexibleLOM and may differ from what is shown in the illustration.
Internal View
1. CPU Memory Drawer
2. Memory Cartridges 1-8
3. Processors 1-4
4. FBWC capacitor slots 1-4
5. I/O Card Auxiliary Power Connectors
6. Power Supplies
7. PCI expansion slots 1-9
8. FlexibleLOM connector
9. SPI board (houses iLO chip, FBWC, microSD slot, 2x USB slots, TPM, top SAS backplane connector)
10. MicroSD Slot
What's New
New Fibre Channel HBAs up to 32Gb
HPE NVIDIA Quadro M6000 24GB GPU
HPE NVIDIA Tesla M60 RAF Dual GPU Module
HPE Dual 8GB microSD USB Kit for redundancy in boot environments
New 100Gb Omni-Path networking card
Standard Features
NOTE: For more information regarding Intel Xeon processors, please see the following http://www.intel.com/xeon.
Processor
One or more of the following depending on model

Model | CPU frequency | Cores | L3 Cache | Power | QPI | DDR4 MHz
E7-8893v3 | 3.2GHz | 4 | 45MB | 140W | 9.6GT/s | 1866
E7-8891v3 | 2.8GHz | 10 | 45MB | 165W | 9.6GT/s | 1866
E7-8890v3 | 2.5GHz | 18 | 45MB | 165W | 9.6GT/s | 1866
E7-8880v3 | 2.3GHz | 18 | 45MB | 150W | 9.6GT/s | 1866
E7-8880Lv3 | 2.0GHz | 18 | 45MB | 115W | 9.6GT/s | 1866
E7-8870v3 | 2.1GHz | 18 | 45MB | 140W | 9.6GT/s | 1866
E7-8867v3 | 2.5GHz | 16 | 45MB | 165W | 9.6GT/s | 1866
E7-8860v3 | 2.2GHz | 16 | 40MB | 140W | 9.6GT/s | 1866
E7-4850v3 | 2.2GHz | 14 | 35MB | 115W | 8.0GT/s | 1866
E7-4830v3 | 2.1GHz | 12 | 30MB | 115W | 8.0GT/s | 1866
E7-4820v3 | 1.9GHz | 10 | 25MB | 115W | 6.4GT/s | 1866
E7-4809v3 | 2.0GHz | 8 | 20MB | 115W | 6.4GT/s | 1866
E7-8893v4 | 3.2GHz | 4 | 60MB | 140W | 9.6GT/s | 1866
E7-8891v4 | 2.8GHz | 10 | 60MB | 165W | 9.6GT/s | 1866
E7-8890v4 | 2.2GHz | 24 | 60MB | 165W | 9.6GT/s | 1866
E7-8880v4 | 2.2GHz | 22 | 55MB | 150W | 9.6GT/s | 1866
E7-8870v4 | 2.1GHz | 20 | 50MB | 140W | 9.6GT/s | 1866
E7-8867v4 | 2.4GHz | 18 | 45MB | 165W | 9.6GT/s | 1866
E7-8860v4 | 2.2GHz | 18 | 45MB | 140W | 9.6GT/s | 1866
E7-4850v4 | 2.1GHz | 16 | 40MB | 115W | 8.0GT/s | 1866
E7-4830v4 | 2.0GHz | 14 | 35MB | 115W | 8.0GT/s | 1866
E7-4820v4 | 2.0GHz | 10 | 25MB | 115W | 6.4GT/s | 1866
E7-4809v4 | 2.1GHz | 8 | 20MB | 115W | 6.4GT/s | 1866
Chipset
Intel® C602J Chipset
Intel® Xeon® E7-4800/8800v3 Processor Family
Intel® Xeon® E7-4800/8800v4 Processor Family
NOTE: For more information regarding Intel® chipsets, please see the following URL: http://www.intel.com/products/server/chipsets/
On System Management Chipset
HPE iLO (Firmware HPE iLO 4 2.0), 4GB NAND
NOTE: Read and learn more in the iLO QuickSpecs.
Memory
One of the following depending on model
Type: HPE SmartMemory DDR4 Registered (RDIMM) and Quad Rank Load Reduced (LRDIMM)
DIMM Slots Available: 96
Maximum Capacity: 6TB (96 x 64GB)
NOTE: Hewlett Packard Enterprise memory from previous generation servers is not qualified or warranted with this HPE ProLiant Server. HPE SmartMemory is required to realize the memory performance improvements and enhanced functionality listed in this document for Gen9. For additional information, please see the HPE SmartMemory QuickSpecs.
NOTE: LRDIMM and RDIMM are distinct memory technologies and cannot be mixed within a server.
NOTE: Depending on the memory configuration and processor model, the memory speed may run at 1866MHz, 1600MHz, or 1333MHz. Please see Memory Population Table below or the Online Memory Configuration Tool at: HPE Server Memory Configurator.
Network Controller
FlexibleLOM
HPE ProLiant Gen9 servers offer a flexible network technology, the FlexibleLOM, which gives customers a choice of 1Gb, 10Gb, or 10GBase-T Ethernet or converged networking in their embedded adapter:
HPE Ethernet 1Gb 4-port 331FLR FIO Adapter
HPE Ethernet 1Gb 4-port 366FLR FIO Adapter
HPE FlexFabric 10Gb 2-port 533FLR-T FIO Adapter
HPE FlexFabric 10Gb 2-port 534FLR-SFP+ FIO Adapter
HPE Ethernet 10Gb 2-port 546FLR-SFP+ FIO Adapter
HPE FlexFabric 10Gb 2-port 556FLR-SFP+FIO Adapter
HPE FlexFabric 10Gb 2-port 556FLR-T FIO Adapter
HPE Ethernet 10Gb 2-port 560FLR-SFP+ FIO Adapter
HPE Ethernet 10Gb 2-port 561FLR-T FIO Adapter
HPE Ethernet 10Gb 2-port 562FLR-SFP+FIO Adapter
NOTE: For additional details see the Networking Section of this document.
NOTE: Wake-on-LAN feature is not supported on the DL580 Gen9 with FlexibleLOMs.
Expansion Slots
Primary Riser (Standard)
Expansion Slot # | Technology | Bus Width | Connector Width | Bus Number | Form Factor | Notes
1 | PCIe 3.0 | x16 | x16 | C0/03/0 | Full length/full height | Proc 4
2 | PCIe 3.0 | x16 | x16 | C0/02/0 | Full length/full height | Proc 4
3 | PCIe 3.0 | x16 | x16 | 80/03/0 | Full length/full height | Proc 3
4 | PCIe 3.0 | x8 | x16 | 80/02/0 | Full length/full height | Proc 3
5 | PCIe 3.0 | x8 | x16 | 80/02/1 | Full length/full height | Proc 3
6 | PCIe 3.0 | x16 | x16 | 40/03/0 | Full length/full height | Proc 2
7 | PCIe 3.0 | x8 | x16 | 40/02/0 | Full length/full height | Proc 2
8 | PCIe 3.0 | x8 | x16 | 40/02/1 | Full length/full height | Proc 2
9 | PCIe 3.0 | x16 | x16 | 00/03/0 | Full length/full height | Proc 1
NOTE: PCIe slot availability is dependent on the number of processors installed.
NOTE: Inserting cards with PCI bridges may alter the actual bus assignment number.
NOTE: Slots are enumerated differently based on OS. Microsoft operating systems enumerate from lowest to highest Device ID by bus (starting with the lowest bus).
Slot 2 PCIe Riser (Optional 3-slot)
719073-B21
Expansion Slot # | Technology | Bus Width | Connector Width | Bus Number | Form Factor | Notes
4 | PCIe 3.0 | x16 | x16 | 16 | Full-height, full-length slot | Proc 2
5 | PCIe 3.0 | x16 | x16 | 20 | Full-height, full-length slot | Proc 2
6 | PCIe 3.0 | x8 | x8 | 23 | Full-height, half-length slot | Proc 2
NOTE: Bus Width indicates the number of physical electrical lanes running to the connector.
NOTE: When populating the second optional riser slot, the second processor must be installed.
NOTE: All slots support PCIe cards up to 150W or more depending on the card, but an additional Power Cord Option is required (PN 669777-B21). See the Option Section below for the offering.
NOTE: Double wide PCIe cards are only supported in risers with the Processors leveraging the High Performance Heatsink. For Processors requiring double wide GPU support please order the GPU enablement kit (719082-B21).
NOTE: Up to 9 slots supported; all full-length/full-height. Standard: 4 PCI-E 3.0 x8, 5 PCI-E 3.0 x16.
Internal Storage Devices
One of the following depending on model
Hard Disk Drive Backplane
Internal SAS lower and upper backplanes support up to ten SFF hard disk drives or solid state drives or Express Bays (NVMe drives)
NOTE: For Factory Integrated models, five (5) lower drive bays ship standard and five (5) upper drive bays can be ordered separately with a drive backplane kit. For Pre-configured models, please refer to the pre-configured models section below.
Diskette Drive: None
Optional Optical Drive: External
Hard Drives: None ship standard
Maximum Internal Storage
Drive type | Capacity | Configuration
Hot Plug SFF SAS | 20TB | 10 x 2.0TB
Hot Plug SFF SAS SSD | 38.4TB | 10 x 3.84TB
Hot Plug SFF SATA SSD | 38.4TB | 10 x 3.84TB
Hot Plug SFF NVMe PCIe SSD | 8TB | 5 x 1.6TB
NOTE: Optional support for 5 NVMe SSDs or 5 SFF drives are also available.
Power Supply
One of the following depending on model
HPE 1500W Common Slot Platinum Plus Hot Plug Power Supply
HPE 1500W Common Slot 48VDC Hot Plug Power Supply
NOTE: 1500W power supply supports high line voltage only.
HPE 1200W Common Slot Platinum Plus Hot Plug Power Supply
NOTE: A minimum of two (2) power supplies is required. Four (4) 1500W power supplies offer N+N redundancy for highly loaded configurations.
Prior to making a power supply selection it is highly recommended that the HPE Power Advisor is run to determine the right size power supply for your server configuration. The HPE Power Advisor is located at: http://www.hp.com/go/hppoweradvisor.
The Hewlett Packard Enterprise Common Slot (CS) power supplies allow for commonality of power supplies across a wide range of ProLiant and Integrity servers, as well as HPE Storage solutions, and are designed to provide the highest power supply efficiency without degrading system performance. Hewlett Packard Enterprise CS power supplies are tested by the Electric Power Research Institute (EPRI) and certified through the ECOS 80 Plus power supply program. HPE CS power supply options provide efficiency ratings of up to 94% (80 Plus Platinum). All Hewlett Packard Enterprise Common Slot power supplies are UL listed and CE Mark compliant, are hot-pluggable, and support redundant configurations.
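A quick calculation puts the 80 Plus Platinum rating in perspective; the figures below are illustrative only (the 1000 W load is a made-up example, not a measured DL580 Gen9 draw), and the HPE Power Advisor remains the tool for real sizing:

```python
# Illustrative only: relate PSU efficiency to wall-power draw and waste heat.
def ac_input_and_loss(dc_load_w: float, efficiency: float) -> tuple[float, float]:
    """Return (AC input watts, watts dissipated in the PSU) for a given DC load."""
    ac_input = dc_load_w / efficiency
    return ac_input, ac_input - dc_load_w

for eff in (0.94, 0.90):  # 80 Plus Platinum (~94%) vs. an older ~90% supply
    ac, loss = ac_input_and_loss(1000.0, eff)  # hypothetical 1000 W DC load
    print(f"efficiency {eff:.0%}: {ac:.0f} W from the wall, {loss:.0f} W lost as heat")
```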
HPE CS Platinum Plus power supplies are required when enabling HPE's Intelligent Power Discovery (IPD) solution. IPD is the first technology to create an automated, energy-aware network between IT systems and facilities. This allows your company to reclaim millions of dollars in wasted power capacity and downtime costs across data centers. For more information on the Hewlett Packard Enterprise IPD solution, go to HPE Intelligent Power Distribution Units.
NOTE: Mixing of power supplies in the same server is not supported. All power supplies must be of the same output and efficiency rating. If mismatched power supplies are installed, the system reports errors and will not operate properly.
Required Cabling
Server Power Cords
Server ships with high-voltage server to PDU power cord.
NOTE: If customers require a local power cord, they can check the power cord matrix for the appropriate country-specific SKU. Please see the following power cord matrix for details: http://www.hp.com/go/powercordmatrix.
System Fans
One of the following depending on model
Non-redundant
Redundant
4 Hot Plug Fans (eight rotors with N+1 redundancy)
Interfaces
Serial: 1 back
Video: 1 front; 1 back
FlexibleLOM Network Ports: Selection of FlexibleLOM options
iLO 4 Remote Management: 1 x 1GbE dedicated
MicroSD Slot: 1 internal
NOTE: Dual microSD USB option kit also available (741279-B21).
USB 2.0 Ports: 8 total: 2 front; 4 back; 2 internal
Operating Systems and Virtualization Software Support for ProLiant Servers
Microsoft Windows Server
Red Hat Enterprise Linux (RHEL)
SUSE Linux Enterprise Server (SLES)
VMware
CentOS
Oracle Linux
NOTE: For more information on Hewlett Packard Enterprise Certified and Supported ProLiant Servers for OS and Virtualization Software and latest listing of software drivers available for your server, please visit our Support Matrix at: http://www.hp.com/go/ossupport and our driver download page http://h20565.www2.hpe.com/hpsc/swd/public/readIndex?sp4ts.oid=8090151.
Upgradeability
Upgradeable to 4 processors
Up to 96 DDR4 DIMM slots
9 PCIe Gen3 expansion slots
NOTE: PCIe slot availability is dependent on the number of processors installed. Please refer to the "Expansion Slots" section for more details.
10 SFF internal HDD/SSD SAS drive bays or 5 SFF Internal Express Bays + 5 SFF Internal HDD/SSD drive bays
NOTE: Five drive bays come standard. An optional five SAS drive or five Express backplane kit with five drive bays can be ordered for ten drives.
5 NVMe SSD drive bays
4 redundant hot-plug power supplies
Industry Standard Compliance
ACPI 2.0 Compliant
PCIe 2.0 Compliant
PXE Support
WOL Support
NOTE: Not supported with FlexLOMs on DL580 Gen9.
Physical Address Extension (PAE) Support
Microsoft® Logo certifications
USB 2.0 Support
ASHRAE A3/A4
NOTE: For additional technical thermal details regarding ambient temperatures, humidity and features support please visit: http://www.hp.com/servers/ASHRAE.
Graphics
Integrated Matrox G200 video standard
16 MB Flash, 256 MB DDR3 with ECC (112 MB after ECC and video)
HPE iLO 4 On System Management Memory
16 MB Flash, 256 MB DDR3 with ECC (112 MB after ECC and video)
HPE Server UEFI/Legacy ROM
Unified Extensible Firmware Interface (UEFI) is an industry standard that provides better manageability and more secure configuration than the legacy ROM while interacting with your server at boot time. The HPE ProLiant Gen9 platform defaults to UEFI and can be factory or field configured for Legacy BIOS Boot Mode.
NOTE: The UEFI System Utilities function is analogous to the HPE ROM-Based Setup Utility (RBSU) of legacy BIOS. For more information, please visit http://www.hpe.com/servers/uefi.
UEFI enables numerous new capabilities specific to HPE ProLiant servers such as:
Secure Boot
Operating system specific functionality
Support for > 2.2 TB (using GPT) boot drives
USB 3.0 Stack
Embedded UEFI Shell
Mass Configuration Deployment Tool using RESTful API for iLO 4
PXE boot support for IPv6 networks
Boot support for option cards that only support a UEFI option ROM
Network Stack configurations
NOTE: For UEFI Boot Mode, boot environment and OS image installations should be configured properly to support UEFI.
NOTE: UEFI FIO Setting (758959-B22) can be selected to configure the system in Legacy mode in the factory for your HPE ProLiant Gen9 Server.
Form Factor
4U rack
Embedded Management
HPE Integrated Lights-Out (HPE iLO)
Monitor your servers for ongoing management, service alerting, reporting and remote management with HPE iLO. Learn more at http://www.hpe.com/info/ilo.
UEFI
Configure and boot your servers securely with industry standard Unified Extensible Firmware Interface (UEFI). Learn more at http://www.hpe.com/servers/uefi.
RESTful API
RESTful API for iLO 4 conforms to Redfish 1.0 and simplifies server management tasks such as configuration and maintenance based on modern industry standards. Learn more at http://www.hpe.com/info/restfulapi.
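As a rough illustration of what this API exposes, the sketch below reads basic system information from an iLO 4 through the standard Redfish entry point; the hostname and credentials are placeholders, and exact property availability can vary by firmware version, so treat it as a starting point rather than a reference client:

```python
# Minimal Redfish read against an iLO 4; hostname and credentials are placeholders.
import requests

ILO_HOST = "https://ilo.example.com"   # hypothetical iLO address
AUTH = ("administrator", "password")   # replace with real credentials

session = requests.Session()
session.auth = AUTH
session.verify = False                 # lab only: iLO often uses a self-signed certificate

# The Redfish service root lists the top-level collections (Systems, Chassis, Managers).
root = session.get(f"{ILO_HOST}/redfish/v1/").json()

# Follow the Systems collection to its first (usually only) member.
systems = session.get(f"{ILO_HOST}{root['Systems']['@odata.id']}").json()
system = session.get(f"{ILO_HOST}{systems['Members'][0]['@odata.id']}").json()

print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))
```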
Intelligent Provisioning
Hassle-free server and OS provisioning for one or a few servers with Intelligent Provisioning. Learn more at http://www.hpe.com/servers/intelligentprovisioning.
Embedded Remote Support
The Hewlett Packard Enterprise embedded remote support, when used with Insight Online direct connect or HPE Insight Remote Support, allows HPE ProLiant servers to transmit hardware events directly to Hewlett Packard Enterprise or a Hewlett Packard Enterprise Authorized Partner for automated phone home support. Learn more at http://www.hpe.com/info/insightonline/explore.
Server utilities
Smart Update
Optimize firmware and driver updates with Smart Update solutions including Smart Update Manager (SUM) and Service Pack for ProLiant (SPP). Learn more at http://www.hpe.com/servers/smartupdatemanager.
HPE Systems Insight Manager (HPE SIM)
HPE SIM allows you to monitor the health of your HPE ProLiant Servers and HPE Integrity Servers, and also provides you with basic support for non-HPE servers. HPE SIM also integrates with Smart Update Manager to provide quick and seamless firmware updates. Learn more at http://www.hpe.com/servers/hpsim.
Scripting Tool Kit and Windows PowerShell
Provision 1 to many servers using your own scripts to discover and deploy them with Scripting Tool Kit (STK) for Windows and Linux or Scripting Tools for Windows PowerShell. Learn more at http://www.hpe.com/servers/proliant/stk or http://www.hpe.com/servers/powershell.
RESTful Interface Tool
RESTful Interface tool is a scripting tool to provision using RESTful API for iLO 4 to discover and deploy servers at scale. Learn more at http://www.hpe.com/info/resttool.
HPE iLO Mobile Application
Access, deploy, and manage your server anytime, from anywhere, with select smartphones and mobile devices. For additional information please visit: http://www.hpe.com/info/ilo/mobileapp.
HPE Insight Online
HPE Insight Online, available at no additional cost as part of your Hewlett Packard Enterprise warranty or contractual support agreement with Hewlett Packard Enterprise, is a personalized dashboard for simplified tracking of IT operations and support information from anywhere, anytime. Learn more at http://www.hpe.com/info/insightonline/explore.
Security
Power-on password
Keyboard password
External USB port enable/disable
Network Server Mode
Serial interface control
Administrator's password
Trusted Platform Module (TPM)
TPM 2.0
NOTE: HPE Trusted Platform Module 2.0 Option (745823-B21) works with Gen9 servers with UEFI Mode not Legacy Mode. It is not compatible with HP ProLiant Gen8 servers or earlier generation variants. HPE Gen9 servers purchased earlier may need the latest firmware update to be compatible with the TPM 2.0 Option. The earlier HPE Trusted Platform Module Option (488069-B21) is the TPM 1.2 version, which is also available however TPM 2.0 has newer technology standards incorporated. The TPM 1.2 compatible server platforms include Gen8 and Gen9 servers. HPE server systems can have a TPM module (of any type) installed only once. It cannot be replaced with any other TPM module.
Intel® Secure Key
HPE Secure Encryption (Smart Array Controller)
HPE Advanced Data Guard (Smart Array Controller)
Warranty
This product is covered by a global limited warranty and supported by HPE Services and a worldwide network of HPE Authorized Channel Partners resellers. Hardware diagnostic support and repair is available for three years from date of purchase. Support for software and initial setup is available for 90 days from date of purchase. Enhancements to warranty services are available through HPE Care Pack services or customized service agreements. Hard drives have either a one year or three year warranty; refer to the specific hard drive QuickSpecs for details.
NOTE: Server Warranty includes 3-Year Parts, 3-Year Labor, 3-Year Onsite support with next business day response. Warranty repairs may be accomplished through the use of Customer Self Repair (CSR) parts. These parts fall into two categories: 1) Mandatory CSR parts are designed for easy replacement. A travel and labor charge will result when customers decline to replace a Mandatory CSR part; 2) Optional CSR parts are also designed for easy replacement but may involve added complexity. Customers may choose to have HPE replace Optional CSR parts at no charge. Additional information regarding worldwide limited warranty and technical support is available at: http://h18004.www1.hp.com/products/servers/platforms/warranty/index.html.
Optional Features
Embedded Management
iLO Advanced
HPE iLO Advanced licenses offer smart remote functionality without compromise, for all HPE ProLiant servers. The license includes the full integrated remote console, virtual keyboard, video, and mouse (KVM), multi-user collaboration, console record and replay, and GUI-based and scripted virtual media and virtual folders. You can also activate the enhanced security and power management functionality. Learn more about HPE iLO Advanced at http://www.hpe.com/servers/iloadvanced.
Server Management
HPE Insight Control
HPE Insight Control, lets you deploy, migrate, monitor, remote control, and optimize your IT infrastructure through a single, simple management console. For more information, see http://www.hpe.com/info/insightcontrol.
HPE Insight Cluster Management Utility (CMU)
HPE Insight Cluster Management Utility is a HyperScale management framework that includes software for the centralized provisioning, management and monitoring of nodes and infrastructure. Learn more at http://www.hpe.com/info/cmu.
Rack and Power Infrastructure
HPE Rack and Power Infrastructure products and services create highly efficient and intelligent solutions for existing or new IT data centers. HPE Rack and Power infrastructure solutions - rack infrastructure, power protection and management, performance optimized data centers (PODs) - are the foundation you are looking for to help secure your long-term IT success. These products are designed to help you react to changes in the industry. They deliver efficient, easy-to-use capabilities to manage, monitor, deploy and provision infrastructure from entry to enterprise. As an industry leader, Hewlett Packard Enterprise is uniquely positioned to address the key concerns of power, cooling, cable management and system access.
Learn more at HPE Rack and Power Infrastructure.
High Performance Clusters
HPE Cluster Platforms
HPE Cluster Platforms are specifically engineered, factory-integrated large-scale ProLiant clusters optimized for High Performance Computing, with a choice of servers, networks and software. Operating system options include specially priced offerings for Red Hat Enterprise Linux and SUSE Linux Enterprise Server, as well as Microsoft Windows HPC Server. A Cluster Platform Configurator simplifies ordering. http://www.hp.com/go/clusters.
HPE HPC Interconnects
High Performance Computing (HPC) interconnect technologies are available for this server as part of the HPE Cluster Platform portfolio. These high-speed InfiniBand and Gigabit interconnects are fully supported by Hewlett Packard Enterprise when integrated within a Hewlett Packard Enterprise cluster. Flexible, validated solutions can be defined with the help of configuration tools. http://www.hp.com/techservers/clusters/ucp/index.html.
HPE Insight Cluster Management Utility
HPE Insight Cluster Management Utility (CMU) is a Hewlett Packard licensed and supported suite of tools that are used for lifecycle management of hyperscale clusters of Linux ProLiant systems. CMU includes software for the centralized provisioning, management and monitoring of nodes. CMU makes the administration of clusters user friendly, efficient, and effective. http://www.hp.com/go/cmu.
HPC Interconnects
NOTE: High Performance Computing (HPC) interconnect technologies are available for this server under the HPE Cluster Platform product portfolio. These high-speed interconnects are fully supported by Hewlett Packard Enterprise when they are part of these configure-to-order clusters. Solutions can be defined with great flexibility with the help of configuration tools. Please visit the following URL to configure HPC Clusters with InfiniBand Interconnects: http://www.hp.com/techservers/clusters/ucp/index.html.
Storage Software
Whether you need to solve a specific data protection, archiving, or storage command and control challenge, or deliver on strategic consolidation, compliance, or continuity initiatives, look no further than Hewlett Packard Enterprise storage software. Our storage software helps you reduce costs, simplify storage infrastructure, protect vital assets and respond faster to business opportunities.
Storage software that gets the job done:
Data Protection and Recovery Software: Whether you're a large enterprise or a smaller business, Hewlett Packard Enterprise data protection and recovery software will cost-effectively protect you against disaster and ensure business continuity.
Data Archive and Migration Software: Hewlett Packard Enterprise storage software enables you to comply with data retention and retrieval requirements, improve application performance, and reduce costs by efficiently migrating infrequently accessed or less valuable data to lower cost storage.
Storage Resource Management Software (SRM): Hewlett Packard Enterprise storage resource management software reduces operational costs and provides the command and control foundation you need to efficiently manage and visualize your physical and virtual environments.
Data Replication Software: Hewlett Packard Enterprise offers array-based and host-based replication software for use in disaster recovery, testing, application development and reporting.
Storage Device Management Software: Maximize your investment in Hewlett Packard Enterprise storage and networking with software that enables hardware-specific configuration, performance tuning and connectivity management.
HPE StoreVirtual VSA: With HPE StoreVirtual VSA you can use the power of virtualization to create a virtual array within your host server. Manage it as a single pool of shared storage capacity, and scale it to match your evolving needs. HPE ProLiant Gen9 servers include a 3-year limited license for HPE StoreVirtual VSA software with 1TB of capacity at no extra cost. Simply select to install HPE StoreVirtual VSA software during server setup within Intelligent Provisioning. More information, instructional videos, and free console management software are available at http://www.hp.com/go/vsa1TB.
NOTE: For more information about Storage Software including QuickSpecs, please see: http://www.hp.com/go/storage/software.
Solutions
Factory Express Portfolio for Servers and Storage
HPE Factory Express offers configuration, customization, integration and deployment services for Hewlett Packard Enterprise servers and storage products. Customers can choose how their factory solutions are built, tested, integrated, shipped and deployed.
Factory Express offers service packages for simple configuration, racking, installation, complex configuration and design services as well as individual factory services, such as image loading, asset tagging, and custom packaging. Hewlett Packard Enterprise products supported through Factory Express include a wide array of servers and storage: HPE Integrity, HPE ProLiant, HPE ProLiant Server Blades, HPE BladeSystem, HPE 9000 servers as well as the MSAxxxx, VA7xxx, EVA, XP, rackable tape libraries and configurable network switches.
For more information on Factory Express services for your specific server model please contact your sales representative or go to: http://www.hp.com/go/factory-express.
One Config Simple (SCE)
SCE is a guided self-service tool to help sales and non-technical people provide customers with initial configurations in 3 to 5 minutes. You may then send the configuration on for configuration help, or use in your existing ordering processes. If you require "custom" rack configuration or configuration for products not available in SCE, please contact Hewlett Packard Enterprise Customer Business Center or an Authorized Partner for assistance. https://h22174.www2.hp.com/SimplifiedConfig/Index.
HPE ProLiant ML10 Gen9 server
With the ProLiant ML10 Gen9, Hewlett Packard Enterprise positions a server solution for SMBs and workgroups. The server is a cost-effective tower variant based on a 4U chassis with a single processor. As part of a current promotion, HPE offers the model at a discounted price of 639 euros. It comes with an Intel Xeon E3-1225 v5 (Skylake), 8 GB of DDR4-2133 memory, and a 1 TB hard drive.
Sufficient power for small and medium-sized enterprises
The ProLiant ML10 Gen9 comes standard with two DisplayPorts as well as plenty of USB ports. On the back there are four USB 3.0 (Type A) ports, and on the front two USB 2.0 (Type A) ports. The test model is equipped with a Xeon E3-1225 v5 based on the current Intel Skylake architecture. The quad-core processor has a base clock of 3.3 GHz and a turbo frequency of 3.7 GHz. It has 8 MB of L3 cache and supports Intel VT-d for efficient virtualization. The 14-nanometer chip has a power consumption of 80 watts.
The HPE ProLiant ML10 Gen9 is equipped with one Xeon E3-1225 v5 based on the current Intel Skylake architecture. The quad-core processor offers a base clock of 3.3 GHz and a turbo frequency of 3.7 GHz (image: HPE).
Like its Haswell predecessor, the Skylake processor features an integrated graphics unit. In addition, the new Xeon CPUs support fast DDR4-2133 RAM, which provides a maximum bandwidth of 34.1 GB/s, a substantial improvement over the predecessor's maximum memory bandwidth of 25.6 GB/s. The bus speed (DMI) has also increased from 5 GT/s (Haswell) to 8 GT/s (Skylake).
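The quoted bandwidth figures follow directly from the transfer rate, the 8-byte bus width, and the two memory channels; a quick back-of-the-envelope check:

```python
# Peak theoretical memory bandwidth = transfer rate (MT/s) x 8 bytes per transfer x channels.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    return mt_per_s * bus_bytes * channels / 1000  # MB/s -> GB/s

print(f"DDR4-2133, dual channel: {peak_bandwidth_gbs(2133):.1f} GB/s")  # ~34.1
print(f"DDR3-1600, dual channel: {peak_bandwidth_gbs(1600):.1f} GB/s")  # ~25.6
```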
The network connection is provided by an Intel I219 chip; the corresponding port is on the back of the server. Anyone who needs more network connections can add them with an additional adapter.
The hard drive installed by default offers a capacity of 1 TB. Anyone who wants to put the server into production for storing data should install multiple disks and combine them into a RAID array. The HPE ProLiant ML10 Gen9 has all the prerequisites for this usage scenario.
The HPE ProLiant ML10 Gen9 is equipped with a powerful Intel Xeon E3-1225 v5 CPU (screenshot: Thomas Joos).
Chipset and memory
The server uses an Intel C236 chipset. The graphics solution built into the CPU is an Intel HD Graphics P530. Two DisplayPort 1.2 connectors are available for monitors; an adapter is required to connect a monitor with an HDMI, DVI, or VGA interface. The server has four DIMM slots for unbuffered DDR4 ECC memory (UDIMMs) and supports up to 64 GB of RAM. With that, the server should meet almost all the requirements of SMEs.
By default, one 8 GB module is installed, leaving three memory slots free. The HPE ProLiant supports up to 16 GB per module. If the existing module is kept, the RAM can be expanded to 56 GB; if it is replaced with a 16 GB module, the maximum capacity of 64 GB can be reached.
The BIOS is AMI Aptio 1.02. With UEFI boot mode configured, both local and remote deployments are possible with Intelligent Provisioning or the Scripting Toolkit. In most cases, however, installation will be done from a USB stick: connect it to the server and select it from the boot menu when the server starts. This works, for example, for Windows Server 2012 R2 or Windows Server 2012 R2 Essentials for small businesses.
Disks and PCIe
Inside the server, several slots are available: 1 x PCIe 3.0 x16 (wired as x8), 1 x PCIe 3.0 x8 (wired as x8), 1 x PCIe 3.0 x4 (wired as x4), and 1 x PCIe 3.0 x4 (wired as x1). In contrast to other servers in this price range, the HPE ProLiant ML10 Gen9 offers PCIe 3.0 on all available slots. Current expansion adapters such as a hardware RAID controller, an additional graphics card, or a network adapter can easily be installed in the server.
The server has a total of five bays for hard drives, with one bay already occupied by the standard 1 TB drive. It comes from Seagate (ST1000DM003) and is designed as a desktop drive.
The tower server HPE ProLiant ML10 Gen9 has two DisplayPorts, four USB 3.0 ports, and a LAN connection on the back (screenshot: HPE).
If the server is used intensively around the clock with correspondingly heavy disk access, it is recommended to buy a hard drive optimised for continuous operation. Alternatively, the server should be shut down when not needed. In addition, the server has two 5.25" bays, but no hot-swap drive frames; drives therefore have to be unscrewed even for a replacement. In smaller companies or networks, however, that should not matter.
The drive bays are behind the front cover (image: HPE).
Internally, the server has six SATA III ports (one of them slim SATA), a USB 2.0 port, and a TPM connector. For RAID, an Intel RST SATA RAID (RAID 0, 1, 10, 5) is available, which can bind the individual hard drives together. A hardware RAID controller has to be added as an adapter card. The optional HPE H241 Smart HBA can operate in host bus adapter or simple RAID mode; when operating in simple RAID mode, the adapter provides RAID 0, RAID 1, and RAID 5 with optional HPE Secure Encryption functions. In addition, HPE StoreEver Ultrium tape drives can be connected to the adapter to back up the data on the server. Alternatively, you can use the software RAID features of the operating system.
The power supply delivers 300 watts. Because the server is optimized for SMEs and small departments, there are no expansion options for the power supply, so redundant power supplies are not possible. This also keeps noise emissions low. The dimensions of the server are 17.50 x 40.13 x 36.76 cm (W x D x H). In the standard version, the HPE ProLiant ML10 Gen9 weighs 6.86 kg.
The HPE ProLiant ML10 Gen9 offers a maximum memory capacity of 64 GB across four DIMM sockets. Four PCIe expansion slots are available (picture: HPE).
Warranty and support
As for all its servers, HPE provides a dedicated website for the HPE ProLiant ML10 Gen9 where the support period can be queried and the various drivers are available. The serial number needed to download drivers can be found on a sticker on the server chassis. HPE offers support options for up to 5 years; by default, a one-year warranty is included. It is recommended to book at least the 3-year support contract HPE eCare Pack ML10 Gen9 3Y/NBD FC (H1RN5E). This ensures that a technician carries out an on-site repair no later than the next working day. The recommended retail price is 120 euros. If you want a faster response time, you can book the package HP eCarePack ML10 G9 3Y/24x7 FC (H1RN7E) for about 300 euros; support then responds within four hours and is also available on weekends. HPE has summarized the various service options in a PDF document.
HPE offers its own Internet page with drivers, information, and support data for the HPE ProLiant ML10 Gen9 (screenshot: Thomas Joos).
HPE is also available by phone if administrators need assistance with setup or administration. Support requests can also be submitted quickly and easily on the HPE website.
Management technology (AMT)
The server can be monitored and remotely controlled with Intel Active Management Technology (AMT); HP iLO is not part of this server. A software agent, which can be downloaded from the HPE support page for the server, provides administrators with information about the state of the server. Servers with Intel AMT can not only be monitored but also controlled over the network, so it is also possible to start the server over the network. However, the server lacks a dedicated network port for monitoring or control; the connection runs over the standard network port installed in the server.
Intel Active Management Technology (AMT) is available as the monitoring solution on the HPE ProLiant ML10 Gen9 (screenshot: Thomas Joos).
With Intel AMT, administrators can also read information from multiple servers and control different servers on the network. The following functions are available for this:
Intel Intelligent Power Node Manager
Power management service
Boot control
Power state management
Hardware/software inventory
Hardware alerting
Audit logs
Microsoft Network Access Protection (NAP)
KVM remote control
Firmware upgrade/downgrade
Host-based setup and configuration
The server does not support ACPI S3 for energy saving, but it does support hibernation (S4). It also supports Wake-on-LAN, so it is possible to shut down the server over the network and start it remotely when it is needed again.
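Wake-on-LAN works by broadcasting a "magic packet": six 0xFF bytes followed by the target MAC address repeated sixteen times. A minimal sender looks like the sketch below (the MAC address is a placeholder, and WOL must be enabled on the NIC and in the BIOS):

```python
# Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the target MAC repeated 16 times.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must contain exactly 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")  # placeholder MAC of the server's NIC
```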
Supported operating systems
The HPE ProLiant ML10 Gen9 is certified for Windows Server 2012 and Windows Server 2012 R2. Linux can also be installed on the server; HPE offers drivers for CentOS 7, Red Hat Enterprise Linux 6/7, and SUSE Linux Enterprise Server 11/12. Because, depending on the configuration, the server lacks a CD/DVD drive, the operating system must be installed either over the network or from a USB stick. This is not a problem, but it makes sense to download the necessary drivers from HPE before installing. With Windows Server 2012 R2, for example, the driver for the network adapter and a few others are missing; they are quickly installed, and HPE provides the latest versions on the server's support page. Because the driver for the storage controller is integrated into Windows Server 2012 R2, the operating system can be installed quickly and easily.
Low consumption and quiet operation
HPE has limited the server's hardware to the essentials, which has its benefits: the server requires very little power at idle, so energy costs can be reduced significantly. Of course, energy consumption increases if additional adapters or hard disk drives are installed in the server. In normal operation the server is also very quiet, so it can run outside the server room. For these reasons, it is also quite possible to run the server as a workstation.
Conclusion: the HPE ProLiant ML10 Gen9 is an ideal entry-level server for small businesses, home office environments, departments, or small offices. It provides enough power for the needs of a small department or small business and has sufficient scalability. In general, it is recommended to choose the variant with an Intel Xeon processor and to make sure an appropriate hard drive is installed if the server is to run continuously.
The standard variant is sufficient to get started; a memory expansion can be useful when more demanding services are to be used. The server can generally also be used as a workstation. The tower server consumes little energy, is quiet, and offers sufficient expansion capabilities. With its numerous interfaces, slots, and drive bays, the server can be used flexibly for different purposes. In the standard configuration, the starter model costs 639 euros.
Blade servers: An introduction and overview
Blade servers have become a staple in almost every data center. The typical "blade" is a stripped-down modular server that saves space by focusing on processing power and memory on each blade, while forgoing much of the traditional storage and I/O functionality typical of rack and standalone server systems. Small size and relatively low cost make blades ideal for situations that require high physical server density, such as distributing a workload across multiple Web servers.
But high density also creates new concerns that prospective adopters should weigh before making a purchase decision. This guide outlines the most important criteria that should be examined when purchasing blade servers, reviews a blade server's internal and external hardware, and discusses basic blade server management expectations.
Form factor. Although blade server size varies from manufacturer to manufacturer, blade servers are characterized as full height or half height. The height aspect refers to how much space a blade server occupies within a chassis.
Unlike a rackmount server, which is entirely self-contained, blade servers lack certain key components, such as cooling fans and power supplies. These missing components, which contribute to a blade server’s small size and lower cost, are instead contained in a dedicated blade server chassis. The chassis is a modular unit that contains blade servers and other modules. In addition to the servers, a blade server chassis might contain modular power supplies, storage modules, cooling modules (i.e., fans) and management modules.
Blade chassis design is proprietary and often specific to a provider’s modules. As such, you cannot install a Hewlett-Packard (HP) Co. server in a Dell Inc. chassis, or vice versa. Furthermore, blade server chassis won’t necessarily accommodate all blade server models that a manufacturer offers. Dell’s M1000e chassis, for example, accommodates only Dell M series blade servers. But third-party vendors sometimes offer modules that are designed to fit another vendor’s chassis. For example, Cisco Systems Inc. makes networking hardware for HP and Dell blades.
Historically, blades’ high-density design posed overheating concerns, and they could be power hogs. With such high density, a fully used chassis consumes a lot of power and produces a significant amount of heat. While there is little danger of newer blade servers overheating (assuming that sufficient cooling modules are used), proper rack design and arrangement are still necessary to prevent escalating temperatures. Organizations with multiple blade server chassis should design data centers to use hot-row/cold-row architecture, as is typical with rack servers.
Processor support. As organizations ponder a blade server purchase, they need to consider a server’s processing capabilities. Nearly all of today’s blade servers offer multiple processor sockets. Given a blade server’s small form factor, each server can usually accommodate only two to four sockets.
Most blade servers on the market use Intel Xeon processors, although the Super Micro SBA-7142G-T4 uses Advanced Micro Devices (AMD) Inc.’s Opteron 6100 series processors. In either case, blade servers rarely offer less than four cores per socket. Most blade server CPUs have six to eight cores per socket. Some AMD Opteron-based processors, such as the 6100 series used by Super Micro, have up to 32 cores.
If you require additional processing power, consider blade modules that can work cooperatively, such as the SGI Altix 450. This class of blades can distribute workloads across multiple nodes. By doing so, the SGI Altix 450 offers up to 38 processor sockets and up to 76 cores when two-core processors are installed.
Memory support. As you ponder a blade server purchase, consider how well the server can host virtual machines (VMs). In the past, blade servers were often overlooked as host servers, because they were marketed as commodity hardware rather than high-end hardware capable of sustaining a virtual data center. Today, blade server technology has caught up with data center requirements, and hosting VMs on blade servers is a realistic option. Because server virtualization is so memory-intensive, organizations typically try to purchase servers that support an enormous amount of memory.
Even with its small form factor, it is rare to find a blade server that offers less than 32 GB of memory. Many of the blade servers on the market support hundreds of gigabytes of memory, with servers like the Fujitsu Primergy BX960 S1 and the Dell PowerEdge M910 topping out at 512 GB.
As important as it is for a blade server to have sufficient memory, there are other aspects of the server’s memory that are worth considering. For example, it is a good idea to look for servers that support error-correcting code (ECC) memory. ECC memory is supported on some, but not all, blade servers. The advantage to using this type of memory is that it can correct single-bit memory errors, and it can detect double-bit memory errors.
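ECC DIMMs implement that single-error-correct, double-error-detect behavior in hardware over 64-bit words. The toy Hamming(7,4) code below shows the same principle on just 4 data bits; it is purely illustrative and not the code a real memory controller uses:

```python
# Toy Hamming(7,4) code: corrects any single-bit error in a 7-bit codeword.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]          # codeword positions 1..7

def correct(code):
    s1 = code[0] ^ code[2] ^ code[4] ^ code[6]   # parity over positions 1,3,5,7
    s2 = code[1] ^ code[2] ^ code[5] ^ code[6]   # parity over positions 2,3,6,7
    s3 = code[3] ^ code[4] ^ code[5] ^ code[6]   # parity over positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3             # syndrome = 1-based error position, 0 = clean
    fixed = list(code)
    if error_pos:
        fixed[error_pos - 1] ^= 1
    return fixed, error_pos

word = encode(1, 0, 1, 1)
corrupted = list(word)
corrupted[4] ^= 1                                # simulate a single-bit memory error
repaired, pos = correct(corrupted)
print(f"error at position {pos}, repaired correctly: {repaired == word}")
```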
Drive support. Given their smaller size, blade servers have limited internal storage. Almost all the blade servers on the market allow for up to two 2.5-inch hard drives. While a server’s operating system (OS) can use these drives, they aren’t intended to store large amounts of data.
If a blade server requires access to additional storage, there are a few different options available. One option is to install storage modules within the server’s chassis. Storage modules, which are sometimes referred to as storage blades or expansion blades, can provide a blade server with additional storage. A storage module can usually accommodate six 2.5-inch SAS drives and typically includes its own storage controller. The disadvantages to using a storage module are that storage modules consume chassis space and the total amount of storage it provides is still limited.
Organizations that need to maximize chassis space for processing (or provide blade servers with more storage than can be achieved through storage modules) typically deploy external storage, such as network-attached storage or storage area network (SAN). Blade servers can accept Fibre Channel mezzanine cards, which can link a blade server to a SAN. In fact, blade servers can even boot from a SAN, rendering internal storage unnecessary.
If you do use internal storage or a storage module, verify that the server supports hot-swappable drives so that you can replace drives without taking the server offline. Even though hot-swappable drives are standard features among rackmount servers, many blade servers do not support hot-swappable drives.
Expansion slots. While traditional rackmount servers support the use of PCI Express (PCIe) and PCI eXtended (PCI-X) expansion cards, most blade servers cannot accommodate these devices. Instead, blade servers offer expansion slots that accommodate mezzanine cards, which are PCI based. Mezzanine card slots, which are sometimes referred to as fabrics, are referred to by letter, where the first slot is A, the second slot is B and so on.
We refer to mezzanine slots this way because blade server design has certain limits and requires consistent slot use. If in one server, you install a Fibre Channel card in slot A, for example, every other server in the chassis is affected by that decision. You could install a Fibre Channel card into slot A on your other servers or leave slot A empty, but you cannot mix and match. You cannot, for example, place a Fibre Channel card in slot A on one server and use slot A to accommodate an Ethernet card on another server. You can, however, put a Fibre Channel card in slot A and an Ethernet card in slot B -- as long as you do the same on all other servers in the chassis (or, alternatively, leave all slots empty).
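Another way to look at this rule is as a per-slot consistency check across the chassis: for every mezzanine slot letter, all populated blades must carry the same fabric type. The sketch below is a hypothetical validator of that constraint; the blade names and fabric labels are invented for illustration:

```python
# Hypothetical check of the mezzanine-slot rule: within one chassis, a given slot letter
# must hold the same fabric type on every blade that populates it (leaving it empty is fine).
def validate_chassis(blades: dict) -> list:
    slot_types = {}   # slot letter -> fabric type first seen in that slot
    problems = []
    for blade, slots in blades.items():
        for slot, fabric in slots.items():
            expected = slot_types.setdefault(slot, fabric)
            if fabric != expected:
                problems.append(f"{blade}: slot {slot} is {fabric}, "
                                f"but the chassis already uses {expected}")
    return problems

chassis = {
    "blade1": {"A": "fibre-channel", "B": "ethernet"},
    "blade2": {"A": "fibre-channel"},               # slot B left empty -- allowed
    "blade3": {"A": "ethernet", "B": "ethernet"},   # violates the slot A rule
}
print(validate_chassis(chassis) or "chassis configuration is consistent")
```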
External blade server characteristics
Power. Blade servers do not contain a power supply. Instead, the power supply is a modular unit that mounts in the chassis. Unlike a traditional power supply, a blade chassis power supply often requires multiple power cords, which connect to multiple 20 ampere utility feeds. This ensures that no single power feed is overloaded, and in some cases provides redundancy.
Another common design provides for multiple power supplies. For example, the HP BladeSystem C3000 enclosure supports the simultaneous use of up to eight different power supplies, which can power eight different blade servers.
Network connectivity. Blade servers almost always include two integrated 1 Gb network interface cards (NICs). However, some servers, such as the Fujitsu Primergy BX960 S1, offer 10 Gb NICs instead. Unlike a rackmount server, you cannot simply plug a network cable into a blade server's NIC; the chassis design makes it impossible to do so. Instead, NIC ports are mapped to interface modules, which provide connectivity on the back of the chassis. The interesting thing about this design is that a server's two NIC ports are almost always routed to different interface modules for the sake of redundancy. Additional NIC ports can be added through the use of mezzanine cards.
User interface ports. The interface ports for managing blade servers are almost always built into the server chassis. Each chassis typically contains a traditional built-in keyboard, video and mouse (KVM) switch, although connecting to blade servers through an IP-based KVM may also be an option. In addition, the chassis almost always contains a DVD drive that can be used for installing software to individual blade servers. Some blade servers, such as the HP ProLiant BL280c G6, contain an internal USB port and an SD card slot, which are intended for use with hardware dongles.
Controls and indicators. Individual blade servers tend to be very limited in terms of controls and indicators. For example, the Fujitsu Primergy BX960 S1 only offers an on-off switch and an ID button. This same server has LED indicators for power, system status, LAN connection, identification and CSS.
Often the blade chassis contains additional controls and indicators. For example, some HP chassis include a built in LCD panel that allows the administrator to perform various configuration and diagnostic tasks, such as performing firmware updates. The precise number and purpose of each control or indicator will vary with each manufacturer and their blade chassis design.
Given that blade servers tend to be used in high-density environments, management capabilities are central. Blade servers should offer diagnostic and management capabilities at both the hardware and the software level.
Hardware-based management features. Hardware-level monitoring capabilities exist so that administrators can monitor server health regardless of the OS that is running on the server. Intelligent Platform Management Interface (IPMI) is one of the most common and is used by the Dell PowerEdge M910 and the Super Micro SBA-7142G-T4.
IPMI uses a dedicated low-bandwidth network port to communicate a server’s status to IPMI-compliant management software. Because IPMI works at the hardware level, the server can communicate its status regardless of the applications that run on the server. In fact, because IPMI works independently of the main processor, it works even if a server isn’t turned on. The IPMI hardware can do its job as long as a server is connected to a power source.
Blade servers that support IPMI 2.0 almost always include a dedicated network port within the server’s chassis that can be used for IPMI-based management. Typically, a single IPMI port services all servers within a chassis. Unlike a rack server, each server doesn’t need its own management port.
Blade servers can get away with sharing an IPMI port because of the types of management that IPMI-compliant management software can perform. Such software (running on a PC) is used to monitor things like temperature, voltage and fan speed. Some server manufacturers even include IPMI sensors that are designed to detect someone opening the server’s case. As previously mentioned, blade servers do not have their own fans or power supplies. Cooling and power units are chassis-level components.
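In practice, querying an IPMI 2.0 chassis from a management station often comes down to a handful of ipmitool calls over the LAN interface. The wrapper below sketches that; the host and credentials are placeholders, and ipmitool must be installed on the management PC:

```python
# Query chassis power and sensor data over the network with ipmitool (must be installed).
import subprocess

IPMI_ARGS = ["ipmitool", "-I", "lanplus",
             "-H", "chassis-mgmt.example.com",   # placeholder management address
             "-U", "admin", "-P", "secret"]      # placeholder credentials

def ipmi(*command: str) -> str:
    """Run one ipmitool command and return its standard output."""
    result = subprocess.run(IPMI_ARGS + list(command),
                            capture_output=True, text=True, check=True)
    return result.stdout

print(ipmi("chassis", "power", "status"))   # e.g. "Chassis Power is on"
print(ipmi("sdr", "type", "Temperature"))   # temperature sensor readings
print(ipmi("sdr", "type", "Fan"))           # fan speed readings
```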
Software-based management features. Although most servers offer hardware-level management capabilities, each server manufacturer also provides its own management software, sometimes at an extra cost. Dell, for example, has the management application OpenManage, while HP provides a management console known as HP Systems Insight Manager (SIM). Hardware management tools tend to be diagnostic in nature, while software-based tools also provide configuration capabilities. You might, for example, use a software management tool to configure a server's storage array.
As a general rule, hardware management is fairly standardized. Multiple vendors support IPMI and the baseboard management controller (BMC), which is another hardware management standard. Some servers, such as the Dell PowerEdge M910, support both standards. Management software, on the other hand, is vendor-specific. You can't, for example, use HP SIM to manage a Dell server. But you can use a vendor's management software to manage different server lines from that vendor. For example, Dell OpenManage works with Dell's M series blade servers, but you can also use it to manage Dell rack servers such as the PowerEdge R715.
Because of the proliferation of management software, server management can get complicated in large data centers. As such, some organizations try to use servers from a single manufacturer to ease the management burden. In other cases, it might be possible to adopt a third-party management tool that can support heterogeneous hardware, though the gain in heterogeneity often comes at a cost of management granularity. It’s important to review each management option carefully and select a tool that provides the desired balance of support and detail.
Table 1: A basic summary of blade servers
There are countless blade servers on the market. Table 1 displays a sample of some of the currently available blade servers. Furthermore, most server vendors provide numerous configuration options, so the configurations outlined in the table may differ from what you encounter in the real world.
Product: PowerEdge M910 Blade Server
Processor support: Two to four processor sockets
Maximum cores: 32
Chipset: Intel Xeon 7500 and 6500 series
Memory support: Up to 512 GB (32 DIMM slots); 1 GB, 2 GB, 4 GB, 8 GB, 16 GB ECC DDR3
Hard drive support: Up to two 2.5" SAS SSD, SATA SSD, SAS (15K or 10K), nearline SAS (7.2K); maximum internal storage of up to 2 TB
Expansion slots: Support for three fabrics
Network ports: Two embedded Broadcom NetXtreme II dual-port 5709S Ethernet NICs with failover and load-balancing capabilities; TCP/IP offload and iSCSI offload capabilities on supported OSes
Manageability: BMC, IPMI 2.0 compliant; Dell OpenManage; Unified Server Configurator; Lifecycle Controller; iDRAC6 with optional vFlash
Power supplies: Supported by Dell's M1000e blade chassis

Product: Fujitsu Primergy BX960 S1
Processor support: Up to four Intel Xeon processors
Maximum cores: 32
Chipset: Intel Xeon E7500 series or Intel X7500 series
Memory support: 8 GB to 512 GB; DDR3 registered ECC 1333 MHz PC3-10600 DIMM
Hard drive support: Two 2.5-inch non hot-pluggable SATA SSD
Expansion slots: Four BX900 mezzanine cards
Network ports: Two Intel 82599, 2x 10 Gbps Ethernet
Manageability: Automatic Server Recovery and Restart; Prefailure Detection and Analysis; ServerView Suite (SV Installation Manager, SV Operation Manager, SV RAID Manager, SV Update Management, SV Power Management, SV Agents); iRMC S2 Advanced Pack
Power supplies: Integrated into chassis

Product: HP ProLiant BL280c G6
Processor support: Dual socket Intel Xeon 5500 or 5600 series
Maximum cores: 12
Chipset: Intel Xeon 5500 series or 5600 series
Memory support: Maximum 192 GB; 12 DIMM slots; PC3-10600 DDR3
Hard drive support: Two drive bays supporting non hot-pluggable SAS, SATA or SATA SSD
Expansion slots: Two
Network ports: Two NC362i gigabit NICs
Manageability: HP Integrated Lights-Out (iLO 2); HP Insight; Onboard Administrator
Power supplies: Installed in chassis

Product: SGI Altix 450
Processor support: 38 sockets for Intel Itanium series 9000 processors
Maximum cores: 76
Chipset: Intel Itanium series 9000
Memory support: Up to 32 GB DDR2 per blade
Hard drive support: Up to two 146 GB SAS drives
Expansion slots: Two low-profile PCI-X slots
Network ports: Not specified
Manageability: Not specified
Power supplies: Built into chassis

Product: Super Micro SBA-7142G-T4
Processor support: Four-socket AMD Opteron 6100 series
Maximum cores: 48
Chipset: AMD Opteron 6100 series eight/12 core
Memory support: Up to 256 GB (16 x 240-pin DIMM); 1333/1066/800 MHz DDR3 ECC unbuffered
Hard drive support: Four 2.5-inch hot-swappable SATA
Expansion slots: 4X QDR/DDR (40/20 Gbps) InfiniBand mezzanine HCA
Network ports: Intel 8276 dual-port gigabit Ethernet
Manageability: IPMI 2.0 via Chassis Management Module
Power supplies: Included in chassis
HP introduces new 3PAR StoreServ 8000 series arrays for the midrange
After almost a three-year run, HP is replacing the 3PAR StoreServ 7000 series with the all-new 3PAR StoreServ 8000 series arrays. This news comes while HP is celebrating how well its mid-range 3PAR arrays have been selling versus competitors. The new arrays feature upgraded hardware, including 16Gb Fibre Channel and 12Gb SAS connectivity for the drives, and use the same fifth-generation ASIC that was introduced in the 20000 series arrays earlier this year. The 8000 series also increases storage density across the board in the 3PAR arrays, reducing the footprint and increasing the top-end capacities.
In terms of portfolio, HP touts a single architecture, single OS and single management across a wide range of solutions with the HP 3PAR.  With the 8000 series introduction, the difference between 3PAR models comes down to the number of controller nodes and associated ports, the types of drives in the array and the number of ASICs in the controllers.  The 8000 series features a single ASIC per controller node and the 20000 series features 2 ASICs per controller node along with more CPU capacity and more RAM for caching.
Both the 8000 and 20000 series arrays feature the 3PAR Gen5 ASIC, the latest generation introduced earlier in 2015. If history repeats, additional capabilities of the Gen5 ASIC will be unlocked by future software upgrades on these two new series of arrays, but out of the gate the new platforms already tout density and performance gains. HP says it has increased density by 4x, improved performance by 30 to 40 percent, and decreased latency by 40 percent between the 7000 and 8000 series arrays. HP says the 8000 series can provide up to 1 million IOPS at 0.387 ms latency.
HP also announced a new 20450 all-flash starter kit.  This model scales to a maximum of 4 controller nodes as opposed to 8 controller nodes in the 20800 and 20850 models. The 20000 series are the high-end storage arrays HP introduced earlier this year to replace the 10000 series arrays, and are typically targeted at large enterprise and service providers.
That rounds out the HP 3PAR portfolio with the following models:
HP 3PAR StoreServ 8200 is the low-end dual-controller model that scales up to 750TB of raw capacity
HP 3PAR StoreServ 8400 scales up to 4 controller nodes and is capable of scaling out to 2.4PB of raw capacity
HP 3PAR StoreServ 8440 is the converged flash array that provides similar high performance to an 8450 array, but with the ability to also have spinning disks. It scales up to 4 controller nodes and includes an increased amount of cache on the controller pairs, comparable to the per-node cache of an all-flash array.
HP 3PAR StoreServ 8450 is the all-flash storage array; it scales up to 4 controller nodes, up to 1.8PB of raw capacity and a usable capacity over 5.5PB. This is the model HP talks about when it says 1 million IOPS at under 1 ms of latency.
HP 3PAR StoreServ 20450, a quad-controller, all-flash configuration with larger scale than the 3PAR 8450
HP 3PAR StoreServ 20800, the workhorse array with up to 8 controller nodes and a mix of hard disk and solid state drives.
HP 3PAR StoreServ 20850, the all-flash configuration of the 20000 series.
HP announced the new 8450 all-flash array is available in a 2U starter kit priced at just $19,000 for 6TB of usable storage. When HP talks about usable storage on the all-flash array, it assumes a 4 to 1 compaction using its thin-provisioning and thin-deduplication – both native, realtime capabilities powered by the ASIC. The same array can also be configured with up to 280TB of usable capacity in just 2U of space.
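To make the compaction arithmetic concrete, here is a minimal Python sketch of the math implied by those figures; the helper function and variable names are mine, and only the 4:1 ratio, the $19,000/6TB starter kit and the 280TB 2U maximum come from the announcement.

```python
def usable_capacity_tb(raw_tb, compaction_ratio=4.0):
    """Usable capacity implied by a given compaction ratio (thin provisioning + dedupe)."""
    return raw_tb * compaction_ratio

# The 6 TB usable starter kit implies roughly 1.5 TB of raw flash at 4:1,
# and the $19,000 price works out to about $3,167 per usable TB.
starter_raw_tb = 6 / 4.0
price_per_usable_tb = 19_000 / 6

# The quoted 280 TB usable maximum for the same 2U array implies ~70 TB of raw flash at 4:1.
max_raw_tb = 280 / 4.0

print(starter_raw_tb, round(price_per_usable_tb), max_raw_tb)
```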
All this news comes just in time for VMworld, where HP is going to be showing the new arrays publicly for the first time.  I look forward to checking them out on the show floor and talking with some HP folks to find out more.
vmwarews-blog · 8 years ago
Text
HPE ProLiant ML350 Gen9 Server
Need to increase your business productivity and improve operations while balancing your employees’ mobility, security and collaboration needs, all within a limited IT budget? The HPE ProLiant ML350 Gen9 Server delivers a class-leading combination of performance, expandability, manageability, reliability and serviceability, making it the choice for enterprise data centers, expanding SMBs and remote offices of larger businesses. The ML350 Gen9 Server leverages the Intel® Xeon® E5-2600 v3 and v4 processors with up to a 21% [1] performance gain, plus the latest HPE DDR4 SmartMemory offering up to a 23% [2] performance increase. It adds support for 12 Gb/s SAS and an embedded 4x1GbE NIC, along with a broad range of graphics and compute options. Manage your HPE ProLiant Server in any IT environment by automating the most essential server lifecycle management tasks: deploy, update, monitor, and maintain with ease. The ML350 Gen9 Server is ideal for everything from enterprise IT infrastructure to mission-critical applications.
What's new
Support for HPE DDR4 2400MHz SmartMemory RDIMM/LRDIMM offering 8/16/32/64/128 GB, with system maximum memory capacity up to 3 TB.
Support for the new 25GbE Network Adapter offerings, delivering breakthrough performance with a 2.5X increase over the previous generation for I/O-intensive application workloads.
Support for the Intel® Xeon® E5-2600 v4 processor offering.
Support for the new 32 Gb and 16 Gb Fibre Channel HBA card options.
Graphics support for NVIDIA® M2000, M4000 and M5000.
A six-SFF Express Bay option supports up to six NVMe PCIe SSDs for application workloads that demand the highest read/write access to storage.
Features
Performance with Unmatched Capacity and Reliability
The HPE ProLiant ML350 Gen9 Server supports up to (2) Intel® Xeon® E5-2600 v3 or v4 processors offering improved performance.
HPE Smart Array Controllers are designed to deliver 12 Gb/s performance and increased data availability and storage capacity, while providing flexibility in the choice of solutions that are simple to manage.
Up to (24) DIMM slots to support HPE DDR4 SmartMemory 2400MHz, helping to prevent data loss and downtime with enhanced error handling while improving workload performance and power efficiency.
Availability, Expandability and Serviceability - A Winning Combination
The HPE ProLiant ML350 Gen9 Server offers expandability and greater capacity, with (8) to (24) LFF or (8) to (48) SFF drive options and increased I/O expansion.
Large expansion capacity with (9) PCIe expansion slots, (8) USB ports, 5U Rack conversion, and Power Supply options.
An embedded 4x1GbE NIC and a choice of HPE PCIe stand-up 1GbE, 10GbE or 25GbE adapters give you flexibility in networking bandwidth and fabric so you can adapt and grow with changing business needs.
Wider GPU support (4) to boost performance in graphic and VDI applications for financial services, education, scientific research, and medical imaging.
Worldwide availability, service and support with a complete range of Foundation Care offerings from installation to extended support.
Agile Infrastructure Management for Essential Administration
The HPE ProLiant ML350 Gen9 Server provides powerful converged management capabilities for the infrastructure lifecycle with embedded server management for provisioning, updating and diagnostic support with HPE iLO.
Configure in Unified Extensible Firmware Interface (UEFI) boot mode to provision local and remote servers with Intelligent Provisioning and Scripting Toolkits.
Online personalized dashboard for converged infrastructure health monitoring and support management with HPE Insight Online.
Optimize firmware and driver updates and reduce downtime with Smart Update, consisting of Smart Update Manager (SUM) and Service Pack for ProLiant (SPP).
Energy Efficiency by Design
The HPE ProLiant ML350 Gen9 Server offers ENERGY STAR® qualified server configurations, illustrating HPE's continued commitment to helping customers conserve energy and save money.
Next generation high efficiency redundant HPE Flexible Slot Power Supplies provide up to 96% efficiency (Titanium).
Improved ambient temperature support for ASHRAE A3 and A4 helps you reduce cooling costs.
1. Intel performance testing, http://www.intel.com/performance
2. Up to 23% better performance is based on similar-capacity DIMMs running on an HPE server compared to a non-HPE server with DDR4, March 2016.
Technical Specifications
Processor family
Intel® Xeon® E5-2600 v3 product family
Intel® Xeon® E5-2600 v4 product family
Number of processors
1 or 2
Processor cores available
22 or 20 or 18 or 16 or 14 or 12 or 10 or 8 or 4
Processor cache
55MB LLC
Processor speed
3.5GHz
Form factor (fully configured)
5U
Form factor chassis
Tower or Rack
Power supply type
(4) Common Slot
Expansion slots
(9) For detail descriptions reference the QuickSpecs
Memory, maximum
3 TB
Based on 128 GB DDR4 LRDIMM
Memory slots
24 DIMM slots, maximum
Memory type
DDR4 SmartMemory
Drive description
(24) LFF SAS/SATA/SSD or
(48) SFF SAS/SATA/SSD
NVMe support via Express Bay will limit max drive capacity
System fan features
Hot plug redundant optional
Network controller
1Gb 331i Ethernet Adapter 4 Ports per controller
Embedded 4-port NIC controller
Storage controller
Smart Array P440ar/2GB FBWC and/or
Dynamic Smart Array B140i
Depending on configuration
Infrastructure management
iLO Management (standard), Intelligent Provisioning (standard), iLO Advanced (optional), HP Insight Control (optional)
Warranty
3/3/3 Server Warranty includes three years of parts, three years of labor, three years of onsite support coverage. Additional information regarding worldwide limited warranty and technical support is available at: www.hpe.com/services/support. Additional Hewlett Packard Enterprise support and service coverage for your product can be purchased locally. For information on availability of service upgrades and the cost for these service upgrades, refer to www.hpe.com/services/support
vmwarews-blog · 8 years ago
Text
Servers and the Types of Network Servers
Servers deliver their services over a network, whether to private users inside large organizations or to the general public over the Internet. For example, when you enter a phrase into a search engine, that phrase travels over the Internet to the servers that store all the relevant pages, and the results are then sent from the server back to your computer.
The term server is used very broadly in information technology. Although many different kinds of products are labeled as servers (hardware, software and operating systems), in theory any program that shares its resources with one or more other programs is called a server. To illustrate this, consider the familiar example of file sharing: the mere presence of files on a system does not make it a server, but the mechanism by which the operating system shares those files with clients is called a server.
In hardware terms, a server usually refers to a computer designed to host software applications in a network environment. Although any personal computer can act as a server, a dedicated server has capabilities that make it better suited to large environments, such as a more powerful processor, upgraded RAM, a stronger power supply and more network connections.
Between 1990 and 2010, as the use of dedicated hardware grew, purpose-built appliance servers appeared on the market, one of the best known being the Google Search Appliance.
Based on the type of processing services they provide, servers are classified into different categories, which are described below.
Types of Servers
File server:
A server that provides centralized management of files and controls the access of different network users to the various drives on a single server in the network. To set up this type of server, use the Manage Your Server option in the Administrative Tools menu.
Print server:
If Windows Server is installed on a computer that has a printer attached, and the printer is shared so that different network users can reach it, that computer can act as a print server.
Application server:
A server that hosts web-based applications and makes them available to the other computers on the network through IIS (Internet Information Services).
Proxy server:
A server that acts as an intermediary between clients and other servers. When a user wants to retrieve information such as files, web pages or other resources from another server, the request goes through the proxy server.
Cache server:
A server that keeps certain objects, for example content fetched from the Internet, in its own storage so that later requests for those objects are served from the cache on the server instead of being fetched again from the Internet and consuming bandwidth a second time; this increases access speed and reduces bandwidth usage. The key hardware traits of these servers are a large hard disk and plenty of RAM.
Mail server:
This service handles sending and receiving email for clients; without it, each user would have to send and receive data directly over the Internet.
Terminal server:
This service lets you connect to the server remotely to carry out administration, or run a networked application through it.
VPN server / Remote access server:
These servers let you grant different users permission to connect to the internal network remotely, or use a VPN (Virtual Private Network) to create a secure connection between two points.
DNS server:
A server that performs name resolution, translating IP addresses to names and vice versa (see the sketch after this list).
DHCP server (Dynamic Host Configuration Protocol):
This server automatically hands out IP addresses to clients from the IP range defined on it. The service must be installed on a computer running a server edition of the operating system.
Game server:
Players of computer games can connect to this server to play multiplayer games online.
Home server:
A server for residential homes that provides services to the other devices in the house over a home network and the Internet.
Database server:
A computer program that provides database services to other computers or programs; this arrangement is also referred to as the client-server model.
Fax server:
A system installed on a local area network (LAN) server that allows users connected to the network to send and receive faxes.
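To make the name resolution a DNS server performs concrete, here is a minimal Python sketch using the standard library resolver; the hostname is a placeholder and the machine running it needs working DNS.

```python
import socket

# Forward lookup: name -> IP address (the answer to an A record query).
ip = socket.gethostbyname("www.example.com")
print("www.example.com ->", ip)

# Reverse lookup: IP address -> name (a PTR record query), if one is registered.
try:
    name, _aliases, _addresses = socket.gethostbyaddr(ip)
    print(ip, "->", name)
except socket.herror:
    print("no reverse (PTR) record registered for", ip)
```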
vmwarews-blog · 8 years ago
Text
Virtual Datacenter
Virtualization Technology in the Datacenter
Introduction
As the use of applications and resources in medium and large organizations grows, datacenters, which play a fundamental role in computing, keep getting bigger, and their energy consumption rises at an ever-increasing rate. The growing number of servers and services also drives support and maintenance costs sharply upward. These obstacles are the key challenges facing previous-generation datacenters, and the way out of them is to migrate toward virtual datacenters, the so-called Virtual DataCenter.
In a virtual datacenter, server consolidation has been put forward as an effective way to improve energy efficiency. With this approach, applications that run on several servers can be placed on a single server using virtualization. As a result, idle servers in the datacenter can be powered off to reduce energy consumption. It has been shown that optimizing datacenter operations through virtualization can cut energy consumption by more than 20%. Virtualization does, however, carry risks of its own, such as energy overhead and reduced performance; if these effects are not well understood, they can cancel out the benefits of server virtualization. A correct understanding and accurate modeling of server energy consumption in datacenters therefore provides an essential foundation for optimizing datacenter operations.
Use for consolidation: With this technology a large number of servers can be placed on a single physical server. Considering that today most servers use only around 8 to 12 percent of their real capacity under normal conditions, moving several servers onto one physical machine lets you make good use of the remaining untapped resources (a rough sketch of this arithmetic follows).
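A minimal Python sketch of the consolidation arithmetic behind that claim; the 8 to 12 percent utilization figure comes from the text above, while the 70 percent target utilization and the helper function are illustrative assumptions of mine.

```python
def consolidation_ratio(avg_utilization, target_utilization=0.70):
    """Estimate how many lightly loaded physical servers can be stacked onto
    one host before the host reaches the target utilization."""
    return target_utilization / avg_utilization

# Servers idling at 8-12% of their capacity can, in principle, be consolidated
# roughly 6-9 to a single physical host before it reaches ~70% utilization.
for u in (0.08, 0.10, 0.12):
    print(f"{u:.0%} average load -> ~{consolidation_ratio(u):.0f} servers per host")
```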
Legacy Application Support: Applications running on old machines can be moved to highly capable virtual servers without having to worry about their compatibility with new hardware.
Running several operating systems at once: Useful in environments where multiple operating systems are in use and it is not possible to dedicate a separate server to each OS.
High Availability: Poorly designed applications usually lack any HA capability; we call them cluster-unaware applications, and clustering solutions are not possible for them, for example an application that uses a local database. Using virtualization tools, such a system can be moved automatically to another virtual system within a few seconds of an operating system or hardware failure being detected.
Simplifying the design and deployment of Disaster Recovery sites: Many virtualization vendors provide facilities for designing and automating the procedures for restoring data at Disaster Recovery sites. For example, these solutions can support replication over protocols such as iSCSI and Fibre Channel over Ethernet, and at recovery time they can automatically reconfigure the servers' network settings to match the target site. This simplifies configuration, increases speed and reduces human error, and it can also cut the need to consult recovery documents and runbooks.
Dynamic Resource Scheduling: efficient use of resources and load balancing across servers. The resources of a number of machines can be shared among virtual machines regardless of which system hosts a given virtual machine; these resources can be memory or processing capacity.
Simplifying Full System Image backups: snapshots can be taken without installing any program or special agent, making it easy to roll the whole system back to its state before an incident.
Network Appliance Virtualization: virtualization solutions usually also include virtual network appliances, which cuts the cost of buying and managing such equipment. Consider a server connected to the network through a single Ethernet port that hosts 100 virtual servers; there is no need to buy a 1,000-port switch. Other interesting applications of virtualization include virtualizing appliances such as firewalls and intrusion detection systems.
vmwarews-blog · 8 years ago
Text
INTRODUCTION TO SITE RECOVERY MANAGER
Introduction to Site Recovery Manager
Version: vCenter SRM 5.0
Before I embark on the book proper I want to outline some of the new features in SRM. This will be of particular interest to previous users, as well as to new adopters, as they can see how far the product has come since the previous release. I also want to talk about what life was like before SRM was developed. As with all forms of automation, it’s sometimes difficult to see the benefits of a technology if you have not experienced what life was like before its onset. I also want at this stage to make it clear what SRM is capable of and what its technical remit is. It’s not uncommon for VMware customers to look at other technologies such as vMotion and Fault Tolerance (FT) and attempt to construct a disaster recovery (DR) use case around them. While that is entirely plausible, care must be taken not to build solutions that use technologies in ways that have not been tested or are not supported by VMware.
What’s New in Site Recovery Manager 5.0
To begin, I would like to flag what’s new in the SRM product. This will form the basis of the new content in this book. This information is especially relevant to people who purchased my previous book, as these changes are what made it worthwhile for me to update that book to be compatible with SRM 5.0. In the sections that follow I list what I feel are the major enhancements to the SRM product. I’ve chosen not to include a changelog-style list of every little modification. Instead, I look at new features that might sway a customer or organization into adopting SRM. These changes address flaws or limitations in the previous product that may have made adopting SRM difficult in the past.
vSphere 5.0 Compatibility
This might seem like a small matter, but when vSphere 5 was released some of the advanced management systems were quickly compatible with the new platform—a situation that didn’t happen with vSphere 4. I think many people underestimate what a huge undertaking from a development perspective vSphere 5 actually is. VMware isn’t as big as some of the ISVs it competes with, so it has to be strategic in where it spends its development resources. Saturating the market with product release after product release can alienate customers who feel overwhelmed by too much change too quickly. I would prefer that VMware take its time with product releases and properly QA the software rather than roll out new versions injudiciously. The same people who complained about any delay would complain that it was a rush job had the software been released sooner. Most of the people who seemed to complain the most viciously about the delays in vSphere 4 were contractors whose livelihoods depended on project sign-off; in short, they were often looking out for themselves, not their customers. Most of my big customers didn’t have immediate plans for a rollout of vSphere 5 on the day of General Availability (GA), and we all know it takes time and planning to migrate from one version to another of any software. Nonetheless, it seems that’s a shake-up in which VMware product management has been effective, with the new release of SRM 5.0 coming in on time at the station.
vSphere Replication
One of the most eagerly anticipated new features of SRM is vSphere Replication (VR). This enables customers to replicate VMs from one location to another using VMware as the primary engine, without the need for third-party storage-array-based replication. VR will be of interest to customers who run vSphere in many branch offices, and yet still need to offer protection to their VMs. I think the biggest target market may well be the SMB sector for whom expensive storage arrays, and even more expensive array-based replication, is perhaps beyond their budget. I wouldn’t be surprised to find that the Foundation SKUs reflect this fact and will enable these types of customers to consume SRM in a cost-effective way.
Of course, if you’re a large enterprise customer who already enjoys the benefits of EMC MirrorView or NetApp SnapMirror, this enhancement is unlikely to change the way you use SRM. But with that said, I think VR could be of interest to enterprise customers; it will depend on their needs and situations. After all, even in a large enterprise it’s unlikely that all sites will be using exactly the same array vendor in both the Protected and Recovery Sites. So there is a use case for VR to enable protection to take place between dissimilar arrays. Additionally, in large environments it may take more time than is desirable for the storage team to enable replication on the right volumes/LUNs; with VR, VMware admins are empowered to protect their VMs when they see fit.
It’s worth saying that VR is protocol-neutral—and that this will be highly attractive to customers migrating from one storage protocol to another—so VR should allow for replication between Fibre Channel and NFS, for example, just like customers can move a VM around with VMware’s Storage vMotion regardless of storage protocol type. This is possible because, with VR, all that is seen is a datastore, and the virtual appliance behind VR doesn’t interface directly with the storage protocols that the ESX host sees. Instead, the VR appliance communicates to the agent on the ESX host that then transfers data to the VR appliance. This should allow for the protection of VMs, even if local storage is used—and again, this might be very attractive to the SMB market where direct attached storage is more prevalent.
Automated Failback and Reprotect
When SRM was first released it did not come with a failback option. That’s not to say failback wasn’t possible; it just took a number of steps to complete the process. I’ve done innumerable failovers and failbacks with SRM 1.0 and 4.0, and once you have done a couple you soon get into the swing of them. Nonetheless, an automated failback process is a feature that SRM customers have had on their wish lists for some time. Instructions to manage the storage arrays are encoded in what VMware calls Site Recovery Adapters (SRAs). Previously, the SRA only automated the testing and running of SRM’s Recovery Plans. But now the SRAs support the instructions required to carry out a failback routine. Prior to this, the administrator had to use the storage vendor’s management tools to manage replication paths.
Additionally, SRM 5.0 ships with a process that VMware is calling Reprotect Mode. Prior to the reprotect feature it was up to the administrator to clear out stale objects in the vCenter inventory and re-create objects such as Protection Groups and Recovery Plans. The new reprotect feature goes a long way toward speeding up the failback process. With this improvement you can see VMware is making the VM more portable than ever before.
Most VMware customers are used to being able to move VMs from one physical server to another with vMotion within the site, and an increasing number would like to extend this portability to their remote locations. This is currently possible with long-distance live migrate technologies from the likes of EMC and NetApp, but these require specialized technologies that are distance-limited and bandwidth-thirsty and so are limited to top-end customers. With an effective planned migration from SRM and a reprotect process, customers would be able to move VMs around from site to site. Clearly, the direction VMware is taking is more driven toward managing the complete lifecycle of a VM, and that includes the fact that datacenter relocations are part of our daily lives.
VM Dependencies
One of the annoyances of SRM 1.0 and 4.0 was the lack of a grouping mechanism for VMs. In previous releases all protected VMs were added to a list, and each one had to be moved by hand to a series of categories: High, Low, or Normal. There wasn’t really a way to create objects that would show the relationships between VMs, or groupings. The new VM Dependencies feature will allow customers to more effectively show the relationships between VMs from a service perspective. In this respect we should be able to configure SRM in such a way that it reflects the way most enterprises categorize the applications and services they provide by tiers. In addition to the dependencies feature, SRM now has five levels of priority order rather than the previous High, Low, and Normal levels. You might find that, given the complexity of your requirements, these offer all the functionality you need.
Improved IP Customization
Another great area of improvement comes in the management of IP addresses. In most cases you will find that two different sites will have entirely different IP subnet ranges. According to VMware research, nearly 40% of SRM customers are forced to re-IP their VMs. Sadly, it’s a minority of customers who have, or can get approval for, a “stretched VLAN” configuration where both sites believe they make up the same continuous network, despite being in entirely different geographies. One method of making sure that VMs with a 10.x.y.z address continue to function in a 192.168.1.x network is to adopt the use of Network Address Translation (NAT) technologies, such that VMs need not have their IP address changed at all.
Of course, SRM has always offered a way to change the IP address of Windows and Linux guests using the Guest Customization feature with vCenter. Guest Customization is normally used in the deployment of new VMs to ensure that they have unique hostnames and IP addresses when they have been cloned from a template. In SRM 1.0 and 4.0, it was used merely to change the IP address of the VM. Early in SRM a command-line utility, dr-ip-exporter, was created to allow the administrator to create many guest customizations in a bulk way using a .csv file to store the specific IP details. While this process worked, it wasn’t easy to see that the original IP address was related to the recovery IP address. And, of course, when you came to carry out a failback process all the VMs would need to have their IP addresses changed back to the original from the Protected Site. For Windows guests the process was particularly slow, as Microsoft Sysprep was used to trigger the re-IP process. With this new release of SRM we have a much better method of handling the whole re-IP process—which will be neater and quicker and will hold all the parameters within a single dialog box on the properties of the VM. Rather than using Microsoft Sysprep to change the IP address of the VM, much faster scripting technologies like PowerShell, WMI, and VBScript can be used. In the longer term, VMware remains committed to investing in technologies both internally and with its key partners. That could mean there will be no need to re-IP the guest operating system in the future.
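As an illustration of the kind of bulk mapping the dr-ip-exporter workflow relies on, here is a small Python sketch that writes a CSV of protected-site to recovery-site addresses. The column names and the 10.0.0.x-to-192.168.1.x translation rule are hypothetical examples of mine, not the utility's actual file format.

```python
import csv

# Hypothetical rule: keep the host octet, move each VM from the protected-site
# 10.0.0.0/24 range to the recovery-site 192.168.1.0/24 range.
vms = {"web01": "10.0.0.11", "app01": "10.0.0.12", "db01": "10.0.0.13"}

with open("recovery-ip-map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["vm_name", "protected_ip", "recovery_ip"])
    for vm_name, protected_ip in vms.items():
        host_octet = protected_ip.rsplit(".", 1)[1]
        writer.writerow([vm_name, protected_ip, f"192.168.1.{host_octet}"])
```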
A Brief History of Life before VMware SRM
To really appreciate the impact of VMware’s SRM, it’s worth it to pause for a moment to think about what life was like before virtualization and before VMware SRM was released. Until virtualization became popular, conventional DR meant dedicating physical equipment at the DR location on a one-to-one basis. So, for every business-critical server or service there was a duplicate at the DR location. By its nature, this was expensive and difficult to manage—the servers were only there as standbys waiting to be used if a disaster happened. For people who lacked those resources internally, it meant hiring out rack space at a commercial location, and if that included servers as well, that often meant the hardware being used was completely different from that at the physical location. Although DR is likely to remain a costly management headache, virtualization goes a long way toward reducing the financial and administrative penalties of DR planning. In the main, virtual machines are cheaper than physical machines. We can have many instances of software—Windows, for example—running on one piece of hardware, reducing the amount of rack space required for a DR location. We no longer need to worry about dissimilar hardware; as long as the hardware at the DR location supports VMware ESX, our precious time can be dedicated to getting the services we support up and running in the shortest time possible.
One of the most common things I’ve heard in courses and at conferences from people who are new to virtualization is, among other things:
We’re going to try virtualization in our DR location, before rolling it out into production.
This is often used as a cautious approach by businesses that are adopting virtualization technologies for the first time. Whenever this is said to me I always tell the individual concerned to think about the consequences of what he’s saying. In my view, once you go down the road of virtualizing your DR, it is almost inevitable that you will want to virtualize your production systems. This is the case for two main reasons. First, you will be so impressed and convinced by the merits of virtualization anyway that you will want to do it. Second, and more important in the context of this book, is that if your production environment is not already virtualized how are you going to keep your DR locations synchronized with the primary location?
There are currently a couple of ways to achieve this. You could rely solely on conventional backup and restore, but that won’t be very slick or very quick. A better alternative might be to use some kind of physical to virtual conversion (P2V) technology. In recent years many of the P2V providers, such as Novell and Leostream, have repositioned their offerings as “availability tools,” the idea being that you use P2V software to keep the production environment synchronized with the DR location. These technologies do work, and there will be some merits to adopting this strategy—say, for services that must, for whatever reason, remain on a physical host at the “primary” location. But generally I am skeptical about this approach. I subscribe to the view that you should use the right tools for the right job; never use a wrench to do the work of a hammer. From its very inception and design you will discover flaws and problems—because you are using a tool for a purpose for which it was never designed. For me, P2V is P2V; it isn’t about DR, although it can be reengineered to do this task. I guess the proof is in the quality of the reengineering. On top of this you should know that in the long term, VMware has plans to integrate its VMware Converter technology into SRM to allow for this very functionality. In the ideal VMware world, every workload would be virtualized. In 2010 we reached a tipping point where more new servers were virtual machines than physical machines. However, in terms of percentage it is still the case that, on average, only 30% of most people’s infrastructure has been virtualized. So, at least for the mid-term, we will still need to think about how physical servers are incorporated into a virtualized DR plan.
Another approach to this problem has been to virtualize production systems before you virtualize the DR location. By doing this you merely have to use your storage vendor’s replication or snapshot technology to pipe the data files that make up a virtual machine (VMX, VMDK, NVRAM, log, Snapshot, and/or swap files) to the DR location. Although this approach is much neater, this in itself introduces a number of problems, not least of which is getting up to speed with your storage vendor’s replication technology and ensuring that enough bandwidth is available from the Protected Site to the Recovery Site to make it workable. Additionally, this introduces a management issue. In the large corporations the guys who manage SRM may not necessarily be the guys who manage the storage layer. So a great deal of liaising, and sometimes cajoling, would have to take place to make these two teams speak and interact with each other effectively.
But putting these very important storage considerations to one side for the moment, a lot of work would still need to be done at the virtualization layer to make this sing. These “replicated” virtual machines need to be “registered” on an ESX host at the Recovery Site, and associated with the correct folder, network, and resource pool at the destination. They must be contained within some kind of management system on which to be powered, such as vCenter. And to power on the virtual machine, the metadata held within the VMX file might need to be modified by hand for each and every virtual machine. Once powered on (in the right order), their IP configuration might need modification. Although some of this could be scripted, it would take a great deal of time to create and verify those scripts. Additionally, as your production environment started to evolve, those scripts would need constant maintenance and revalidation. For organizations that make hundreds of virtual machines a month, this can quickly become unmanageable. It’s worth saying that if your organization has already invested a lot of time in scripting this process and making a bespoke solution, you might find that SRM does not meet all your needs. This is a kind of truism. Any bespoke system created internally is always going to be more finely tuned to the business’s requirements. The problem then becomes maintaining it, testing it, and proving to auditors that it works reliably.
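To give a sense of what just one of those steps looks like when scripted by hand, here is a hedged pyVmomi sketch that registers a replicated VM's .vmx file at the Recovery Site; the vCenter address, credentials, datastore path and inventory names are placeholders, and task waiting and error handling are omitted.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Connect to the Recovery Site vCenter (placeholder address and credentials).
context = ssl._create_unverified_context()
si = SmartConnect(host="recovery-vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=context)

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]      # first datacenter in the inventory
vm_folder = datacenter.vmFolder                     # destination folder
cluster = datacenter.hostFolder.childEntity[0]      # first cluster
resource_pool = cluster.resourcePool                # destination resource pool

# Register the replicated .vmx file so the VM appears in the Recovery Site inventory.
task = vm_folder.RegisterVM_Task(
    path="[replica-datastore] web01/web01.vmx",
    name="web01",
    asTemplate=False,
    pool=resource_pool,
)

# In a real script you would wait for the task, then fix up the network,
# folder placement and IP configuration before powering the VM on.
Disconnect(si)
```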
It was within this context that VMware engineers began working on the first release of SRM. They had a lofty goal: to create a push-button, automated DR system to simplify the process greatly. Personally, when I compare it to alternatives that came before it, I’m convinced that out of the plethora of management tools added to the VMware stable in recent years VMware SRM is the one with the clearest agenda and remit. People understand and appreciate its significance and importance. At last we can finally use the term virtualizing DR without it actually being a throwaway marketing term.
If you want to learn more about this manual DR, VMware has written a VM book about virtualizing DR that is called A Practical Guide to Business Continuity & Disaster Recovery with VMware Infrastructure. It is free and available online here:
www.vmware.com/files/pdf/practical_guide_bcdr_vmb.pdf
I recommend reading this guide, perhaps before reading this book. It has a much broader brief than mine, which is narrowly focused on the SRM product.
What Is Not a DR Technology?
In my time of using VMware technologies, various features have come along which people often either confuse for or try to engineer into being a DR technology—in other words, they try to make a technology do something it wasn’t originally designed to do. Personally, I’m in favor of using the right tools for the right job. Let’s take each of these technologies in turn and try to make a case for their use in DR.
vMotion
In my early days of using VMware I would often hear my clients say they intended to use vMotion as part of their DR plan. Most of them understood that such a statement could only be valid if the outage was in the category of a planned DR event such as a power outage or the demolition of a nearby building. Increasingly, VMware and the network and storage vendors have been postulating the concept of long-distance vMotion for some time. In fact, one of the contributors to this book, Chad Sakac of EMC, had a session at VMworld San Francisco 2009 about this topic. Technically, it is possible to do vMotion across large distances, but the technical challenges are not to be underestimated or taken lightly given the requirements of vMotion for shared storage and shared networking. We will no doubt get there in the end; it’s the next logical step, especially if we want to see the move from an internal cloud to an external cloud become as easy as moving a VM from one ESX host in a blade enclosure to another. Currently, to do this you must shut down your VMs and cold-migrate them to your public cloud provider.
But putting all this aside, I think it’s important to say that VMware has never claimed that vMotion constitutes a DR technology, despite the FUD that emanates from its competitors. As an indication of how misunderstood both vMotion and the concept of what constitutes a DR location are, one of these clients said to me that he could carry out a vMotion from his primary site to his Recovery Site. I asked him how far away the DR location was. He said it was a few hundred feet away. This kind of wonky thinking and misunderstanding will not get you very far down the road of an auditable and effective DR plan. The real usage of vMotion currently is being able to claim a maintenance window on an ESX host without affecting the uptime of the VMs within a site. Once coupled with VMware’s Distributed Resource Scheduler (DRS) technology, vMotion also becomes an effective performance optimization technology. Going forward, it may indeed be easier to carry out a long-distance vMotion of VMs to avoid an impending disaster, but much will depend on the distance and scope of the disaster itself. Other things to consider are the number of VMs that must be moved, and the time it takes to complete that operation in an orderly and graceful manner.
VMware HA Clusters
Occasionally, customers have asked me about the possibility of using VMware HA technology across two sites. Essentially, they are describing a “stretched cluster” concept. This is certainly possible, but it suffers from the technical challenges that confront geo-based vMotion: access to shared storage and shared networking. There are certainly storage vendors that will be happy to assist you in achieving this configuration; examples include NetApp with its MetroCluster and EMC with its VPLEX technology. The operative word here is metro. This type of clustering is often limited by distance (say, from one part of a city to another). So, as in my anecdote about my client, the distances involved may be too narrow to be regarded as a true DR location. When VMware designed HA, its goal was to be able to restart VMs on another ESX host. Its primary goal was merely to “protect” VMs from a failed ESX host, which is far from being a DR goal. HA was, in part, VMware’s first attempt to address the “eggs in one basket” anxiety that came with many of the server consolidation projects we worked on in the early part of the past decade. Again, VMware has never made claims that HA clusters constitute a DR solution. Fundamentally, HA lacks the bits and pieces to make it work as a DR technology. For example, unlike SRM, there is really no way to order its power-on events or to halt a power-on event to allow manual operator intervention, and it doesn’t contain a scripting component to allow you to automate residual reconfiguration when the VM gets started at the other site. The other concern I have with this is when customers try to combine technologies in a way that is not endorsed or QA’d by the vendor. For example, some folks think about overlaying a stretched VMware HA cluster on top of their SRM deployment. The theory is that they can get the best of both worlds. The trouble is the requirements of stretched VMware HA and SRM are at odds with each other. In SRM the architecture demands two separate vCenters managing distinct ESX hosts. In contrast, VMware HA requires that the two or more hosts that make up an HA cluster be managed by just one vCenter. Now, I dare say that with a little bit of planning and forethought this configuration could be engineered. But remember, the real usage of VMware HA is to restart VMs when an ESX host fails within a site—something that most people would not regard as a DR event.
VMware Fault Tolerance
VMware Fault Tolerance (FT) was a new feature of vSphere 4. It allowed for a primary VM on one host to be “mirrored” on a secondary ESX host. Everything that happens on the primary VM is replayed in “lockstep” with the secondary VM on the different ESX host. In the event of an ESX host outage, the secondary VM will immediately take over the primary’s role. A modern CPU chipset is required to provide this functionality, together with two 1GB vmnics dedicated to the FT Logging network that is used to send the lockstep data to the secondary VM. FT scales to allow for up to four primary VMs and four secondary VMs on the ESX host, and when it was first released it was limited to VMs with just one vCPU. VMware FT is really an extension of VMware HA (in fact, FT requires HA to be enabled on the cluster) that offers much better availability than HA, because there is no “restart” of the VM. As with HA, VMware FT has quite high requirements, as well as shared networking and shared storage—along with additional requirements such as bandwidth and network redundancy. Critically, FT requires very low-latency links to maintain the lockstep functionality, and in most environments it will be cost-prohibitive to provide the bandwidth to protect the same number of VMs that SRM currently protects. The real usage of VMware FT is to provide a much better level of availability to a select number of VMs within a site than currently offered by VMware HA.
Scalability for the Cloud
As with all VMware products, each new release introduces increases in scalability. Quite often these enhancements are overlooked by industry analysts, which is rather disappointing. Early versions of SRM allowed you to protect a few hundred VMs, and SRM 4.0 allowed the administrator to protect up to 1,000 VMs per instance of SRM. That forced some large-scale customers to create “pods” of SRM configurations in order to protect the many thousands of VMs that they had. With SRM 5.0, the scalability numbers have jumped yet again. A single SRM 5.0 instance can protect up to 6,000 VMs, and can run up to 30 individual Recovery Plans at any one time. This compares very favorably to only being able to protect up to 1,000 VMs and run just three Recovery Plans in the previous release. Such advancements are absolutely critical to the long-term integration of SRM into cloud automation products, such as VMware’s own vCloud Director. Without that scale it would be difficult to leverage the economies of scale that cloud computing brings, while still offering the protection that production and Tier 1 applications would inevitably demand.
What Is VMware SRM?
Currently, SRM is a DR automation tool. It automates the testing and invocation of disaster recovery (DR), or as it is now called in the preferred parlance of the day, “business continuity” (BC), of virtual machines. Actually, it’s more complicated than that. For many, DR is a procedural event. A disaster occurs and steps are required to get the business functional and up and running again. On the other hand, BC is more a strategic event, which is concerned with the long-term prospects of the business post-disaster, and it should include a plan for how the business might one day return to the primary site or carry on in another location entirely. Someone could write an entire book on this topic; indeed, books have been written along these lines, so I do not intend to ramble on about recovery time objectives (RTOs), recovery point objectives (RPOs), and maximum tolerable downtimes (MTDs)—that’s not really the subject of this book. In a nutshell, VMware SRM isn’t a “silver bullet” for DR or BC, but a tool that facilitates those decision processes planned way before the disaster occurred. After all, your environment may only be 20% or 30% virtualized, and there will be important physical servers to consider as well.