Facing increasing network complexity, rising performance expectations, and budget cuts, IT leaders worldwide are looking to answer the ongoing question of how to do more with less. The answer for many organizations is network virtualization.
Network virtualization, the use of a high-capacity server to host multiple operating system instances, offers organizations efficient IT resource utilization, simplified infrastructure operations and management, and reduced total cost of ownership (TCO) without sacrificing reliability or service. A small management layer called a hypervisor sits between the hardware and the virtual servers, intercepting each guest's hardware-facing operations so that every virtual server sees what appears to be dedicated hardware of its own. As a result, organizations can consolidate many physical servers onto fewer machines, translating into greater performance efficiency and reduced capital and operating costs.
Reasons for Server Virtualization
Server virtualization provides several advantages over a larger quantity of physical servers, such as:
- Reduced space usage – modern high-end servers can have 40+ cores and 1TB+ DRAM in a 2-4RU package. Such a server can potentially host hundreds of virtual machines, each of which would previously have resided on its own piece of hardware, so what once occupied several full racks now fits into a single 42RU rack.
- Reduced environmental impact – A physical server has some unavoidable baseline power consumption, the minimum needed just to keep the system operational. This adds up considerably as the number of servers grows; consolidating servers via virtualization significantly reduces the power consumption (and, by extension, cooling and ventilation) burden.
- Ease of operation, administration, and maintenance – From a physical perspective, fewer nodes to manage means less work, and higher-end servers also offer high availability features such as redundancy and lights-out management. From the virtual server perspective, decoupling the OS and application from the underlying hardware gives a greater degree of flexibility when it comes time to perform maintenance, provision new servers, or update the physical hardware.
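The power-consolidation argument above can be made concrete with some back-of-the-envelope arithmetic. All wattages and server counts below are illustrative assumptions for the sake of the example, not measured figures:

```python
# Rough power-savings estimate from consolidating standalone servers
# onto a handful of virtualization hosts. Wattages are assumptions.

def annual_kwh(watts, hours=24 * 365):
    """Energy consumed in a year, in kWh, at a constant draw of `watts`."""
    return watts * hours / 1000

# 100 lightly loaded 1U servers drawing an assumed ~200 W each...
standalone = 100 * annual_kwh(200)
# ...consolidated onto 5 larger hosts drawing an assumed ~800 W each.
consolidated = 5 * annual_kwh(800)

print(round(standalone - consolidated))  # kWh saved per year: 140160
```

Note that this counts only the servers themselves; since cooling load scales with power dissipated, the real facility-level savings would be larger.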
Note, though, that not all applications are suitable for virtualization. For example, a database server with substantial I/O requirements may be a poor candidate: the overhead inherent in virtualization can severely compromise its performance. Likewise, any application expected to fully utilize the server hardware, such as a high-performance computing workload, may not be a fit; since the server would be running only that one application, virtualization offers no major benefit.
Impact on Server Selection
While virtualization has been around for some time, the demands it imposes on the underlying hardware have until recently made large-scale virtualization a very challenging prospect. Foremost is the need for very large quantities of memory: a modern OS will, at rest, want 1GB or more of DRAM available to it, before any application-specific needs are considered. On older hardware, this meant either using extremely expensive high-density memory or working around a limited DRAM quantity. Modern servers, however, offer around 128GB with low-density memory, or in excess of 1TB with high-density memory, allowing significantly more virtual machines per physical server. Likewise, with modern multicore processors (currently 10-12 cores per socket), processing resources have also scaled to the point where heavy virtualization is practical.
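The sizing considerations above reduce to a simple capacity estimate: a host fits as many VMs as the tighter of its memory and CPU budgets allows. The following sketch uses assumed per-VM figures (4GB RAM, 2 vCPUs, a 4:1 vCPU overcommit, and a 16GB hypervisor reserve), not vendor sizing guidance:

```python
# Back-of-the-envelope VM capacity estimate for a single host.
# Per-VM figures and overcommit ratio are illustrative assumptions.

def max_vms(host_dram_gb, host_cores,
            vm_dram_gb=4, vcpus_per_vm=2,
            hypervisor_reserve_gb=16, vcpu_overcommit=4.0):
    """VMs that fit, taking the tighter of the RAM and CPU limits."""
    by_ram = (host_dram_gb - hypervisor_reserve_gb) // vm_dram_gb
    by_cpu = int(host_cores * vcpu_overcommit) // vcpus_per_vm
    return int(min(by_ram, by_cpu))

# A modern server as cited above: 40 cores, 1TB DRAM.
print(max_vms(1024, 40))  # RAM allows 252, CPU allows 80 -> 80
# An older host: 128GB DRAM, 24 cores.
print(max_vms(128, 24))   # RAM allows 28, CPU allows 48 -> 28
```

As the two cases show, the binding constraint flips: on memory-rich modern hardware the CPU budget dominates, while on older hosts DRAM is what caps the VM count.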
Impact on the Network
When using server virtualization, network needs change as well. With a large number of low-utilization servers, the performance required per port is correspondingly low, allowing the use of oversubscribed gigabit cards or even 10/100 cards in the datacenter switch. Combining many of these servers into one piece of hardware, however, dramatically increases per-port performance requirements, often calling for aggregated gigabit Ethernet at minimum and, depending on the applications, possibly 10GbE. On the other hand, total port count will likely be reduced considerably. This has, in many cases, driven a migration away from the large chassis-based switches seen in older datacenters toward a new generation of small, high-density 10GbE switches such as the Dell Networking S4810 and the Cisco Nexus 5000 line, with high availability achieved through some form of multichassis link aggregation. Where gigabit connections are required, high-performance switches such as the Dell Networking S55 and S60, Nexus 2000, and Catalyst 4900 series are placed at the top of the rack and connected to the core with multiple 10GbE uplinks for minimal oversubscription.
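The oversubscription trade-off mentioned above is just the ratio of total downlink capacity to total uplink capacity. A minimal sketch, with illustrative port counts chosen for the example:

```python
# Oversubscription ratio for a top-of-rack switch: aggregate downlink
# (server-facing) bandwidth divided by aggregate uplink bandwidth.

def oversubscription(downlink_ports, downlink_gbps,
                     uplink_ports, uplink_gbps):
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# 48 x 1GbE server ports with 4 x 10GbE uplinks to the core.
print(oversubscription(48, 1, 4, 10))  # 1.2 -> near line rate
# The same 48 ports with a single 10GbE uplink.
print(oversubscription(48, 1, 1, 10))  # 4.8 -> heavily oversubscribed
```

A ratio near 1:1 means the uplinks can carry the downlinks at full load; the higher the ratio, the more the design depends on servers not all bursting at once, which is exactly the assumption consolidation undermines.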