Differences Between a Physical Server and a Virtual Machine

Over the past decade or more, IT infrastructure has changed drastically. With the rise of virtualization, organizations have shifted the way business-critical workloads are provisioned, managed, and housed in the infrastructure. Rather than configuring server workloads in a 1:1 fashion, with one workload per physical server, virtualization has brought about the ability to run many software workloads on a single set of physical hardware.

With the advancements in processing, network, and storage power, virtualization has allowed businesses to take advantage of the evolution in CPU processing power across the entire landscape more efficiently. Even so, there are still cases where a physical server is desirable for certain workloads.

Let’s take a look at the important differences between a physical server and a virtual machine.

Physical Server vs Virtual Machine

When looking at the differences between physical servers and virtual machines, and deciding which should run your business-critical workloads, let’s first get a better understanding of each. We will consider the following:

1. What is a physical server?

2. What is a virtual machine?

3. Physical vs virtual machine feature comparison

4. How do you choose?

5. Other Considerations

Let’s get started by looking at physical servers.

What is a Physical Server?

1. The physical server is a well-known part of the IT infrastructure that has been around since the very beginning. It is hardware that you can touch and feel. Such a server is typically referred to as “bare-metal”.

2. It includes all the physical hardware components contained in the server case that allow it to function. Physical servers basically have a CPU, RAM, and some type of internal storage from which the OS is loaded and booted. They may or may not have general-purpose storage beyond the storage used for the OS.

3. Physical connections in the datacenter run to the physical server, including power, network, storage connections, and other peripheral devices and hardware.

4. Bare-metal servers that run a single application generally provide applications and data for a single “tenant”. In simple terms, a tenant is a customer or consumer. A single-tenant deployment is a single instance of the software and supporting infrastructure that serves a single customer.

Types of Servers

While people may think of a physical server as a “one size fits all” piece of hardware, there are all kinds, sizes, and purposes for physical servers. These include the following server types:

1. Tower Servers – This type of server is lower cost and less powerful than its rackmount and modular counterparts. These servers are found in edge or small business environments where a server rack may not be installed or there is not enough other rackmount equipment to justify purchasing a server rack.

2. Rackmount Servers – These servers are the typical servers you think about when thinking about an enterprise data center environment and are mounted in a standard server rack.

Image - Rackmount server

3. HCI or Modular Servers – HCI or modular servers are sometimes known as “blade” servers or hyper-converged form factors, as they typically provide the ability to install or scale compute, storage, and network simply by installing a new “server blade” or “module” into the chassis of the HCI/modular server.

Image - Dell PowerEdge MX

The server types mentioned above are certainly not the only ones available for purchase. However, they are the most common physical form factors that you will find in an enterprise data center environment.

What is a Virtual Machine?

1. Virtual machines are the most common type of IT infrastructure found in today’s environments. While containers are certainly gaining traction and growing in adoption, virtual machines are still the de facto standard of today’s virtualized environments.

2. These machines are made possible by installing a hypervisor on top of a “bare-metal” server. A common approach for many popular hypervisors today, like VMware vSphere and Microsoft Hyper-V, is to virtualize the hardware of the underlying physical server and present this virtualized hardware to the OS. The hypervisor has a CPU scheduler of some sort that brokers requests between the guest operating systems running in the virtual machines and the physical CPUs installed in the underlying physical host.
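To make the scheduler’s brokering role concrete, here is a deliberately simplified round-robin sketch in Python. This is not how any real hypervisor scheduler works (real schedulers account for priorities, co-scheduling, NUMA topology, and much more); the class and method names are purely hypothetical:

```python
from collections import deque

class SimpleScheduler:
    """Toy model: maps queued vCPU run requests onto a fixed pool of physical CPUs."""

    def __init__(self, physical_cpus):
        self.physical_cpus = physical_cpus
        self.run_queue = deque()

    def request(self, vm_name, vcpu_id):
        # A guest vCPU asks for time on a physical CPU.
        self.run_queue.append((vm_name, vcpu_id))

    def dispatch(self):
        # Each cycle, assign queued vCPUs to physical CPUs, one per core.
        assignments = []
        for pcpu in range(self.physical_cpus):
            if not self.run_queue:
                break
            vm, vcpu = self.run_queue.popleft()
            assignments.append((pcpu, vm, vcpu))
        return assignments

sched = SimpleScheduler(physical_cpus=2)
sched.request("vm-a", 0)
sched.request("vm-b", 0)
sched.request("vm-c", 0)
print(sched.dispatch())  # two vCPUs run this cycle; vm-c waits for the next one
```

The point of the sketch is simply that guests contend for a shared pool of physical cores, which is why CPU scheduling quality matters so much for VM performance.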

3. Virtual machines provide many advantages over a physical server in terms of provisioning, management, configuration, and automation. While a new physical server may take days or weeks to acquire, provision, and configure, a virtual machine can be spun up in minutes, or even seconds in some cases.

4. Because a virtual machine is abstracted from the underlying physical hardware, it is afforded mobility and flexibility that are simply not possible with physical servers. Virtual machines can seamlessly be moved between different hosts, even while running. Since virtual machines are a set of files on shared storage rather than a set of physical hardware, their compute/memory ownership can easily be moved and changed.

5. As mentioned earlier, a physical server is well-suited for a single tenant or customer. A virtual machine is better suited for multi-tenant environments where many different companies may make use of different virtual machines, all located on a single hypervisor host or a cluster of hypervisor hosts.

Types of VMs

While there is no physical form factor that we can put our arms around for a virtual machine, there is the concept of “virtual hardware” for a VM. Take VMware vSphere as an example: when you look at the VM settings, you can see the virtual hardware that comprises the VM.

Image - Virtual hardware contained in a VMware virtual machine

Outside of the virtual hardware, there are other types of VMs to make note of:

1. Persistent – Generally associated with VDI environments, describing a VM that will not be powered down and destroyed after being used.

2. Non-persistent – Generally associated with VDI environments, describing a VM that is short-lived and only provisioned when needed.

3. Thick provisioned – Describes storage for a VM where the disk is fully committed, or “zeroed out”, when created.

4. Thin provisioned – Thin provisioned disks only allocate space as it is needed. This effectively allows “overprovisioning” of storage, as you can assign more storage to your VMs than you physically have available.

5. Virtual Appliances – Virtual appliances in VMware vSphere can be deployed from OVA/OVF templates. This makes provisioning an appliance extremely easy and useful.

6. vApps – A vSphere concept that allows logically grouping virtual machines together so they can be managed and administered as a single entity

7. Generation 1 – In Hyper-V, this is the legacy VM configuration. The “generation” generally affects the VM’s capabilities and features. Generation 1 VMs are usually limited in their features when compared to generation 2 VMs.

8. Generation 2 – The newest type of VM configuration in Hyper-V that affords all the latest features and capabilities.
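The thick vs thin provisioning distinction above can be loosely demonstrated with sparse files in Python. This is only an analogy for the allocation behavior, not how a hypervisor actually implements virtual disks; the file names are arbitrary and the actual on-disk allocation depends on the filesystem:

```python
import os

def create_thick(path, size):
    # "Thick": every byte is written up front, so all space is allocated.
    with open(path, "wb") as f:
        f.write(b"\0" * size)

def create_thin(path, size):
    # "Thin": seek past the end and write one byte, leaving a "hole";
    # on sparse-file-capable filesystems, space is allocated on demand.
    with open(path, "wb") as f:
        f.seek(size - 1)
        f.write(b"\0")

SIZE = 1024 * 1024  # a 1 MiB "virtual disk"
create_thick("thick.img", SIZE)
create_thin("thin.img", SIZE)

# Both report the same logical size to consumers of the disk...
print(os.path.getsize("thick.img"), os.path.getsize("thin.img"))
# ...but actual allocation differs where the OS exposes it (POSIX st_blocks).
print(getattr(os.stat("thick.img"), "st_blocks", "n/a"),
      getattr(os.stat("thin.img"), "st_blocks", "n/a"))
```

The same logical-vs-allocated gap is what makes overprovisioning possible, and also why a thin-provisioned datastore must be monitored so it does not actually run out of physical space.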

Physical vs Virtual Machine Feature Comparison

Physical servers and virtual machines are very different in the way they are constructed, but they do share similarities. When it comes to connecting to a “physical server” vs a “virtual server”, the experience from a client perspective is going to be exactly the same. Applications generally do not care whether they are connecting to a physical server or to a virtual machine.

As long as the resources an application needs are presented, whether by a physical server or a virtual machine, the application can perform the same regardless of whether the server is physical or virtual. Let’s take a look at the following comparisons:

  1. Costs
  2. Physical footprint
  3. Lifespan
  4. Migration
  5. Performance
  6. Efficiency
  7. Disaster Recovery and High Availability

Costs

1. Even though the price of physical hardware has come down considerably relative to the processing power you get for the dollar, it is still expensive. Depending on the specs of the hardware that is provisioned, costs can range from a few thousand dollars to tens of thousands of dollars for a single physical server.

2. Looking at the cost of a VM can be a more abstract exercise, since you can create as many VMs on top of a physical host running a hypervisor as the hardware can support. There are still “costs” associated with VMs, since each essentially takes a “slice” of the hardware specs and performance that the physical host is capable of and that you paid for when purchasing the hardware.

3. Products like VMware’s vRealize Operations Manager have the ability to run continuous cost analysis based on processors allocated, RAM, and storage consumed. This can be helpful to have tangible information regarding the costs of your individual VMs.

4. When it comes to a 1:1 comparison, however, of physical server hardware for one workload versus the ability to run many instances or workloads on top of a single physical hypervisor host, VMs are a much more cost-effective and efficient use of your physical resources in the enterprise data center.
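A back-of-the-envelope comparison makes the point. The dollar figures below are purely hypothetical, chosen only to show how a single host’s cost amortizes across the VMs it runs:

```python
# Hypothetical numbers for illustration only.
server_cost = 15_000   # one physical server (hardware only)
workloads = 20         # workloads that need to be hosted

physical_total = server_cost * workloads   # 1:1 model: one server per workload
virtual_total = server_cost                # one hypervisor host runs all 20 VMs
cost_per_vm = virtual_total / workloads

print(f"1:1 physical: ${physical_total:,}")
print(f"Virtualized:  ${virtual_total:,} (about ${cost_per_vm:,.0f} per VM)")
```

Real chargeback models (like those in vRealize Operations Manager) are far more granular, factoring in allocated CPU, RAM, and consumed storage per VM, but the amortization principle is the same.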

Physical Footprint

1. When you look at the physical footprint of a physical server, it can certainly be extensive. Whether it is a tower, rack, or blade type chassis, space will be required to accommodate the physical form factor of the server. If you think about literally having a physical server for each workload running to service a single solution, application, or set of users, the physical space required can add up.

2. Virtual machines, on the other hand, allow what is known as server consolidation. Over the past decade or more, many organizations have been undergoing the transformation from a 1:1 relationship between physical servers and applications to virtualized environments that can run 10, 20, 50, or more VMs per physical hypervisor host.

3. VMs are certainly a more efficient use of physical space in the enterprise data center when compared to physical servers each running a single workload.
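The space savings from consolidation are easy to quantify. The numbers below are hypothetical, assuming 2U servers and a conservative 20 VMs per hypervisor host:

```python
# Hypothetical consolidation math for illustration only.
workloads = 50        # applications to host
u_per_server = 2      # rack units consumed per server
vms_per_host = 20     # conservative consolidation ratio

physical_u = workloads * u_per_server        # 1:1 model: 100U of rack space
hosts_needed = -(-workloads // vms_per_host) # ceiling division -> 3 hosts
virtual_u = hosts_needed * u_per_server      # consolidated: 6U of rack space

print(f"1:1 physical: {physical_u}U of rack space")
print(f"Virtualized:  {virtual_u}U across {hosts_needed} hosts")
```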


Lifespan

1. The general lifespan of physical server hardware in most enterprise environments ranges anywhere from 3-5 years. This means that workloads running on top of the physical server hardware need to be migrated off after that lifespan has been reached.

2. Since VMs are abstracted from the underlying hardware of a physical server, virtual machine lifespans can be much longer than the physical hardware on which they reside. After the lifespan has been reached for the underlying hypervisor host, a new hypervisor host can be provisioned in parallel with the current host and the VMs can be migrated over seamlessly.

3. On the other side of the coin, with strong automation capabilities, virtual machines can be provisioned ephemerally and spun up and down as needed. A classic example of this is non-persistent VMs that are provisioned in a VDI environment as needed. After a user logs off, the non-persistent VM is destroyed.


Migration

When comparing the migration possibilities of physical hardware vs virtual machines, physical server migration is much more difficult. Migrating a physical server to new physical hardware involves many more complexities than migrating a virtual machine. With physical server migration to new hardware, there are a couple of options:

  1. Take an image of the physical server and apply the image to new hardware
  2. Migrate the software from the old physical server to a new physical server

Option 1 requires the least effort. However, this option may be the most problematic in terms of drivers and other challenges with the image containing hardware references to the old physical server. This approach can result in bluescreens or hardware issues after the image is applied. A maintenance period would be required and the application(s) hosted by the physical server would incur an outage during that period.

Option 2 can require the heaviest lifting since migrating software/applications to a new server can be complicated, depending on the software/application. A maintenance period would most likely be needed for migrating software/applications from one physical server to another.

By comparison, virtual machine migration is much easier. Due to the fact that virtual machines are abstracted from the underlying physical hypervisor host hardware, migrating to new hypervisor hardware is a simple hypervisor-level migration process. This would be a VMware “vMotion” or a Microsoft Hyper-V “Live Migration” process to move to new hardware in the case of those hypervisors. 

Image - Migrating a VMware virtual machine

The great thing about hypervisor-level migrations enabled by the likes of vMotion or Live Migration is that they can be done while the VM is running, which means your application can remain available during the process! Migrations are certainly an advantage of virtual machines compared to physical servers.


Performance

1. Performance is an area where physical (bare-metal) servers typically shine, and one of the most common use cases for having a physical server as opposed to running a VM is the requirement to have the absolute best performance available for a business-critical application. Virtualized environments have a small amount of overhead related to the hypervisor.

2. However, it should be noted that the gap between VM performance and bare-metal performance has grown very narrow, as hypervisor schedulers have become very good at scheduling CPU time. Most decisions to run on a physical server for performance reasons stem from the need to have absolutely no contention for resources from other VMs that may compete for them on the same physical hypervisor host.


Efficiency

1. Efficiency is an advantage of running virtual machines over a physical server for a single workload. Between the cost of powering a physical server, cooling, and the cost per “rack-U” of data center space, running physical servers to host applications and workloads as opposed to VMs becomes very expensive.

2. By running multiple or even tens of VMs per hypervisor host in place of a single workload per physical server, VMs are orders of magnitude more efficient than physical servers. VMs have effectively allowed organizations to drastically consolidate the footprint of their data centers.

3. In terms of resource efficiency, using physical servers for single workloads will result in a great deal of wasted, idle resources. VMs allow the available CPU cycles, memory, and storage capacity to actually be used fully.

Disaster Recovery and High-Availability

1. Running any business-critical workload, whether on physical server hardware or virtual machines, requires a way to protect your applications and data from disaster and to ensure the application and data remain available. VMs have a definite advantage over workloads running on physical servers in terms of DR and HA.

2. Virtual machines are abstracted from the underlying physical hardware, which makes them extremely mobile in terms of being able to be moved to a different hypervisor host. This opens up several capabilities when it comes to protecting applications and data in disaster recovery scenarios.

3. With virtual machines, VM snapshots/checkpoints can be leveraged to redirect writes so that all changed data can be captured by backup solutions. Changed Block Tracking (VMware) or Resilient Change Tracking (Hyper-V) can be used to capture only the changes that have been made since the last backup.

4. Backups of virtual machines at the hypervisor level result in a complete backup of everything required to restore the VM to a functioning state, including the virtual hardware configuration.

5. Physical server backups, on the other hand, can capture the OS and all data stored within the server, but the physical hardware cannot be magically duplicated. After a physical server failure, you will have to procure compatible server hardware before you can restore your backups.

6. Virtualization clusters make high-availability easy. By abstracting the hardware from the VM, the VMs can easily run from any hypervisor host in the cluster. When a hypervisor host fails, ownership of the VM can simply be assumed on a different hypervisor host in the hypervisor cluster.

7. Physical servers can be clustered as well. Windows Server Failover Clustering has long been the standard in the enterprise data center for clustering physical servers together to ensure high availability from an application/data perspective. If the active node fails, another physical server in the cluster will take over running the application/hosting the data.

8. Virtual machines also allow the simplest means of protecting your data at a site level, as they can easily be replicated to a different environment housed in a separate location such as a DR facility.
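The changed-block-tracking idea mentioned in point 3 can be sketched in a few lines of Python. This is a toy model of the concept only, not VMware’s or Microsoft’s actual implementation; the class name and block size are made up:

```python
class TrackedDisk:
    """Toy model: a block device that remembers which blocks changed since the last backup."""

    BLOCK_SIZE = 512

    def __init__(self, num_blocks):
        self.blocks = [b"\0" * self.BLOCK_SIZE] * num_blocks
        self.changed = set()  # the "tracking bitmap"

    def write(self, block_no, data):
        self.blocks[block_no] = data
        self.changed.add(block_no)  # mark the block dirty

    def incremental_backup(self):
        # Copy only dirty blocks, then reset the tracking set.
        snapshot = {n: self.blocks[n] for n in sorted(self.changed)}
        self.changed.clear()
        return snapshot

disk = TrackedDisk(1000)
disk.write(3, b"a" * 512)
disk.write(42, b"b" * 512)
backup = disk.incremental_backup()
print(sorted(backup))  # only blocks 3 and 42 are copied, not all 1000
```

This is why incremental VM backups are so fast: the backup solution asks the hypervisor which blocks changed instead of scanning the whole virtual disk.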

How Do You Choose?

1. The decision most organizations are making between physical servers and virtual machines is evident in the widespread adoption of virtualization. The advantages that virtual machines offer in terms of cost, physical footprint, lifespan, migration, performance, efficiency, and disaster recovery/high-availability are far greater than running a single workload on a single physical server.

2. Does this mean that running applications and hosting data on physical servers is not an option you would ever choose? No. Physical servers are still an important part of the enterprise data center environment. There are still various situations and use cases for running an application on a physical server. Whether it is for performance reasons, or perhaps the need to connect physical devices directly to a server, the use cases certainly exist.

3. The choice comes down to both a technology and a business decision for each organization. In many IT infrastructure environments, the majority of workloads will be virtual machines and containers, with a small number of physical servers running various applications.