A datastore is a logical storage unit that can use disk space on one physical device or span several physical devices. Datastores are used to hold VM files, VM templates, and ISO images.
Network storage that ESXi supports:
- Fibre Channel: FC/SCSI
- High-speed network that connects hosts to high-performance storage devices. The host requires FC host bus adapters (HBAs)
- Fibre Channel over Ethernet: FCoE/SCSI
- iSCSI: IP/SCSI
- Packages SCSI storage traffic into the TCP/IP protocol so that it can travel over TCP/IP networks.
- NAS: IP/NFS
vSphere supports the following types of datastores (they can be enumerated programmatically, as in the sketch after this list):
- VMFS (v5 and v6)
- NFS
- vSAN
- vSphere Virtual Volumes
- Raw device mapping
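To make the datastore types concrete, here is a minimal pyVmomi sketch that lists every datastore visible to a vCenter Server and prints its type. The hostname and credentials are placeholders; adjust them to your environment.

```python
# Minimal sketch: list datastores and their types via pyVmomi.
# 'vcenter.example.com' and the credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host='vcenter.example.com',
                  user='administrator@vsphere.local',
                  pwd='secret', sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary  # s.type is e.g. 'VMFS', 'NFS', 'NFS41', 'vsan', 'VVOL'
        print(f"{s.name}: type={s.type}, "
              f"capacity={s.capacity / 2**30:.1f} GiB, "
              f"free={s.freeSpace / 2**30:.1f} GiB")
    view.Destroy()
finally:
    Disconnect(si)
```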
VMFS (v5 and v6)
- Concurrent access to shared storage
- Dynamic expansion
- On-disk locking
- 4K native storage devices
- Automatic space reclamation
VMFS is a clustered file system in which multiple ESXi hosts can read and write to the same storage device simultaneously. This shared access enables unique services:
- Migration of running VMs from one host to another without downtime
- Automatic restarting of a failed VM on a separate ESXi host
- Clustering of VMs across various physical servers
A VMFS datastore can be expanded dynamically while the VMs residing on it are powered on and running.
A VMFS datastore provides block-level distributed locking to ensure that the same VM is not powered on by multiple servers at the same time.
It can be deployed on three kinds of SCSI-based storage devices:
- Direct-attached storage, or DAS (SAS, SATA, NVMe)
- FC (FCoE)
- iSCSI (iSCSI, iSER/NVMe)
There is no in-place upgrade from VMFS5 to VMFS6: you must migrate the VMs off the datastore, then delete and reformat it as VMFS6.
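Because the VMFS version determines whether a datastore must be re-created, it can help to audit versions first. A short sketch, reusing the `si` connection from the earlier example:

```python
# Sketch: report the VMFS version of each VMFS datastore, e.g. to find
# VMFS5 volumes that would need to be evacuated and re-created as VMFS6.
# Assumes an existing pyVmomi ServiceInstance `si` (see the earlier sketch).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        print(f"{ds.name}: VMFS {ds.info.vmfs.version} "
              f"(major version {ds.info.vmfs.majorVersion})")
view.Destroy()
```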
NFS
- supports NFS 3 and NFS 4.1 over TCP/IP
ESXi hosts do not use the Network Lock Manager (NLM) protocol, the standard for locking files on NFS mounts. Instead, VMware uses its own locking mechanism: with NFS 3, ESXi creates lock files on the NFS server, while NFS 4.1 uses server-side file locking.
Because these locking mechanisms are incompatible, you cannot mount the same datastore with NFS 3 on some hosts and NFS 4.1 on others.
Accessing the same virtual disk through different NFS versions can corrupt it.
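As an illustration of the version pitfall above, here is a hedged pyVmomi sketch that mounts an NFS export as a datastore on a single host. `host` is an assumed `vim.HostSystem` object, and the server name and export path are placeholders; the same `type` value must be used on every host that mounts this datastore.

```python
# Sketch: mount an NFS export as a datastore on one ESXi host.
# `host` is an assumed pyVmomi HostSystem; server and path are placeholders.
from pyVmomi import vim

spec = vim.host.NasVolume.Specification(
    remoteHost='nfs.example.com',      # placeholder NFS server
    remotePath='/exports/datastore1',  # placeholder export path
    localPath='nfs-datastore1',        # datastore name shown in vSphere
    accessMode='readWrite',
    type='NFS')                        # 'NFS' = version 3, 'NFS41' = version 4.1
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(f"Mounted {ds.name} as {ds.summary.type}")
```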
vSAN
- Protocols: vSAN's own protocol over Ethernet (vSAN does not use FC, iSCSI, or NFS for host-to-host traffic)
vSAN is hypervisor-converged, software-defined storage for virtual environments that does not use traditional external storage. By clustering host-attached HDDs or SSDs, vSAN creates an aggregated datastore shared by VMs.

vSAN can be configured as hybrid or all-flash storage. In both cases it pools server-attached devices into a distributed shared datastore, abstracting the storage hardware to provide a software-defined storage tier for VMs.
In a hybrid configuration, vSAN pools HDDs and SSDs: flash serves as a read cache and write buffer to accelerate performance, while the magnetic disks provide capacity and persistent data storage.
In an all-flash architecture, tiering SSDs results in a cost-effective implementation: a write-intensive, enterprise-grade SSD cache tier in front of a read-intensive, lower-cost SSD capacity tier.
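As one concrete (and heavily hedged) illustration, vSAN is enabled per cluster rather than per datastore. The sketch below uses the legacy vSphere API path exposed by pyVmomi; `cluster` is an assumed `vim.ClusterComputeResource` object, and newer deployments typically use the separate vSAN management SDK instead.

```python
# Sketch: enable vSAN on a cluster with automatic claiming of local disks.
# `cluster` is an assumed pyVmomi ClusterComputeResource object.
from pyVmomi import vim

vsan_cfg = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(
        autoClaimStorage=True))  # let vSAN claim eligible local HDDs/SSDs
spec = vim.cluster.ConfigSpecEx(vsanConfig=vsan_cfg)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
```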
vSphere Virtual Volumes
- Native representation of VMDKs on SAN/NAS. No LUN management.
- Works with existing SAN/NAS systems
- New control path for data operations at the VM and VMDK level
- Snapshots, replication, and other operations at the VM level on external storage
- Automates control of per-VM service levels via storage policies
- Standard access to storage with the vSphere API
- Storage containers that span an entire array
vVols virtualizes SAN and NAS devices by abstracting the physical hardware resources into logical pools of capacity. This lowers storage costs, reduces storage management overhead, and improves scalability and responsiveness to data-access and analytical requirements, while working over the same access protocols used for SAN/NAS storage (FC, FCoE, iSCSI, and NFS).

The way vVols works is that the storage device exposes a component called the protocol endpoint (PE). It is the protocol endpoint that the ESXi host establishes a connection with and manages multipathing for. When a virtual machine that needs access to vVols powers on, the host binds those vVols as sub-LUNs to the protocol endpoint and leverages the existing connection between the host and the PE to present the relevant vVols to the relevant virtual machine.
Raw Device Mapping
- RDM is not a datastore, but it can give a VM direct access to a physical LUN.
- It stores the VM's data not in a virtual disk file (VMDK), but directly on a raw LUN. This is useful if you run applications in your VM that need to know the physical characteristics of the storage device.
Use an RDM if a VM must interact with a real disk on the SAN, as in the sketch below.
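A hedged pyVmomi sketch of attaching a physical-mode RDM to an existing VM. The LUN device path, the controller key, the unit number, the LUN size, and the `vm` object are all placeholders or assumptions; the exact spec fields can vary by environment.

```python
# Sketch: attach a physical-mode RDM disk to an existing VM.
# `vm` is an assumed pyVmomi VirtualMachine; the naa path is a placeholder.
from pyVmomi import vim

LUN_PATH = '/vmfs/devices/disks/naa.0123456789abcdef'  # placeholder LUN ID
LUN_SIZE_KB = 100 * 1024 * 1024  # placeholder: LUN size (100 GiB) in KB

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName=LUN_PATH,
    compatibilityMode='physicalMode',  # pass SCSI commands through to the LUN
    diskMode='independent_persistent',
    fileName='')                       # let vSphere place the mapping file
disk = vim.vm.device.VirtualDisk(
    key=-1,
    backing=backing,
    controllerKey=1000,  # assumed key of an existing SCSI controller
    unitNumber=1,        # assumed free unit number on that controller
    capacityInKB=LUN_SIZE_KB)
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk)
task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```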