Hi,
A storage system has a fixed amount of storage I/O that it can support, typically measured in I/O Operations Per Second (IOPS). If VM workloads increase so that the IOPS requirements of the VMs become greater than the IOPS the storage system can deliver, slowdowns and bottlenecks start to occur. If a storage I/O bottleneck becomes severe enough, it can cause VMs to experience disk I/O timeouts, which can freeze Windows VMs or cause a Blue Screen Of Death (BSOD), and can cause the disks of Linux VMs to be re-mounted as read-only.

Common causes include storage-related settings that are not configured properly on either the host or the storage device, improper placement of VMs on datastores, and host misconfigurations. Storage settings such as queue depth, cache size, and network-specific settings for iSCSI and NFS can cause poor performance if improperly set.
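To put a rough number on the overcommit scenario above, here is a minimal back-of-the-envelope sketch. The datastore capacity and per-VM IOPS figures are made-up placeholders, not measurements from your environment:

# Rough IOPS headroom check -- example figures only, substitute your own numbers.
datastore_capacity_iops = 2000          # what the backing LUN/array can sustain (assumed)

# Peak IOPS demand per VM (hypothetical values for illustration)
vm_iops_demand = {
    "vm-db01": 900,
    "vm-app01": 450,
    "vm-file01": 500,
    "vm-web01": 300,
}

total_demand = sum(vm_iops_demand.values())
print(f"Aggregate demand: {total_demand} IOPS vs capacity: {datastore_capacity_iops} IOPS")

if total_demand > datastore_capacity_iops:
    # Demand exceeds capacity: latency climbs and I/O may start timing out.
    print("Overcommitted -- expect rising latency and possible disk I/O timeouts.")
else:
    print(f"Headroom: {datastore_capacity_iops - total_demand} IOPS")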
1. You can follow http://kb.vmware.com/kb/1008205 and post the esxtop output. DAVG is device latency: the average amount of time, in milliseconds, to complete a SCSI command at the physical device. If DAVG constantly stays above 10, you will see performance issues. (A sketch for pulling DAVG out of an esxtop batch capture follows this list.)
2. Which path policy are you using?
3. Please post the kernel logs (a quick log-scanning sketch also follows the list):
- ESX 3.5 and 4.x – /var/log/vmkernel
- ESXi 3.5 and 4.x – /var/log/messages
- ESXi 5.x – /var/log/vmkernel.log
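For item 1, once you have an esxtop batch capture from the KB article (something like "esxtop -b -d 10 -n 60 > esxtop.csv"), a quick way to pick out the slow devices is to scan the CSV for the device-latency columns. This is only a rough sketch under assumptions: "Average Device MilliSec/Command" is the header text DAVG usually appears under in batch output, but verify it against your own capture and adjust the 10 ms threshold as needed.

# Sketch: flag devices whose DAVG exceeds ~10 ms in an esxtop batch CSV.
# Assumes DAVG columns contain "Average Device MilliSec/Command" in the header
# (check your own capture; the exact text can vary between versions).
import csv

THRESHOLD_MS = 10.0
worst = {}  # column header -> highest latency seen in the capture

with open("esxtop.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    davg_cols = [i for i, name in enumerate(header)
                 if "Average Device MilliSec/Command" in name]
    for row in reader:
        for i in davg_cols:
            try:
                value = float(row[i])
            except (ValueError, IndexError):
                continue
            if value > worst.get(header[i], 0.0):
                worst[header[i]] = value

for name, value in sorted(worst.items(), key=lambda kv: kv[1], reverse=True):
    if value > THRESHOLD_MS:
        print(f"{value:6.1f} ms  {name}")

Whatever it flags, please paste the corresponding device IDs and esxtop output here.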
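For item 3, while you collect the logs you can also do a quick pass over them yourself for latency and timeout messages. The sketch below greps the ESXi 5.x vmkernel.log for a few phrases that typically show up when device latency climbs or commands fail; the search strings are assumptions about typical message text, so treat any hits only as pointers to lines worth posting here. Point it at /var/log/vmkernel or /var/log/messages on the older releases listed above, or copy the log off the host and run it anywhere Python is available.

# Sketch: pull storage-related warnings out of the ESXi 5.x vmkernel log.
# The patterns below are common latency/timeout phrases and may need adjusting.
LOG_PATH = "/var/log/vmkernel.log"   # see the list above for 3.5/4.x paths
PATTERNS = (
    "performance has deteriorated",  # device latency warnings
    "i/o latency increased",
    "h:0x",                          # SCSI host-status codes on failed commands
    "aborted",
    "timed out",
)

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        lowered = line.lower()
        if any(p in lowered for p in PATTERNS):
            print(line.rstrip())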