
Error - Cannot Detach Scsi Lun


I admit I was a little surprised when I found out from EMC that SRM does not support RecoverPoint point-in-time failover. Funny how VMware couldn't tell me this? They just passed the buck to EMC… typical! What's the point of purchasing a product like EMC RecoverPoint if you cannot use it to its full potential? Well, that's not entirely true: you can, you just have to do it outside of SRM! Maybe SRM 5 has spoilt me a little. It does what it is supposed to do extremely well, but I just assumed the SRM RecoverPoint SRA would integrate with RecoverPoint's image access and allow you to pick a previous point in time (https://kb.vmware.com/kb/2032365) if required during a failover. Alas, this is not the case. You cannot pick a point in time with SRM; it always uses the latest image. This is a feature request for SRM, VMware employees, if you are reading this: when I perform a Test Failover I would like the ability to pick a previous point in time, if required, before the failover commences. What if you have performed a Disaster Recovery failover (https://blocksandbytes.com/category/srm/) and your latest image is corrupt? How do you then roll back to a previous journal entry at your Recovery Site? These are some of the scenarios I don't quite fully understand, and I'm going to do some testing to see if I can combine some SRM and RP steps to at least partially automate the process; the thought of using RP natively, enabling image access, mounting LUNs in the recovery site, rescanning hosts, registering VMs in vCenter, etc. has really put me off using RecoverPoint's point-in-time features. More on this to follow.

Posted in RecoverPoint, SRM | Tagged failover, point in time, recoverpoint, restore, srm | 2 Comments

What to do if a reprotect fails in SRM… Protection group has protected VMs with placeholders which need to be repaired

Posted by blocksandbytes on August 2, 2012

I had this issue today where a reprotect failed after a Planned Migration.
I thought it was worth running through what I had to do to resolve the issue without performing a 'Force Cleanup' reprotect, as there is currently no KB article describing this workaround. In my case the planned migration went ahead without issues: all VMs were powered on at the Recovery Site and the Recovery Plan completed successfully. When it came to the reprotect, however, it failed at Step 3.
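For reference, the manual recovery-site steps the first post above is dreading (after enabling image access in RecoverPoint) roughly map onto standard ESXi commands. A hedged sketch, not an official procedure: the volume label and .vmx path below are hypothetical placeholders, and the `run` wrapper echoes each command when the ESXi tools are not present, so the sequence can be reviewed on any machine.

```shell
#!/bin/sh
# Sketch: mounting replica LUNs and registering VMs on a recovery-site ESXi
# host after RecoverPoint image access is enabled. Names are hypothetical.
VOLUME_LABEL="ReplicaDatastore"                            # hypothetical VMFS label
VMX_PATH="/vmfs/volumes/ReplicaDatastore/app01/app01.vmx"  # hypothetical VM path

# Echo the command instead of running it when the tool isn't on this machine.
run() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

run esxcli storage core adapter rescan --all   # rescan HBAs to pick up the LUNs
run esxcfg-volume -l                           # list snapshot/replica VMFS volumes
run esxcfg-volume -M "$VOLUME_LABEL"           # mount the replica volume persistently
run vim-cmd solo/registervm "$VMX_PATH"        # register the VM with the host
```

Run on an actual ESXi host the commands execute; anywhere else they are only printed for review.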

Five common VMware SRM error messages and how to resolve them
by Brien Posey (http://searchdisasterrecovery.techtarget.com/tip/Five-common-VMware-SRM-error-messages-and-how-to-resolve-them)

VMware Site Recovery Manager (SRM) is a disaster recovery product that uses vSphere replication to protect virtual machines and their applications. As is the case with any software, you may unexpectedly receive error messages. This tip discusses five of the most common VMware SRM error messages and their solutions.

VMware vSphere 5 dead LUN and pathing issues and resultant SCSI errors
(https://raj2796.wordpress.com/2012/03/14/vmware-vsphere-5-dead-lun-and-pathing-issues-and-resultant-scsi-errors/)

LUNs to VMware before crashing, leaving VMware with a large number of dead LUNs it could not connect to. Cue a rescan all… and everything started crashing! Our primary cluster of 6 servers started to abend: half the ESX servers decided to vMotion off every VM and shut down, leaving the other 3 ESX servers running 6 servers' worth of VMs. DRS kept moving VMs off the most heavily utilised ESX server, which made another ESX server the most heavily utilised, which then migrated all its VMs off, and so on in a never-ending loop.

As a result the ESX servers were:
- unresponsive
- showing powered-off machines as being on
- unable to vMotion VMs off
- intermittently losing connection to vCenter
- losing VMs that were vMotioned, i.e. the VMs became orphaned

Here's what I did to fix the issue.

1 – Get the NAA ID of the LUN to be removed:
- see the error messages on the server, or
- see the properties of the datastore in vCenter (assuming vCenter isn't crashed), or
- from the command line:

#esxcli storage vmfs extent list

Alternatively you could use #esxcli storage filesystem list, however that wouldn't work in this case since there were no filesystems on the failed LUNs.

Example output:

Volume Name  VMFS UUID                            Extent Number  Device Name                           Partition
-----------  -----------------------------------  -------------  ------------------------------------  ---------
datastore1   4de4cb24-4cff750f-85f5-0019b9f1ecf6  0              naa.6001c230d8abfe000ff76c198ddbc13e  3
Storage2     4c5fbff6-f4069088-af4f-0019b9f1ecf4  0              naa.6001c230d8abfe000ff76c2e7384fc9a  1
Storage4     4c5fc023-ea0d4203-8517-0019b9f1ecf4  0              naa.6001c230d8abfe000ff76c51486715db  1
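The datastore-to-NAA mapping in the extent-list output above can be pulled out with a one-liner. A minimal sketch: the sample rows are the ones from the output above, and `DATASTORE` is whichever datastore you are chasing.

```shell
#!/bin/sh
# Extract the NAA device name backing a given datastore from
# `esxcli storage vmfs extent list` output (sample rows from the post above).
DATASTORE="datastore1"
NAA_ID=$(awk -v ds="$DATASTORE" '$1 == ds { print $4 }' <<'EOF'
datastore1  4de4cb24-4cff750f-85f5-0019b9f1ecf6  0  naa.6001c230d8abfe000ff76c198ddbc13e  3
Storage2    4c5fbff6-f4069088-af4f-0019b9f1ecf4  0  naa.6001c230d8abfe000ff76c2e7384fc9a  1
Storage4    4c5fc023-ea0d4203-8517-0019b9f1ecf4  0  naa.6001c230d8abfe000ff76c51486715db  1
EOF
)
echo "$NAA_ID"
```

On a live host you would pipe the real command output into the same awk filter: `esxcli storage vmfs extent list | awk -v ds=datastore1 '$1 == ds { print $4 }'`.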

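The excerpt stops at step 1. On vSphere 5.x the follow-on steps usually look like the sketch below; this is a hedged outline rather than the author's exact procedure. The NAA ID is the dead datastore1 device from the table above, and the `run` wrapper echoes each command when `esxcli` is not present, so the sequence can be dry-run off-host.

```shell
#!/bin/sh
# Sketch: taking a dead LUN offline on an ESXi 5.x host.
# NAA_ID is the device backing datastore1 in the extent list above.
NAA_ID="naa.6001c230d8abfe000ff76c198ddbc13e"

# Echo the command instead of running it when esxcli isn't on this machine,
# so the sequence can be reviewed safely before touching a real host.
run() {
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi
}

run esxcli storage core device set --state=off -d "$NAA_ID"  # detach the device
run esxcli storage core device detached list                 # verify it shows as detached
run esxcli storage core adapter rescan --all                 # rescan all HBAs
```

With a healthy datastore you would unmount the filesystem first (`esxcli storage filesystem unmount`); in this case there was no filesystem left on the failed LUNs, so the device is detached directly.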
 
