EMC CRC Storage Error
Connectrix: How to troubleshoot Fibre Channel node to switch port or SFP communication problems by means of elimination?
Created by Prajith on Sep 9, 2015 10:55 AM; last modified by Prajith on Sep 10, 2015 3:40 AM (Version 2)

Environment:
- EMC Hardware: Connectrix
- EMC Hardware: Connectrix Brocade Switches (all)
- EMC Hardware: Connectrix Cisco MDS Switches (all)
- EMC Hardware: Fibre Channel Switches

Description:
How to troubleshoot Fibre Channel node to switch port or SFP communication problems by means of elimination.

Issues Noticed:
- Too many pro-active SFP replacements.
- Link failure.
- G port.
- No light.
- NOS (Not Operational Sequence).
- OLS (Off-Line Sequence).
- Loss of signal.

Resolution:
The underlying cause is that too many SFPs are pro-actively replaced when the problem actually lies outside the SFP and the switch. To resolve this issue:

1. Identify the node and switch port involved in the communications failure.
2. Verify that the switch port is administratively up (unblocked, no shut) or enabled.
3. Make sure there are redundant paths available to the attached device before proceeding.

WARNING: Before proceeding any further, make sure you know how your node will react if it gets a new FCID. Some OS versions of AIX and HP-UX do not react well to such changes, since the FCID is built into the hardware path to the storage device. If you move the cable, data may become unavailable. If you have any doubts, consult an EMC Technical Support Engineer.

To eliminate the SFP as the cause of the problem, do the following (a hedged automation sketch for spotting the failing link follows this procedure):

Note: If there is an issue with the SFP, this procedure is the quickest way of bringing the device back online.

1. Check for a free port on the switch.
2. Disable the identified free port.
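Before working through the physical elimination steps by hand, it can help to confirm which link is actually taking errors. Below is a minimal sketch, not an EMC-documented tool, that samples Brocade FOS porterrshow twice over SSH and flags ports whose CRC counters are still climbing. The switch address, credentials, sampling interval, and CRC column index are all my assumptions; porterrshow column order varies by FOS release, so verify it against the header row before trusting the output.

#!/usr/bin/env python3
# Sample 'porterrshow' twice and flag ports whose CRC counters are still
# climbing, so SFPs get swapped on evidence instead of pro-actively.
# Requires paramiko; address/credentials below are placeholders.
import re
import time
import paramiko

SWITCH, USER, PASSWORD = "10.0.0.1", "admin", "changeme"  # hypothetical
CRC_COL = 4     # 'crc err' column counting the port field as 0; check your FOS
INTERVAL = 300  # seconds between the two samples

def to_int(token):
    # porterrshow abbreviates large counters, e.g. '1.2k' or '3.4m'
    mult = {"k": 10**3, "m": 10**6, "g": 10**9}
    if token[-1].lower() in mult:
        return int(float(token[:-1]) * mult[token[-1].lower()])
    return int(token)

def sample_crc(client):
    # Return {port: crc_err} parsed from one run of porterrshow
    _, stdout, _ = client.exec_command("porterrshow")
    counters = {}
    for line in stdout.read().decode().splitlines():
        fields = line.split()
        if fields and re.fullmatch(r"\d+:", fields[0]):  # data rows: '12: ...'
            counters[int(fields[0].rstrip(":"))] = to_int(fields[CRC_COL])
    return counters

def main():
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(SWITCH, username=USER, password=PASSWORD)
    try:
        before = sample_crc(client)
        time.sleep(INTERVAL)
        after = sample_crc(client)
    finally:
        client.close()
    for port in sorted(after):
        delta = after[port] - before.get(port, after[port])
        if delta > 0:
            print(f"port {port}: +{delta} CRC errors in {INTERVAL}s "
                  "-- check the cable, HBA, and patch panel on this link, "
                  "not just the SFP")

if __name__ == "__main__":
    main()

On Cisco MDS the equivalent counters appear in the 'show interface' input-error block, so the parsing would need to change accordingly.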
System admins, DBAs, network engineers, and application owners are all quick to point the finger at SAN storage at the slightest hint of performance degradation. Not really surprising, though, considering it is the common denominator amongst all the silos. On the receiving end of this barrage of accusation is the SAN Storage team, who are then subjected to hours of troubleshooting only to prove that their storage wasn't responsible. On this circle goes, until the Storage team is faced with a problem they cannot absolve themselves of, even though they know the storage is working completely fine.

With array-based management tools still severely lacking in their ability to pinpoint and solve storage-network problems (see https://community.emc.com/docs/DOC-48927), and with server-based tools doing exactly that, i.e. looking at the server, there is little if anything available to prove that the cause of latency is a slow-draining device such as a flapping HBA, a damaged cable, or a failing SFP. Herein lies the biggest paradox: 99% of the time, when unidentifiable SAN performance problems do occur, they are usually linked to trivial issues such as a failing SFP (see http://www.thesanman.org/2011/01/crc-errors-code-violation-errors-class.html). In a 10,000-port environment, the million-dollar question is 'where do you begin to look for such a minuscule needle in such a gargantuan haystack?' To solve this dilemma it is imperative to know what to look for and to have the right tools to find it, making your SAN storage environment proactive rather than a reactive fire-fighting / troubleshooting circus.

So what are some of the metrics and signs to look for when the storage array, the application team, and the servers all report everything as fine, yet you still find yourself embroiled in performance problems? To understand the context of these metrics and the make-up of FC transmissions, use the analogy of a conversation: the frames are the words, the sequences are the sentences, and an exchange is the conversation they are all part of. With that premise, the most basic of physical-layer problems should be addressed first, namely Code Violation Errors. Code Violation Errors are the consequence of bit errors caused by corruption occurring in the sequence, i.e. any character corruption. A typical cause is a failing HBA that starts to suffer from optic degradation prior to its complete failure; I also recently experienced Code Violation Errors at one site.
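To put rough numbers on why a degrading optic matters (the figures below are my own illustrative assumptions, not from the post), consider how a worsening bit-error rate translates into corrupted frames on an 8GFC link:

# Back-of-the-envelope arithmetic: 8GFC runs at 8.5 Gbaud with 8b/10b
# encoding, and a full-size FC frame is 2148 bytes. Compare a healthy
# optic (BER 1e-12) with a degrading one (BER 1e-9).
LINE_RATE_BAUD = 8.5e9
DATA_BITS_PER_SEC = LINE_RATE_BAUD * 8 / 10  # strip the 8b/10b overhead
FRAME_BITS = 2148 * 8

for label, ber in (("healthy", 1e-12), ("degrading", 1e-9)):
    bit_errors_per_sec = DATA_BITS_PER_SEC * ber
    print(f"{label} optic (BER {ber:.0e}): roughly one bit error every "
          f"{1 / bit_errors_per_sec:,.1f} s on a saturated link")

A three-orders-of-magnitude slide in BER turns one bit error every couple of minutes into several corrupted frames per second, which is exactly the kind of low-level noise that surfaces in CRC and code-violation counters long before the optic fails outright.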
1.2.7 Storage Device - NAS Protocol Jammer CRC Corruption on Storage Port

Test Objective:
Perform CIFS/NFS jammer testing using CRC corruption with a burst of 10 CRC errors on the storage port link, in each direction sequentially, while I/O is running from a single host. Verify that I/O recovers successfully. (Source: http://www.brocade.com/content/html/en/validation-test-report/brocade-vcs-emc-vg2-nas-vt/GUID-41C2BBDC-6AC4-417F-91DA-1BBD801751BA.html)

Results:
PASS. All I/O completes error-free through Ethernet jammer packet corruption on the storage port. A small parsing sketch for the CRC counter follows the captures below.

CIFS, CRC ERROR BURST ON READ TRAFFIC
< =========== >
CASTOR-VCS20-12# show interface tengigabitethernet 11/1/10
TenGigabitEthernet 11/1/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0027.f81c.71d7
    Current address is 0027.f81c.71d7
Pluggable media present
Interface index (ifindex) is 47651815552
MTU 2500 bytes
LineSpeed Actual     : 10000 Mbit
LineSpeed Configured : Auto, Duplex: Full
Priority Tag disable
IPv6 RA Guard disable
Last clearing of show interface counters: 00:00:31
Queueing strategy: fifo
Receive Statistics:
    1203397 packets, 1699009164 bytes
    Unicasts: 1203425, Multicasts: 0, Broadcasts: 0
    64-byte pkts: 58753, Over 64-byte pkts: 0, Over 127-byte pkts: 29364
    Over 255-byte pkts: 0, Over 511-byte pkts: 0, Over 1023-byte pkts: 1115280
    Over 1518-byte pkts(Jumbo): 0
    Runts: 0, Jabbers: 0, CRC: 10, Overruns: 0   < ==
    Errors: 0, Discards: 0
Transmit Statistics:
    1407950 packets, 2012486889 bytes
    Unicasts: 1407914, Multicasts: 4, Broadcasts: 34
    Underruns: 0
    Errors: 0, Discards: 0
Rate info:
    Input 1818.188232 Mbits/sec, 149895 packets/sec, 18.18% of line-rate
    Output 8.925440 Mbits/sec, 11654 packets/sec, 0.09% of line-rate
Time since last interface status change: 00:44:43
< =========== >

Figure 66. NAS Protocol Jammer CRC Corruption on Storage Port Medusa Result 1

NFS, CRC ERROR BURST ON READ TRAFFIC
< =========== >
CASTOR-VCS20-12# show interface tengigabitethernet 11/1/10
TenGigabitEthernet 11/1/10 is up, line protocol is up (connected)
Hardware is Ethernet, address is 0027.f81c.71d7
    Current address is 0027.f81c.71d7
Pluggable media present
Interface index (ifindex) is 47651815552
MTU 2500 bytes
LineSpeed Actual     : 10000 Mbit
LineSpeed Configured : Auto, Duplex:
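When post-processing many such captures, the receive-side CRC counter can be pulled out programmatically. The sketch below is keyed to the VDX 'show interface' format shown above; rx_crc_count is my own helper name, not part of any Brocade tooling, and other NOS releases may format the line differently.

import re

def rx_crc_count(show_interface_text):
    """Return the receive-side CRC counter from a captured 'show interface'
    listing (the 'Runts: ..., CRC: N, Overruns: ...' line above)."""
    match = re.search(r"\bCRC:\s*(\d+)", show_interface_text)
    if match is None:
        raise ValueError("no CRC counter found in capture")
    return int(match.group(1))

# The CIFS capture above reports CRC: 10 -- exactly the injected burst.
capture = "Runts: 0, Jabbers: 0, CRC: 10, Overruns: 0"
assert rx_crc_count(capture) == 10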