VPLEX-with-GeoSynchrony-5.2-Release-Notes.pdf







EMC® VPLEX™ with GeoSynchrony™ 5.2 Release Notes
302-000-035-01
June 25, 2013

These release notes contain supplemental information for EMC VPLEX™ with GeoSynchrony™ 5.2.

◆ Revision history
◆ Product description
◆ New features in this release
◆ Configuration limits
◆ Fixed problems in Release 5.2
◆ Known problems and expected behaviors
◆ Documentation updates
◆ Documentation
◆ Upgrading GeoSynchrony
◆ Software packages
◆ Troubleshooting and getting help

Revision history

The following table presents the revision history of this document.

Revision  Date           Description
01        June 25, 2013  First draft

Product description

The EMC VPLEX family removes physical barriers within, across, and between data centers. VPLEX Local provides simplified management and non-disruptive data mobility across heterogeneous arrays. VPLEX Metro provides data access and mobility between two VPLEX clusters within synchronous distances. VPLEX Geo further dissolves those distances by extending data access across asynchronous distances.
With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide workload resiliency; automatic sharing, balancing, and failover of storage domains; and both local and remote data access with predictable service levels.

New features in this release

Release 5.2 includes the following new features:

◆ New performance dashboard and CLI-based performance capabilities
A new customizable performance monitoring dashboard provides a view into the performance of your VPLEX system. You decide which aspects of the system's performance to view and compare. Alternatively, you can use the CLI to create a toolbox of custom monitors to operate under varying conditions, including debugging, capacity planning, and workload characterization. The following dashboards are provided by default:
• System Resources
• End To End Dashboard
• Front End Dashboard
• Back End Dashboard
• Rebuild Dashboard
• WAN Dashboard
A number of new charts are also available in the GUI.

◆ Improved diagnostics
Enhancements include the following:
• Collect diagnostics improvements
– Prevent more than one user from running Log Collection at any one time, thus optimizing resources and maintaining the validity of the collected logs.
– Accelerate performance by combining the multiple scripts that gather tower debug data into a single script. These improvements decrease log collection time and log size by not collecting redundant information.
• Health check improvements
– Include consistency group information in the overall health check.
– Include WAN link information in the overall health check.

◆ Storage array based volume expansion
Storage array based volume expansion enables storage administrators to expand the size of any virtual volume by expanding the underlying storage volume.
The supported device geometries include virtual volumes mapped 1:1 to storage volumes, virtual volumes on multi-legged RAID-1 devices, and distributed RAID-1, RAID-0, and RAID-C devices under certain conditions. The expansion operation is supported through expanding the corresponding Logical Unit Numbers (LUNs) on the back-end (BE) array. Storage array based volume expansion might require that you increase the capacity of the LUN on the back-end array. Procedures for doing this on supported third-party LUNs are available with the storage array based volume expansion procedures in the generator.

Note: Virtual volume expansion is not supported on RecoverPoint enabled volumes.

◆ VAAI WriteSame (16)
VMware API for Array Integration (VAAI) now supports WriteSame (16) calls. The WriteSame (16) SCSI command provides a mechanism to offload initializing virtual disks to VPLEX. WriteSame (16) requests the server to write blocks of data transferred by the application client multiple times to consecutive logical blocks.

◆ Cluster repair and recover
In the event of a disaster that destroys the VPLEX cluster, the cluster recover procedure restores the full configuration after replacing the VPLEX cluster hardware, but leaves the storage (including metadata) and the rest of the infrastructure intact.

◆ SYR Reporting
SYR Reporting is enhanced to collect Local COM switch information. SYR collects the same data for Emerson units as it currently does for APC units.

◆ Element Manager API
VPLEX Element Manager API has been enhanced to support additional external management interfaces. Supported interfaces include:
• ProSphere for discovery and capacity reporting/chargeback
• UIM for provisioning and reporting on VPLEX in a Vblock
• Foundation Management for discovery of VPLEX in a Vblock
• Archway for application consistent PiT copies with RecoverPoint Splitter

◆ Event message severities
All VPLEX events with a severity of "ERROR" in previous releases of GeoSynchrony have been re-evaluated to ensure the accuracy of their severity with respect to Service Level Agreement requirements.

◆ Back end (BE) Logical Unit Number (LUN) swapping
The system detects and corrects BE LUN swap automatically:
• On detection of LUN remap, a call-home event is sent.
• Once LUN remap is detected, VPLEX corrects its mapping to prevent data corruption.

◆ Emerson 350VA UPS support
Either APC or Emerson uninterruptible power supplies (UPS) can be used in a VPLEX cluster.

◆ FRU procedure
A field replaceable unit (FRU) procedure is implemented that automates engine chassis replacement in VS2 configurations.

◆ VPLEX presentation of fractured state for DR1 replica volumes
DR1 RecoverPoint-Replica volumes will not be DR1s while in use, and their status will reflect this in the CLI and GUI as disconnected.

◆ VPLEX presentation of Fake Size
Fake Size is the ability to use Replica volumes larger than Production volumes. The limitation of creating RecoverPoint replicas where the source LUN and target LUN must be of identical size has been removed. Using the VPLEX RecoverPoint Splitter, you can now replicate to a target LUN that is larger than the source LUN. To use the Fake Size feature, you must be running RecoverPoint 4.0 or higher.

Note: If a RecoverPoint fail-over operation is used, which swaps the production/replica roles, it is possible for the new production volume to have a fake size instead.

◆ RecoverPoint Splitter support for 8k LUNs
For 8K volumes to be put in use, RecoverPoint 4.0 or higher builds must be configured.

◆ Invalidate cache procedure
This is a procedure to invalidate the cache associated with a virtual volume, or a set of virtual volumes within a consistency group, that has experienced a data corruption and needs data to be restored from backup.

◆ Customer settable password policy
You can set the password policy for all VPLEX administrators, for example, specifying the minimum password length and the password expiration date.

◆ Performance improvement for larger IO block sizes
System performance of write operations is improved for block sizes greater than 128KB.

Configuration limits

OS2007 bit

In Release 5.2, VPLEX supports the OS2007 bit on Symmetrix arrays. The Symmetrix section of the Configuring Arrays procedure in the generator contains instructions on setting the OS2007 bit. This setting is vital to detect LUN swap situations and storage volume expansion automatically on a Symmetrix array.

Table 1 lists the configuration limits in the current release.

Table 1 Configuration limits

Object                                                        Maximum
Virtual volumes                                               8000
Storage volumes                                               8000
IT nexus (a) per cluster in VPLEX Local                       3200
IT nexus (a) per cluster in VPLEX Metro                       3200
IT nexus (a) per cluster in VPLEX Geo                         400
IT nexus per back-end port                                    256
IT nexus per front-end port                                   400
Extents                                                       24000
Extents per storage volume                                    128
RAID-1 mirror legs (b)                                        2
Local top-level devices                                       8000
Distributed devices (includes total number of distributed
devices and local devices with global visibility)             8000
Storage volume size                                           32 TB
Virtual volume size                                           32 TB
Total storage provisioned in a system                         8 PB
Extent block size                                             4 KB
Active intra-cluster rebuilds                                 25
Active inter-cluster rebuilds (on distributed devices)        25
Clusters                                                      2
Synchronous Consistency Groups                                1024
Asynchronous Consistency Groups                               16
Volumes per Consistency Group                                 1000
Paths per storage volume per VPLEX director                   4
Minimum bandwidth for VPLEX Geo IP WAN link                   1 Gbps
Minimum bandwidth for VPLEX Metro IP WAN link                 3 Gbps
Minimum bandwidth for VPLEX Metro with RAPIDPath IP WAN link  1 Gbps
Maximum WAN latency (RTT) in a VPLEX Metro                    5 ms
Maximum latency in a VPLEX Geo                                50 ms

Note: A path is a connection from an initiator back-end port on the director to the target port on the array (IT connection).

a. A combination of host initiator and VPLEX front-end target port.
b. The number of mirror legs directly underneath a RAID-1 device. A RAID-1 device can contain a RAID-1 device as a mirror leg (up to one level deep).

Software versions

The software version for Release 5.2 is: 5.2.0.00.00.38

The software version number can be interpreted as VPLEX A.B.C.DD.EE.FF, where each position has the following meaning:

Digit position  Description
A               Major release number
B               Minor release number
C               Service Pack number
DD              Patch number
EE              Hot Fix number
FF              Build number

For example, VPLEX 5.2.0.00.00.38 is major release 5, minor release 2, build 38.

Interoperability

◆ For Control Center support, refer to the ECC Support Matrix to obtain the correct ECC version.
◆ For all other VPLEX interoperability, refer to the EMC Simple Support Matrix, which is available from: https://elabnavigator.emc.com

Fixed problems in Release 5.2

The following problems have been fixed in Release 5.2:

Issue Number  Summary

11467qVPX  Previously released versions of PowerPath do not fully support devices with two unique array serial numbers when running powermt display. PowerPath has the following cosmetic issues with distributed RAID-1 (DR-1) devices:
• powermt display dev=device only shows one of the VPLEX IDs
• powermt display paths might not show the correct number of VPLEX paths
• powermt display ports may not show the correct number of VPLEX ports
Although the multipathing functionality is unaffected, refer to the ESSM for information on the environments supported by VPLEX.

15690qVPX  LUN swapping issues could result if storage volumes were deleted and created together. Release 5.2 detects and corrects LUN re-mapping on back-end storage volumes.

18228qVPX  The error reported by the health-check or VPlexPlatformHealthCheck command for the "Checking port Status" check for ports B3-FC02, B3-FC03, A3-FC02, and A3-FC03 was incorrect. This error has been corrected in Release 5.2.

18522qVPX  A duplicate UUID was produced following an SSD replacement procedure if the most recent director state backup occurred before a configuration activity. This has been fixed in GeoSynchrony 5.2.

18653qVPX  In previous versions of VPLEX, if there were meta-data changes due to a WAN link failure, back-end failure, or any other reason, health-check --full displayed an incorrect error about sub-page writes. In Release 5.2, the sub-page writes check is still performed, but the message is a warning, not an error.

18719qVPX  After certain boot-up conditions, a distributed device with two healthy legs could refuse to detach one of its legs. In Release 5.2, after all boot-up conditions, distributed devices re-attach all legs successfully.

19188qVPX  VPLEX NDU caused disruption to a virtualized MSCS 2008 cluster running in a VMware environment on NMP, in cross-connect configurations. This disruption has been fixed.

19540qVPX  When NDU was performed on a quad-engine configuration on Local and Metro deployments, there was a temporary split brain between the directors in a cluster.

19546qVPX  If there was a staggered disabling of local COM ports starting with director-1-1-A and so on, you would see an abort message. In Release 5.2, the director stays up and there is no abort message.
19945qVPX  EMC does not recommend or support the shrinking of volumes. In previous releases, if a volume was shrunk below 8 SCSI blocks, all directors would fail and end up in a reboot loop. This fix prevents that director failure if this recommendation is not adhered to.

19967qVPX  If a system encountered director failures on the 2nd upgraders, ndu recover could engage the CLI to update the VPLEX Witness context information from directors that were in the process of initialization, resulting in incorrect information in the VPLEX Witness CLI context. If this happened, the VPLEX Witness upgrade failed as part of the ndu recover. In Release 5.2, ndu recover works correctly through this issue.

22035qVPN  When there was asymmetrical COM visibility across sites (timing dependent), the clusters temporarily departed each other (and a detach could happen depending on detach rules). This issue has been fixed in Release 5.2.

20093qVPX  During a metadata backup operation, an asynchronous task is queued to persist that change of metadata backup in the active metadata. If, during that time, there were many admin operations (such as removing a virtual volume), the director could assert due to a bug in the firmware. This defect has been fixed in Release 5.2.

20698qVPX  Normally, when there is a power failure and the batteries are operational, the cluster will vault; when power is restored, it will unvault, restoring its dirty data. However, if while the first cluster is down the other cluster was active (was servicing I/O) and had an abrupt failure (perhaps it also had a power failure but its battery was not operational), it simply stops and does not vault, and that cluster will lose data. When both clusters come up again, there will be a mismatch in the data between the cluster which has a vault and the one which does not. This should just result in the consistency groups becoming suspended with data loss. However, due to this defect, it also resulted in failures of the directors due to the mismatch in data during the synchronization of data between the clusters.

20759qVPX  During the initial creation of a DR-1, the CLI and GUI reported that the DR-1 had a health state of major-failure. The health status of a DR-1 will now be reported as minor-failure or degraded.

22095qVPX  RecoverPoint reported failed attaching to the splitter. This occurred when attempting to attach the splitter to a VPLEX volume after some volumes were deleted from the environment. This issue has been fixed in Release 5.2.

22227qVPX  A consistency group with single-legged distributed virtual volumes would be suspended at both clusters if VPLEX Witness became isolated from the cluster with the healthy storage leg and the clusters partitioned. This issue has been fixed in Release 5.2.

22733qVPX  NDU failed in the Finishing NDU phase with the following error:
" * waiting for management connectivity: ... ERROR Encountered a problem while rebooting the second upgraders: interrupted sleep"
This error no longer occurs in upgrades to Release 5.2.

22837qVPX  A timing issue under certain conditions was causing a full-sweep (loss of history) on RecoverPoint Consistency Groups. This issue has been fixed in Release 5.2 and requires the use of RecoverPoint Release 4.0 SP1 or later.

23021qVPX  Interruption in VPLEX director connectivity to all RecoverPoint appliances resulted in corrupt bookmarks for replica images. This issue was fixed in Release 5.2.

Known problems and expected behaviors

This section describes known problems, expected behaviors, and documentation issues.

Known problems

The following issues should be noted:

Issue Number  Known Problem

7910qVPX  Windows 2003 and Windows 2008 hosts with the Storport storage driver experienced I/O disruption during the I/O transfer phase of a VPLEX GeoSynchrony upgrade (NDU). Upgrades from Release 5.2 will no longer see this issue.

12403qVPX  A conflicting detach can fail with "No conflict detected in the set". Contact EMC support for the workaround procedure to clear the consistency group.

13466qVPX  When the VPlexPlatformHealthCheck command is run before configuring the system, it reports the SPS status as OK even if the serial cable is not connected between the director serial port and the SPS. Sample output:
Sps : OK
stand-by-power-supply-A : OK
Note: Check the serial cable connections before proceeding.

14386qVPX  During a device or extent migration, the virtual volume or devices built from the migration source report degraded for operational status, minor failure for health status, and rebuilding for health indication. This is due to the data copy from the source to the target of the migration. When the migration is in the commit-pending state, the operational status and health indications should return to their original values.

14459qVPX  When a third-party array registers with VPLEX with zero LUNs, a critical call home message is sent: "Target device is detected to be unsuitable for operation with VPlex". Workaround: Once the LUN is exported from the third-party array to the VPLEX, it recognizes the array and identifies it as a supported storage device.

14500qVPX  After an NDU failure, the ndu recover command might fail to enable the VPLEX Witness. Workaround: After running ndu recover, ensure that the VPLEX Witness is running. If it is not, manually enable the VPLEX Witness.

15559qVPX, 15823qVPX  Storage volumes responding to I/Os with continuous "SCSI Busy" on storage arrays connected to VPLEX can negatively affect performance on a VPLEX system. Once VPLEX detects this, a call home is sent. This issue should be corrected as soon as it is detected.

15985qVPX  In rare cases, a VPLEX director can fail during NDU Geo if the discovery of the port state takes too long, causing failure of the NDU Geo process. Workaround:
1. Issue the ndu recover command.
2. Re-start the upgrade using the ndu start --force-geo option.

16196qVPX  In a cross-connected host configuration, the losing cluster for the distributed device suspends I/O (until it is manually resumed) if auto-resume is false. This causes cross-connected hosts to continuously retry the I/Os on the path to the losing cluster. To avoid this scenario, set auto-resume to true for the consistency groups corresponding to the volumes provisioned to the cross-connected hosts.

17126qVPX  NetApp array names might not be unique because NetApp does not provide the array serial number in any inquiry data. VPLEX identifies an array using a portion of its World Wide Port Name (WWPN):
Name = <Vendor ID>~<Product ID>~<lower 28 bits of WWPN>
Example:
T10 Vendor ID: NETAPP
Product ID: LUN
ITL = "x fcp i 0x5000144240014d20 t 0x500a098587792de8 0x0002000000000000" (t = NetApp port name)
Name = NETAPP~LUN~7792de8

17218qVPX  While the clusters are in contact, the VPLEX system prevents the same storage volume from being claimed at each cluster. However, if the clusters are partitioned, VPLEX cannot prevent the same storage volume from being claimed at both clusters when the clusters rejoin after a partition. This issue does not cause data unavailability. Refer to the Generator to find troubleshooting information for this procedure.
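The NetApp array-naming rule described above (Name = <Vendor ID>~<Product ID>~<lower 28 bits of WWPN>) can be reproduced mechanically. The following sketch is illustrative only; the helper function is ours, not a VPLEX tool, and the values are taken from the example in the issue text:

```python
# Illustrative helper (not part of VPLEX): build the array name the release
# notes describe for NetApp arrays, from the T10 vendor ID, product ID, and
# the lower 28 bits of the target port WWPN.
def netapp_array_name(vendor_id: str, product_id: str, target_wwpn: int) -> str:
    low28 = target_wwpn & 0xFFFFFFF  # mask keeps only the lower 28 bits
    return f"{vendor_id}~{product_id}~{low28:x}"

# Values from the example (t = NetApp port name)
print(netapp_array_name("NETAPP", "LUN", 0x500A098587792DE8))  # NETAPP~LUN~7792de8
```

Because only a 28-bit fragment of the WWPN is used, two arrays can in principle produce the same fragment, which is why the issue warns that the generated names might not be unique.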
17963qVPX  In some cases, when a data migration is started on one cluster and is viewed from the management server on the other cluster, the migration can appear in an error state on the management server of the second cluster. In fact, the migration has completed, been committed, and been removed on the original management server. Workaround: On the management server that displays the error, cancel and then remove the migration jobs.

18029qVPX  When running configuration system-setup after pre-configuring call-home on VPLEX using configuration event-notices-reports-config, the system-setup command comes up immediately at the 'Review and Finish' screen with incomplete answers. This is due to the configuration event-notices-reports-config command incorrectly marking the interview as completed. Workaround: Do one of the following:
1. Say no to the question "Would you like to run the setup process now? (yes/no)". This will bring you back to the start of the interview process.
2. Exit the VPlexcli, remove the file /var/log/VPlex/cli/VPlexconfig.xml, then go back into the CLI and execute configuration system-setup. You should be brought to the start of the interview process.

18502qVPX  When attempting to reduce the number of back-end paths from VPLEX to logical units on the storage array, be extremely careful with zoning, LUN masking, and the reported size of the logical unit. This process can cause storage views to enter an error state, LUN shrinking messages, or unreachable volumes if the steps are not executed correctly. A non-disruptive procedure to reduce the number of paths to a storage array volume is available in the Generator.

20446qVPX  If a storage volume on VPLEX has its LUN unmasked on the array, and the VPLEX storage volume is not unclaimed and forgotten and an array re-discover is not run, a LUN swapping condition can occur if the back-end LUN on the array is re-masked to the VPLEX with a different LUN ID.

21987qVPX  In some cases, when one leg of a distributed device within a consistency group is unhealthy and marked for a rebuild, removal of the unhealthy leg results in an error. This problem was fixed in Patch 4 but is present in previous releases. Workaround:
1. Find the consistency groups containing the distributed virtual volume.
2. Remove the distributed virtual volume from the consistency groups.
3. Re-try the detach mirror command.
4. Replace the unhealthy leg with a healthy leg.

20632qVPX  On very large scale systems, collect-diagnostics can take a long time. Workaround: Use collect-diagnostics --large-scale for large scale systems where collect-diagnostics takes longer than expected.

21014qVPX  When clusters are joined and one or more directors loses WAN connectivity, the two clusters partition. If a director without WAN connectivity attempts to join a system whose clusters are joined, that director is prevented from joining the system until its WAN connectivity is restored.

22327qVPX  When adding LUNs to an array, if I/O is not running to all VPLEX directors, the new LUNs will not be detected. When a LUN is not visible from all directors, the array's connectivity status is set to error. Once all directors detect the new LUNs, the array connectivity status will be OK. Workaround: If this status is reported after adding LUNs, run array re-discover.

22584qVPX  Rebuild transfer sizes larger than 2 MB may cause performance issues on host I/O.

22962qVPX  If the auto-boot flag of the director is set to false when you are performing the SSD field replacement, the replacement fails while restarting the director firmware. Workaround: Before performing the SSD replacement procedure, enter the following command:
ll /engines/*/directors/<Director Involved in Replacement>
If auto-boot is set to false, enter the following command:
set /engines/*/directors/<Director Involved in Replacement>::auto-boot true
Run the ll command again to ensure that the change was saved.
22943qVPX  VMAX LUNs with device ID greater than A000 are claimed by Unisphere for VPLEX with an incorrect name starting with 0000. Workaround: Use the VPlexcli claiming wizard instead to claim these volumes with correctly generated names.

22963qVPX  If the auto-boot flag on the director is set to false when you are performing a director replacement, the director replacement fails while restarting the director firmware. Workaround: Before performing the director replacement procedure, enter the following command:
ll /engines/*/directors/<Director Involved in Replacement>
If auto-boot is set to false, enter the following command:
set /engines/*/directors/<Director Involved in Replacement>::auto-boot true
Run the ll command again to ensure that the change was saved.

23058qVPX  Paths on the standby node on a Windows Server Failover Cluster 2012 with MPIO do not recover automatically after the HBA initiators on the standby node are removed and added back to a VPLEX storage view. Workaround: To recover from this problem, reboot the standby node.

23223qVPX  Paths to VPLEX storage are not restored automatically on Solaris 11 on x86 with PowerPath multipathing when the initiator is removed and added back in a VPLEX storage view. Workaround: To recover the paths, execute the following commands on the host:
1. cfgadm -al
2. devfsadm -C
3. powermt restore
See primus solution emc319893.

23280qVPX  A VPLEX director restart will be delayed if the following conditions are true:
1. There are active I/Os to a VPLEX volume that is serving as a replica volume.
2. The RecoverPoint journal associated with the replica volume is full.
RecoverPoint blocks I/O to replica volumes if the associated journal is full; if a RecoverPoint journal is full while a replica is in image access mode, director restarts may be delayed.
23050qVPX  An extent whose underlying storage volume has been removed from the system cannot be destroyed.

23332qVPX  When the subnet address for any subnet under cluster connectivity is set to an address NOT ending in ".0", the health-check command output for "IP WAN COM connectivity" reports that the remote cluster is "not-configured" for the associated peer cluster port group. This may lead to the concern that IP WAN COM connectivity is not configured. Workaround: When a subnet address is set to a value not ending in ".0", use the connectivity validate-wan-com command to check connectivity after the configuration is completed. If this command reports OK for all port groups, this is an assurance that IP WAN COM connectivity is successfully configured.

23362qVPX  Clustered Windows hosts configured with native MPIO may fail I/O when VPLEX GeoSynchrony is upgraded (NDU). The native MPIO path failover can take more than 45 seconds to discover the new paths being presented by the NDU process, causing I/O to fail if there are no more healthy I/O paths left. Depending on the I/O workload pattern, I/O will continue after about 45 seconds.

23451qVPX  If there is a metadata update made by the second upgraders within the I/O transfer phase, the NDU procedure can roll back because of metadata updates seen on the second upgraders. During an NDU session, right at the end of the I/O transfer phase, a virtual volume can get its first write I/O (since NDU disabled the inter-cluster link); the first write requires VPLEX to mark the volume out-of-date, which in turn requires a metadata update. Since the first upgraders do not have the capability to read the update made by the second upgraders, the first and second upgraders would have different (inconsistent) views of the metadata, so NDU has to roll back the first upgraders. This issue is mostly observed on Geo systems because the inter-cluster link is intentionally disabled during NDU. Below is sample output from an NDU session:
Enabling front-end on 1st upgraders (IOFWD is active): ... DONE
WARNING: A meta-volume update was observed on 2nd upgraders after 1st upgraders had already processed meta-data.
23565qVPX  VPLEX only supports block-based storage devices that use a 512-byte sector for allocation and addressing. Storage devices that do not use 512-byte sectors may be discovered by VPLEX but cannot be claimed for use within VPLEX and cannot be used to create a meta-volume. Similarly, a storage device whose capacity is not divisible by 4KB (4096B) may be discovered by VPLEX but cannot be claimed. When you try to use a discovered storage volume with an unsupported block size, either by claiming it or by creating a meta-volume using the appropriate VPLEX CLI commands, the command fails with this error:
the disk has an unsupported disk block size and thus can't be moved to a non-default spare pool
When a user tries to claim a discovered storage volume with an unsupported (unaligned) capacity, the command fails with this error:
The storage-volume <storageVolumeName> does not have a capacity divisible by the system block size (4K) and cannot be claimed
Ensure the storage array connecting to VPLEX supports or emulates these requirements.

23570qVPX  A large number of I/O timeouts on an array to different storage volumes can potentially impact I/Os to healthy arrays, causing both the healthy and unhealthy storage volumes to be marked HW dead. The healthy storage volumes will be auto-resurrected.

23602qVPX  Observe the following restrictions for RecoverPoint Repository and Journal volumes:
• RecoverPoint Repository and Journal volumes should never be placed in a storage view with any hosts other than the RecoverPoint appliances that use them.
• The visibility property of RecoverPoint Repository and Journal volumes should never be set to "global".
• If distributed volumes are ever used for RecoverPoint Repository and Journal volumes, their winner-loser rules should be set such that the site with the RecoverPoint appliance is the winner.

23609qVPX  If the storage volume that makes up a leg of a VPLEX RAID-1 begins to perform poorly, VPLEX does not isolate that leg. The performance problems on the storage volume result in a performance problem on the virtual volume and ultimately on the host that is accessing that virtual volume. The problem persists until performance to that storage volume improves or the storage volume is removed from the RAID-1.

23737qVPX  VPLEX does not try to distinguish more degraded paths from less degraded paths; VPLEX currently uses all degraded I/O paths. Refer to the troubleshooting procedures for information on how to find and remedy degraded disks.
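The 512-byte-sector and 4 KB-capacity claim rules described above can be pre-checked before presenting a LUN to VPLEX. This is a hedged sketch under the stated constraints; the function and constant names are ours, and VPLEX performs its own checks at claim time:

```python
# Illustrative pre-check (not a VPLEX command): a device must use 512-byte
# sectors and have a capacity divisible by the 4 KB system block size to be
# claimable or usable for a meta-volume, per the issue text above.
SYSTEM_BLOCK_SIZE = 4096  # 4 KB system block size
SUPPORTED_SECTOR = 512    # only 512-byte-sector devices are supported

def claimable(capacity_bytes: int, sector_size: int) -> bool:
    return (sector_size == SUPPORTED_SECTOR
            and capacity_bytes % SYSTEM_BLOCK_SIZE == 0)

print(claimable(10 * 2**30, 512))        # True:  10 GiB, 512-byte sectors
print(claimable(10 * 2**30 + 512, 512))  # False: capacity not 4 KB aligned
print(claimable(10 * 2**30, 4096))       # False: unsupported sector size
```

Running such a check against the array's reported geometry before zoning a LUN to VPLEX avoids discovering the restriction only when the claim command fails.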
Please contact EMC support for remedial steps. -The logging volume and the rebuilding leg of the DR1 should fail simultaneously. Workaround: Restart the VPN using: VPlexcli:/> vpn restart 23928qVPX 23929qVPX 23964qVPX 23973qVPX EMC VPLEX with GeoSynchrony 5. the command has been observed to hang on completion. You do not need to rerun the command that hung. before the above failures and is pending completion. 23836qVPX The VPLEX CLI cluster-status command currently only reports unhealthy devices or volumes.2 Release Notes 19 . and continue the bring-up procedure from the point of the problem.d/VPlexManagementConsole restart 3. Incorrect information in portroles and portlayout files can lead to EZ-setup process getting stuck at a point without discovering the back-end ports and storage. The log rebuild to a DR1 can get stuck and will never complete. The side effect of this is preventing the configuration of meta-volume and not allowing the directors to gain quorum.Known problems and expected behaviors Issue Number 23826qVPX Known Problem When configuration system-setup is run. Open a new console window on the management station 2. Log back in to the VPlexcli. So. Because of this issue. "Verifying that the FRU replacement was successful" step might fail if the director attributes "operational-status" and "health-status" are not "ok" in director context in VPLEX CLI. Workaround: See the related troubleshooting issue in the generator. Enable the disabled ones and wait for all directors to join the quorum. when a new datastore is created in with VMFS5.Known problems and expected behaviors Issue Number 24005qVPX Known Problem In vSphere 5.1. This issue is observed in cases where the FRU Replacement script verifies the director attributes before the VPLEX CLI could update the attributes with correct values. The current FRU replacement script for I/O modules does not warn you if a different type of I/O module was inserted. 
the removal of a mirror on that distributed virtual volume without the --discard option. The following steps need to be performed to complete the Engine Chassis Replacement procedure. Resolve the director failure issue 2. snapshot.2 on GEO configurations when there is a director failure in the second upgrader set. Workaround: Double check that the type of replacement I/O module is of the same type (Fibre Channel or IP) as the one you are replacing before beginning the replacement. all operations to the datastore fail. see the troubleshooting issues for distributed devices. when ATS (CAW) is disabled in VPLEX. A GeoSynchrony upgrade rolls back because of a director failure. Patch 4 to 5. Workaround: Reset this mode in the datastore formatted for VMFS5. Rollback fails for upgrade from 5. Issue ndu recover to complete the recovery of the failed NDU and to restore the system settings. This can only be done through the ESX CLI. can lead to a director firmware failure. 3.0 (the default). Check all WAN-COM ports.). If a distributed virtual volume has inconsistent logging settings. it uses ATS only for all VM operations (VM creation.2 Release Notes . not through vSphere client GUI. the verification step might have failed but the Engine Chassis Replacement would have completed successfully. 24114qVPX 24123qVPX 24166qVPX 24199qVPX 20 EMC VPLEX with GeoSynchrony 5. During the Engine Chassis Replacement procedure. Workaround: In the generator. Workaround: 1. There is a troubleshooting procedure for this issue in the generator.0. etc. 3. The remedy of event 0x8A4830DC should read. 24245qVPX While logging into the VPlexcli. Workaround is to correct the permission or localhost information in /etc/hosts file. 4. Workaround to reduce the /var/log partition is.Known problems and expected behaviors Issue Number 24207qVPX Known Problem During Engine Expansion. 
the procedure fails when "not all" the directors of the expansion engine(s) netboot and the error message thrown is as below: The replacement procedure has encountered a fatal error and cannot continue.2 Release Notes 21 . 24321qVPX 24324qVPX EMC VPLEX with GeoSynchrony 5. Re-run the VPlexadmin command "add-engines" and power-on the directors as per the script instructions. the localhost resolution might fail which can result in a failed login. The collect-diagnostics command has been initiated and will be needed in the investigation of this issue. . “Ensure both Fibre channel switches are plugged in to different UPSs and that the management-server is plugged in to one of the UPSs.Move out the capture logs from /var/log/ partition. Verify the cable connections and make sure all the cable connections are properly done as per the procedure document. Import VPlexadmin module. Shutdown all the directors of the expansion engine(s) 2. please contact the EMC Support Center. If the problem persists.Restart the management server. This issue can be hit if there are incorrect permissions on /etc/hosts file or if the local host information is removed/commented from /etc/hosts file. The following steps need to be performed to retry the Engine Expansion procedure: 1.” The var/log/ partition over a period of time can get filled up because of large number of capture log files. Event 0x8A4830DC lists an incorrect remedy. . Login as 'admin' into VPlexcli 2. 24484qVPX 24550qVPX Expected behaviors ◆ If you intend to use the entire capacity of a storage volume. when the storage volume grows. 
Use the procedure generator to create the document and follow Task 15: Establish a HyperTerminal connection to the switch 3.Known problems and expected behaviors Issue Number 24213qVPX Known Problem During the execution of VPLEX EZ-Setup on a GEO/Metro system the warning The Fibre Channel switch points to an incorrect NTP Server is displayed indicating that the NTP server was not configured correctly on the Fibre Channel switch. If you are expecting to use a specific amount of the storage volume and no more. Execute "reset" command. 2. the volume displays 0B expandable space even if it displays expandable = true. In this case. Get the IP address of the eth2 for the management server. After the above steps. The values are reset to Kiev password policy 5. In this case. input the admin password and later agree at the warning prompt 4. if the upgrading cluster can’t see its peer during the Post NDU tasks. 4. the NDU will display an error message indicating that it failed to configure call home. 22 EMC VPLEX with GeoSynchrony 5. Navigate to "/security/authentication/password-policy" context 3. you configure a full-sized extent on top of that storage volume This way. the Current Capacity of the extent increases as well. During NDU in a VPLEX Metro. On an upgrade from Release 5. Therefore. Log into the side-A of the switch and execute the command tsclockserver with the eth2 IP address to point to the correct NTP server. any of the password policy attributes can be modified using the set attribute-name value command in the password policy context.2 Release Notes . then re-try the command. Workaround: Run the configuration update-callhome-properties command. the volume displays the amount of actual expandable capacity. when the storage volume grows. Repeat the same procedure on the side-B of the switch.2.1 to Release 5. Workaround: 1. If the command fails. changes to the password minimum length attribute do not get updated. 
it's likely that you want the extent to grow as well. you configure a less-than-full-sized extent on that storage volume. Workaround: 1. and is available for expansion. Ensure that the host resources are sufficient to handle the number of paths provisioned for your VPLEX system. users should first detach the storage volumes from the VPLEX Raid-1.2 Release Notes 23 . If such operations need to be performed. Customers should deploy an external appliance to achieve data encryption over the IP WAN links between clusters. or reduce the number of concurrent migrations/rebuilds. you can expand the volume and use the Show Available button to find the available space on that same volume and then use that to expand with. In the GUI. if you change the active storage processor (SP) for a LUN. the incorrect SP may be reported as active in the VPLEX user interface. when in fact SPB is active. VPLEX removes the “dead” status from the volume. and then re-add the storage volumes to the VPLEX Raid-1 as necessary to trigger a rebuild. When a storage volume becomes hardware dead VPLEX automatically probes the storage volume within 20 seconds. ◆ Using the CLARiiON™ Navisphere Management Suite. WARNING During the time that the device is hw-dead. For example. Please follow the Best Practices to configure and monitor WAN-COM links. VPLEX in Metro and Geo configurations does not provide native encryption over the IP WAN COM link. start I/O.Known problems and expected behaviors This does not mean that you can not use this extra storage. perform the data changing operations. then lower the rebuild transfer-size setting for the devices. thus returning it to a healthy state. Poor QoS on the WAN-COM link in a Metro or Geo configuration could lead to undeterministic behavior and data unavailability in extreme cases. Failure to follow these steps will change data ◆ ◆ ◆ ◆ ◆ EMC VPLEX with GeoSynchrony 5. To correct this reporting inaccuracy. 
the system recognizes which SP is active and reports it correctly. If the probe succeeds. If host I/O performance is impacted during a data migration or during a rebuild. After I/O initiates. users should not perform operations that change data on the storage volumes underneath VPLEX Raid-1 (through maintenance or replacing disks within the array). SPA may be reported as active. causes the operational status for the consistency group to display: "requires-resume-after-data-loss-failure". Policies are not enforced for the service user. logging volumes. such as requests rejected by storage volume or port leaving the back-end fabric. while all SCSI commands initiated to it by the initiator (VPLEX) timed out. for any user created on the management server. but the admin user will be forced to change their password on the next login.2 Release Notes . • The condition where storage arrays would enter fault modes such that one or more of its target ports remained on the fabric.Known problems and expected behaviors underneath VPLEX without its knowledge. who has not changed their password in the last 91 days. simultaneously vault. Under the rare circumstances where both clusters completely fail at the same time. A back-end failure on both legs of a distributed RAID (back-end failure at each cluster) that belongs to an asynchronous consistency group. service personnel must not remove the power in one or more engines unless both directors in those engines have been shutdown and are no longer monitoring power. ◆ Devices used as system volumes (VPLEX meta-volume. and backups for the meta-volume and mirror). please contact EMC support for assistance with recovery. There are two types of failure handling for back-end array interactions. always follow official maintenance procedures. ◆ 24 EMC VPLEX with GeoSynchrony 5. ◆ By default. To avoid unintended vaults. mirrored copy. or one cluster vaults and the other cluster completely fails before the vault is recovered. 
Failure to do so. their accounts will get locked. which may lead to data corruption upon resurrection. Refer the “Password Policy” section of the Generator troubleshooting section to overcome account lockouts. • The unambiguous failure responses. without a data rebuild the Raid-1 legs might be inconsistent. ◆ WARNING When performing maintenance activities on a VPLEX Geo configuration. leads to data unavailability in the affected cluster. must be formatted/zeroed out prior to being used by VPLEX as a meta-volume. The admin user account will never be locked out. ◆ ◆ ◆ Veritas DMP settings with VPLEX If a host attached to VPLEX is running Veritas DMP Multipathing. Set the recoveryoption to throttle and iotimeout to 30 using the vxdmpadm setattr enclosure emc-vplex0 recoveryoption=throttle iotimeout=30 command. VPLEX issues a call home event. In Release 5. If a RecoverPoint appliance and a VPLEX director restart at the same time. In this case. 1. I/O requests initiated by a host initiator to VPLEX virtual volumes are redirected away from unresponsive paths to the back-end array. Set the dmp_lun_retry_timeout for the VPLEX array to 60 seconds using the vxdmpadm setattr enclosure emc-vplex0 dmp_lun_retry_timeout=60 command. In the second condition.1 Patch 3. and to a third site (by RecoverPoint). Data from the cluster where the RPAs are configured is replicated to the peer VPLEX cluster (by VPLEX). VPLEX would not take any isolation action for these paths. onto paths that are responsive.VPLEX Geo The following expected behaviors are specific to VPLEX Geo: EMC VPLEX with GeoSynchrony 5. At the time of isolation. change the following values of the DMP tunable parameters on the host to improve the way DMP handles transient errors at the VPLEX array in certain failure scenarios.1 Patch 3 and later. ◆ ◆ Changing the time-zone on VPLEX components is not supported. VPLEX now isolates the paths which remain on the fabric but stay unresponsive. a full sweep may occur. 
Device migrations between two VPLEX clusters are not supported if one leg of the device is replicated by RecoverPoint.2 Release Notes 25 . Virtual image access is not supported. Expected behaviors . VPLEX handled the unambiguous failure responses by isolating the failed storage volume or path (Initiator-Target Nexus). In VPLEX Metro configurations. 2.Known problems and expected behaviors Prior to Release 5. RecoverPoint Appliances (RPAs) can be configured at only one VPLEX cluster. ◆ Replication of volumes between clusters in a Geo environment should be performed during maintenance when host access to the volumes can be stopped. For the duration of this synchronous migration. VPLEX volume replication (converting from a local volume to a distributed asynchronous volume) temporarily changes the asynchronous volumes to synchronous volumes. the upgrade software prevents Non-Disruptive Upgrade of the VPLEX software to the next release until the remote volumes are either converted to local volumes or to distributed volumes.2 Release Notes . While both the VPLEX GUI and the CLI allow creation of remote volumes in VPLEX Geo. If the user makes use of remote volumes. the only cache mode available at this time is synchronous. If the applications using these volumes are up and running. ◆ Use of remote volumes in VPLEX Geo is not supported. EMC does not support this operation when hosts are accessing the volumes. The replication operation is available in both the GUI and the CLI However. remote volumes in VPLEX Geo are not supported. even across VPLEX Geo asynchronous distances. 26 EMC VPLEX with GeoSynchrony 5. As such. there is a risk of data unavailability. ◆ Deletion of asynchronous consistency groups requires that the applications accessing the volumes in the asynchronous consistency group be halted or shut down. 
the volumes in the group are removed and the cache mode of the volumes is changed to synchronous.Known problems and expected behaviors ◆ Migration of volumes between clusters in a VPLEX Geo environment should be performed during maintenance when host access to the volumes can be stopped. Currently. there is a risk of data unavailability and for this reason. The migration can be done using the CLI only. this operation has been disallowed in the VPLEX GUI. should remote volumes be present. EMC Support will be limited to those cases where the applications using the volumes have been halted or the volumes removed from the storage view. This change in latency is disruptive to applications and risks data unavailability. When deleting the asynchronous consistency group. VPLEX volume migrations are always synchronous. While neither the GUI nor the CLI prevents this action. Currently. this synchronous replication can lead to data unavailability to those applications sensitive to Geo latency. Furthermore. the detach rule for a VPLEX Consistency Group protected by RecoverPoint is required to have the winner configured to be the VPLEX cluster with the RecoverPoint attached. EMC does not support changing the cache mode of a consistency group from asynchronous to synchronous. and EMC does not support the removal of volumes from an asynchronous consistency group unless the volumes have been removed from the storage view or the application has been shutdown. If the conflicting detach is to be resolved from the VPLEX cluster without RecoverPoint protection.2 Release Notes 27 . The CLI does not provide this safeguard. To avoid unexpected data unavailability. ◆ Modification of the cache mode of an asynchronous consistency group to synchronous is not supported. ◆ Exposure of synchronous volumes (remote and synchronous distributed RAID-1) to views could cause data unavailability. The VPLEX GUI does not allow the user to make such a change. 
the RecoverPoint protection for the VPLEX consistency group needs to be torn down. EMC does not support this configuration. Changing the cache mode of a consistency group from asynchronous to synchronous risks data unavailability.Known problems and expected behaviors ◆ Removal of a volume from an asynchronous consistency group without first halting the applications accessing the volume is not supported. WAN Link restoration result in a conflicting detach. the other cluster suspends. Expected behaviors — RecoverPoint In a VPLEX Metro/RecoverPoint environment. Removing a volume from an asynchronous consistency group automatically changes its cache mode from asynchronous to synchronous. which makes it inaccessible to the host. During a WAN Link outage. it provides no safeguard against data unavailability. The GUI only allows this operation if the volume has been removed from its storage view. shut the application down prior to the removal from the storage view to avoid data unavailability. While the CLI still allows the user to change the cache mode from asynchronous to synchronous. If the suspended cluster resumes. If the application attached to the volume is up and running there is a risk of data unavailability. and the EMC VPLEX with GeoSynchrony 5. Documentation updates 28 EMC VPLEX with GeoSynchrony 5. not 8. If VPLEX resolves the conflicting detach from the VPLEX cluster with RecoverPoint attached.2 Release Notes .2 since product release: In the CLI Guide. Documentation errata The following errors were found in the VPLEX documentation for Release 5. cannot declare a winning cluster other than the one doing splitting. At this point. in the password-policy set command reference page. If an attempt is made to resolve the conflicting detach before the RecoverPoint protection is torn down. the following error will be seen: In the presence of splitting. you can resolve the conflicting detach. password-minimum-length value is 14. 
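Where a host runs Veritas DMP, the two tunable changes listed under "Veritas DMP settings with VPLEX" above can be applied together. The sketch below is illustrative only: the wrapper function is not part of any EMC or Veritas tooling, and the enclosure name emc-vplex0 is the example used in these notes — confirm the actual name on your host with `vxdmpadm listenclosure all`.

```shell
# Sketch only: applies the two DMP tunables recommended in these notes.
# apply_vplex_dmp_tunables is a local convenience, not an EMC/Veritas command.
apply_vplex_dmp_tunables() {
    enclosure="${1:-emc-vplex0}"   # example enclosure name from these notes
    if ! command -v vxdmpadm >/dev/null 2>&1; then
        echo "vxdmpadm not found: run this on a host with Veritas DMP installed"
        return 1
    fi
    # Retry I/O to the VPLEX enclosure for up to 60 seconds on transient errors
    vxdmpadm setattr enclosure "$enclosure" dmp_lun_retry_timeout=60 &&
    # Throttle error recovery with a 30-second I/O timeout
    vxdmpadm setattr enclosure "$enclosure" recoveryoption=throttle iotimeout=30
}

apply_vplex_dmp_tunables emc-vplex0 || true
```

On hosts without Veritas DMP the function simply reports that vxdmpadm is unavailable, so the script can be staged safely before maintenance.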
Documentation updates

Documentation errata
The following errors were found in the VPLEX documentation for Release 5.2 since product release:
◆ In the CLI Guide, in the password-policy set command reference page, the password-minimum-length value is 14, not 8.

Documentation
The following documentation is available to support VPLEX:
◆ EMC VPLEX Hardware Installation Guide — High-level overview of the steps to configure a new VPLEX installation.
◆ EMC VPLEX Site Preparation Guide — Steps to prepare the customer site for VPLEX installation.
◆ EMC VPLEX Configuration Worksheet — Tables of parameters for which specific values are required to configure a VPLEX cluster. The tables include space to enter the required values.
◆ EMC VPLEX Configuration Guide — Detailed steps to configure a VPLEX implementation at the customer site.
◆ EMC VPLEX Security Configuration Guide — Provides an overview of security configuration settings available in VPLEX.
◆ EMC VPLEX CLI Guide — Descriptions of VPlexcli commands.
◆ EMC VPLEX Element Manager API Guide — Describes the element manager programming interface.
◆ EMC VPLEX Open-Source Licenses — For reference. This document contains the content of the open-source licenses used by VPLEX.
◆ EMC Regulatory Statement for EMC VPLEX — Provides regulatory statements in a single document, eliminating the need to duplicate them across documents.
◆ Unisphere for VPLEX Online Help — Information on performing various VPLEX operations available on the VPLEX GUI.
◆ SolVe Desktop generator — Replaces the EMC VPLEX Procedure Generator. For use in performing upgrades, component replacement, troubleshooting, and miscellaneous management procedures, with references to the applicable documents. The SolVe Desktop is available on the EMC Online Support website, for download and use on local PCs. This tool is also referred to in this document as the Generator.
◆ Implementation and Planning Best Practices for EMC VPLEX Technical Notes
◆ EMC VPLEX Administration Guide — High-level information on system administration topics specific to VPLEX.
◆ EMC VPLEX Product Guide — High-level overview of the VPLEX hardware and GeoSynchrony 5.2 software, including descriptions of common use cases.
◆ EMC Best Practices Guide for AC Power Connections in Two-PDP Bays

Installation
To install and set up a new VPLEX implementation, use the documents in the following order:
1. EMC VPLEX GeoSynchrony Release Notes
2. EMC VPLEX Site Preparation Guide
3. EMC VPLEX Configuration Worksheet
4. EMC VPLEX Configuration Guide

Installing VASA Provider
If you are installing the VASA Provider for the first time, follow the instructions in the Generator and use the following files for each release of VPLEX:

GeoSynchrony Release 5.2 — VASA OVA file: VPlex-5.2.0.00.00.05_D10_VASA_9-vasa.ova

You can download these files from http://support.EMC.com/downloads > VPLEX. If the VASA Provider is already installed on your VPLEX system, there is no need to upgrade.

Upgrading GeoSynchrony
EMC provides a Procedure Generator for generating custom procedures to assist you in managing your system. A new tool, the SolVe Desktop, combines the functionality of all the Procedure Generators from different EMC products into one desktop tool and integrates those Procedure Generators with other support tools.
You can use the Procedure Generator or SolVe Desktop (available on the EMC Online Support website) to produce the upgrade document.
1. Do one of the following to start the tool:
• If you are using the Procedure Generator: if you have not already done so, download the Procedure Generator from the downloads area on EMC Online Support, then start the Procedure Generator.
• If you are using the SolVe Desktop: a. Navigate to https://support.EMC.com/products. b. In the Find a product field, type VPLEX Series and press Enter. c. Select Downloads >>. d. Accept the download of the VPLEX module. Then start the SolVe Desktop, log in to the tool, and run the VPLEX module.
2. In either the Procedure Generator or the VPLEX module of the SolVe Desktop, select Procedures for EMC USE ONLY: Installation and Upgrade.
3. Select Upgrade GeoSynchrony > Upgrade to GeoSynchrony to produce the upgrade document.
4. Locate and download the following files.

Upgrade package location
The VPLEX GeoSynchrony upgrade files are available on EMC Online Support (registration required).

GeoSynchrony Release 5.2 — Files to download:
VPlex-5.2.0.00.00.05-management-server-package.tar
VPlex-5.2.0.00.00.05-director-firmware-package.tar
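Before starting the upgrade it can be useful to confirm that both Release 5.2 packages listed above are actually present in the download directory. The helper below is a hedged convenience sketch, not part of the official upgrade procedure: the function name and the VPLEX_DOWNLOAD_DIR variable are local conventions, while the filenames are the ones listed above.

```shell
# Sanity check before NDU: verify that both GeoSynchrony 5.2 upgrade
# packages (filenames from the table above) exist in a download directory.
# check_upgrade_packages and VPLEX_DOWNLOAD_DIR are local conveniences,
# not part of any EMC procedure.
check_upgrade_packages() {
    dir="$1"
    missing=0
    for f in VPlex-5.2.0.00.00.05-management-server-package.tar \
             VPlex-5.2.0.00.00.05-director-firmware-package.tar; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing: $f"
            missing=1
        fi
    done
    if [ "$missing" -eq 0 ]; then
        echo "all upgrade packages present"
    fi
}

check_upgrade_packages "${VPLEX_DOWNLOAD_DIR:-.}"
```

Run it against the directory where you saved the downloads; any file it reports as missing should be re-downloaded before producing the upgrade document.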
Upgrade paths for Release 5.2
On both VS1 and VS2 hardware, the upgrade to Release 5.2 is supported from the following releases: 4.2, 5.0, 5.0.1, 5.1, 5.1 Patch 1, 5.1 Patch 2, 5.1 Patch 3, and 5.1 Patch 4. For those new systems running Release 4.2 or above that have not been configured yet, the upgrade to 5.2 can be accomplished using the NDU pre-configuration upgrade procedure.

Software packages
Software packages that may be contained on this kit:
◆ EMC GeoSynchrony — GeoSynchrony provides simplified management and non-disruptive data mobility across heterogeneous arrays with a unique scale-up and scale-out architecture. VPLEX's advanced data caching and distributed cache coherency provide workload resiliency, automatic sharing, balancing, and failover of storage domains with predictable service levels.
◆ Storage Management System (SMS) — SMS provides serviceability capabilities and enables phone home, data logging, error logging, software updates, secure interconnect, secure communications paths for local and remote clusters, and licensing.

Troubleshooting and getting help
EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support website (registration is required) at: http://support.EMC.com

Technical support — For technical support, go to EMC Online Support. To open a service request through EMC Online Support, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your Comments
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this document to: techpubcomments@EMC.com
If you have issues, comments, or questions about specific information or procedures, please include the title and, if available, the part number, the revision (for example, -01), the page numbers, and any other details that will help us locate the subject you are addressing.

Copyright © 2013 EMC Corporation. All rights reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.