Installation Procedure by Hardware Model
DD Installation Procedure by Model: DD6300, DD6800 and DD9300 Hardware Installation Guide
Storage Shelves Installation Procedures: DS60 Shelf Installation Procedure
Storage Shelves Installation Procedures: ES30 and FS15 Shelf Installation Procedure
REPORT PROBLEMS
If you find any errors in this procedure or have comments regarding this application, send email to
SolVeFeedback@emc.com
Copyright © 2017 Dell Inc. or its subsidiaries. All Rights Reserved.
EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES
NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION
IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners.
Page 1 of 59
Contents
Preliminary Activity Tasks ...................................................................................................4
Read, understand, and perform these tasks.................................................................................................4
Rails and cable management assembly .....................................................................................................26
Identify the rack location to install the system.............................................................................................27
Task 1: Install the rails ................................................................................................................27
Task 2: Install the DD6300, DD6800, or DD9300 system into a rack .........................................29
Task 3: Installing the cable management assembly (CMA) ........................................................31
Installing the expansion shelves into the racks...........................................................................................33
Connecting ES30 shelves...........................................................................................................................34
ES30 cable information..........................................................................................................................34
DD6300..................................................................................................................................................35
DD6800 and DD9300 (single node, DD Cloud Tier, or ERSO) .............................................................36
DD6800 and DD9300 (HA or HA with DD Cloud Tier)...........................................................................38
Connecting DS60 shelves...........................................................................................................................40
DS60 cable information .........................................................................................................41
DD6300..................................................................................................................................................41
DD6800 and DD9300 ............................................................................................................................42
DD6800 and DD9300 with HA ...............................................................................................................44
DD6800 with DD Cloud Tier ..................................................................................................................46
DD6800 with HA and DD Cloud Tier ......................................................................48
DD9300 with DD Cloud Tier or HA and DD Cloud Tier .........................................................................49
DD6800 and DD9300 with ERSO..........................................................................................................51
Task 4: Connecting the HA interconnect.....................................................................................53
Task 5: Installing the front bezel .................................................................................................54
Connect data cables ...................................................................................................................................54
Power on all systems ..................................................................................................................................54
Enable administrative communication ........................................................................................................55
Accepting the End User License Agreement (EULA) .................................................................................56
Run the configuration wizard ......................................................................................................................56
Task 6: Configuring the network .................................................................................................56
Task 7: Configuring additional system parameters .....................................................................58
Task 8: Configure HA..................................................................................................................58
Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.
Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity
495965: Caution: Data Domain Restorers with DS60 shelves may encounter a kernel panic
due to an LCC firmware change.
2. [ ] Table 2 lists the top 10 trending service topics related to this product. This is a proactive attempt
to make you aware of any KB articles that may apply to your activity, or at least to inform you of
issues that may be associated with this product.
Hardware Overview and Installation Guide
Release 6.1 302-003-008 Rev 07
Safety information
CAUTION: If the system is used in a manner that is not specified by the manufacturer, the
protection that is provided by the equipment may be impaired.
The RJ45 sockets on the motherboard, PCI cards, or I/O modules are for Ethernet connection only
and must not be connected to a telecommunications network.
All plug-in modules and blank plates are part of the fire enclosure and must be removed only when a
replacement can be added immediately. The system must not be run without all parts in place.
DD6300, DD6800, and DD9300 systems must be operated only from a power supply input voltage
range of 100–240 VAC and 50–60 Hz. The ES30 and FS15 shelves use 100–240 VAC and 50–60
Hz. DS60 shelves use 200–240 VAC and 50–60 Hz.
Each component is intended to operate with all working power supplies installed.
Provide a suitable power source with electrical overload protection.
A safe electrical earth connection must be provided to each power cord. Check the grounding of the
power sources before applying power.
The plug on each power supply cord is used as the main device to disconnect power from the
system. Ensure that the socket outlets are located near the equipment and are easily accessible.
Permanently unplug the unit if you think it is damaged in any way and before moving the system.
DD6300, DD6800, and DD9300 systems include two power supplies. To remove system power
completely, disconnect both power supplies.
The power connections must always be disconnected before removal or replacement of a power
supply module from the system.
A faulty power supply module must be replaced within 24 hours.
Do not lift system components by yourself. DD6300, DD6800, and DD9300 systems weigh up to 80
lbs (36.29 kg), and an ES30 expansion shelf weighs up to 68 lbs (30.8 kg). A DS60 shelf weighs up to
225 lbs (102 kg).
CAUTION: Data Domain systems are heavy. Use at least two people or a mechanical lift to
move any system.
Do not lift an expansion shelf by the front handles on any modules. The handles are not designed to
support the weight of the populated shelf.
To comply with applicable safety, emission, and thermal requirements, covers must not be removed
and all bays must be fitted with plug-in modules.
Once removed from the shipping box, it is OK to lift the system or the chassis.
To prevent the rack from becoming top-heavy, load the rack with storage shelves beginning at the
bottom and the system in the designated location.
Data Domain recommends that you wear a suitable antistatic wrist or ankle strap for ESD protection.
Observe all conventional ESD precautions when handling plug-in modules and components.
Front panel
The front panel contains 12 slots for a mix of 4 TB hard disk drives (HDDs) and 800 GB solid state drives
(SSDs). The exact layout and types of drives vary depending on the specific system model.
Note: Configurations that do not fill all 12 drive slots use filler panels in the empty slots to maintain proper
air flow inside the chassis.
Note: Upgrading a base configuration to an expanded configuration provides less capacity than a factory-
built expanded configuration.
Configuration | Number of SSDs
DD6800 expanded | 4
apex of the triangle points left or right, indicating that disk's status. If the disk drive has a failure, the disk’s
status LED turns from blue to amber, indicating that a drive must be replaced.
The front also contains two system status LEDs. A blue system power LED is lit whenever the system
has power. An amber system fault LED is normally off and lights amber whenever the chassis or any
other FRU in the system requires service.
Back panel
The back panel of the DD6300/DD6800/DD9300 chassis contains the following components:
1. [ ] Management panel
2. [ ] Two 2.5" SSD slots labeled 0 and 1 (populated on DD6300 only)
3. [ ] I/O module slots
4. [ ] Power supply modules (PSU 0 is the lower module, and PSU 1 is the upper module)
Name of LED | Location | Color | Definition
(row truncated in the source) | | | POST: 1 Hz; OS: 4 Hz
Drive Power/Activity LED a | Left LED on the SSD | Blue | Lit blue when the drive is powered; blinks during drive activity.
Drive Fault LED a | Right LED on the SSD | Amber | Lit solid amber when a drive needs service.
System power LED | Right-most LED on the management panel | Blue | SP has good, stable power.
PSU FRU LED - AC Good | Top LED on power supply | Green | AC input is as expected.
PSU FRU LED - DC Good | Middle LED on power supply | Green | DC output is as expected.
PSU FRU LED - Attention | Bottom LED on power supply | Amber | PSU has encountered a fault condition.
Figure 4 I/O module Power/Service LED location
Note: a. For RJ45 networking ports, the standard green link and amber activity LEDs are used.
I/O modules
I/O module slot numbering
The eight I/O module slots are enumerated as Slot 0 (on the left when viewed from the rear) through Slot
7. Ports on an I/O module are enumerated as 0 through 3, with 0 being on the bottom.
Figure 6 I/O module slot numbering
1. Slot 0
2. Slot 1
3. Slot 2
4. Slot 3
5. Slot 4
6. Slot 5
7. Slot 6
8. Slot 7
Since the DD6300, DD6800, and DD9300 are data backup appliances, they are supported only in fixed
configurations. The fixed configurations define the exact slots into which the I/O modules may be
inserted. The processors directly drive all eight I/O module slots, so every slot runs at full performance.
The non-optional SAS, NVRAM, and 10GBaseT I/O modules are allocated to fixed slots. The optional
Host Interface I/O modules are used for front end networking and Fibre Channel connections. The
quantity and type of these I/O modules is customizable, and there are many valid configurations.
Tier | Slot 0 | Slot 1 | Slot 2 | Slot 3 | Slot 4 | Slot 5 | Slot 6 | Slot 7
(The body of this table is garbled in the source. The recoverable content: each optional slot may hold a
Quad Port 10 GBase-T or a Dual Port 16 Gbps Fibre Channel I/O module.)
Note: a. Optional in DD6300 configurations, but required with one or more external storage shelves.
Note: A maximum of three Quad Port 10 GBase-T I/O modules are supported in slots 3-6 because of the
mandatory Quad Port 10 GBase-T I/O module in slot 1.
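The slot rules in the note above can be checked mechanically. A minimal Python sketch, assuming a simple dict of slot number to module type (the function and module-type names are hypothetical, not part of any Data Domain tooling):

```python
def check_10gbaset(slots: dict[int, str]) -> bool:
    """Check the Quad Port 10 GBase-T population rule stated above:
    the module is mandatory in slot 1, and because of that mandatory
    module, at most three more may occupy slots 3-6."""
    if slots.get(1) != "quad-10gbaset":
        return False
    extras = sum(1 for s in (3, 4, 5, 6) if slots.get(s) == "quad-10gbaset")
    return extras <= 3

# Mandatory module in slot 1 plus two optional ones in slots 3 and 4:
print(check_10gbaset({1: "quad-10gbaset", 3: "quad-10gbaset", 4: "quad-10gbaset"}))  # True
```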
The following table defines the rules for populating the I/O module slots.
Table 13 I/O module slot population rules
Storage capacity
Data Domain system internal indexes and other product components use variable amounts of storage,
depending on the type of data and the sizes of files. If you send different datasets to otherwise identical
systems, one system may, over time, have room for more or less actual backup data than another.
Note: For information about Data Domain expansion shelves, see the separate document, Data Domain
Expansion Shelf Hardware Guide.
Memory | Internal disks | Internal storage (raw) | External storage (raw) | Usable data storage space (TB/TiB/GB/GiB) a
(Row truncated in the source.) Internal disks include Rear: 1 x 800 GB SSD; external storage (raw): 48 TB.
Usable: internal drives: 22,000 GB / 20.02 TiB / 20,489 GiB; 12 internal drives: 34 TB / 30.94 TiB /
34,000 GB / 31,665 GiB; external: 43.68 TiB / 48,000 GB / 44,704 GiB.
96 GB (Expanded) | Front: 12 x 4 TB HDDs; Rear: 2 x 800 GB SSD | 48 TB | 180 TB | Internal: 34 TB /
30.94 TiB / 34,000 GB / 31,665 GiB; External: 144 TB / 131 TiB / 144,000 GB / 134,110 GiB
Note: a. The capacity differs depending on the size of the external storage shelves used. This data is
based on ES30 shelves.
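The four capacity units used in the table mix decimal prefixes (TB, GB) and binary prefixes (TiB, GiB). As a quick conversion sketch, starting from the 34,000 GB internal figure in the table:

```python
internal_gb = 34_000                    # decimal gigabytes, from the table
internal_bytes = internal_gb * 10**9

tib = internal_bytes / 2**40            # binary terabytes (TiB)
gib = internal_bytes / 2**30            # binary gigabytes (GiB)

print(round(tib, 2))   # 30.92 (the table lists 30.94 TiB)
print(round(gib))      # 31665, matching the table's 31,665 GiB
```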
Memory | Internal disks (system disks only) | External storage (raw) | Usable data storage space (TB/TiB/GB/GiB) a
(Row truncated in the source; the recoverable content: local metadata storage of 96,000 GB / 89,407 GiB.)
Note: a. The capacity differs depending on the size of the external storage shelves used. This data is
based on ES30 shelves.
b. HA is supported.
c. HA is not supported with Extended Retention.
Note: a. The capacity differs depending on the size of the external storage shelves used. This data is
based on ES30 shelves.
b. HA is supported.
DD6300 system features
Table 17 DD6300 system features
Note: a. The weight does not include mounting rails. Allow 2.3-4.5 kg (5-10 lb) for a rail set.
Table 19 System operating environment
Requirement Description
Ambient temperature 10°C - 35°C; derate 1.1°C per 1,000 ft (304 m)
Relative humidity (extremes) 20–80% noncondensing
Elevation 0 - 7,500 ft (0 - 2,286 m)
Operating acoustic noise LWAd sound power: 7.5 bels
DIMMs overview
Dual in-line memory modules (DIMM) come in various sizes, which must be configured in a certain way.
This topic can help you select the correct configuration when servicing DIMMs.
The storage processor contains two Intel processors each with an integrated memory controller that
supports four channels of memory. The storage processor allows two DIMM slots per channel, so the
storage processor supports a total of 16 DIMM slots.
To ensure maximum memory performance, there are memory DIMM population rules for best memory
loading and interleaving. Table 21 and Table 22 specify the DIMM location rules for various memory
configurations:
Tier | Total Memory | DIMM slots
DD6300 AIO Expanded | 96 GB | 8 GB, N/A, 8 GB, N/A, 8 GB, 8 GB, 8 GB, 8 GB
DD6300 AIO | 48 GB | N/A, N/A, 8 GB, N/A, N/A, 8 GB, N/A, 8 GB
Feature | DD6800 DLH (Base configuration) | DD6800 DLH (Expanded configuration)
(row truncated in the source) | ...availability cluster containing two drives. | ...availability cluster containing four drives.
SAS I/O modules (Quad Port 6 Gbps SAS) | 2 | 2
SAS string depth (max): ES30 | 6 | 6 (7 for extended retention)
SAS string depth (max): DS60 | 3 | 3
SAS string depth (max): ES30 and DS60 | 5 shelves total | 5 shelves total
b. DD Cloud Tier requires two ES30 shelves fully populated with 4 TB drives to store DD Cloud Tier
metadata.
Note: a. The weight does not include mounting rails. Allow 2.3-4.5 kg (5-10 lb) for a rail set.
Requirement Description
Ambient temperature 10°C - 35°C; derate 1.1°C per 1,000 ft (304 m)
Relative humidity (extremes) 20–80% noncondensing
Elevation 0 - 7,500 ft (0 - 2,286 m)
Operating acoustic noise LWAd sound power: 7.5 bels
Figure 8 CPU and memory locations
DIMMs overview
Dual in-line memory modules (DIMM) come in various sizes, which must be configured in a certain way.
This topic can help you select the correct configuration when servicing DIMMs.
The storage processor contains two Intel processors each with an integrated memory controller that
supports four channels of memory. The storage processor allows two DIMM slots per channel, so the
storage processor supports a total of 16 DIMM slots.
Tier | Total Memory | Slot 8 | Slot 9 | Slot 10 | Slot 11 | Slot 12 | Slot 13 | Slot 14 | Slot 15
DD6800 DLH | 192 GB | 16 GB | 8 GB | 16 GB | 8 GB | 8 GB | 16 GB | 8 GB | 16 GB
DD6800 DLH Extended Retention/DD Cloud Tier | 192 GB | 16 GB | 8 GB | 16 GB | 8 GB | 8 GB | 16 GB | 8 GB | 16 GB
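As a sanity check on the DIMM sizes listed for slots 8-15, they sum to half of the 192 GB total. Assuming slots 0-7 (the other CPU's bank, not shown in this excerpt) mirror the same layout, the totals agree:

```python
# DIMM sizes (GB) for slots 8-15, as listed in the table above.
cpu1_dimms_gb = [16, 8, 16, 8, 8, 16, 8, 16]

per_cpu = sum(cpu1_dimms_gb)
total = 2 * per_cpu   # assumption: the other CPU's slots mirror this layout

print(per_cpu, total)   # 96 192
```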
Feature | DD9300 DLH (Base configuration) | DD9300 DLH (Expanded configuration)
SAS string depth (max): DS60 | 3 | 3
SAS string depth (max): ES30 and DS60 | 5 shelves total | 5 shelves total
b. DD Cloud Tier requires four ES30 shelves fully populated with 4 TB drives to store DD Cloud Tier
metadata.
Note: a. The weight does not include mounting rails. Allow 2.3-4.5 kg (5-10 lb) for a rail set.
Requirement Description
Ambient temperature 10°C - 35°C; derate 1.1°C per 1,000 ft (304 m)
Relative humidity (extremes) 20–80% noncondensing
Elevation 0 - 7,500 ft (0 - 2,286 m)
Operating acoustic noise LWAd sound power: 7.5 bels
Figure 9 CPU and memory locations
DIMMs overview
Dual in-line memory modules (DIMM) come in various sizes, which must be configured in a certain way.
This topic can help you select the correct configuration when servicing DIMMs.
The storage processor contains two Intel processors each with an integrated memory controller that
supports four channels of memory. The storage processor allows two DIMM slots per channel, so the
storage processor supports a total of 16 DIMM slots.
DD9300 memory DIMM configuration
Table 34 Memory locations - CPU 1
CAUTION: Data Domain systems are heavy. Always use two people or a mechanical lift to
move a system.
3. [ ] Remove expansion shelves and their bezels from the shipping packages.
A cable management assembly (CMA), for organization of cables at the rear of the system, is already
installed onto the system on a Data Domain rack. For field installed systems, the CMA is shipped with the
system.
Note: The designated slots in the rack are the recommended location for the DD6300, DD6800, and
DD9300 systems to support the cabling described in this document. Other locations may require different
cable lengths for some configurations.
1. [ ] If the EIA rail mounting holes are 7.1 mm diameter round holes, or M5, 12-24, or 10-32 threaded
holes, install the filler using the pin as shown. If not, proceed to the next step.
Once the filler is installed on the rail, the installation can continue as follows.
2. [ ] At the front of the cabinet, insert the two adaptors on the front of the rail into the correct holes in
the 2U space.
3. [ ] Insert one screw into the lower hole to hold the front of the rails in place. Do not fully tighten the
screw at this time.
Note: An 18-inch screwdriver (minimum) is required to install the screw into the rear of the rails.
4. [ ] At the rear of the cabinet, align and insert the two adaptors on the rear of the rail with the
mounting holes in the NEMA channel. Make sure the rail is level.
5. [ ] Use an 18-inch screwdriver (minimum) to secure the rear of the rail to the NEMA channel using
one screw.
6. [ ] Tighten the front screw.
7. [ ] Repeat for the other rail.
CAUTION: Data Domain systems are heavy. Always use two people or a mechanical lift to move a
system.
CAUTION: The system controller should be installed in the pre-defined location for the system
controller in the rack to comply with Data Domain rack mounting guidelines.
Do not apply AC power to the system controller until all expansion shelves and cables are
installed.
Ensure that the PSNT label, which is in a slot just beneath the power supply on the rear of the
chassis, is not damaged or snagged during the installation of the system into the rack.
1. [ ] From the front of the rack, lift the chassis to install the system in the rack in the correct location.
2. [ ] Slide the unit onto the rails and push it fully into the cabinet until the mounting holes on the unit
are flush with the NEMA channel.
3. [ ] Secure the unit to the NEMA channel and rails using four screws, two on each side.
4. [ ] Check the PSNT label in the slot just beneath the power supply at the rear of the chassis.
1. Service tag bracket
2. Locking tab
3. Service tag
1. [ ] Align and insert the CMA tabs in the tongues on the rails and align the plunger in the hole of the
mounting rail on both sides.
2. [ ] Working one side at a time, pull out the plunger and slide the CMA tabs as required until the
plunger pin snaps into the mounting hole of the rail.
Figure 14 Installing the CMA on the rack
3. [ ] Open the velcro straps to route cables through the CMA. Secure the cables in place using the
velcro straps.
4. [ ] To adjust the CMA position depth (in or out), pull inward on the orange latches (1) and pull out or
push in on the arm simultaneously as needed (2).
Note: The I/O modules, the NVRAM module, the power supply units and the 2.5" disks can be
accessed for removal and replacement with the CMA in place. Adjust the depth of the CMA arms to
access these modules.
Installing the expansion shelves into the racks
CAUTION: Data Domain systems are heavy. Always use two people or a mechanical lift to move
and install a Data Domain system. Use caution to install the expansion shelves.
1. [ ] From the front of a rack, lift the shelf to the designated rack location.
2. [ ] Add shelves to the racks in order, one at a time, from the bottom of a rack to the top filling each
string in that rack before going to the next.
Note: Strings in add-on racks may connect to the same string number in other racks.
Shelves are added in the order V1.1, V1.2, V1.3, V1.4, V2.1, V2.2, and so on. Shelves are labeled
VN.M. VN refers to string "N" and the "M" is the number of the shelf in the string. For example, V3.2
refers to the second shelf in the third string.
3. [ ] Secure each expansion shelf in the rack.
4. [ ] When installing an SSD shelf for Data Domain metadata on flash:
The SSD shelf counts towards the total number of shelves connected to the system.
Data Domain recommends installing the SSD shelf in the V1.1 position, but if that is not possible,
the shelf can be placed in a different location in the rack as long as cables of sufficient length are
available.
Note: V1.1 is recommended for better performance because it is the first hop where data is
written. If the SSD shelf is connected to the last enclosure in a chain, each read/write request
must pass through many hops, which introduces latency compared to placing the SSD shelf first
in the chain.
1. [ ] Cable from the B Controller EXPANSION port of the lower shelf to the B controller HOST
port of the next higher shelf.
2. [ ] Then cable from the A Controller HOST port of the lower shelf to the A controller EXPANSION
port of the next higher shelf.
3. [ ] There are no specific placement or cabling requirements for SSD shelves, or the metadata
shelves for DD Cloud Tier configurations. These shelves can be installed and cabled the same way
as standard ES30 shelves. SSD shelves and DD Cloud Tier metadata do not need to be cabled in a
separate set from the other ES30 shelves.
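The VN.M shelf-labeling scheme described above (string N, shelf M within the string) can be parsed mechanically. A small sketch; the helper name is hypothetical, not part of any Data Domain tooling:

```python
import re

def parse_shelf_label(label: str) -> tuple[int, int]:
    """Split a VN.M shelf label into (string_number, shelf_number).
    For example, V3.2 is the second shelf in the third string."""
    m = re.fullmatch(r"V(\d+)\.(\d+)", label)
    if m is None:
        raise ValueError(f"not a VN.M shelf label: {label!r}")
    return int(m.group(1)), int(m.group(2))

print(parse_shelf_label("V3.2"))   # (3, 2)
```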
For HA pairs, the primary and standby nodes use different cables to connect to ES30 shelves. The
primary node uses cables keyed for ES30 host ports, and the standby node uses cables keyed for
ES30 expansion ports.
HD-mini-SAS connector on controller, SFF-8088 connector keyed for host port on ES30
Cable model code Part number Cable length
X-SAS-HDMS2 038-003-810 2 m (79 in)
X-SAS-HDMS3 038-003-811 3 m (118 in)
X-SAS-HDMS5 038-003-813 5 m (196 in)
Table 36 Cables for standby node to ES30 shelf loop
HD-mini-SAS connector on controller, SFF-8088 connector keyed for expansion port on ES30
Cable model Part number Cable length
X-HA-ES30-SAS-2 038-004-108 2 m (79 in)
X-HA-ES30-SAS-5 038-004-111 5 m (196 in)
Mini-SAS cable, SFF-8088 connectors on both ends, one end keyed for host ports and the other keyed for
expansion ports
Cable model Part number Cable length
X-SAS-MSMS1 038-003-786 1 m (39 in.)
X-SAS-MSMS2 038-003-787 2 m (79 in.)
X-SAS-MSMS3 038-003-751 3 m (118 in.)
X-SAS-MSMS4 038-003-628 4 m (158 in.)
X-SAS-MSMS5 038-003-666 5 m (196 in.)
Select the appropriate configuration from the following list, and connect the disk shelves to the Data
Domain controller.
DD6300
DD6800 and DD9300 (single node, DD Cloud Tier, or ERSO)
DD6800 and DD9300 (HA or HA with DD Cloud Tier)
DD6300
The DD6300 system supports a maximum of four shelves, cabled in a single set.
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Figure 16 DD6300 with ES30 shelves
Note: For configurations of 16 SAS shelves or fewer, do not exceed four shelves per set.
String (Loop) I/O - Port Shelf Port Length a
1 I/O 7 - Port 0 B controller HOST port of shelf V1.1 2M
1 I/O 2 - Port 0 A controller HOST port of the highest number shelf in V1 2M
2 I/O 7 - Port 2 B controller HOST port of shelf V2.1 2M
2 I/O 2 - Port 2 A controller HOST port of the highest number shelf in V2 2M
3 I/O 7 - Port 1 B controller HOST port of shelf V3.1 2M
3 I/O 2 - Port 1 A controller HOST port of the highest number shelf in V3 2M
4 I/O 7 - Port 3 B controller HOST port of shelf V4.1 3M
4 I/O 2 - Port 3 A controller HOST port of the highest number shelf in V4 3M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
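The port pairings in the ES30 cabling table above follow a fixed per-string pattern: the same controller port number is used on I/O module 7 (to the B HOST port of shelf VN.1) and on I/O module 2 (to the A HOST port of the highest shelf in VN). A small lookup sketch; the names are hypothetical illustrations, not Data Domain tooling:

```python
# String number -> controller port used on both I/O module 7 and I/O
# module 2, per the cabling table above.
ES30_STRING_TO_PORT = {1: 0, 2: 2, 3: 1, 4: 3}

def cabling_for_string(n: int) -> list[str]:
    """Return the two cable runs for ES30 string n."""
    port = ES30_STRING_TO_PORT[n]
    return [
        f"I/O 7 - Port {port} -> B controller HOST port of shelf V{n}.1",
        f"I/O 2 - Port {port} -> A controller HOST port of the highest shelf in V{n}",
    ]

print(cabling_for_string(2)[0])   # I/O 7 - Port 2 -> B controller HOST port of shelf V2.1
```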
Figure 17 DD6800 and DD9300 with ES30s, single node, DD Cloud Tier, or ER
Note: For configurations of 16 SAS shelves or fewer, do not exceed four shelves per set.
Table 38 Primary node cabling instructions
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Figure 18 DD6800 and DD9300 with ES30s and HA or HA with DD Cloud Tier
2. [ ] There are no specific placement or cabling requirements for SSD shelves. These shelves can be
installed and cabled the same way as standard ES30 shelves.
3. [ ] The SSD shelf counts towards the total number of shelves connected to the system.
4. [ ] Data Domain recommends installing the SSD shelf in the V1.1 position, but if that is not possible,
the shelf can be placed in a different location in the rack as long as cables of sufficient length are
available.
Note: V1.1 is recommended for better performance because it is the first hop where data is written.
If the SSD shelf is connected to the last enclosure in a chain, each read/write request must pass
through many hops, which introduces latency compared to placing the SSD shelf first in the chain.
5. [ ] Use the cable management assembly to support and organize all cables.
Select the appropriate configuration from the following list, and connect the disk shelves to the Data
Domain controller.
DD6300
DD6800 and DD9300
DD6800 and DD9300 with HA
DD6800 with DD Cloud Tier
DD6800 with HA and DD Cloud Tier
DD9300 with DD Cloud Tier or HA and DD Cloud Tier
DD6800 and DD9300 with ERSO
DD6300
String (Loop) I/O - Port Shelf Port Length a
1 I/O 7 - Port 0 A controller port 0 of the DS60. 2M
1 I/O 7 - Port 1 B controller port 0 of the DS60. 2M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD6800 and DD9300
String (Loop) I/O - Port Shelf Port Length a
1 I/O 7 - Port 0 A controller port 0 of shelf V1.1 2M
1 I/O 2 - Port 0 B controller port 0 of the highest number shelf in V1 2M
2 I/O 7 - Port 1 A controller port 0 of shelf V2.1 2M
2 I/O 2 - Port 1 B controller port 0 of the highest number shelf in V2 2M
3 I/O 7 - Port 2 A controller port 0 of shelf V3.1 2M
3 I/O 2 - Port 2 B controller port 0 of the highest number shelf in V3 2M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD6800 and DD9300 with HA
Table 41 Primary node cabling instructions
String (Loop) I/O - Port Shelf Port Length a
2 I/O 7 - Port 1 A controller port 0 of shelf V2.1 2M
2 I/O 2 - Port 1 B controller port 0 of the highest number shelf in V2 2M
3 I/O 7 - Port 2 A controller port 0 of shelf V3.1 2M
3 I/O 2 - Port 2 B controller port 0 of the highest number shelf in V3 2M
4 I/O 7 - Port 3 A controller HOST port of the SSD shelf 2M
4 I/O 2 - Port 3 B controller HOST port of the SSD shelf 2M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD6800 with DD Cloud Tier
String (Loop) I/O - Port Shelf Port Length a
1 I/O 7 - Port 0 A controller port 0 of shelf V1.1 2M
1 I/O 2 - Port 0 B controller port 0 of the highest number shelf in V1 2M
2 I/O 7 - Port 1 A controller HOST port of the second metadata shelf 2M
2 I/O 2 - Port 1 B controller HOST port of the first metadata shelf 2M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD6800 with HA and DD Cloud Tier
Table 43 Primary node cabling instructions
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD9300 with DD Cloud Tier or HA and DD Cloud Tier
Table 45 Primary node cabling instructions
String (Loop) I/O - Port Shelf Port Length a
2 I/O 7 - Port 1 A controller port 0 of shelf V2.1 2M
2 I/O 2 - Port 1 B controller port 0 of the highest number shelf in V2 2M
3 I/O 7 - Port 2 A controller port 0 of shelf V3.1 2M
3 I/O 2 - Port 2 B controller port 0 of the highest number shelf in V3 5M
4 I/O 7 - Port 3 A controller HOST port of the SSD shelf 2M
4 I/O 2 - Port 3 B controller HOST port of the SSD shelf 2M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
DD6800 and DD9300 with ERSO
String (Loop) I/O - Port Shelf Port Length a
1 I/O 7 - Port 0 A controller port 0 of shelf V1.1 2M
1 I/O 2 - Port 0 B controller port 0 of the highest number shelf in V1 2M
2 I/O 7 - Port 1 A controller port 0 of shelf V2.1 2M
2 I/O 2 - Port 1 B controller port 0 of the highest number shelf in V2 2M
3 I/O 7 - Port 2 A controller port 0 of shelf V3.1 2M
3 I/O 2 - Port 2 B controller port 0 of the highest number shelf in V3 5M
4 I/O 7 - Port 3 A controller port 0 of shelf V4.1 5M
4 I/O 2 - Port 3 B controller port 0 of the highest number shelf in V4 5M
Note: a. Cable lengths shown are designed for Data Domain racks. Longer cables (up to 5M) can be
used.
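As a double-check before racking, the cabling table above can be encoded as data and printed as a plan. The following Python sketch is purely illustrative (it is not an EMC tool); the tuples mirror the ERSO table rows, and the helper names are our own:

```python
# Hypothetical sketch: encode the DD6800/DD9300 ERSO cabling table as data
# so a cabling plan can be printed and sanity-checked before installation.
# Rows mirror the table above: (string, controller port, shelf port, cable length in meters).
CABLING = [
    (1, "I/O 7 - Port 0", "A controller port 0 of shelf V1.1", 2),
    (1, "I/O 2 - Port 0", "B controller port 0 of the highest number shelf in V1", 2),
    (2, "I/O 7 - Port 1", "A controller port 0 of shelf V2.1", 2),
    (2, "I/O 2 - Port 1", "B controller port 0 of the highest number shelf in V2", 2),
    (3, "I/O 7 - Port 2", "A controller port 0 of shelf V3.1", 2),
    (3, "I/O 2 - Port 2", "B controller port 0 of the highest number shelf in V3", 5),
    (4, "I/O 7 - Port 3", "A controller port 0 of shelf V4.1", 5),
    (4, "I/O 2 - Port 3", "B controller port 0 of the highest number shelf in V4", 5),
]

def connections_for_string(string_no):
    """Return the cable runs (A side and B side) for one shelf string."""
    return [row for row in CABLING if row[0] == string_no]

def max_cable_length():
    """Longest cable in the plan; must not exceed the 5M qualified maximum."""
    return max(row[3] for row in CABLING)

if __name__ == "__main__":
    for s in range(1, 5):
        for _, ctrl, shelf, length in connections_for_string(s):
            print(f"String {s}: {ctrl} -> {shelf} ({length}M)")
```

Each string should resolve to exactly two runs (one per controller), and no run should exceed the 5M maximum noted in the table footnote.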
Task 4: Connecting the HA interconnect
The HA interconnect consists of a 10 GbE I/O module in slot 1 of each node in the HA pair. This
connection between the two nodes provides the standby node with the information needed to fail over if
the active node fails, and to maintain the connections to hosts and clients after the failover is
complete.
Note: The interconnect IP address is automatically configured with the IPv6 prefix d:d:d:d:d:/80.
If there is an IP conflict, set the registry key config.net.interconnect_ip6prefix.
1. [ ] Refer to the diagram for the port connections.
Figure 19 HA interconnect
2. [ ] Cable port 0 of the interconnect I/O module in node 0, slot 1 to port 0 of the interconnect I/O
module in node 1, slot 1.
3. [ ] Cable port 1 of the interconnect I/O module in node 0, slot 1 to port 1 of the interconnect I/O
module in node 1, slot 1.
a. If using 1 Gb copper Ethernet, attach a Cat 5e or Cat 6 copper Ethernet cable to an RJ-45
Ethernet network port (start with ethMa and go up).
b. If using 10 Gb copper Ethernet with an SFP+ connector, use a qualified SFP+ copper cable.
c. If using 1/10 Gb fiber Ethernet, use MMF-850nm cables with LC duplex connectors.
d. For 10GBaseT connections, use Cat6a S-STP Ethernet cables.
2. [ ] Enable data transfer Fibre Channel (FC) connectivity. Repeat for each connection.
a. Attach a Fibre Channel fiber optic cable (LC connector) to an I/O module port on the controller,
and attach the other end (LC connector) to an FC switch or to an FC port on your server.
1. [ ] Connect power cables to each expansion shelf receptacle and attach the retention clips.
2. [ ] Connect power to each expansion shelf; the shelves power on as soon as they are plugged in.
Ensure that the power cables of each shelf are connected to different power sources.
Note: Wait approximately 3 minutes after all expansion shelves are powered on before powering on
the controller.
3. [ ] Connect power to the controller; the system powers on as soon as it is plugged in. The first boot
may take several minutes to complete.
Note: DD6300, DD6800, and DD9300 systems should be powered from redundant AC sources.
Redundant power sources allow one AC source to fail or be serviced without impacting system
operation. PSU0 should be attached to one AC source. PSU1 should be attached to the other AC
source.
a. Connect power cables to each receptacle and attach the retention clips.
b. Ensure that each power supply is connected to a different power source.
1. [ ] Connect an administrative console to the serial port on the back panel of the system.
Note: You must use a baud rate of 115200 for the system to work correctly; a 9600 baud rate does not
work.
Launch a terminal emulation program from your computer and configure the following communication
settings:
Setting | Value
Baud rate | 115200
Data bits | 8
Stop bits | 1
Parity | None
Flow control | None
Emulation | VT-100
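On a Linux or macOS laptop, the settings above can be applied with a terminal program such as screen or minicom. The device path below is an assumption and depends on your USB-to-serial adapter; check your system's device listing for the actual name:

```shell
# Connect to the controller's serial port at the required 115200 baud.
# /dev/ttyUSB0 is a placeholder -- substitute your adapter's device path.
screen /dev/ttyUSB0 115200

# Equivalent with minicom (8 data bits, no parity, 1 stop bit = 8N1 defaults):
minicom --device /dev/ttyUSB0 --baudrate 115200
```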
Note: If you do not see the prompt on your terminal to log in, then complete Step 4.
3. [ ] Verify the front blue power LED (blue square) is on. If it is not, make sure the power cables are
fully seated at both ends, and both AC sources are on.
Note: The initial username is sysadmin and the initial password is the system serial number.
4. [ ] Type the default password, which is the system serial number. The Product ID/SN tag is
attached beneath the power supply on the rear panel of the system.
Password: system_serial_number
Note: If you type an incorrect password four consecutive times, the system locks out the specified
username for 120 seconds. The login count and lockout period are configurable and might be different on
your system. See the Data Domain Operating System Administration Guide and the Data Domain
Operating System Command Reference Guide for setting these values.
For Data Domain HA systems, SSH keys created on the active node take 30 seconds to one minute to
propagate to the standby node.
The customer can later type the following to redisplay the EULA and accept it:
system show eula
Note: You can begin the CLI configuration wizard manually by typing config setup.
5. [ ] Enable and configure each Ethernet interface. Accept or decline DHCP for each interface. If the
port does not use DHCP to discover network parameters automatically, enter the information
manually.
Ethernet port eth0a
Enable Ethernet port eth0a (yes|no|?) [yes]:
no
-------
Do you want to save these settings (Save|Cancel|Retry):
Note: You can also use the Data Domain (DD) System Manager GUI to configure the system
parameters. Open a web browser, and enter your Data Domain system's IP address in the browser's
address text box. Log in when the DD System Manager login screen displays. Use the DD System
Manager online help for more information.
The 'system reboot' command reboots the system. File access is interrupted
during the reboot.
Are you sure? (yes|no|?) [no]: yes
ok, proceeding.
The system is going down for reboot.
7. [ ] After the system completes the reboot, log in again as sysadmin, using the serial number as the
password. Press Ctrl-C to skip the EULA, the sysadmin password prompt, and the config setup
wizard.
8. [ ] Generate an autosupport and send it to yourself to use as ACG input:
# autosupport send your.email@emc.com
OK: Message sent.
Task 8: Configure HA
The HA interconnect between both nodes is connected.
The data connections on both nodes are connected.
Configure the two nodes as an HA pair.
Note: Configuring an HA pair sets the system password on the standby node to match the system
password on the active node. However, the passwords are not synchronized until the HA configuration is
complete. If the HA configuration fails, or if you need to access either node before the HA
configuration is complete, use the serial number of each node as its password.
The ha create command will fail if one node is configured to use DHCP and the other node is
configured to use static IP addresses. Both nodes must use the same method to configure IP
addresses.
Note: The net config command with the float option is the only way to configure a floating IP address.
There is no method available in Data Domain System Manager to configure a floating IP address.
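As a hypothetical illustration only, a floating-IP configuration on the CLI might look like the following; the interface name, address, and netmask are placeholders, and the exact option syntax for the float option should be verified in the Data Domain Operating System Command Reference Guide:

```
# Sketch of configuring a floating IP address with net config.
# eth0a, the address, and the netmask below are placeholders.
net config eth0a 192.168.10.21 netmask 255.255.255.0 type floating
```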