Cisco Collaboration Infrastructure Requirements


Introduction

General

This page summarizes Hardware requirements and Virtualization Software requirements for Cisco Collaboration applications.

Requirements on this page only apply to these software release combinations:

  • CSR 12.7 and subsequent 12.x releases on supported VMware vSphere ESXi versions 7.0 or higher (e.g. UCM 12.5 SU2+ on ESXi 7.0).
  • CSR 14, 15 and subsequent releases on supported VMware vSphere ESXi versions 6.7 or higher (e.g. UCM 15 on ESXi 8.0/7.0 or UCM 14 on ESXi 8.0/7.0/6.7).
All requirements must be met in order to receive Cisco technical support on the Cisco Collaboration application when it is running on virtualized hardware.
  1. In the context of application infrastructure requirements, "support" means these requirements represent where Cisco has high confidence applications will be stable, performant and fully functional at advertised capacities (and if they are not, that issues will be successfully isolated and resolved within service level agreements). Collaboration apps are mission-critical, resource-intensive and latency-sensitive. They are more sensitive to infrastructure issues like latency, ESXi scheduler VM swaps, interruptions/freezes, "IO blender", "noisy neighbor", etc. Cisco requirements are to prevent these types of issues from impacting application availability to end-users.
  2. In the context of when Cisco TAC will engage, "support" is demarcated to:
    • Products purchased from Cisco ... Cisco-provided applications, Cisco-provided VMware software (as OEM), Cisco-provided hardware. Cisco TAC does not engage on issues isolated to 3rd-party-provided/supported hardware or 3rd-party-provided/supported software.
    • Cisco-provided software releases that are not past their Last Day of Support, running on virtualization software releases and hardware that have also not yet reached their end of support. Note end of life dates for hardware products and software releases are out of scope for this doc.
    • Cisco-provided hardware/software with valid, paid-up maintenance contracts (e.g. active Collaboration Flex subscription or Software Support Service contract on perpetual licenses that include UCM)
  3. Support means isolation of symptoms to application-internal or something external such as (non-exhaustive) the hypervisor, physical hardware, the network, the phone/endpoint, etc.
  4. Support means root cause and "fix" for an issue need to come from wherever the problem is isolated to. Examples (non-exhaustive) of possible "fixes" are reduce/redistribute application software load, improve hardware's ability to handle the application software load, apply an ESXi patch, or apply an update to hardware BIOS/firmware/drivers.
These requirements are for "common virtualization", meaning baseline common to all Cisco Collaboration applications.

Some applications may have application-specific rules that are different. See documentation of those applications for details. Examples (non-exhaustive):

  • E.g. CPU requirements are different for CMS on dedicated hardware vs. CMS on shared hardware.
  • E.g. UCCE has application-specific VM placement rules, vs. UCM's rules.
  • E.g. Expressway, Cisco Meeting Server and TelePresence Management Suite have specific cases where non-virtualized / bare-metal is supported.
Note: Cisco Business Edition 6000/7000 appliances are fixed hardware configurations that comply with these requirements for particular ranges of application capacity and VM mixes. Appliance hardware builds are fixed and may NOT be changed from what was shipped unless specifically indicated.


Solution Special Requirements


If your deployment is using one of the solutions below, there are special rules that may override the requirements in this document.
  • Co-residency of Cisco Collaboration apps with other Cisco or 3rd-party apps
    • Unless otherwise indicated, Collaboration applications allow co-residency with other Cisco Collaboration apps, Cisco non-Collaboration apps and 3rd-party apps as long as rules on this page for VM placement and min hardware spec are followed for all applications. See each Collaboration application's documentation for any application-specific rules and restrictions on co-residency (e.g. UCCE and CWMS have restrictions here).
    • If you are using a Cisco Business Edition 6000/7000 appliance with an embedded virtualization license, then choice and quantity of 3rd-party applications and Cisco non-Collaboration applications are restricted due to licensing. See the Business Edition 6000/7000 documentation for more details.
  • If the infrastructure is Business Edition 6000S (M2) appliance model, deployment models are restricted due to technical factors:
    • Only application versions 12.x and under are supported.
    • Co-resident application VMs are restricted:
      • Maximum 150 users and 300 devices.
      • These 5 VMs:
        • One Prime Collaboration Provisioning Small VM
        • One UCM 150 user VM
        • One Unity Connection 1000 user VM (1vcpu)
        • One IM&Presence 150 user VM
        • One Paging Server VM
      • Or these 4 VMs:
        • One Prime Collaboration Provisioning Small VM
        • One UCM 150 user VM
        • One Unity Connection 1000 user VM (2vcpu) used for EITHER Single Inbox or IMAP.
        • Either one IM&Presence 1000 user VM or one Paging Server 1000 user VM


Hardware/Software Compatibility Requirements

Supported Infrastructure

  • VMware vSphere ESXi is mandatory, and is the only supported virtualization environment for Cisco Collaboration applications.
    • The following are not supported:
      • Cisco Collaboration applications do not support non-virtualized / physical / bare-metal installation on any physical hardware except where specifically indicated in app-specific documentation (e.g. Cisco Expressway is supported bare-metal on CE1x00 appliance; Cisco Meeting Server is supported bare-metal on CMS 2000 appliance).
      • Cisco Collaboration applications do not support hypervisors that are not VMware vSphere ESXi (e.g. VMware Cloud Foundation, Microsoft Hyper-V, Citrix Xen, Red Hat Virtualization, etc. are not supported).
      • Cisco Collaboration applications do not support any 3rd-party public cloud infrastructure as a service (IaaS) offer. Including but not limited to:
        • Any 3rd-party public cloud offers based on VMware Cloud Foundation (e.g. VMware Cloud on AWS, Azure VMware Solution, Google Cloud VMware Engine and others are not supported).
        • Any 3rd-party public cloud offers based on non-VMware technology (e.g. Amazon Web Services [AWS], Microsoft Azure, Dell Apex Cloud Platform for Microsoft Azure, Google Cloud Platform and others are not supported).
        • Any on-premises "hybrid cloud" extensions of 3rd-party public cloud infrastructure (e.g. VMware Cloud on Dell EMC VxRail, Amazon AWS Outposts, Microsoft Azure Stack, Dell Azure Stack HCI, GKE On-prem and others are not supported).
        • Any other type of public cloud offer (e.g. IBM Cloud and others are not supported).
      • Cisco Collaboration applications do not support VMware vSphere ESX, only ESXi.
    • Licensing options for VMware vSphere ESXi
      • If hardware is customer-provided / 3rd-party, VMware licensing must be customer-provided.
      • If hardware is Cisco Business Edition 6000/7000 appliance M6 or later, a general-purpose license must be used. Appliances of M5 or older generations may have used a license from an embedded virtualization commercial offer, but all of those offers are in EOL now (see below).
      • If hardware is general-purpose / non-appliance Cisco UCS or HyperFlex, a general-purpose license must be used.
      • For details on general-purpose virtualization licenses, see the following:
        • Cisco UCS Spec Sheet or Cisco HyperFlex Spec Sheet for the hardware model of interest (e.g. C220 M5 SFF, C240 M5 SFF, HX220c M5, B200 M5, etc.).
        • VMware.com comparison of vSphere editions (available on this web page at time of this writing: https://www.vmware.com/products/vsphere.html#compare).
        • General-purpose licenses may be customer-provided instead of purchased from Cisco.
        • General-purpose licenses can be either purchased from and supported by Cisco, or purchased from and supported by a 3rd-party (VMware or other vendor). If purchased from and supported by Cisco (e.g. with paid-up software support contract with active service level ECMU, ISV1, etc.) then Cisco will provide support for ESXi. If purchased from and supported by 3rd-party (e.g. direct from VMware) then ESXi support is provided by the 3rd-party vendor.
      • For details on legacy embedded virtualization licenses, see the tables below as well as the following documents (note all embedded virtualization commercial offers have entered EOL):

  

Cisco Business Edition Embedded Virtualization Basic 7.x / Basic Plus 7.x / Enhanced 7.x

  • Offer's Product ID: Basic – BE6K-VIRTBAS-7X; Basic Plus – BE6/7K-VIRTBASP-7X; Enhanced – BE6/7K-VIRTENH-7X
  • Status (all three offers): End of Sale (see bulletin EOL13629); all versions End of Support March CY25
  • Availability & Supported Hardware: Basic – BE6000 M5 appliance only; Basic Plus – BE6000 / BE7000 M5 appliance only; Enhanced – BE6000 / BE7000, CMS1000 M5 appliance only
  • Versions offered: 7.x, 8.0 (last). No version downgrades.
  • Supported Applications: see co-residency policy requirements at http://www.cisco.com/c/en/us/support/unified-communications/business-edition-6000/products-implementation-design-guides-list.html (restricted to Cisco Collaboration and limited 3rd-party applications)
  • Max Virtual Machine specs: Basic – max 8vcpu per VM; Basic Plus – max 32vcpu per VM; Enhanced – max vcpu per VM same as general-purpose vSphere Standard Edition
  • Enabled Features (all three offers): ESXi APIs for Cisco Prime Collaboration Deployment, SNMP, Embedded Host Client
    • Basic: advanced features not included (vCenter, HA, vMotion, DRS, etc.)
    • Basic Plus: advanced features not included (HA, vMotion, DRS, etc.)
    • Enhanced: other features from vSphere Standard Edition included
  • License Logistics
    • Type of license key: preactivated, hardcoded to 2-CPU. Not a VMware Partner Activation Code (PAC); not manageable via myvmware.com. Not a cisco.com Product Authorization Key (PAK).
    • Fulfillment after order ships: see Business Edition 6000 or 7000 Installation Guide (physically delivered as factory-preload on appliance; multiple appliances will use the same initial license key)
    • Install: see Business Edition 6000 or 7000 Installation Guide (already factory-loaded on appliance)
    • Rebuild / Reinstall: contact Cisco licensing operations/support if you need the PDF of license docs with the license key, then see Business Edition 6000 or 7000 Installation Guide (do NOT enter the key into any vCenter; do NOT attempt to apply the key to the appliance via vCenter; apply the key directly to the appliance using Embedded Host Client)
    • Version upgrade: active SWSS or Flex on-premises required – see Business Edition 6000 / 7000 Ordering Guide (partner-level access). Obtain the new version key from My Cisco Entitlements (MCE). Obtain installation media from vmware.com (Cisco UCS-specific images).
  • Support Logistics
    • Active SWSS required – see Business Edition 6000 / 7000 Ordering Guide (partner-level access).
    • Call Cisco technical support only. For optimal routing to the best support team, open the service request with keyword "Business Edition", not "VMware".
    • Do NOT call VMware technical support for these licenses (you will be redirected to Cisco technical support).

  

Cisco UC Virtualization Hypervisor Plus 6.x (preloaded) / Cisco UC Virtualization Foundation 6.x / Cisco Collaboration Virtualization Standard 6.x

  • Offer's Product ID: Hypervisor Plus – VMW-VS6-HYPPLS-K9; Foundation – VMW-VS6-FND-K9; Collaboration Virtualization Standard – VMW-VS6-CVSTD-K9
  • Status (all three offers): all versions End of Support November 2023 (see bulletin EOL13450)
  • Availability & Supported Hardware: Hypervisor Plus – BE6000 appliance only; Foundation – BE6000 or BE7000 appliance only; Collaboration Virtualization Standard – BE6000 / BE7000, CMS1000 appliance only
  • Versions offered: none (all offered versions 6.x and last 7.0 are end of support; no 8.0+ on this offer)
  • Supported Applications: see co-residency policy requirements at http://www.cisco.com/c/en/us/support/unified-communications/business-edition-6000/products-implementation-design-guides-list.html (restricted to Cisco Collaboration and limited 3rd-party applications)
  • Max Virtual Machine specs: Hypervisor Plus – max 8vcpu per VM; Foundation – max 32vcpu per VM; Collaboration Virtualization Standard – max 64vcpu per VM
  • Enabled Features (all three offers): ESXi APIs for Cisco Prime Collaboration Deployment, SNMP, Embedded Host Client
    • Hypervisor Plus: advanced features not included (vCenter, HA, vMotion, DRS, etc.)
    • Foundation: advanced features not included (HA, vMotion, DRS, etc.)
    • Collaboration Virtualization Standard: other features from vSphere Standard Edition included
  • License Logistics
    • Type of license key: preactivated, hardcoded to 2-CPU. Not a VMware Partner Activation Code (PAC); not manageable via myvmware.com. Not a cisco.com Product Authorization Key (PAK).
    • Fulfillment after order ships: end of sale.
    • Install: see Business Edition 6000 / 7000 Installation Guide (already factory-loaded on appliance)
    • Rebuild / Reinstall: contact Cisco licensing operations/support if you need the PDF of license docs with the license key, then see Business Edition 6000 or 7000 Installation Guide (do NOT enter the key into any vCenter; do NOT attempt to apply the key to the appliance via vCenter; apply the key directly to the appliance using Embedded Host Client)
    • Version upgrade: no longer possible due to End of Support.
  • Support Logistics
    • Active SWSS or Flex required – see Business Edition 6000 / 7000 Ordering Guide (partner-level access).
    • Call Cisco technical support only. For optimal routing to the best support team, open the service request with keyword "Business Edition", not "VMware".
    • Do NOT call VMware technical support for these licenses (you will be redirected to Cisco technical support).

  

Cisco UC Virtualization Hypervisor 5.x (license-only) / Cisco UC Virtualization Foundation 5.x

  • Offer's Product ID: Hypervisor – VMW-VS5-HYP-K9, VMW-VS5-HYP-USEL; Foundation – R-VMW-UC-FND5-K9
  • Status (both offers): all versions End of Support September 2020 (see bulletin EOL11590)
  • Availability & Supported Hardware: Hypervisor – BE6000 M4/older and BE7000 M4/older appliances only; Foundation – BE6000, BE7000, MM410v, MM410vB appliances only
  • Versions offered: none (all offered versions 5.x and last 6.x are end of support; no 7.0+ on this offer)
  • Supported Applications: see co-residency policy requirements at http://www.cisco.com/c/en/us/support/unified-communications/business-edition-6000/products-implementation-design-guides-list.html (restricted to Cisco Collaboration and limited 3rd-party applications)
  • Max Virtual Machine specs: Hypervisor – max 8vcpu per VM; Foundation – max 32vcpu per VM
  • Enabled Features (both offers): ESXi APIs for Cisco Prime Collaboration Deployment, SNMP, Local Admin Client
    • Hypervisor: advanced features not included (HA, vMotion, DRS, etc.)
    • Foundation: advanced features not included (vCenter, HA, vMotion, DRS, etc.)
  • License Logistics
    • Type of license key: not a VMware Partner Activation Code (PAC); not manageable via myvmware.com. Not a cisco.com Product Authorization Key (PAK).
    • Fulfillment after order ships: end of sale.
    • Install: see Business Edition 6000 or 7000 Installation Guide (already factory-loaded on appliance)
    • Rebuild / Reinstall: contact Cisco licensing operations/support if you need the PDF of license docs with the license key, then see Business Edition 6000 or 7000 Installation Guide (do NOT enter the key into any vCenter; do NOT attempt to apply the key to the appliance via vCenter; apply the key directly to the appliance using Embedded Host Client)
    • Version upgrade: no longer possible due to End of Support.
  • Support Logistics
    • End of support.

  • Application rules determine VM count & VM configurations/specs:
    • Follow Application sizing rules & examples in Cisco Preferred Architectures for Collaboration (PA), Cisco Validated Designs (CVD), Solution Reference Network Design Guides (SRND) and Collaboration Sizing Tool (CST). Cisco applications do not scale by solely editing VM specs or solely adding VMs, so be careful to consult design documents before making these kinds of changes.
    • VMs for applications must be deployed from the Cisco-provided OVA. Most versions of most applications will require you to use one of a set of fixed-configuration VMs for initial deployment. Some applications like Cisco Meeting Server will instead define absolute minimum VM specs/settings, and then how to change those based on desired application capacity. Application instructions must be followed to avoid issues later.
    • Changes to VM specs/settings must follow application rules defined in their technical documentation (e.g. readme of its Cisco-provided OVA file, or application Install & Upgrade Guides). Each application will specify its rules for mandatory vs. discretionary VM specs / settings, what may or may not be changed, and how to ensure any changes remain aligned with application sizing and infrastructure sizing.
  • Compatibility:
    • Hardware may be Cisco or 3rd-party. For example (non-exhaustive), Cisco Collaboration apps are supported on the following infrastructure if it is compliant with app rules for compatibility, min HW spec and VM placement:
      • Cisco UCS with local DAS and/or 3rd-party SAN/NAS (e.g. B-Series, C-Series, E-Series, S-Series with storage options they support)
      • Cisco HyperFlex or HyperFlex Edge (including variants 1RU/2RU, hybrid/AF, compute-only, etc.)
      • 3rd-party blade/rack compute with local DAS or 3rd-party SAN/NAS
      • 3rd-party hyperconverged infrastructure
    • VMware Compatibility:
      • VMware Compatibility Guide: All hardware components must show as supported on the VMware Compatibility Guide for the app's ESXi version. E.g. if running UCM 12.5 SU2 on ESXi 6.7, the VMware Compatibility Guide must show all desired hardware (chassis, CPU, components, peripherals, etc.) as supported with ESXi 6.7.
      • Cisco-specific ESXi image: if the hardware is Cisco (e.g. UCS or HyperFlex), make sure ESXi is installed, upgraded and updated with what vmware.com calls the "CISCO Custom Image for ESXi". Do not use the general-purpose / "generic" images. See UCS/HyperFlex technical documentation for more details.
      • Required ESXi version:
        • a "fully qualified" ESXi version has at a minimum a major release and a minor release, plus possibly a maintenance release, patch(es), and versions for "VMFS", "vmv" and "VMware Tools" (e.g. ESXi 6.7 U2, vmfs6, vmv15, vmtools 10.3.10). For convenience this will be called the "app's ESXi version".
        • Virtualized hardware must run an ESXi version supported by the Cisco application version
          • ESXi major/minor releases (e.g. "6.5", "6.7", "7.0"): supported releases will be explicitly listed. Unlisted versions are not tested or supported. E.g. UCM 12.5 SU2 supports ESXi 6.7 and 7.0 per Virtualization for Cisco Unified Communications Manager (CUCM).
          • ESXi maintenance releases within a supported ESXi major/minor (e.g. within "6.7", "6.7 U2"): all are supported unless otherwise indicated. Application versions with known incompatibilities will specify in their technical documentation if they do not support certain ESXi maintenance releases, or if they require a minimum maintenance release, or if they only support "up to" a certain maintenance release.
          • ESXi patches within a supported ESXi major/minor/maintenance (e.g. patch 6.7.0d for "L1TF – VMM" mitigation): all are supported unless otherwise indicated, or if they clash with other requirements in this document.
          • ESXi "vmfs" or Virtual Machine File System version within a supported ESXi major/minor/maintenance (e.g. vmfs6 vs. vmfs5 within 6.7 U2): see VMware.com documentation like VMFS Datastores for which VMFS versions are supported by a given ESXi major/minor release. Where multiple VMFS versions are possible, applications will indicate if any VMFS versions are not supported.
          • ESXi "vmv" or Virtual Machine Hardware Versions within a VM on supported ESXi major/minor/maintenance (e.g. vmv13 within 6.7 U2):
            • Application versions will define minimum required vmv. E.g. UCM 12.5 requires minimum vmv13, vs. older versions vmv8.
            • Unless an application otherwise indicates, vmv upgrades are supported, but must be compatible/supported with the app's ESXi version (see VMware KB article).
            • Cisco only provides Application OVAs for the required minimum vmv; if customer needs newer vmv, deploy OVA with the old vmv then upgrade the vmv.
          • ESXi "vmtools" or VMware Tools within a VM on supported ESXi major/minor/maintenance (e.g. vmtools 10 within 6.7 U2):
            • VMware Tools may be either "VMware-native" (provided by VMware ESXi) or "open-vmtools" (provided by guest OS).
            • For VMware-native VMware Tools…
              • Application versions will indicate if any versions are not supported or have known issues.
              • Otherwise follow VMware Product Interoperability Matrices for vmtools version compatibility with ESXi version.
              • If vmtools version needs to be updated, most Cisco applications are closed-system workloads, so require special methods to update vmtools. See Virtualization Software Requirements.
              • Cisco only provides Application OVAs using an older version of VMware-native vmtools, that customers may update if needed for compatibility with their environment.
            • For open-vmtools…
              • Application technical documentation will indicate if they support open-vmtools. E.g. UCM 12.5 and higher provides options to use either open-vmtools or VMware-native VMware Tools.
              • There is no need to manage compatibility / updates of open-vmtools. It is handled as part of the guest OS in the Cisco application workload.
    • Hardware Compatibility:
      • You must also follow any compatibility instructions from the hardware provider(s).
        • E.g. if the hardware is Cisco UCS or HyperFlex (including Fabric Interconnect Switches), you must also follow compatibility instructions on the UCS Hardware and Software Compatibility tool. TAC may require getting current as part of troubleshooting or resolution.
        • E.g. if the hardware is Cisco HyperFlex, you must also follow recommended compatibility instructions in HXDP Release Notes. TAC may require you to get current as part of troubleshooting or resolution.
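
To make the layered version requirements above concrete, here is a minimal illustrative check in Python. The application profile shown (supported ESXi major/minor releases, minimum vmv, allowed VMFS versions) is a hypothetical example loosely modeled on the UCM 12.5 description above, not an authoritative support matrix; always consult the application's own virtualization documentation.

    # Minimal sketch: check a "fully qualified" ESXi version against an app's rules.
    # The app profile below is a hypothetical example, not an official support matrix.
    app_profile = {
        "supported_esxi": {"6.7", "7.0"},   # explicitly listed major/minor releases
        "min_vmv": 13,                      # minimum virtual machine hardware version
        "supported_vmfs": {5, 6},           # VMFS versions usable with the app
    }

    host = {"esxi": "6.7", "vmfs": 6}       # what the host actually runs
    vm = {"vmv": 15}                        # virtual machine hardware version of the VM

    problems = []
    if host["esxi"] not in app_profile["supported_esxi"]:
        problems.append(f"ESXi {host['esxi']} is not an explicitly listed release")
    if vm["vmv"] < app_profile["min_vmv"]:
        problems.append(f"vmv{vm['vmv']} is below the required minimum vmv{app_profile['min_vmv']}")
    if host["vmfs"] not in app_profile["supported_vmfs"]:
        problems.append(f"VMFS{host['vmfs']} is not supported for this combination")

    print("Compatible" if not problems else "; ".join(problems))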

Management Tools

  • Hardware management tools:
    • Business Edition 6000/7000: Note that Cisco IMC Supervisor, UCS Manager and Intersight do not support managing Business Edition 6000/7000 appliances. Customers requiring centralized hardware management should instead consider non-appliance general-purpose Cisco hardware.
    • Otherwise, Cisco Collaboration apps do not prescribe/proscribe how the hardware is managed. Follow guidance from the hardware provider.
  • VMware Embedded Host Client vs. vCenter:
    • Unless specifically indicated by an app or by the infrastructure, VMware vCenter is not a pre-requisite to install, first-time-setup, administer or engage to support Cisco Collaboration apps; the local management client (e.g. Embedded Host Client in ESXi 6.7) is sufficient. Some counter-examples (non-exhaustive):
      • E.g. Cisco HyperFlex mandates vCenter regardless of the apps.
      • E.g. Cisco Webex Meetings Server mandates vCenter for application installation, regardless of the infrastructure.
    • Certain other virtualization features may mandate vCenter based on VMware’s license editions. If you require any of these features, see VMware.com for editions comparison.
    • If you use vCenter, then unless otherwise indicated, Cisco applications do not require their own dedicated vCenter.
    • For effective technical support by Cisco TAC:
      • vCenter recommended for "complex" scenarios: If the customer's virtualization environment is "complex", and/or the issue/symptom being worked is "complex", then VMware vCenter with Statistics Level 4 is recommended to assist TAC with providing effective technical support.
        • For isolation of "complex" issues, if there is no evidence the application is the problem and/or the application logs point to VMware/hardware layer, then the application is ruled out and the problem will be deemed isolated to VMware/hardware layer.
        • For "complex" issues isolated to VMware/hardware layers, without vCenter historical data, root cause analysis may not be possible.
      • "Complex" virtualization environments are defined as:
        • Shared storage (SAN, NAS, HCI)
        • Prevalent 3rd-party infrastructure not provided by Cisco
      • "Complex" issues are defined as:
        • Intermittent issues
        • Performance debugging & other issues where isolation/root cause is not straightforward
        • Issues older than 1 hour, where Cisco Collaboration app is not suspected of root cause, but customer requests root cause analysis.
  • Licensing options for VMware vCenter
    • If hardware is customer-provided / 3rd-party, vCenter licensing must be customer-provided.
    • If hardware is Cisco, general-purpose vCenter licensing must be used (there is no Cisco Collaboration embedded virtualization option for vCenter).
    • For details on general-purpose vCenter licenses, see the following:
      • Cisco UCS Spec Sheet or Cisco HyperFlex Spec Sheet for the hardware model of interest (e.g. C220 M5 SFF, C240 M5 SFF, HX220c M5, B200 M5, etc.).
      • VMware.com comparison of vCenter editions (available on this web page at time of this writing: https://www.vmware.com/products/vcenter-server.html).
      • General-purpose licenses may be customer-provided instead of purchased from Cisco.

Minimum Hardware Specs

CPU / Processor Requirements

General

  • Business Edition 6000/7000: If the hardware is a BE6000/BE7000 appliance, the CPU configuration is fixed and may not be changed. E.g. field-adding an additional CPU is not supported; field-changing the shipped CPU model to another model is not supported.
  • CPU vendor: must be listed in the table below. All applications support Intel. Some versions of some applications also support AMD.
    • Unlisted vendors (like ARM) are not supported.
  • CPU architecture: must be listed in the table below. Supported Intel CPUs must be of a listed x86 Xeon architecture (note Processor-D is not supported). Supported AMD CPUs must be of a listed EPYC architecture.
    • Unlisted architectures are not supported unless specifically indicated in this policy or by an application. Examples of unsupported architectures are non-Xeon CPUs (Pentium, Celeron, etc.), certain older Intel Xeon CPUs (any Nehalem or Westmere model), older AMD EPYC generations and AMD non-EPYC CPUs.
  • CPU model: Supported models are listed in the table below.
    • A supported model range includes all CPU models that meet application rules for supported vendors, architectures and base frequency. E.g. Intel Xeon Gold 6300 includes all 63xx models that meet application requirements.
    • Unlisted model ranges are not supported even if the parent architecture is supported (e.g. Intel Xeon Skylake Bronze 3100 and Intel Xeon Haswell E5-1600v3 are not supported).
    • For deployments of >1000 users or >2500 devices, some processors are not supported regardless of their base frequency, even if the parent architecture is supported. See the tables below.

  

CPU vendor/architecture codename & details on their website
(unlisted vendors/architectures not supported)
Supported CPU model ranges
(unlisted ranges not supported)
Example Model
(Cores / Base Frequency)
Example Cisco Product IDs
Only UCM 11.5+, IMP/Unity Connection/Emergency Responder/SME/PCD 14+, Expressway X14.3+, CPS 14.4.x, CUAC 12.0.x+, UCCX 12.5 SU3+, P/UCCE 12.5+.

Intel Xeon Emerald Rapids
ark.intel.com/content/www/us/en/ark/products/codename/130707/products-formerly-emerald-rapids.html#@Server

Xeon Platinum 8500 (85xx)
Xeon Gold 6500 (65xx)
Xeon Gold 5500 (55xx)*
Xeon Silver 4500 (45xx)*
6526Y (16C/2.8 GHz)
4510 (12C/2.4 GHz)
UCS-CPU-I6526Y
UCS-CPU-I4510
Only UCM 11.5+, IMP/Unity Connection/Emergency Responder/SME/PCD 14+, Expressway X14.3+, CPS 14.4.x, CUAC 12.0.x+, UCCX 12.5 SU3+, P/UCCE 12.5+.

Intel Xeon Sapphire Rapids
ark.intel.com/content/www/us/en/ark/products/codename/126212/products-formerly-sapphire-rapids.html#@Server

Xeon Platinum 8400 (84xx)
Xeon Gold 6400 (64xx)
Xeon Gold 5400 (54xx)*
Xeon Silver 4400 (44xx)*
6426Y (16C/2.5 GHz)
4410Y (12C/2.0 GHz)
UCS-CPU-I6426Y
UCS-CPU-I4410Y
Only UCM 12.5+, IMP/Unity Connection/Emergency Responder 14+, Expressway X14.3+.

AMD 4th-gen EPYC Genoa
https://www.amd.com/en/processors/epyc-9004-series

AMD EPYC 9004 (9xx4)
9334 (32C / 2.70 GHz)
9224 (24C / 2.5 GHz)

Intel Xeon Ice Lake
ark.intel.com/content/www/us/en/ark/products/codename/74979/products-formerly-ice-lake.html#@Server
Xeon Platinum 8300 (83xx)
Xeon Gold 6300 (63xx)
Xeon Gold 5300 (53xx)*
Xeon Silver 4300 (43xx)*
6326 (16C/2.9 GHz)
4310T (10C/2.3 GHz)
UCS-CPU-I6326
UCS-CPU-I4310T
Only UCM 12.5+, IMP/Unity Connection/Emergency Responder 12.5+, Expressway X12.7+.

AMD 3rd-gen EPYC Milan
amd.com/en/processors/epyc-7003-series#New-Models

AMD EPYC 7003 (7xx3)
7453 (28C / 2.75 GHz)
7313P (16C / 3.0 GHz)
UCS-CPU-A7453
UCS-CPU-A7313P
Intel Xeon Cascade Lake
ark.intel.com/content/www/us/en/ark/products/codename/124664/cascade-lake.html#@Server
Xeon 8200 Platinum (82xx)
Xeon 6200 Gold (62xx)
Xeon 5200 Gold (52xx) *
Xeon 4200 Silver (42xx) *
6242 (16C / 2.80 GHz) HX-CPU-I6242
Intel Xeon Skylake
ark.intel.com/content/www/us/en/ark/products/codename/37572/skylake.html#@Server
Xeon 8100 Platinum (81xx)
Xeon 6100 Gold (61xx)
Xeon 5100 Gold (51xx) *
Xeon 4100 Silver (41xx) *
6132 (14C / 2.60 GHz) UCS-CPU-6132
HX-CPU-6142
4114 (10C / 2.20 GHz) UCS-CPU-4114
* Only supported for deployments of <1K users and <2.5K devices.


  

Note that older CPUs run on older hardware generations, which may have entered their end of life. Reminder that if application symptoms are isolated to the hardware layer on end-of-support hardware, a fix will not be available.

Intel codename & details on ark.intel.com / Supported Intel model ranges (unlisted ranges not supported)
Broadwell-EX
https://ark.intel.com/content/www/us/en/ark/products/series/93797/intel-xeon-processor-e7-v4-family.html
Xeon E7-8800v4
Xeon E7-4800v4
Haswell-EX
https://ark.intel.com/content/www/us/en/ark/products/series/78585/intel-xeon-processor-e7-v3-family.html
Xeon E7-8800v3
Xeon E7-4800v3
Broadwell-EP
https://ark.intel.com/content/www/us/en/ark/products/series/91287/intel-xeon-processor-e5-v4-family.html
Xeon E5-2600v4
Haswell-4SEP
https://ark.intel.com/content/www/us/en/ark/products/series/78583/intel-xeon-processor-e5-v3-family.html
Xeon E5-4600v3
Haswell-EP
https://ark.intel.com/content/www/us/en/ark/products/series/78583/intel-xeon-processor-e5-v3-family.html
Xeon E5-2600v3
Brickland/Ivy Bridge-EX
https://ark.intel.com/content/www/us/en/ark/products/series/78584/intel-xeon-processor-e7-v2-family.html
Xeon E7-8800v2
Xeon E7-4800v2
Xeon E7-2800v2
Ivy Bridge-4SEP
https://ark.intel.com/content/www/us/en/ark/products/codename/68926/ivy-bridge-ep.html
Xeon E5-4600v2
Ivy Bridge-EP
https://ark.intel.com/content/www/us/en/ark/products/codename/68926/ivy-bridge-ep.html
Xeon E5-2600v2
Sandy Bridge-4SEP
https://ark.intel.com/content/www/us/en/ark/products/series/59138/intel-xeon-processor-e5-family.html
Xeon E5-4600v1
Sandy Bridge-EP
https://ark.intel.com/content/www/us/en/ark/products/codename/64276/sandy-bridge-ep.html
Xeon E5-2600v1
Ivy Bridge-EN
https://ark.intel.com/content/www/us/en/ark/products/codename/67492/ivy-bridge-en.html
Xeon E5-2400v2 *
Sandy Bridge-EN
https://ark.intel.com/content/www/us/en/ark/products/codename/29900/sandy-bridge.html
Xeon E5-2400v1 *
Westmere-EX
https://ark.intel.com/content/www/us/en/ark/products/codename/33175/westmere-ex.html
Xeon E7-8800v1
Xeon E7-4800v1
Xeon E7-2800v1
Nehalem-EX
https://ark.intel.com/content/www/us/en/ark/products/codename/64238/nehalem-ex.html
Xeon 7500
Westmere-EP
https://ark.intel.com/content/www/us/en/ark/products/codename/54534/westmere-ep.html
Xeon 5600
* Only supported for deployments of <1K users and <2.5K devices.


Processor Base Frequency

  • Application requirements must be met (see each application's virtualization webpage and the table below; an illustrative check follows the table).
    • "Max Turbo Frequency" may NOT be used to meet this requirement ("Turbo Mode" represents temporary resources only available when other physical CPU cores are less busy, so is not sufficient for Cisco app needs).
    • Different app performance is likely on CPUs of same base frequency but different generations.
    • Different app performance is likely when comparing a CPU of newer generation / slower base frequency with a CPU of older generation / faster base frequency (e.g. 5600 / 2.66 GHz vs. E7v1 / 2.40 GHz). Do not expect equivalence.
    • CPU power-saving features that do "CPU throttling" are not supported.
    • Regardless of processor base frequency, Xeon 5200/5100, 4200/4100, E5-2400 are not supported for deployments >1000 users or >2500 devices unless specifically indicated.

    Expressway: see Virtualization for Expressway
    Cisco Meeting Server: see Virtualization for Cisco Meeting Server
    UCCE: see Virtualization for Unified Contact Center Enterprise
    CWMS: see Virtualization for Cisco Webex Meetings Server
    UCM 14+
    • 2.00-2.49 GHz CPU are always supported for Small systems (<1000 users, <2500 devices, system limits similar to Cisco Business Edition 6000).
    • Medium/Large systems are always supported on 2.50+ GHz CPU. 2.0-2.49 GHz CPUs may be possible, depending on Collaboration Sizing Tool output.
    • Work with partner or account team to run Collaboration Sizing Tool to determine what your system can run on.
    UCM 12.5 SU2 & all other apps
    • 2.00-2.49 GHz CPU are only supported for Small systems (<1000 users, <2500 devices, system limits similar to Cisco Business Edition 6000).
    • Medium/large systems require 2.50 GHz for deterministic support. Slower CPUs may be possible, but will only have caveated support.
    • Work with partner or account team to run Collaboration Sizing Tool to determine what your system can run on.
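
As a rough illustration of the sizing rows above, the following Python sketch applies the UCM 12.5 SU2-style rules described in the table to a hypothetical deployment. The user/device counts and CPU frequency are made-up inputs; the Collaboration Sizing Tool remains the authoritative source for real designs.

    # Minimal sketch of the base-frequency rules above (UCM 12.5 SU2 & most other apps).
    # "Max Turbo Frequency" never counts; only the base frequency is compared.
    # Inputs are hypothetical; use the Collaboration Sizing Tool for real designs.
    users, devices = 800, 1900
    cpu_base_ghz = 2.3

    small_system = users < 1000 and devices < 2500   # BE6000-class system limits
    if small_system and cpu_base_ghz >= 2.0:
        print("Supported: small system on a 2.00 GHz or faster CPU")
    elif cpu_base_ghz >= 2.5:
        print("Supported: medium/large system on a 2.50+ GHz CPU")
    else:
        print("Caveated at best: medium/large system below 2.50 GHz - run the Collaboration Sizing Tool")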

VM Placement

  • Physical CPU cores: Must not be oversubscribed except where specifically indicated. The server must provide a physical core quantity at least equal to the sum of all virtual machines' vcpus. E.g. if your physical server is 2S/10C (20 physical cores), then you may run any combination of VMs with a total of 20 vcpu (see the sketch after this list).
  • Hyperthreading:
    • Always enable hyperthreading in the hardware BIOS when it is available.
    • However, deploying with the sum of vcpu equal to the sum of Logical Processors in ESXi (i.e. greater than the sum of physical CPU cores) is not supported unless specifically indicated by an application. Logical cores do not increase the number of physical cores available to apps.
  • ESXi CPU Reservations are hardcoded in the Cisco-provided OVA and may not be changed unless specifically indicated. Most Cisco applications do not rely on or support CPU Reservations for VM placement and sizing.
  • See also any application-specific rules. Some non-exhaustive examples:
    • E.g. Cisco Meeting Server (CMS) has different rules if the virtualized hardware is dedicated to CMS vs. shared with other workloads.
    • E.g. Unified/Packaged Contact Center Enterprise have application-specific co-residency rules independent of physical CPU.
    • E.g. Cisco Webex Meetings Server has application-specific co-residency rules independent of physical CPU.
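
To illustrate the no-oversubscription rule above, here is a minimal Python sketch that totals the vcpus of a proposed VM mix and compares it against the physical core count of a 2S/10C server. The VM names and vcpu counts are hypothetical examples, not Cisco sizing data; real vcpu values come from the Cisco-provided OVAs.

    # Minimal sketch: verify sum of vcpus does not exceed physical CPU cores.
    # Server and VM specs below are hypothetical examples, not Cisco sizing data.
    physical_sockets = 2
    cores_per_socket = 10
    physical_cores = physical_sockets * cores_per_socket   # 20 physical cores

    proposed_vms = {               # vcpu per VM (from each app's Cisco-provided OVA)
        "ucm-publisher": 4,
        "ucm-subscriber": 4,
        "imp": 4,
        "unity-connection": 4,
        "expressway-c": 2,
        "expressway-e": 2,
    }

    total_vcpu = sum(proposed_vms.values())
    # Hyperthreading should be enabled in the BIOS, but ESXi "Logical Processors"
    # do NOT raise this limit: compare against physical cores only.
    if total_vcpu <= physical_cores:
        print(f"OK: {total_vcpu} vcpu fit on {physical_cores} physical cores")
    else:
        print(f"Not supported: {total_vcpu} vcpu exceed {physical_cores} physical cores")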


RAM / Memory Requirements

General

  • Business Edition 6000/7000: If the hardware is a BE6000/BE7000 appliance, the memory hardware configuration is fixed and may not be changed unless explicitly indicated.
  • Physical memory hardware must be supported by the compute/server hardware vendor (e.g. if the hardware is Cisco UCS, the memory must be from Cisco).
  • Physical RAM configuration: Cisco applications do not prescribe memory module quantity, density, speed, or layout/population; follow instructions from your hardware provider (e.g. at time of this writing, if the hardware is Cisco UCS C240 M5 SFF, approved RAM configurations are described in the "Select Memory" chapter of the Spec Sheet for that model).

VM Placement

  • Physical memory must not be oversubscribed except where specifically indicated. The host must provide physical RAM at least equal to the sum of all virtual machines' vRAM.

Storage Requirements & Guidelines

General

  • Business Edition 6000/7000: If the hardware is a BE6000/BE7000 appliance, the storage hardware and RAID configuration are fixed and may not be changed. Post-sales changes are not supported.
  • Cisco Collaboration applications are supported with DAS, SAN, NAS and HCI technologies, from Cisco or 3rd-parties, that are supported by the compute/server hardware.
  • The "storage system" is defined as all hardware and software end-to-end required for the UCM virtual machine's vdisk to be available to the application. Including but not limited to:
    • The ESXi datastore configuration (regardless of storage technology)
    • DAS local storage: motherboard or RAID controller plus the local disks (SAS/SATA HDD, SSD, NVMe, etc.)
    • SAN/NAS shared storage: adapter for storage access, the transport network (FC, iSCSI, NFS, etc.) and the storage array itself
    • HyperFlex shared storage: HXDP software and controller VMs, the network between HX cluster nodes, and each node's cache / system / capacity disks
    • 3rd-party HCI shared storage: 3rd-party HCI hardware/software and their dependencies.
  • VMware / Hardware compatibility: The entire storage system must show as supported on the VMware Compatibility Guide for the app's ESXi version. You must also follow any compatibility instructions from individual hardware providers (e.g. for Cisco UCS or Cisco HyperFlex, see links in Hardware/Software Compatibility Requirements section above).
  • ESXi datastores may be provisioned with eager zero or lazy zero. Non-appliance hardware may also use thin provisioning, with the caveat that disk space must be available to the VM as needed; running out of disk space due to thin provisioning will cause application instability and data corruption, and can prevent restore from backup.

VM Placement

  • Max latency of storage system: Application requirements must be met or the storage will be inadequate for the application software load presented. For application symptoms that correlate with high storage latency, Cisco may require resolution of the storage latency first as part of troubleshooting or resolution.
    • Unless otherwise indicated, Cisco Collaboration applications require storage systems to have average virtual machine operating system latency per command within 25 ms.
      • In VMware documentation and monitoring tools like esxtop, this is also called "GAVG/cmd" (Guest Average Latency per command), and is the response time as perceived by the guest operating system. It is the sum of two other latency values (GAVG = KAVG + DAVG):
        • "KAVG/cmd" (VM Kernel Average Latency per command) - an indicator of CPU resources/performance. Expected to be 0ms in an ideal environment. Values greater than 2ms may cause performance problems.
        • "DAVG/cmd" (Device Average Latency per command) - an indicator of disk subsystem performance. Values consistently greater than 20-30ms are frequently performance problems for typical applications.
      • Latency values are expected to fluctuate over time. Brief, infrequent spikes above the application's indicated maximum are expected, but spikes that are frequent, sustained and/or high-magnitude indicate a potential problem and are likely contributors to application symptoms. (An illustrative check of these thresholds follows this list.)
      • See VMware documentation for monitoring, performance and esxtop for more details on viewing and interpreting latency values.
  • Usable space: The storage system must provide usable space in GB at least equal to the sum of all virtual machines' vdisks.
  • IOPS capacity: Some storage systems' design documentation will ask for information on the application workloads' IO operations, read/write, sequential/random, etc. For most applications this varies by deployment model and capacity; see each application's technical documentation.
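
To show how the latency counters above relate, here is a minimal Python sketch that derives GAVG from KAVG and DAVG (as reported by esxtop) and flags the thresholds discussed above, followed by the usable-space check. The sample latency and vdisk values are hypothetical, not measured data.

    # Minimal sketch of the relationship GAVG = KAVG + DAVG and the thresholds
    # discussed above. Sample values are hypothetical, not real measurements.
    kavg_ms = 1.2    # VMkernel average latency per command (CPU/scheduling indicator)
    davg_ms = 18.5   # device average latency per command (disk subsystem indicator)
    gavg_ms = kavg_ms + davg_ms   # guest average latency per command

    if kavg_ms > 2:
        print("Warning: KAVG > 2 ms may cause performance problems")
    if davg_ms > 20:
        print("Warning: DAVG consistently > 20-30 ms often indicates disk subsystem problems")
    print(f"GAVG = {gavg_ms:.1f} ms ({'within' if gavg_ms <= 25 else 'above'} the 25 ms requirement)")

    # Usable-space check: the storage system must provide at least the sum of all vdisks.
    vdisk_gb = [110, 110, 160, 200]   # hypothetical vdisk sizes of the proposed VMs (GB)
    usable_gb = 1800                  # usable space offered by the storage system (GB)
    assert sum(vdisk_gb) <= usable_gb, "storage system too small for the proposed VMs"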

  

   Local DAS (on Cisco or 3rd-party server)

  • The RAID controller (whether motherboard, PCIe, etc.) must show as supported on the VMware Compatibility Guide for the app's ESXi version.
  • Disks must be supported by the hardware vendor. E.g. for Cisco UCS C-Series, all disks must be Cisco-provided.
  • See your hardware provider for guidance on how to meet application latency and IOPS requirements.
  • Below are some guidelines for Cisco UCS C-Series servers with HDD DAS. These are guidelines and examples only, not rules, and not an exhaustive list of prescribed/proscribed options (which would not be possible to create). Your results will vary. Reminder that for applications to be supported on a storage system, it must meet application requirements for compatibility, latency, usable space and IOPS capacity. (A rough usable-space sketch follows the table below.)

    RAID1
    • HDD pairs are not recommended unless the environment is very small (a few hundred devices or less).
    • Consider SSD pairs of sufficient usable space. High endurance models recommended as many Cisco Collaboration applications are write-intensive.
    RAID5
    • For most Collaboration deployments, RAID5 has been the best tradeoff among competing factors of maximizing usable space, IO performance, fault tolerance, and ease of failed disk replacement while minimizing total storage system price and complexity.
    • One HDD per physical CPU core, with 4-6 disks per RAID5 array (more disks per volume are discouraged as this increases the risk of long rebuild times if there is ever a multiple-disk failure).
    • Each disk SAS 10K rpm minimum (15K rpm recommended). The following are not recommended as they tend to be too slow: SATA disks, and SAS disks slower than 10K rpm.
    RAID6
    • One HDD per physical CPU core, with 4-8 disks per RAID6 array (more disks per volume are discouraged due to slower write times, since many Cisco Collaboration applications are write-intensive).
    • Each disk SAS 15K rpm minimum. Otherwise same guidelines as for RAID5.
    RAID10
    • One HDD per 2 physical CPU cores, with at least 4 disks per RAID10 array
    • Each disk same as RAID5 recommendation.
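
The RAID5 guideline above can be turned into a quick back-of-the-envelope estimate. The Python sketch below uses hypothetical disk counts and sizes, treats the guidance as guidance only (not rules), and ignores controller/VMFS formatting overhead.

    # Rough RAID5 estimate: usable space is (disks - 1) x disk size, ignoring
    # RAID controller and VMFS formatting overhead. Values are hypothetical.
    disks_per_array = 6
    disk_size_gb = 600
    physical_cores = 10     # e.g. a single 10-core CPU in the server

    usable_gb = (disks_per_array - 1) * disk_size_gb
    print(f"~{usable_gb} GB usable per {disks_per_array}-disk RAID5 array")

    # Guidelines from this section (not hard rules):
    if not 4 <= disks_per_array <= 6:
        print("Outside the 4-6 disks per RAID5 array guideline (rebuild-time risk if larger)")
    if disks_per_array < physical_cores:
        print("Fewer spindles than physical CPU cores; guideline suggests roughly one HDD per core")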

  

  • Adapter(s) & network for storage access: see "Network Requirements & Guidelines" section below for any NIC, HBA, Converged Network Adapter or Cisco VIC used on the compute/server, and the transport network to the shared storage array (e.g. FC, FCoE, iSCSI / NFS)
  • Follow your compute vendor's instructions for compatibility.
  • Follow your shared storage array vendor's instructions for compatibility.

  

  • Remember each HyperFlex cluster node runs an HXDP Controller VM. This needs to be accounted for in VM Placement.
  • HyperFlex Edge is permitted, but only with 10GE deployment models. The 1GE deployment models are insufficient to handle application VM vnic and vdisk traffic.
  • 1RU/2RU and hybrid storage vs. all-flash are all permitted.
  • See Examples for sample hardware configurations.

  

  • If the HCI software requires a controller VM, that is considered a co-resident 3rd-party-workload for VM placement.
    • Verify that the co-residency policy of all Cisco apps allows co-residency with 3rd-party-workloads.
    • Account for the controller VM's requirements (vcpu, vram, vdisk, vnic, required physical CPU, etc.) in VM Placement.
    • Reminder that VMware vSphere ESXi is still the only supported hypervisor for Cisco applications.
  • Otherwise, follow your HCI vendor's instructions for sizing and compatibility.


Network Requirements & Guidelines

General

  • Business Edition 6000/7000: If the hardware is a BE6000/BE7000 appliance, the network adapter type and quantity are fixed and may not be changed.
  • Adapter(s): "Adapters" means any adapters used by compute/storage for LAN access or storage access (for example [non-exhaustive], NIC, HBA, Converged Network Adapter or Cisco VIC).
    • All adapters must show as supported on the VMware Compatibility Guide for the app's ESXi version. You must also follow any compatibility instructions from your hardware providers for the server model, network elements and storage system you are using.
    • Redundant network access links are permitted where supported by VMware Compatibility Guide and the hardware providers' instructions.
  • Network infrastructure: "Network infrastructure" means any network elements or links providing access to LAN/WAN or to storage.
    • Elements may be Cisco or 3rd-party, physical or virtual.
      • Physical examples (non-exhaustive) include Cisco route/switch/fabric products like Catalyst, ISR, ASR, ASA, Nexus, UCS Fabric Interconnects/Extenders, ACI, SDA, SD WAN.
      • Virtual examples (non-exhaustive) include Cisco Nexus 1000V, AVS, CSR 1000V, Enterprise NFV.
    • You must follow any compatibility instructions from your hardware/software providers for the server model, network elements and storage system you are using.
    • Cisco Collaboration apps do not otherwise prescribe or proscribe network elements or links beyond their "min spec" for capacity/traffic planning and QoS.

VM Placement

  • Application vnic network traffic capacity/QoS required: See application design guides for their "min spec" for network traffic and QoS (e.g. required bandwidth and max tolerable delay, jitter and loss, along with which QoS traffic marking mechanisms they support).
  • Application vdisk storage traffic capacity/QoS required: If the same network will be carrying both application VM vnic network traffic and application VM vdisk storage traffic (e.g. as with Cisco HyperFlex or certain FCoE/iSCSI/NFS-accessed storage), make sure to plan for both sets of traffic (see the sketch after this list). Storage Requirements must be met for the applications to be supported.
  • Physical network access links: each "server" must provide enough physical links/capacity for the vnics of all application VMs (details out of scope for this policy).
    • Redundant physical network access links (e.g. "NIC teaming") are permitted where supported by VMware Compatibility Guide and the network hardware/software providers' instructions.
    • If you choose to use access options like VLAN trunking or link aggregation, multiple physical links may be required.
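
Where one physical network carries both vnic and vdisk traffic (e.g. HyperFlex or iSCSI/NFS/FCoE-attached storage), capacity planning has to add the two together. The Python sketch below uses purely hypothetical traffic figures and an arbitrary ~60% headroom factor for illustration; real numbers come from the application design guides and the storage sizing.

    # Minimal sketch: when vnic (application) and vdisk (storage) traffic share the
    # same physical links, plan for the sum of both. All figures are hypothetical.
    vnic_mbps = {"ucm-cluster": 300, "unity-connection": 150, "imp": 50}
    vdisk_mbps = {"hx-replication": 2000, "vm-disk-io": 800}

    total_mbps = sum(vnic_mbps.values()) + sum(vdisk_mbps.values())
    link_mbps = 10_000    # e.g. one 10GE uplink
    headroom = 0.6        # illustrative: keep planned utilization well below line rate

    if total_mbps <= link_mbps * headroom:
        print(f"{total_mbps} Mbps fits on a 10GE link with headroom")
    else:
        print(f"{total_mbps} Mbps needs additional/faster links or traffic re-planning")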

  

  • Business Edition 6000/7000: If the hardware is a BE6000/BE7000 appliance, they ship with fixed motherboard ethernet ports and NICs (line rates vary by appliance model - see appliance hardware examples summary). Changes/adds are not supported. If customer's LAN can't accommodate what ships on the appliance (e.g. they are fiber-only 10GE), customer should consider non-appliance general-purpose Cisco hardware.
    • Make sure to identify customer's access network switching/fabric infrastructure and copper vs. fiber cabling before you buy compute/storage hardware, so you procure the right interface cards/ports/cables.
  • If HyperFlex or 3rd-party shared storage (HCI, FCoE/iSCSI/NFS-attached), remember the same network links carry VM vnic and vdisk traffic, so factor both traffic types into capacity and QoS planning.
  • Guidelines for physical LAN uplinks, link redundancy / NIC teaming, VLAN trunking and LAN traffic sizing may be found here: https://www.cisco.com/c/dam/en/us/td/docs/voice_ip_comm/uc_system/virtualization/virtualization-qos-designs-considerations.html#LAN_Trunking_Traffic

  

  • You cannot connect Business Edition 6000/7000 appliances to Fabric Interconnects or manage them with UCS Manager (they do not have the right kinds of interfaces).
  • For Cisco Fabric Interconnects/Extenders, at a minimum you must follow the UCS Hardware and Software Compatibility tool.

  



Examples

Introduction

The following are example hardware configurations for typical / common customer scenarios of various sizes.

NOTE: These are examples only.
  • Many other possible supported hardware configurations exist.
  • Your customer’s requirements and design may differ from these examples.
  • Hardware components, models and product IDs may have changed since the time of this writing. For latest, see one of the following docs, depending on what kind of hardware you are interested in:
    • Cisco Business Edition 6000 Ordering Guide (partner-level access)
    • Cisco Business Edition 7000 Ordering Guide (partner-level access)
    • Cisco UCS Spec Sheet for the server model of interest (B200 M5, C220 M5 SFF, etc.)
    • Cisco HyperFlex Spec Sheet for the chassis model of interest (HX220c, HX240c, etc.).

Each example has

  1. Summary of example customer needs
  2. Design assumptions, required applications, and application sizing
  3. VM placement and derivation of required hardware specs.

Summary of Hardware Examples

  

Note: the M6+M5 Business Edition 6000M Examples below are spec’d for two appliances only through application version 14 and VMware vSphere ESXi 7.0. For application version 15 and/or VMware vSphere ESXi 8.0, three appliances will frequently be required.

Note: the M6+M5 Business Edition 7000M and 7000H Examples below are spec’d through application version 15 and VMware vSphere ESXi 8.0.

  Business Edition 6000M (M6)
Example for Small Collaboration
Spec Product ID Qty
Base System BE6000M (M6) Appliance BE6K-M6-K9 1
CPU Single Xeon 4310T (1S/10C/2.30 GHz) Included  
RAM 64GB RAM Included  
Storage RAID Controller (12G) Included  
Local DAS storage
6 x 600GB SAS in single 6-disk-RAID5
~3TB usable GB
Included  
Network + IO 2x10GE Cu LoM NIC Included  
     
     
Misc. Redundant power supplies Included  
Rack-mounting kit Included  
Trusted Platform Module Included  
Business Edition 7000M (M6)
Example for Medium Collaboration
Spec Product ID Qty
BE7000M (M6) Appliance BE7M-M6-K9 1
Single Xeon 6326 (1S/16C/2.90 GHz) Included  
96GB RAM Included  
RAID Controller (12G) Included  
Local DAS storage
16x 600GB SAS in quad 4-disk-RAID5
~1.8TB usable GB per volume
Included  
2x10GE Cu LoM NIC Included  
Dual 4x10GE Cu NIC Included  
PCIe Riser Included  
Redundant power supplies Included  
Rack-mounting kit Included  
Trusted Platform Module Included  
Business Edition 7000H (M6)
Example for Large Collaboration
Spec Product ID Qty
BE7000H (M6) Appliance BE7H-M6-K9 1
Single Xeon 6348 (1S/28C/2.60 GHz) Included  
192GB RAM Included  
RAID Controller (12G) Included  
Local DAS storage
24x 600GB SAS in quad 6-disk-RAID5
~3TB usable GB per volume
Included  
2x10GE Cu LoM NIC Included  
Dual 4x10GE Cu NIC Included  
PCIe Riser Included  
Redundant power supplies Included  
Rack-mounting kit Included  
Trusted Platform Module Included  
  Business Edition 6000M (M5)
Example for Small Collaboration
Spec Product ID Qty
Base System BE6000M (M5) Appliance BE6M-M5-K9 1
CPU Single Xeon 4114 (1S/10C/2.20 GHz) Included  
RAM 48GB RAM Included  
Storage RAID Controller (12G) Included  
Local DAS storage
6 x 300GB SAS in single 6-disk-RAID5
~1TB usable GB
Included  
Network + IO 2x10GE LoM NIC Included  
     
     
Misc. Single / non-redundant Power supply Included  
Rack-mounting kit Included  
Business Edition 7000M (M5)
Example for Medium Collaboration
Spec Product ID Qty
BE7000M (M5) Appliance BE7M-M5-K9 1
Single Xeon 6132 (1S/14C/2.60 GHz) Included  
96GB RAM Included  
RAID Controller (12G) Included  
Local DAS storage
14x 300GB SAS in dual 7-disk-RAID5
~1TB usable GB per volume
Included  
2x10GE LoM NIC Included  
Dual 4x1GbE NIC Included  
PCIe Riser Included  
Redundant power supplies Included  
Rack-mounting kit Included  
Business Edition 7000H (M5)
Example for Large Collaboration
Spec Product ID Qty
BE7000H (M5) Appliance BE7H-M5-K9 1
Dual Xeon 6132 (2S/14C/2.50 GHz) Included  
192GB RAM Included  
RAID Controller (12G) Included  
Local DAS storage
24x 300GB SAS in quad 6-disk-RAID5
~1TB usable GB per volume
Included  
2x10GE LoM NIC Included  
Dual 4x1GbE NIC Included  
PCIe Riser Included  
Redundant power supplies Included  
Rack-mounting kit Included  

  

Note: the Small M6 example below is for HyperFlex only, and is spec'd through application version 15 and VMware vSphere ESXi 8.0.

Note: the Small M5 and Medium/Large M6+M5 examples below are for HyperFlex only, and are only spec'd through application version 14 and VMware vSphere ESXi 7.0. For application version 15 and/or VMware vSphere ESXi 8.0, additional resources are required - use www.cisco.com/go/quotecollab and Cisco Compute Hyperconverged with Nutanix.

  HyperFlex Edge M6
Example for Small Collaboration
Spec Product ID Qty
Base System HX220 M6 Edge Hybrid Server Node
(with Intersight management)
 
HX-M6-MLB
HX-E-220M6S
1
CPU Single Xeon 4316 (1S/20C/2.3 GHz)   HX-CPU-I4316 1
RAM 128GB (8x16GB)   HX-MR-X16G1RW 8
Storage Storage Controller   HX-SAS-220M6 1
Capacity Disks
(3x1.2 TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 3
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Redundant Boot Disks   HX-M2-240GB 2
  HX-M2-HWRAID 1
Network + IO LoM: 2x10GE Cu LoM LAN + 1x1GE CIMC (RJ45)   Included  
HX Edge 10GE Network Topology    HX-E-TOPO4 1
Cisco VIC 1467 (4x 25GE SFP28)    HX-M-V25-04 1
Single 4x 1GE NIC (RJ45)
(as needed, substitute Cisco VIC, higher-speeds, fiber interfaces)
   HX-PCIE-IRJ45 1
Riser Kit    UCSC-R2R3-C220M6 1
Misc. Redundant Power Supplies   HX-PSU1-1050W 2
Rack-mounting kit   HX-RAIL-M6 1
Blanking panels   UCSC-BBLKD-S2 5
  UCS-DIMM-BLK 24
Cables   CBL-SAS-C220M6 1
Heat sink for CPU   UCSC-HSLP-M6 1
Trusted Platform Module   HX-TPM-002C 1
Security Bezel   HX220C-BZL-M5 1
HyperFlex Edge M6
Example for Medium Collaboration
Spec Product ID Qty
HX220 M6 Edge Hybrid Server Node
(with Intersight management)
 
HX-M6-MLB
HX-E-220M6S
1
Single Xeon 6342 (1S/24C/2.8 GHz)   HX-CPU-I6342 1
128GB (8x16GB)   HX-MR-X16G1RW 8
Storage Controller   HX-SAS-220M6 1
Capacity Disks
(3x1.2 TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 3
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Redundant Boot Disks   HX-M2-240GB 2
  HX-M2-HWRAID 1
LoM: 2x10GE LAN + 1x1GE CIMC (RJ45)   Included  
HX Edge 10GE Network Topology    HX-E-TOPO4 1
Cisco VIC 1467 (4x 25GE SFP28)    HX-M-V25-04 1
Single 4x 1GE NIC (RJ45)
(as needed, substitute Cisco VIC, higher-speeds, fiber interfaces)
   HX-PCIE-IRJ45 1
Riser Kit    UCSC-R2R3-C220M6 1
Redundant Power Supplies   HX-PSU1-1050W 2
Rack-mounting kit   HX-RAIL-M6 1
Blanking panels   UCSC-BBLKD-S2 5
  UCS-DIMM-BLK 24
Cables   CBL-SAS-C220M6 1
Heat sink for CPU   UCSC-HSLP-M6 1
Trusted Platform Module   HX-TPM-002C 1
Security Bezel   HX220C-BZL-M5 1
HyperFlex M6
Example for Large Collaboration
Spec Product ID Qty
HX220 M6 Hybrid Server Node
(with Fabric Interconnect management)
HX-M6-MLB
HX220C-M6S
HX-DC-FI
1
Dual Xeon 6354 (2S/18C/3.0 GHz)   HX-CPU-I6354 2
192GB (12x16GB)   HX-MR-X16G1RW 12
Storage Controller   HX-SAS-220M6 1
Capacity Disks
(6x1.2 TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 6
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Redundant Boot Disks   HX-M2-240GB 2
  HX-M2-HWRAID 1
LoM: 2x10GE LAN + 1x1GE CIMC (RJ45)   Included  
     
Cisco VIC 1467 (4x 25GE SFP28)    HX-M-V25-04 1
Single 4x 1GE NIC (RJ45)
(as needed, substitute Cisco VIC, higher-speeds, fiber interfaces)
   HX-PCIE-IRJ45 1
Riser Kit    UCSC-R2R3-C220M6 1
Redundant Power Supplies   HX-PSU1-1050W 2
Rack-mounting kit   HX-RAIL-M6 1
Blanking panels   UCSC-BBLKD-S2 2
  UCS-DIMM-BLK 20
Cables   CBL-SAS-C220M6 1
Heat sink for CPU   UCSC-HSLP-M6 2
Trusted Platform Module   HX-TPM-002C 1
Security Bezel   HX220C-BZL-M5 1
  HyperFlex Edge M5
Example for Small Collaboration
Spec Product ID Qty
Base System HyperFlex Edge Bundle
(nodes + HXDP sub, Intersight-managed)
HX-E-M5S-HXDP 1
HyperFlex Edge 220 M5SX
(1RU hybrid storage HX Edge node)
 HX-E-220M5SX 1
CPU Xeon 2S/10C/2.20 GHz   HX-CPU-4114 2
RAM 128GB RAM (4x32GB)   HX-MR-X32G2RS-H 4
Storage Storage Controller   HX-SAS-M5 1
Capacity Disks
(3x 1TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 3
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Boot Disk   HX-M2-240GB 1
Network + IO HX Edge 10GbE Topology   HX-E-TOPO1 1
10GE Cisco VIC 1457    HX-MLOM-C25Q-04 1
Misc. 32GB Micro SD card for utility   HX-MSD-32G 1
Redundant power supply   HX-PSU1-770W 2
Rack-mounting kit   HX-RAILF-M4 1
HX Edge Security Bezel   HX-E-220C-BZL-M5 1
M.2 mini-storage carrier   UCS-MSTOR-M2 1
Blanking panels (disk slot)   UCSC-BBLKD-S2 5
Heat sink for CPU   UCSC-HS-C220M5 1
HyperFlex Edge M5
Example for Medium Collaboration
Spec Product ID Qty
HyperFlex Edge Bundle
(nodes + HXDP sub, Intersight-managed)
HX-E-M5S-HXDP 1
HyperFlex Edge 220 M5SX
(1RU hybrid storage HX Edge node)
 HX-E-220M5SX 1
Dual Xeon 6126 (2S/12C/2.60 GHz)   HX-CPU-6126 2
128GB RAM (8x16 GB)   HX-MR-X16G1RS-H 8
Storage Controller   HX-SAS-M5 1
Capacity Disks
(3x 1.2 TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 3
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Boot Disk   HX-M2-240GB 1
HX Edge 10GbE Topology   HX-E-TOPO1 1
10GE Cisco VIC 1457    HX-MLOM-C25Q-04 1
32GB Micro SD card for utility   HX-MSD-32G 1
Redundant power supply   HX-PSU1-770W 2
Rack-mounting kit   HX-RAILF-M4 1
HX Edge Security Bezel   HX-E-220C-BZL-M5 1
M.2 mini-storage carrier   UCS-MSTOR-M2 1
Blanking panels (disk slot)   UCSC-BBLKD-S2 5
Heat sink for CPU   UCSC-HS-C220M5 1
HyperFlex M5
Example for Large Collaboration
Spec Product ID Qty
HyperFlex Bundle
(nodes + HXDP subscription)
HX-M5S-HXDP 1
HyperFlex 220c M5SX
(1RU hybrid storage HX node)
 HX220C-M5SX 1
Dual Xeon 6242 (2S/16C/2.8 GHz)   HX-CPU-I6242 2
256GB RAM (4x64GB)   HX-ML-X64G4RT-H 4
Storage Controller   HX-SAS-M5 1
Capacity Disks
(6x 1.2 TB, see HX Sizer for usable space)
  HX-HD12TB10K12N 6
Cache Disk   HX-SD480G63X-EP 1
System Disk   HX-SD240GM1X-EV 1
Boot Disk   HX-M2-240GB 1
4x1GbE NIC   HX-PCIE-IRJ45 1
10GE Cisco VIC 1457    HX-MLOM-C25Q-04 1
32GB Micro SD card for utility   HX-MSD-32G 1
Redundant power supply   HX-PSU1-1050W 2
Rack-mounting kit   HX-RAILF-M4 1
HX Security Bezel   HX220C-BZL-M5 1
M.2 mini-storage carrier   UCS-MSTOR-M2 1
Blanking panels (disk slot)   UCSC-BBLKD-S2 2
Heat sink for CPU   UCSC-HS-C220M5 2

  

Note: the M7 Small Examples below are spec’d for two physical blade servers with application version 15 and VMware vSphere ESXi 8.0.

Note: the M6+M5 Small Examples below are spec’d for two physical blade servers only through application version 14 and VMware vSphere ESXi 7.0. For application version 15 and/or VMware vSphere ESXi 8.0, three physical blade servers will frequently be required.

Note: the M7+M6+M5 Medium and Large Examples below are spec’d through application version 15 and VMware vSphere ESXi 8.0.

  UCS X210c M7
Example for Small Collaboration
Spec Product ID Qty
Base System UCS X210c M7 Compute Node (standalone) UCSX-M7-MLB
UCSX-210C-M7-U
1
CPU Single Xeon 4410Y (1S/12C/2.0 GHz)  UCSX-CPU-I4410Y 1
RAM 64GB (4x16GB)  UCSX-MRX16G1RE1 4
Storage None - offbox storage and boot  -  
Network + IO Cisco VIC 15420 (mLOM)  UCSX-ML-V5Q50G-D 1
Misc. Blanking panels  UCSX-X10C-FMBK-D 1
 UCSX-M2-HWRD-FPS 1
 UCS-DDR5-BLK 28
Heat sink for CPU  UCSX-C-M7-HS-F 1
   
Trusted Platform Module  UCSX-TPM-002C-D 1
UCS X210c M7
Example for Medium Collaboration
Spec Product ID Qty
UCS X210c M7 Compute Node (standalone) UCSX-M7-MLB
UCSX-210C-M7-U
1
Single Xeon 6426Y (1S/16C/2.5 GHz)
 
 UCSX-CPU-I6426Y 1
96GB (6x16GB)  UCSX-MRX16G1RE1 6
None - offbox storage and boot  -  
Cisco VIC 15420 (mLOM)  UCSX-ML-V5Q50G-D 1
Blanking panels  UCSX-X10C-FMBK-D 1
 UCSX-M2-HWRD-FPS 1
 UCS-DDR5-BLK 26
Heat sink for CPU  UCSX-C-M7-HS-F 1
   
Trusted Platform Module  UCSX-TPM-002C-D 1
UCS X210c M7
Example for Large Collaboration
Spec Product ID Qty
UCS X210c M7 Compute Node (standalone) UCSX-M7-MLB
UCSX-210C-M7-U
1
Dual Xeon 6426Y (2S/16C/2.5 GHz)
 
 UCSX-CPU-I6426Y 2
192GB (12x16GB)  UCSX-MRX16G1RE1 12
None - offbox storage and boot  -  
Cisco VIC 15420 (mLOM)  UCSX-ML-V5Q50G-D 1
Blanking panels  UCSX-X10C-FMBK-D 1
 UCSX-M2-HWRD-FPS 1
 UCS-DDR5-BLK 20
Heat sinks for CPUs  UCSX-C-M7-HS-F 1
 UCSX-C-M7-HS-R 1
Trusted Platform Module  UCSX-TPM-002C-D 1
  UCS B200 M6
Example for Small Collaboration
Spec Product ID Qty
Base System UCS B200 M6 Blade Server (standalone) UCS-M6-MLB
UCSB-B200-M6-U
1
CPU Single Xeon 4310T (1S/10C/2.3 GHz)  UCS-CPU-I4310T 1
RAM 64GB (4x16GB)  UCS-MR-X16G1RW 4
Storage None - offbox storage and boot  -  
Network + IO Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Misc. Blanking panels  UCSB-FBLK-M6 2
 UCS-DIMM-BLK 28
Heat sink for CPU  UCSB-HS-M6-R 1
Trusted Platform Module  UCSX-TPM-002C 1
UCS 5108 Blade Chassis FW Package 4.2  N20-FW018 1
UCS B200 M6
Example for Medium Collaboration
Spec Product ID Qty
UCS B200 M6 Blade Server (standalone) UCS-M6-MLB
UCSB-B200-M6-U
1
Single Xeon 6326 (1S/16C/2.9 GHz)  UCS-CPU-I6326 1
96GB (6x16GB)  UCS-MR-X16G1RW 6
None - offbox storage and boot  -  
Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Blanking panels  UCSB-FBLK-M6 2
 UCS-DIMM-BLK 26
Heat sink for CPU  UCSB-HS-M6-R 1
Trusted Platform Module  UCSX-TPM-002C 1
UCS 5108 Blade Chassis FW Package 4.2  N20-FW018 1
UCS B200 M6
Example for Large Collaboration
Spec Product ID Qty
UCS B200 M6 Blade Server (standalone) UCS-M6-MLB
UCSB-B200-M6-U
1
Single Xeon 6348 (1S/28C/2.6 GHz)  UCS-CPU-I6348 1
192GB (12x16GB)  UCS-MR-X16G1RW 12
None - offbox storage and boot  -  
Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Blanking panels  UCSB-FBLK-M6 2
 UCS-DIMM-BLK 20
Heat sink for CPU  UCSB-HS-M6-R 1
Trusted Platform Module  UCSX-TPM-002C 1
UCS 5108 Blade Chassis FW Package 4.2  N20-FW018 1

  UCS B200 M5
Example for Small Collaboration
Spec Product ID Qty
Base System B200 M5 Blade Server (standalone) UCSB-B200-M5-U 1
CPU Single Xeon 4210 (1S/10C/2.20 GHz)  UCS-CPU-I4210 1
RAM 32GB RAM (2x16GB)  UCS-MR-X16G1RT-H 2
Storage None - offbox storage and boot  -  
Network + IO Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Misc. Blanking panels (FlexStorage)  UCSB-LSTOR-BK 2
Blanking panels (DIMM)  UCS-DIMM-BLK 22
Heat sink for CPU  UCSB-HS-M5-F 1
   
UCS 5108 Blade Chassis FW Package 4.0  N20-FW016 1
UCS B200 M5
Example for Medium Collaboration
Spec Product ID Qty
B200 M5 Blade Server (standalone) UCSB-B200-M5-U 1
Single Xeon 6132 (1S/14C/2.60 GHz)  UCS-CPU-6132 1
64GB RAM (2x32GB)  UCS-ML-X32G2RS-H 2
None - offbox storage and boot  -  
Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Blanking panels (FlexStorage)  UCSB-LSTOR-BK 2
Blanking panels (DIMM)  UCS-DIMM-BLK 22
Heat sink for CPU  UCSB-HS-M5-F 1
   
UCS 5108 Blade Chassis FW Package 4.0  N20-FW016 1
UCS B200 M5
Example for Large Collaboration
Spec Product ID Qty
B200 M5 Blade Server (standalone) UCSB-B200-M5-U 1
Dual Xeon 6132 (2S/14C/2.60 GHz)  UCS-CPU-6132 2
192GB RAM (12x16GB)  UCS-MR-X16G1RS-H 12
None - offbox storage and boot  -  
Cisco VIC 1440 (mLOM)  UCSB-MLOM-40G-04 1
Blanking panels (FlexStorage)  UCSB-LSTOR-BK 2
Blanking panels (DIMM)  UCS-DIMM-BLK 22
Heat sink for CPU  UCSB-HS-M5-F 1
 UCSB-HS-M5-R 1
UCS 5108 Blade Chassis FW Package 4.0  N20-FW016 1

  

Note: the M7 Small Examples are spec’d for two physical servers with application version 15 and VMware vSphere ESXi 8.0.

Note: the M6+M5 Small Examples are spec’d for two physical servers only through application version 14 and VMware vSphere ESXi 7.0. For application version 15 and/or VMware vSphere ESXi 8.0, three physical servers will frequently be required.

Note: the M7+M6+M5 Medium and Large Examples are spec’d through application version 15 and VMware vSphere ESXi 8.0.

  UCS C220 M7S
Example for Small Collaboration
Spec Product ID Qty
Base System UCS C220 M7S Rack Server UCS-M7-MLB
UCSC-C220-M7S
1
CPU Single Xeon 4410Y (1S/12C/2.0 GHz)  UCS-CPU-I4410Y 1
RAM 64GB (4x16GB)  UCS-MRX16G1RE1 4
Storage RAID Controller  UCSC-RAID-T-D 1
 
6x 600GB 12G SAS disk (HDD)
 
 UCS-HD600G10KJ4-D 6
Single 6-disk RAID5 volume  R2XX-RAID5D 1
Network + IO 2x10GE Cu OCP 3.0 NIC
(mLOM slot)
 UCSC-O-ID10GC-D  1
PCIe Riser Kits UCSC-RIS1A-22XM7 1
     
Misc. Redundant Power Supplies  UCSC-PSU1-1200W-D 2
Rack-mounting kit  UCSC-RAIL-D 1
Blanking panels  UCSC-BBLKD-M7 4
 UCS-DDR5-BLK 28
 UCSC-FBRS-C220-D  1
 UCSC-FBRS2-C220M7  1
Cables  CBL-SAS-C220M7 1
 UCSC-RDBKT-22XM7 1
 UCS-SCAP-D 1
 CBL-SCAP-C220-D 1
Heat sink for CPU  UCSC-HSLP-C220M7 1
mLOM Mounting for OCP NIC  UCSC-OCP3-KIT-D 1
Trusted Platform Module  UCSX-TPM-002C-D 1
UCS C240 M7SX
Example for Medium Collaboration
Spec Product ID Qty
UCS C240 M7SX Rack Server UCS-M7-MLB
UCSC-C240-M7SX
1
Single Xeon 6426Y (1S/16C/2.5 GHz)
 
 UCS-CPU-I6426Y 1
96GB (6x16GB)  UCS-MRX16G1RE1 6
RAID Controller  UCSC-RAID-SD-D 1
 
16x 600GB 12G SAS disk (HDD)
 
 UCS-HD600G10KJ4-D 16
Quad 4-disk RAID5 volume  R2XX-RAID5D 1
2x10GE Cu OCP 3.0 NIC
(mLOM slot)
 UCSC-O-ID10GC-D  1
PCIe Riser Kits UCSC-RIS1A-240-D 1
 Dual 4x10GE Cu NIC  UCSC-P-IQ10GC-D  2
Redundant Power Supplies  UCSC-PSU1-1200W-D 2
Rack-mounting kit  UCSC-RAIL-D 1
Blanking panels  UCSC-BBLKD-M7 8
 UCS-DDR5-BLK 31
 UCSC-FBRS2-C240-D  1
 UCSC-FBRS3-C240-D  1
Cables  CBL-SDSAS-C240M7 1
 UCSC-SDBKT-24XM7 1
 UCS-SCAP-D 1
 CBL-SCAPSD-C240-D 1
Heat sink for CPU  UCSC-HSHP-C240M7 1
mLOM Mounting for OCP NIC  UCSC-OCP3-KIT-D 1
Trusted Platform Module  UCSX-TPM-002C-D 1
UCS C240 M7SX
Example for Large Collaboration
Spec Product ID Qty
UCS C240 M7SX Rack Server UCS-M7-MLB
UCSC-C240-M7SX
1
Dual Xeon 6426Y (2S/16C/2.5 GHz)
 
 UCS-CPU-I6426Y 2
192GB (12x16GB)  UCS-MRX16G1RE1 12
RAID Controller  UCSC-RAID-SD-D 1
 
24x 600GB 12G SAS disk (HDD)
 
 UCS-HD600G10KJ4-D 24
Quad 6-disk RAID5 volume  R2XX-RAID5D 1
2x10GE Cu OCP 3.0 NIC
(mLOM slot)
 UCSC-O-ID10GC-D  1
PCIe Riser Kits UCSC-RIS1A-240-D 1
 Dual 4x10GE Cu NIC  UCSC-P-IQ10GC-D  2
Redundant Power Supplies  UCSC-PSU1-1200W-D 2
Rack-mounting kit  UCSC-RAIL-D 1
Blanking panels  
 UCS-DDR5-BLK 20
 UCSC-FBRS2-C240-D  1
 UCSC-FBRS3-C240-D  1
Cables  CBL-SDSAS-C240M7 1
 UCSC-SDBKT-24XM7 1
 UCS-SCAP-D 1
 CBL-SCAPSD-C240-D 1
Heat sink for CPU  UCSC-HSHP-C240M7 2
mLOM Mounting for OCP NIC  UCSC-OCP3-KIT-D 1
Trusted Platform Module  UCSX-TPM-002C-D 1

  UCS C220 M6S
Example for Small Collaboration
Spec Product ID Qty
Base System UCS C220 M6S Rack Server UCS-M6-MLB
UCSC-C220-M6S
1
CPU Single Xeon 4310T (1S/10C/2.3 GHz)  UCS-CPU-I4310T 1
RAM 64GB (4x16GB)  UCS-MR-X16G1RW 4
Storage RAID Controller  UCSC-RAID-220M6 1
 
6x 600GB 12G SAS disk (HDD)
 UCS-HD600G10K12N 6
Single 6-disk RAID5 volume  R2XX-RAID5 1
Network + IO 2x10GE Cu LoM NIC  Included  
 
 
   
PCIe Riser Kits UCSC-R2R3-C220M6 1
UCSC-FBRS-C220M6 1
Misc. Redundant Power Supplies  UCSC-PSU1-1050W 2
Rack-mounting kit  UCSC-RAIL-M6 1
Blanking panels  UCSC-BBLKD-S2 4
 UCS-DIMM-BLK 28
   
   
Cables  CBL-SAS-C220M6 1
 UCS-SCAP-M6 1
 CBL-SCAP-C220M6 1
Heat sink for CPU  UCSC-HSLP-M6 1
Trusted Platform Module  UCSX-TPM-002C 1
UCS C240 M6SX
Example for Medium Collaboration
Spec Product ID Qty
UCS C240 M6SX Rack Server UCS-M6-MLB
UCSC-C240-M6SX
1
Single Xeon 6326 (1S/16C/2.9 GHz)  UCS-CPU-I6326 1
96GB (6x16GB)  UCS-MR-X16G1RW 6
RAID Controller (12G)  UCSC-RAID-M6SD 1
 
16x 600GB 12G SAS disk (HDD)
 
 UCS-HD600G10K12N 16
Quad 4-disk RAID5  R2XX-RAID5 1
2x10GE Cu LoM NIC  Included  
Dual 4x10GE Cu NIC  UCSC-P-IQ10GC 2
PCIe Riser UCSC-RIS1A-240M6 1
   
Redundant power supplies  UCSC-PSU1-1050W 2
Rack-mounting kit  UCSC-RAIL-M6 1
Blanking panels  UCSC-BBLKD-S2 8
 UCS-DIMM-BLK 26
 UCSC-FBRS2-C240M6 1
 UCSC-FBRS3-C240M6 1
Cables  UCS-SCAP-M6 1
 CBL-SCAPSD-C240M6 1
 CBL-SDSAS-240M6 1
Heat sink for CPU  UCSC-HSHP-240M6 1
Trusted Platform Module  UCSX-TPM-002C 1
UCS C240 M6SX
Example for Large Collaboration
Spec Product ID Qty
UCS C240 M6SX Rack Server UCS-M6-MLB
UCSC-C240-M6SX
1
Single Xeon 6348 (1S/28C/2.6 GHz)  UCS-CPU-I6348 1
192GB (12x16GB)  UCS-MR-X16G1RW 12
RAID Controller (12G)  UCSC-RAID-M6SD 1
 
24x 600GB 12G SAS disk (HDD)
 
 UCS-HD600G10K12N 24
Quad 6-disk RAID5  R2XX-RAID5 1
2x10GE Cu LoM NIC  Included  
Dual 4x10GE Cu NIC  UCSC-P-IQ10GC 2
PCIe Riser UCSC-RIS1A-240M6 1
   
Redundant power supplies  UCSC-PSU1-1050W 2
Rack-mounting kit  UCSC-RAIL-M6 1
Blanking panels  UCS-DIMM-BLK 20
 UCSC-FBRS2-C240M6 1
 UCSC-FBRS3-C240M6 1
   
Cables  UCS-SCAP-M6 1
 CBL-SCAPSD-C240M6 1
 CBL-SDSAS-240M6 1
Heat sink for CPU  UCSC-HSHP-240M6 1
Trusted Platform Module  UCSX-TPM-002C 1

  UCS C220 M5SX
Example for Small Collaboration
Spec Product ID Qty
Base System UCS C220 M5SX UCSC-C220-M5SX 1
CPU Single Xeon 4114 (2S/10C/2.20 GHz)  UCS-CPU-4114 2
RAM 48GB RAM (3x16GB)  UCS-MR-X16G1RS-H 3
Storage RAID Controller (12G)  UCSC-RAID-M5 1
 
6x 300GB 10K SAS disk
 
 UCS-HD300G10K12N 6
RAID5  R2XX-RAID5 1
Network + IO 2x10GE LoM NIC  Included  
     
     
Misc. Single / non-redundant Power supply  UCSC-PSU1-770W 1
Rack-mounting kit  UCSC-RAILB-M4 1
Blanking panels (disk slot)  UCSC-BBLKD-S2 4
Blanking panels (power supply)  UCSC-PSU-BLKP1U 1
Cable (storage)  CBL-SC-MR12GM52 1
Cable (storage)  UCSC-SCAP-M5 1
Heat sink for CPU  UCSC-HS-C220M5 1
UCS C240 M5SX
Example for Medium Collaboration
Spec Product ID Qty
UCS C240 M5SX UCSC-C240-M5SX 1
Single Xeon 6132 (1S/14C/2.60 GHz)  UCS-CPU-6132 1
96GB RAM (6x16GB)  UCS-MR-X16G1RS-H 6
RAID Controller (12G)  UCSC-RAID-M5HD 1
Local DAS
14x 300GB SAS in dual 7-disk-RAID5
~1TB usable per volume
 UCS-HD300G10K12N 14
RAID5  R2XX-RAID5 1
2x10GE LoM NIC  Included  
Dual 4x1GbE NIC  UCSC-PCIE-IRJ45 2
PCIe Riser  UCSC-PCI-1B-240M5 1
Redundant Power supplies  UCSC-PSU1-1050W 2
Rack-mounting kit  UCSC-RAILB-M4 1
Blanking panels (disk slot)  UCSC-BBLKD-S2 12
Blanking panel (PCI riser slot)  UCSC-PCIF-240M5 1
Cable (storage)  CBL-SC-MR12GM5P 1
Cable (storage)  UCSC-SCAP-M5 1
Heat sink for CPU  UCSC-HS-C240M5 1
UCS C240 M5SX
Example for Large Collaboration
Spec Product ID Qty
UCS C240 M5SX UCSC-C240-M5SX 1
Dual Xeon 6132 (2S/14C/2.60 GHz)  UCS-CPU-6132 2
192GB RAM (12x16GB)  UCS-MR-X16G1RS-H 12
RAID Controller (12G)  UCSC-RAID-M5HD 1
Local DAS
24x 300GB SAS in quad 6-disk-RAID5
~1TB usable per volume
 UCS-HD300G10K12N 24
RAID5  R2XX-RAID5 1
2x10GE LoM NIC  Included  
Dual 4x1GbE NIC  UCSC-PCIE-IRJ45 2
PCIe Riser  UCSC-PCI-1B-240M5 1
Redundant Power supplies  UCSC-PSU1-1050W 2
Rack-mounting kit  UCSC-RAILB-M4 1
Blanking panels (disk slot)  UCSC-BBLKD-S2 2
Blanking panel (PCI riser slot)  UCSC-PCIF-240M5 1
Cable (storage)  CBL-SC-MR12GM5P 1
Cable (storage)  UCSC-SCAP-M5 1
Heat sink for CPU  UCSC-HS-C240M5 2

  

Refer to End of Sale file for details.


Collaboration Example Details

  

Small Collaboration Example Details

Example customer needs
Size 200 users and 300 to 900 devices
Geographic location of employees
  • Single-site main office.
  • 100% work from home + mobile roaming support.
  • USA-based, so requires compliance with US FCC Kari's Law / Ray Baum's Act.
Functional needs
  • Dial tone, voicemail, enterprise instant messaging & presence for 100% of users.
  • Mobile + remote access and desktop + mobile softclients for 100% of users (to support WFH / roaming).
  • Cisco Collaboration System Release 12.7 features and application versions.
Infrastructure Assumptions
  • Assume new buildout (no compute, storage or virtualization infrastructure exists yet for Collaboration).
  • Assume campus network already exists (Catalyst and ISR based, with PSTN, WAN and Internet connections) and passes VOIP readiness assessment.
  • VMware vSphere ESXi version 7.0 is ok.
  • All hardware/software for collaboration must be redundant.
  • Single cluster for each application (UCM, CUC, IMP, CER, Expressway).

Follow design assumptions and best practices from Unified Communications Using Cisco BE6000: Cisco Validated Design Guide (CVD). This yields the following applications and application sizing:

  • Plan to have 2+ hardware nodes for redundancy.
  • Applications are sized with redundant VMs. Plan to distribute VMs across hardware nodes.
  • DMZ infrastructure not included in example hardware BOMs. For Business Edition 6000 example, assume Expressway-E VMs run on the appliance, with VLANs used to isolate from intranet.

  

Medium Collaboration Example Details

Example customer needs
Size 5000 devices with up to 5000 users
Geographic location of employees
  • Single-site main office.
  • 50% work from home + mobile roaming support.
  • USA-based, so requires compliance with US FCC Kari's Law / Ray Baum's Act.
Functional needs
  • Dial tone, voicemail, enterprise instant messaging & presence for 100% of users.
  • Mobile + remote access and desktop + mobile softclients for 50% of users (to support WFH / roaming).
  • Cisco Collaboration System Release 12.7 features and application versions.
Infrastructure Assumptions
  • Assume new buildout (no compute, storage or virtualization infrastructure exists yet for Collaboration).
  • Assume campus network already exists (Catalyst and ISR based, with PSTN, WAN and Internet connections) and passes VOIP readiness assessment.
  • VMware vSphere ESXi version 7.0 is ok.
  • All hardware/software for collaboration must be redundant.
  • Single cluster for each application (UCM, CUC, IMP, CER, Expressway).

Follow design assumptions and best practices from Sizing in Preferred Architecture for Cisco Collaboration 12.x Enterprise On-Premises Deployments, CVD. This yields the following applications and application sizing:

  • Plan to have at least two hardware nodes for redundancy.
  • Applications are sized with redundant VMs. Plan to distribute VMs across hardware nodes.
  • DMZ infrastructure not included in example hardware BOMs.

If this design needed to handle two-site geographic redundancy, it could do so with additional VMs, a larger infrastructure footprint and imposition of "clustering over WAN" network requirements. This is not covered in these examples, but by following recommendations in the Cisco Collaboration System 12.x Solution Reference Network Designs (SRND) and the Cisco Collaboration Sizing Tool (CST), application sizing could look like this:

  

Large Collaboration Example Details

Example customer needs
Size 20K-30K devices with 10K-20K users
Geographic location of employees
  • Dual datacenters with a HQ site.
  • 50% work from home + mobile roaming support.
  • USA-based, so requires compliance with US FCC Kari's Law / Ray Baum's Act.
  • HQ site has high-profile users who require WAN survivability.
Functional needs
  • Dial tone, voicemail, enterprise instant messaging & presence for 100% of users.
  • Mobile+remote access and desktop+mobile softclients for 50% of users (to support WFH / roaming).
  • Cisco Collaboration System Release 12.7 features and application versions.
Infrastructure Assumptions
  • Assume new buildout (no compute, storage or virtualization infrastructure exists yet for Collaboration).
  • Assume campus network already exists (Catalyst and ISR based, with PSTN, WAN and Internet connections) and passes VOIP readiness assessment.
  • Assume WAN between sites already exists and satisfies applications’ clustering over WAN requirements.
  • VMware vSphere ESXi version 7.0 is ok.
  • All hardware/software for collaboration must be redundant and split across sites.
  • Single application cluster for UCM, IMP, CER, Expressway. Multi-cluster for CUC.

Follow design assumptions and best practices in Preferred Architecture for Cisco Collaboration 12.x Enterprise On-Premises Deployments, CVD and Cisco Collaboration System 12.x Solution Reference Network Designs (SRND). Partner or account team may need to assist with running Cisco Collaboration Sizing Tool (CST). This could yield the following applications and application sizing:


  • Plan to have at least one hardware node per site for redundancy.
  • Applications are sized with redundant VMs. Plan to distribute VMs across sites and across hardware nodes.
  • Minimize server count and use common hardware builds to simplify operations. Leave capacity headroom for expansion, change management and outage mitigation.
  • DMZ infrastructure not included in example hardware BOMs.
  • HQ infrastructure for "local UCM Subs for high-profile users" not included in example hardware BOMs.


Derivation of Cisco Appliance Example Hardware for Collaboration

  

Derivation of Cisco Appliance Example Hardware for Small Collaboration

  • In the QuoteCollab tool, model VM placement and note the tallied hardware requirements for each hardware node (a simplified tally sketch follows this list).
    • Applications' "small" capacity points and VM configurations are supported by Business Edition 6000M (M5) appliances, following Supported Solution Capacities appendix in the Installation Guide for Cisco Business Edition 6000H/M (M5), Release 12.5.
    • Required VM count for software redundancy will fit on a pair of BE6000M (M5) appliances.
    • Two appliances are sufficient to provide hardware redundancy.
    • Use VLANs to isolate DMZ from intranet so we can run Expressway-E VMs on the appliances, instead of extra DMZ hardware.
  • Follow Business Edition 6000 ordering guide for quoting instructions (toplevel product ID BE6M-M5-K9).

  • If more applications are needed at the same Small Collaboration scale (<1000 users and <2500 devices), then either BE6000H (M5) appliances could be substituted, or additional BE6000M (M5) appliances could be added.
  • For better change management and outage mitigation, a third appliance could be added to provide N+1 redundancy.
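
For readers who want to sanity-check the QuoteCollab tally step offline, the Python sketch below is a simplified, hypothetical stand-in: it sums the vCPU, vRAM and vDisk of the VMs placed on one hardware node and compares the totals against that node's specs. The VM and node figures are illustrative placeholders, not published Cisco OVA configurations or appliance specs.

  # Simplified stand-in for the QuoteCollab per-node tally. All VM/node figures below
  # are placeholders for illustration only; use the actual OVA specs when modeling.
  def tally_node(vms, node):
      need = {
          "vcpu": sum(v["vcpu"] for v in vms),
          "ram_gb": sum(v["ram_gb"] for v in vms),
          "disk_gb": sum(v["disk_gb"] for v in vms),
      }
      fits = (need["vcpu"] <= node["cores"]        # Collab apps assume 1 vCPU : 1 physical core
              and need["ram_gb"] <= node["ram_gb"]
              and need["disk_gb"] <= node["disk_gb"])
      return need, fits

  node = {"cores": 10, "ram_gb": 48, "disk_gb": 1500}   # placeholder BE6000M (M5)-class node
  vms = [                                               # placeholder VM configurations
      {"name": "UCM",        "vcpu": 2, "ram_gb": 6, "disk_gb": 110},
      {"name": "IMP",        "vcpu": 2, "ram_gb": 6, "disk_gb": 80},
      {"name": "CUC",        "vcpu": 2, "ram_gb": 6, "disk_gb": 200},
      {"name": "Expressway", "vcpu": 2, "ram_gb": 6, "disk_gb": 132},
  ]
  need, fits = tally_node(vms, node)
  print(need, "fits" if fits else "does NOT fit")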
  

Derivation of Cisco Appliance Example Hardware for Medium Collaboration

  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Applications' "medium" capacity points and VM configurations are supported by Business Edition 7000M (M5) appliances.
    • Required VM count for software redundancy will fit on a pair of BE7000M (M5) appliances.
    • Two appliances are sufficient to provide hardware redundancy.
  • Follow Business Edition 7000 ordering guide for quoting instructions (toplevel product ID BE7M-M5-K9).

  • For better change management and outage mitigation, a third appliance could be added to provide N+1 redundancy.
  

Derivation of Cisco Appliance Example Hardware for Large Collaboration

  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Applications' "large" capacity points ("medium" for Expressway) and VM configurations are supported by Business Edition 7000H (M5) appliances.
    • Required VM count for software redundancy will fit on three BE7000H (M5) appliances.
      • Could have done a mix of BE7000M and BE7000H, but this would not have met the common hardware requirement.
      • Could have considered all BE7000M, but this would not have met the minimal server count requirement.
    • Three appliances at each site are sufficient to provide hardware redundancy and geographic redundancy.
  • Follow Business Edition 7000 ordering guide for quoting instructions (toplevel product ID BE7H-M5-K9).

  Datacenter A

  Datacenter B


Derivation of Cisco Hyperconverged Example Hardware for Collaboration

  

Derivation of Cisco HyperFlex Edge Example Hardware for Small Collaboration

  • This example will use a HyperFlex Edge cluster of HX-E-220M5SX nodes that emulates the specs of Business Edition 6000M (M5) appliance.
    • Lookup Cisco HyperFlex HX-E-220M5SX Edge Node Spec Sheet.
      • HyperFlex Edge will allow a minimum 2-node cluster and Intersight management to avoid requirement for external Fabric Interconnect switches.
      • HyperFlex All-Flash (AF) models provide better storage performance, but hybrid storage is sufficient for the needs of the VMs in this example.
      • HX240 nodes provide higher maximum usable storage, but HX220c maximum is sufficient for the VMs in this example.
    • Each HyperFlex node will require a HyperFlex Data Platform (HXDP) storage controller VM of 8 vCPU, in addition to the application VMs. If we evenly split the application VMs across two cluster nodes, then we'll need at least 18 physical CPU cores (18C) on each node (see the core-count sketch after this list).
    • For simplicity, this example will use a pair of Intel Xeon 4114 CPUs (10C/2.2GHz), since the BE6000M (M5) appliance uses that CPU and it will support all applications' "small" capacity points in this example.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 2S/10C.
      • Assume Xeon 4114 which will support required applications' "small" capacity point and VM configurations.
      • Required VM count for software redundancy will fit on 2 HX Edge nodes.
      • Two HX Edge nodes are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco HyperFlex HX-E-220M5SX Edge Node Spec Sheet to help with hardware SKU selection.
    • Select the toplevel SKU for a new HX Edge cluster (which will contain HX Edge nodes and required software subscriptions) and cluster node model HX-E-220M5SX, quantity 2.
    • On each node...
      • Select CPU = dual Intel Xeon 4114.
      • Minimum required memory is 102GB. Follow HX DIMM population rules for 4x32GB=128GB.
      • For contribution to cluster's shared storage, follow guidelines for Cisco HyperFlex:
        • Select a dedicated high-performance storage controller.
        • For capacity drives, required usable space is 736GB. This example will use a minimum build of 3x SAS 1.2 TB. You can run the HyperFlex Sizer (partner-level access) to double-check whether the HX Edge cluster will accommodate it.
        • For cache, system and boot disks, any option is usually fine.
      • For networking, a 10GE or faster topology must be selected to ensure enough capacity for application vNIC traffic and HXDP storage traffic (from application vDisks). This will force inclusion of the Cisco UCS VIC 1457 (4x 10/25GE), which is more than enough for the typical network load of this VM mix. If we later find we need more links, faster links or a different interconnect type, we can always add additional NICs or Cisco VICs.
      • Redundant power supplies are recommended. Rack mounting / cable management hardware may be selected.
    • A Cisco HyperFlex Data Platform Edge Edition 1 Yr Subscription is required for each node in the cluster.
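
The per-node core math above can be reduced to a one-line formula. The sketch below assumes 1 application vCPU per physical core and an 8-vCPU HXDP controller VM on every node; the 20-vCPU total for the placed application VMs is an assumption chosen to be consistent with the 18C figure in this example.

  # Rough per-node core requirement for an HX Edge cluster hosting Collaboration VMs.
  import math

  HXDP_VCPU = 8   # HXDP storage controller VM runs on every node

  def min_cores_per_node(total_app_vcpus, nodes):
      # 1 application vCPU : 1 physical core, plus the controller VM's cores
      return math.ceil(total_app_vcpus / nodes) + HXDP_VCPU

  print(min_cores_per_node(20, 2))   # -> 18, matching the "at least 18C per node" above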
  

Derivation of Cisco HyperFlex Edge Example Hardware for Medium Collaboration

  • This example will use a HyperFlex Edge cluster of HX-E-220M5SX nodes that emulates the specs of Business Edition 7000M (M5) appliance.
    • Lookup Cisco HyperFlex HX-E-220M5SX Edge Node Spec Sheet.
      • HyperFlex Edge will allow a minimum 2-node cluster and Intersight management to avoid requirement for external Fabric Interconnect switches.
      • HyperFlex All-Flash (AF) models provide better storage performance, but hybrid storage is sufficient for the needs of the VMs in this example.
      • HX240 nodes provide higher maximum usable storage, but HX220c maximum is sufficient for the VMs in this example.
    • Each HyperFlex node will require a HyperFlex Data Platform (HXDP) storage controller VM of 8 vCPU, in addition to the application VMs. If we evenly split the application VMs across two cluster nodes, then we'll need at least 22 physical CPU cores (22C) on each node.
    • For simplicity, this example will use a pair of Intel Xeon 6126 CPUs (12C/2.6GHz), which will support all applications' "medium" capacity points in this example.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 2S/12C.
    • Assume Xeon 6126 which will support required applications' "medium" capacity point and VM configurations.
    • Required VM count for software redundancy will fit on 2 HX Edge nodes.
    • Two HX Edge nodes are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco HyperFlex HX-E-220M5SX Edge Node Spec Sheet to help with hardware SKU selection.
    • Select the toplevel SKU for a new HX Edge cluster (which will contain HX Edge nodes and required software subscriptions) and cluster node model HX-E-220M5SX, quantity 2.
    • On each node...
      • Select CPU = dual Intel Xeon 6126.
      • Minimum required memory is ~122GB. Follow HX DIMM population rules for 8x16GB=128GB.
      • For contribution to cluster's shared storage, follow guidelines for Cisco HyperFlex:
        • Select a dedicated high-performance storage controller.
        • For capacity drives, required usable space is ~1TB. This example will use a minimum build of 3x SAS 1.2 TB. You can run the HyperFlex Sizer (partner-level access) to double-check whether the HX Edge cluster will accommodate it, and add more capacity disks as needed (a rough usable-capacity sketch follows this list).
        • For cache, system and boot disks, any option is usually fine.
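
As a very rough cross-check of the capacity-disk choice above, the sketch below estimates usable space from raw disk capacity, an assumed replication factor of 2 (typical for a 2-node HX Edge cluster) and an assumed flat overhead allowance. It ignores HXDP metadata, deduplication and compression, so treat it only as a sanity check; the HyperFlex Sizer remains the authoritative tool.

  # Back-of-envelope usable capacity estimate for a small HX Edge hybrid cluster.
  def hx_usable_tb(nodes, disks_per_node, disk_tb, replication_factor=2, overhead=0.25):
      raw_tb = nodes * disks_per_node * disk_tb
      # replication_factor and overhead are assumptions, not published HXDP figures
      return raw_tb / replication_factor * (1 - overhead)

  # 2 nodes x 3x 1.2 TB capacity disks each:
  print(round(hx_usable_tb(2, 3, 1.2), 2))   # ~2.7 TB, comfortably above the ~1 TB required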
  

Derivation of Cisco HyperFlex Example Hardware for Large Collaboration

  • This example will use a HyperFlex hybrid storage cluster of HX220C-M5SX nodes that emulates the specs of Business Edition 7000H (M5) appliance.
    • Lookup Cisco HyperFlex HX220c M5 Node (HYBRID) Spec Sheet.
      • HyperFlex will allow a minimum 3-node cluster. Management may be via either Intersight or external Fabric Interconnect switches (not included in this example).
      • HyperFlex All-Flash (AF) could be substituted to provide better storage performance.
      • HX240 nodes (2RU) could be substituted to provide higher maximum usable storage.
    • Each HyperFlex node will require a HyperFlex Data Platform (HXDP) storage controller VM of 8 vCPU, in addition to the application VMs. If we evenly split the application VMs across three cluster nodes, then we'll need at least 29 physical CPU cores (29C) on each node.
    • For simplicity, this example will use a pair of Intel Xeon 6242 CPUs (16C/2.8 GHz) which will support all applications' "large" capacity points ("medium" for Expressway) in this example.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 2S/16C.
    • Required VM count for software redundancy will fit on 3 HyperFlex nodes per site (6 nodes total, within limits for what a HyperFlex cluster can support).
    • Three HyperFlex nodes per site are sufficient to provide hardware redundancy.
    • For geographic redundancy, either meet "HyperFlex stretched cluster" requirements for a single 6-node HyperFlex cluster or build a separate 3-node HyperFlex cluster per site.

  Datacenter A

  Datacenter B

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco HyperFlex HX220c M5 Node (HYBRID) Spec Sheet to help with hardware SKU selection.
    • Select the top-level SKU for a new HX cluster (which will contain HX nodes and required software subscriptions) and cluster node model HX220C-M5SX, quantity 6.
    • On each node...
      • Select CPU = dual Intel Xeon 6242.
      • Minimum required memory is ~114GB, but we’ll align with the BE7000H (M5) and spec at least 192GB. Follow CPU compatibility and HX DIMM population rules; this example’s BOM uses 4x 64GB DIMMs (256GB).
      • For contribution to cluster's shared storage, follow guidelines for Cisco HyperFlex:
        • Select a dedicated high-performance storage controller.
        • For capacity drives, required usable space is ~1.6TB. This example will use a minimum build of 6x SAS 1.2 TB. You can run the HyperFlex Sizer (partner-level access) to double-check whether the HX cluster will accommodate it, then add more capacity disks as needed.
        • For cache, system and boot disks, any option is usually fine.

Derivation of Cisco Blade Server Example Hardware for Collaboration

  

Derivation of Cisco Blade Server Example Hardware for Small Collaboration

  • Plan for a B200 M5 blade that emulates the specs of Business Edition 6000M (M5) appliance.
    • Lookup Cisco UCS B200 M5 Blade Server Spec Sheet.
    • BE6000M (M5) appliance uses Intel Xeon 4114 CPU (10C/2.2GHz). Closest match that is compliant with application CPU Requirements is Intel Xeon 4210 (10C/2.2GHz).
    • Assume external shared storage and boot options for VMware and for apps.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 1S/10C.
    • Assume Xeon 4210 which will support required applications' "small" capacity point and VM configurations.
    • Required VM count for software redundancy will fit on a pair of blade servers.
    • Two blades are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS B200 M5 Blade Server Spec Sheet to help with hardware SKU selection.
    • Select Intel Xeon 4210 CPU.
    • Minimum required memory is 30GB. Follow UCS DIMM population rules for 2x16GB=32GB (ignore options like Memory Mirroring); a simple rounding sketch follows this list.
    • Assume blade will boot ESXi and applications from shared storage, so no local storage or RAID controller is required.
    • Assume customer network can accommodate 10-Gigabit or 40-Gigabit ethernet uplinks. Select a Cisco VIC model that supports the customer's network switch/fabric elements. Example uses VIC 1440 mLOM.
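
The memory rounding step above (30GB required, 2x16GB=32GB selected) can be generalized with a small helper. The candidate DIMM populations below are illustrative only; the server spec sheet's DIMM population rules remain authoritative.

  # Pick the smallest candidate DIMM population that covers the tallied RAM requirement.
  def pick_dimm_population(required_gb, candidates):
      valid = [(count, size) for count, size in candidates if count * size >= required_gb]
      return min(valid, key=lambda cs: cs[0] * cs[1])

  candidates = [(2, 16), (4, 16), (6, 16), (12, 16), (2, 32), (4, 32)]   # illustrative options
  count, size = pick_dimm_population(30, candidates)
  print(f"{count}x{size}GB = {count * size}GB")   # -> 2x16GB = 32GB, as used in this example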
  

Derivation of Cisco Blade Server Example Hardware for Medium Collaboration

  • Plan for a B200 M5 blade that emulates the specs of Business Edition 7000M (M5) appliance.
    • Lookup Cisco UCS B200 M5 Blade Server Spec Sheet.
    • BE7000M (M5) appliance uses Intel Xeon 6132 CPU (14C/2.6GHz).
    • Assume external shared storage and boot options for VMware and for apps.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 1S/14C.
    • Assume Xeon 6132 which will support required applications' "medium" capacity point and VM configurations.
    • Required VM count for software redundancy will fit on a pair of blade servers.
    • Two blades are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS B200 M5 Blade Server Spec Sheet to help with hardware SKU selection.
    • Select Intel Xeon 6132 CPU.
    • Minimum required memory is ~50GB. Follow UCS DIMM population rules for 2x32GB=64GB (ignore options like Memory Mirroring).
    • Assume blade will boot ESXi and applications from shared storage, so no local storage or RAID controller is required.
    • Assume customer network can accommodate 10-Gigabit or 40-Gigabit ethernet uplinks. Select a Cisco VIC model that supports the customer's network switch/fabric elements. Example uses VIC 1440 mLOM.
  

Derivation of Cisco Blade Server Example Hardware for Large Collaboration

  • Plan for a B200 M5 blade that emulates the specs of Business Edition 7000H (M5) appliance.
    • Lookup Cisco UCS B200 M5 Blade Server Spec Sheet.
    • BE7000H (M5) appliance uses dual Intel Xeon 6132 CPU (14C/2.6GHz). For the particular VM mix in this example, dual Intel Xeon 6126 (12C/2.6 GHz) could also have been used, but that would not have met the requirement for capacity headroom for expansion, change management and outage mitigation.
    • Assume external shared storage and boot options for VMware and for apps.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 2S/14C.
    • Assume Xeon 6132 which will support required applications' "large" capacity point (“medium” for Expressway) and VM configurations.
    • Required VM count for software redundancy will fit on three blade servers per site.
    • Three blade servers per site are sufficient to provide hardware redundancy and geographic redundancy (note that each site will require its own blade server chassis).

  Datacenter A

  Datacenter B

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS B200 M5 Blade Server Spec Sheet to help with hardware SKU selection.
    • Select dual Intel Xeon 6132 CPU.
    • Minimum required memory is ~44GB for this example’s VM mix, but during change management or outage mitigation, other VMs may need to run on the server, which will drive up required RAM. So we will align with the BE7000H (M5) and instead spec 192 GB (see the headroom sketch after this list). Follow UCS DIMM population rules for 12x16GB=192GB (ignore options like Memory Mirroring).
    • Assume blade will boot ESXi and applications from shared storage, so no local storage or RAID controller is required.
    • Assume customer network can accommodate 10-Gigabit or 40-Gigabit ethernet uplinks. Select a Cisco VIC model that supports the customer's network switch/fabric elements. Example uses VIC 1440 mLOM.
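
The headroom reasoning above can be illustrated with a short calculation: if a peer blade is down or in maintenance, a surviving blade must carry its own VMs plus the displaced ones. The RAM figures and the ESXi overhead allowance below are assumptions for illustration, not measured values.

  # Illustrative RAM headroom check for change management / outage mitigation.
  def ram_needed_gb(own_vm_ram_gb, displaced_vm_ram_gb, esxi_overhead_gb=8):
      # esxi_overhead_gb is an assumed allowance for the hypervisor itself
      return own_vm_ram_gb + displaced_vm_ram_gb + esxi_overhead_gb

  print(ram_needed_gb(44, 0))    # ~52 GB in steady state
  print(ram_needed_gb(44, 44))   # ~96 GB if a peer blade's VMs are re-homed here
  # Both are well inside the 192 GB spec'd for this example.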

Derivation of Cisco Rack Server Example Hardware for Collaboration

  

Derivation of Cisco Rack Server Example Hardware for Small Collaboration

  • Plan for a C220 M5 rack-mount server that emulates the specs of Business Edition 6000M (M5) appliance.
    • Lookup Cisco UCS C220 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet.
      • While there is also a Large Form Factor spec sheet, those models are optimized for different use cases so will not be used in this example for Collaboration.
      • C220 M5SX chassis will be used for up to 10 hard disk slots, which allows lower-cost HDD to be used (vs. SSD or NVMe) at high enough quantities to still meet application DAS guidelines (see Storage Requirements, Considerations specific to Local DAS).
    • BE6000M (M5) appliance uses Intel Xeon 4114 CPU (10C/2.2GHz), so select that (all applications' small capacity point will support Xeon with base frequency 2.20 GHz).
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 1S/10C.
    • Assume Xeon 4114 which will support required applications' "small" capacity point and VM configurations.
    • Required VM count for software redundancy will fit on a pair of rack servers.
    • Two rack servers are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS C220 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet to help with hardware SKU selection.
    • Select Chassis = UCS C220 M5SX.
    • Select CPU = single Intel Xeon 4114.
    • Minimum required memory is 30GB. Follow UCS DIMM population rules for 2x16GB=32GB (ignore options like Memory Mirroring). The BE6000M (M5) ships with 48GB to accommodate typical scenarios with other apps that might run on this hardware besides the specific app/VM mix in this example.
    • For storage, follow guidelines for local DAS.
      • Plan for single RAID5 volume. All apps and ESXi will boot from this volume.
      • Select a dedicated high-performance RAID controller that supports RAID5.
      • Hard drives = 300GB 10K RPM SAS.
      • For latency/performance, the rule of thumb would be 10 disks (1 HDD per physical CPU core), but BE6000 testing has shown 6 will be enough for typical app/VM mixes at this capacity point.
      • Required usable space is 664GB. You can run a 3rd-party RAID calculator to double-check if 6x300GB in a single RAID5 will accommodate (a quick RAID5 check follows this list).
    • For network, the spec sheet indicates the C220 M5SX includes 2 x 10Gbase-T Intel x550 embedded (on the motherboard) LOM ports. This will be sufficient for the typical network load of this VM mix. If we later find we need more links, faster links or a different interconnect type, we can always add a NIC or Cisco VIC.
    • A redundant power supply may be selected, as well as rack mounting / cable management hardware.
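
As a stand-in for a third-party RAID calculator, the sketch below applies the usual RAID5 approximation (usable space is roughly one disk less than the set) to the 6x300GB volume above. It ignores RAID controller and VMFS formatting overhead, so it is only a coarse check.

  # Coarse RAID5 usable-space check for the single 6x300GB volume in this example.
  def raid5_usable_gb(disks, disk_gb):
      return (disks - 1) * disk_gb   # parity consumes roughly one disk's worth of space

  usable = raid5_usable_gb(6, 300)
  print(usable, usable >= 664)       # -> 1500 True: above the 664 GB required here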
  

Derivation of Cisco Rack Server Example Hardware for Medium Collaboration

  • Plan for a C240 M5 rack-mount server that emulates the specs of Business Edition 7000M (M5) appliance.
    • Lookup Cisco UCS C240 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet.
      • While there is also a Large Form Factor spec sheet, those models are optimized for different use cases so will not be used in this example for Collaboration.
      • C240 M5SX chassis will be used for up to 24 hard disk slots, which allows lower-cost HDD to be used (vs. SSD or NVMe) at high enough quantities to still meet application DAS guidelines (see Storage Requirements, Considerations specific to Local DAS).
    • BE7000M (M5) appliance uses Intel Xeon 6132 CPU (14C/2.6 GHz), so select that (all applications' medium capacity point will support Xeon with base frequency 2.60 GHz).
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 1S/14C.
    • Assume Xeon 6132 which will support required applications' "medium" capacity point and VM configurations.
    • Required VM count for software redundancy will fit on a pair of rack servers.
    • Two rack servers are sufficient to provide hardware redundancy.

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS C240 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet to help with hardware SKU selection.
    • Select Chassis = UCS C240 M5SX.
    • Select CPU = single Intel Xeon 6132.
    • Minimum required memory is ~50GB for this example's VM mix, but the BE7000M (M5) ships with 96 GB to accommodate typical scenarios with other apps that might run on this hardware, so spec 96 GB. Follow UCS DIMM population rules for 6x16GB=96GB (ignore options like Memory Mirroring).
    • For storage, follow guidelines for local DAS.
      • Plan for dual RAID5 volumes. All apps and ESXi will boot from these volumes.
      • Select a dedicated high-performance RAID controller that supports RAID5.
      • Hard drives = 300GB 10K RPM SAS.
      • For latency/performance, the rule of thumb would be 14 disks (1 HDD per physical CPU core).
      • Required usable space is ~954GB. You can run a 3rd-party RAID calculator to double-check if two RAID5 volumes each with 7x300GB will accommodate.
    • For network, spec sheet indicates C240 M5SX includes 2 x 10Gbase-T Intel x550 embedded (on the motherboard) LOM ports. Like the BE7000M (M5), we will also spec dual quad-port 1GbE NICs to provide extra links and accommodate typical needs for NIC teaming and/or VLAN trunking.
    • Redundant power supplies will be used, as well as a rack-mount kit.
  

Derivation of Cisco Rack Server Example Hardware for Large Collaboration

  • Plan for a C240 M5 rack-mount server that emulates the specs of Business Edition 7000H (M5) appliance.
    • Lookup Cisco UCS C240 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet.
      • While there is also a Large Form Factor spec sheet, those models are optimized for different use cases so will not be used in this example for Collaboration.
      • C240 M5SX chassis will be used for up to 24 hard disk slots, which allows lower-cost HDD to be used (vs. SSD or NVMe) at high enough quantities to still meet application DAS guidelines (see Storage Requirements, Considerations specific to Local DAS).
    • BE7000H (M5) appliance uses dual Intel Xeon 6132 CPU (14C/2.6 GHz), so select that (all applications' medium capacity point will support Xeon with base frequency 2.60 GHz). For the particular VM mix in this example, dual Intel Xeon 6126 (12C/2.6 GHz) could also have been used, but that would not have met the requirement for capacity headroom for expansion, change management and outage mitigation.
  • In QuoteCollab tool, model VM placement and note tallied hardware requirements for each hardware node.
    • Use a specs-based server with 2S/14C.
    • Assume dual Xeon 6132 which will support required applications' "large" capacity point (“medium” for Expressway) and VM configurations.
    • Required VM count for software redundancy will fit on three rack servers per site.
    • Three rack servers per site are sufficient to provide hardware redundancy and geographic redundancy.

  Datacenter A

  Datacenter B

  • To translate to hardware BOM, use QuoteCollab's tallied required hardware specs with the Cisco UCS C240 M5 Rack Server (Small Form Factor Disk Drive Model) Spec Sheet to help with hardware SKU selection.
    • Select Chassis = UCS C240 M5SX.
    • Select CPU = dual Intel Xeon 6132.
    • Minimum required memory is ~44GB for this example's VM mix, but during change management or outage mitigation, other VMs may need to run on the server which will drive up required RAM. So we will align with BE7000H (M5) and instead spec 192 GB. Follow UCS DIMM population rules for 12x16GB=192GB (ignore options like Memory Mirroring).
    • For storage, follow guidelines for local DAS.
      • Plan for quad RAID5 volumes. All apps and ESXi will boot from these volumes.
      • Select a dedicated high-performance RAID controller that supports RAID5.
      • Hard drives = 300GB 10K RPM SAS.
      • For latency/performance, the rule of thumb would be 24 disks (ideally 1 HDD per physical CPU core).
      • Required usable space is ~1.5TB (more during change management or outage mitigation). You can run a 3rd-party RAID calculator to double-check if four RAID5 volumes each with 6x300GB will accommodate (see the layout check after this list).
    • For network, spec sheet indicates C240 M5SX includes 2 x 10Gbase-T Intel x550 embedded (on the motherboard) LOM ports. Like the BE7000H (M5), we will also spec dual quad-port 1GbE NICs to provide extra links and accommodate typical needs for NIC teaming and/or VLAN trunking.
    • Redundant power supplies will be used, as well as a rack-mount kit.
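
The layout check below extends the same RAID5 approximation to this example's quad-volume build and also tests the one-spindle-per-physical-core rule of thumb. Values mirror the text; controller and filesystem overheads are ignored, so treat the output as a rough check only.

  # Rough check of the quad 6-disk RAID5 layout against required space and the spindle rule.
  def raid5_usable_gb(disks, disk_gb):
      return (disks - 1) * disk_gb

  volumes, disks_per_volume, disk_gb = 4, 6, 300
  total_usable_gb = volumes * raid5_usable_gb(disks_per_volume, disk_gb)
  physical_cores = 2 * 14                             # dual Xeon 6132
  spindles = volumes * disks_per_volume
  print(total_usable_gb, total_usable_gb >= 1500)     # -> 6000 True: above the ~1.5 TB required
  print(spindles, physical_cores)                     # 24 spindles vs 28 cores: chassis max, just under the ideal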