Juniper Networks
SDN and NFV products
for Service Providers Networks
Evgeny Bugakov
Senior Systems Engineer, JNCIE-SP
21 April 2015
Moscow, Russia
AGENDA
1. Virtualization strategy and goals
2. vMX product overview and performance
3. vMX use cases and deployment models
4. vMX roadmap and licensing
5. NorthStar WAN SDN Controller
Virtualization strategy and goals
Diagram: network roles from enterprise edge to core (branch office, HQ, CPE, cell site router, carrier Ethernet switch, aggregation router/metro core, DC/CO edge router, service edge router, mobile & packet GWs, core), spanning enterprise edge/mobile edge, aggregation/metro/metro core, and service provider edge/core and EPC.
Virtualization targets: vCPE and enterprise router; virtual PE and hardware virtualization; virtual route reflector; MX SDN gateway; hosted in the data center / central office (vPE, vCPE).
Control plane and OS: Virtual JUNOS. Forwarding plane: Virtualized Trio.
MX Virtualization Strategy
Leverage R&D effort and JUNOS feature velocity across all physical & virtualization initiatives
Physical vs. Virtual

Physical                                        | Virtual
High throughput, high density                   | Flexibility to reach higher scale in control plane and service plane
Guarantee of SLA                                | Agile, quick to start
Low power consumption per throughput            | Low power consumption per control plane and service
Scale up                                        | Scale out
Higher entry cost and longer time to deploy     | Lower entry cost and shorter time to deploy
Distributed or centralized model                | Optimal in centralized, cloud-centric deployment
Well-developed network mgmt systems, OSS/BSS    | Same platform mgmt as physical, plus same VM mgmt as any SW on a server in the cloud
Variety of network interfaces for flexibility   | Cloud-centric, Ethernet-only
Excellent price per throughput ratio            | Ability to apply "pay as you grow" model
Each option has its own strengths and is designed with a different focus.
Types of deployment with a virtual platform:
- Traditional function, 1:1 form replacement
- New applications where physical is not feasible or ideal
- A whole new approach to a traditional concept
Examples: cloud CPE, cloud-based VPN, service chaining GW, virtual private cloud GW, multi-function multi-layer integration with routing as a plug-in, SDN GW, route reflector, services appliances, lab & POC, branch router, DC GW, CPE, PE, wireless LAN GW, mobile sec GW, mobile GW.
vMX Product Overview
vMX overview
Efficient separation of control and data-plane
– Data packets are switched within vTRIO
– Multi-threaded SMP implementation allows core elasticity
– Only control packets forwarded to JUNOS
– Feature parity with JUNOS (CLI, interface model, service configuration)
– NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0)
Guest OS (Linux) Guest OS (JUNOS)
Hypervisor
x86 Hardware
CHASSISD
RPD
LC-
Kernel
DCD
SNMP
Virtual TRIO
VFP VCP
Intel DPDK
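Because the VCP runs the same JUNOS and keeps CLI parity, a vMX interface is configured exactly like a physical MX interface; a minimal sketch (the address and description are illustrative, not from the deck):

    interfaces {
        ge-0/0/0 {
            description "backed by the first VFP NIC (VirtIO or SR-IOV)";
            unit 0 {
                family inet {
                    address 192.0.2.1/30;
                }
            }
        }
    }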
Virtual and Physical MX
- Control plane: the same JUNOS on both
- Data plane: Trio microcode cross-compiled to x86 instructions (physical MX: Trio ucode on ASIC/hardware; vMX: Virtual Trio in the VFP)
- Cross-compilation creates high leverage of features between virtual and physical with minimal re-work
Virtualization techniques: deployment with hypervisors
Application
Virtual NICs
Physical NICs
Guest VM#1
Hypervisor: KVM, XEN, VMWare ESXi
Physical layer
VirtIO drivers
Device emulation
Para-virtualization (VirtIO, VMXNET3)
• Guest and Hypervisor work together to make emulation
efficient
• Offers flexibility for multi-tenancy but with lower I/O
performance
• NIC resource is not tied to any one application and can be
shared across multiple applications
• vMotion like functionality possible
PCI pass-through with SR-IOV
• Device drivers exist in user space
• Best for I/O performance but has dependency on NIC type
• Direct I/O path between NIC and user-space application
bypassing hypervisor
• vMotion like functionality not possible
Virtualization techniques: containers deployment
Application 1
Virtual NICs
Physical NICs
Physical layer
Containers (Docker, LXC)
• No hypervisor layer. Much less memory and compute resource
overhead
• No need for PCI-pass through or special NIC emulation
• Offers high I/O performance
• Offers flexibility for multi-tenancy
Virtual TRIO Packet Flow
- VFP VM (vpfe0/vpfe1): physical NICs → DPDK → virtual NICs → VMXT (microkernel running vTRIO); internal interface eth0 (172.16.0.2) on br-int, external interface eth1 (any address) on br-ext
- VCP VM (vre0/vre1): rpd, chasd; internal interface em1 (172.16.0.1) on br-int, management interface fxp0 (any address) on br-ext
- br-int (172.16.0.3): internal bridge between VFP and VCP; br-ext: external management bridge
vMX Performance
vMX Environment

Description          | Value
Sample system config | CPU: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache; NIC: Intel 82599 (for SR-IOV only)
Memory               | Minimum 8 GB (2 GB for vRE, 4 GB for vPFE, 2 GB for host OS)
Storage              | Local or NAS

Sample configuration for number of CPUs

Use case                                        | Requirement
VMX with up to 100 Mbps performance             | Min 4 vCPUs (1 for VCP, 3 for VFP); min 2 cores (1 for VFP, 1 for VCP); min 8 GB memory; VirtIO NIC only
VMX with up to 3 Gbps performance @ 512 bytes   | Min 4 vCPUs (1 for VCP, 3 for VFP); min 4 cores (2 for VFP, 1 for host, 1 for VCP); min 8 GB memory; VirtIO or SR-IOV NIC
VMX with 10 Gbps and beyond (min 2 ports of 10G)| Min 5 vCPUs (1 for VCP, 4 for VFP); min 5 cores (3 for VFP, 1 for host, 1 for VCP); min 8 GB memory; SR-IOV NIC only
vMX Baseline Performance (Gbps)

2 x 10G ports — # of cores for packet processing*
Frame size (bytes):  3     4     6     8     10
256                  2     3.8   7.2   9.3   12.6
512                  3.7   7.3   13.5  18.4  19.8
1500                 10.7  20    20    20    20

4 x 10G ports — # of cores for packet processing*
Frame size (bytes):  3     4     6     8     10
256                  2.1   4.2   6.8   9.6   13.3
512                  4.0   7.9   13.8  18.6  26
1500                 11.3  22.5  39.1  40    40

6 x 10G ports — # of cores for packet processing*
Frame size (bytes):  3     4     6     8     10
256                  2.2   4.0   6.8   9.8
512                  4.1   8.1   14    19.0  27.5
1500                 11.5  22.9  40    53.2  60

8 x 10G ports — 12 cores for packet processing*
Frame size (bytes):  66    128   256   512   1500  IMIX
Throughput (Gbps):   4.8   8.3   14.4  31    78.5  35.3

*Number of cores includes cores for packet processing and associated host functionality. For each 10G port there is a dedicated core not included in this number.
vMX use cases and deployment models
Service Provider VMX use case – virtual PE (vPE)
DC/CO
Gateway
Provider MPLS cloud
CPE
L2 PE
L3 PE
CPE
Peering
Internet
SMB
CPE
Pseudowire
L3VPN
IPSEC/Overlay technology
Branch Office
Branch Office
DC/CO Fabric
vPE
• Scale-out deployment scenarios
• Low bandwidth, high control plane scale customers
• Dedicated PE for new services and faster time-to-market
Market Requirement
• VMX is a virtual extension of a physical MX PE
• Orchestration and management capabilities inherent to any virtualized application apply
VMX Value Proposition
VMX as a DC Gateway – virtual USGW
VM VM VM
ToR (IP)
ToR (L2)
Non Virtualized
environment (L2)
VXLAN
Gateway
(VTEP)
VTEP
VM VM VM
VTEP
Virtualized Server Virtualized Server
VPN Cust A VPN Cust B
VRF A
VRF B
MPLS Cloud
VPN
Gateway
(L3VPN)
VMX
Virtual Network B Virtual Network A
VM VM VM VM VM VM
Data Center/ Central Office
• Service providers need a gateway router to connect the virtual networks to the physical network
• The gateway should be capable of supporting the different DC overlay, DC interconnect and L2 technologies in the DC, such as GRE, VXLAN, VPLS and EVPN
Market Requirement
• VMX supports all the overlay, DCI and L2 technologies available on MX
• Scale-out control plane to scale up VRF instances and the number of VPN routes
VMX Value Proposition
Reflection from physical to virtual world
Proof of concept lab validation or SW certification
• Perfect mirroring between a carrier-grade physical platform and the virtual router
• Can reflect an actual deployment in a virtual environment
• Ideal to support:
  • Proof-of-concept labs
  • New service configuration/operation preparation
  • SW release validation for an actual deployment
  • Training labs for operational teams
  • Troubleshooting environment for a real network issue
• CAPEX and OPEX reduction for the lab
• Quick turnaround when lab network scale is required
Virtual BNG cluster in a data center
BNG cluster
10K~100K subscribers
Data Center or CO
vMX as vBNG
vMX vMX vMX vMX vMX
• The BNG function can potentially be virtualized, and vMX can form a BNG cluster at the DC or CO (roadmap item, not at FRS)
• Suitable for heavy BNG control-plane load where little bandwidth is needed
• Pay-as-you-grow model
• Rapid deployment of a new BNG router when needed
• Scale-out works well due to the S-MPLS architecture, leveraging inter-domain L2VPN, L3VPN and VPLS
vMX Route Reflector feature set
Route Reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O), which is determined by CPU speed.
Memory drives route reflector scaling:
• Larger memory means RRs can hold more RIB routes
• With more memory an RR can control larger network segments, so fewer RRs are required in the network
CPU speed drives BGP performance:
• A faster CPU clock means faster convergence
• Faster RR CPUs allow larger network segments to be controlled by one RR, so fewer RRs are required in the network
The vRR product addresses these pain points by running a JUNOS image as an RR application on faster CPUs and with more memory on standard servers/appliances.
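On the vRR itself, reflection is plain JUNOS BGP configuration; a minimal sketch (addresses and the group name are examples, not from the deck):

    protocols {
        bgp {
            group RR-CLIENTS {
                type internal;
                local-address 192.0.2.1;
                cluster 192.0.2.1;    /* cluster ID makes this speaker a route reflector */
                family inet {
                    unicast;
                }
                neighbor 198.51.100.1;
                neighbor 198.51.100.2;
            }
        }
    }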
VRR Scaling Results
* Convergence numbers also improve with a higher-clock CPU. Tested with a 32G vRR instance.

Address family | # advertising peers | Active routes | Total routes    | Memory util. (all routes received) | Time to receive all routes | # receiving peers | Time to advertise (memory util.)
IPv4           | 600                 | 4.2M          | 42M (10 paths)  | 60%                                | 11 min                     | 600               | 20 min (62%)
IPv4           | 600                 | 2M            | 20M (10 paths)  | 33%                                | 6 min                      | 600               | 6 min (33%)
IPv6           | 600                 | 4M            | 40M (10 paths)  | 68%                                | 26 min                     | 600               | 26 min (68%)
VPNv4          | 600                 | 2M            | 4M (2 paths)    | 13%                                | 3 min                      | 600               | 3 min (13%)
VPNv4          | 600                 | 4.2M          | 8.4M (2 paths)  | 19%                                | 5 min                      | 600               | 23 min (24%)
VPNv4          | 600                 | 6M            | 12M (2 paths)   | 24%                                | 8 min                      | 600               | 36 min (32%)
VPNv6          | 600                 | 6M            | 12M (2 paths)   | 30%                                | 11 min                     | 600               | 11 min (30%)
VPNv6          | 600                 | 4.2M          | 8.4M (2 paths)  | 22%                                | 8 min                      | 600               | 8 min (22%)
Cloud-Based Virtual Route Reflector Design
Solving the best path selection problem for cloud virtual route reflector
VRR 1
Region 1
Regional
Network 2
VRR 2
Region 2
Data Center Cloud
Backbone
GRE, IGP
VRR 1 selects path
based on R1 view
R1
R2
VRR 2 selects path
based on R2 view
 vRR as an “Application” hosted in the DC
 A GRE tunnel is originated from gre.X (the control-plane interface)
 The vRR behaves as if it is locally attached to R1 (requires resolution RIB config)
Client 2
Client 1
Regional
Network 1
Client 3
iBGP
Cloud Overlay w/ Contrail or
VMWare
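The gre.X control-plane tunnel and the resolution RIB mentioned above map to configuration along these lines; a hedged sketch (addresses are examples and exact knobs may differ by release):

    interfaces {
        gre {
            unit 0 {
                tunnel {
                    source 192.0.2.1;        /* vRR address in the DC */
                    destination 203.0.113.1; /* attachment router, e.g. R1 */
                }
                family inet;
            }
        }
    }
    routing-options {
        resolution {
            rib bgp.l3vpn.0 {
                resolution-ribs inet.0;  /* resolve VPN next hops without an MPLS RIB */
            }
        }
    }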
VMX to offer managed CPE/centralized CPE
vMX as vCPE
(IPSec, NAT)
vSRX
(Firewall)
Branch Office
Switch
Provider MPLS
cloud
DC/CO GW
Branch Office
Switch
Provider MPLS
cloud
DC/CO Fabric + Contrail overlay
vMX as
vPE
Branch Office
Switch
L2 PE
L2 PE
PE
Internet
Contrail
Controller
 Service providers want to offer a managed CPE service and centralize the CPE functionality to avoid “truck rolls”
 Large enterprises want a centralized CPE offering to manage all their branch sites
 Both SPs and enterprises want the ability to offer new services without changing the CPE device
Market Requirement
 VMX with service chaining can offer best-of-breed routing and L4-L7 functionality
 Service chaining offers the flexibility to add new services in a scale-out manner
VMX Value Proposition
Cloud Based CPE with vMX
• A simplified CPE
• Remove CPE barriers to service innovation
• Lower complexity & cost

Typical CPE functions: DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3

Simplified L2 CPE: modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3
In-network CPE functions (moved into the network edge): DHCP, firewall, routing/IP forwarding, NAT
 Leverage & integrate with other network services
 Centralize & consolidate
 Seamlessly integrate with mobile & cloud-based services
Direct Connect
 Extend reach & visibility into the home
 Per-device awareness & state
 Simplified user experience
 Simplify the device required on the customer premises
 Centralize key CPE functions & integrate them into the network edge (BNG/PE in the SP network)
More use cases? The limit is our imagination
• The virtual platform is one more tool for the network provider, and the use cases are up to users to define
VPC GW for private,
public and hybrid cloud
Virtual Route Reflector
NFV plug-in for multi-function consolidation
SW certification, lab validation, network
planning & troubleshooting, proof of concept
Distributed NFV Service Complex
Virtual BNG cluster
Virtual Mobile service
control GW
And more…
Cloud based VPN
vGW for service chaining
vMX FRS features
vMX Products family

Trial
• Up to 90-day trial; no limit on capacity; inclusive of all features
• Target: potential customers who want to try out VMX in their lab or qualify VMX
• Availability: early availability by end of Feb 2015

Lab simulation/Education
• No time limit enforced; forwarding plane limited to 50 Mbps; inclusive of all features
• Target: customers who want to simulate a production network in the lab; new customers gaining JUNOS and MX experience
• Availability: early availability by end of Feb 2015

GA product
• Bandwidth-driven licenses; two modes for features: BASE or ADVANCE/PREMIUM
• Target: production deployment of VMX
• Availability: 14.1R6 (June 2015)
VMX FRS product
• Official FRS for VMX Phase-1 is targeted for Q1 2015 with JUNOS release 14.1R6.
• High level overview of FRS product
• DPDK integration. Min 80G throughput per VMX instance.
• OpenStack integration.
• 1:1 mapping between VFP and VCP
• Hypervisor support: KVM, VMWare ESXi, Xen
• High level feature support for FRS
• Full IP capabilities
• MPLS: LDP, RSVP
• MPLS applications: L3VPN, L2VPN, L2Circuit
• IP and MPLS multicast
• Tunneling: GRE, LT
• OAM: BFD
• QoS: Intel DPDK QoS feature-set
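Since the FRS feature list is standard JUNOS, enabling it follows the usual protocol stanzas; an illustrative MPLS/LDP/RSVP sketch (interface names are examples, not from the deck):

    protocols {
        rsvp {
            interface ge-0/0/0.0;
        }
        ldp {
            interface ge-0/0/0.0;
        }
        mpls {
            interface ge-0/0/0.0;
        }
    }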
vMX Roadmap
vMX with vRouter and Orchestration
Contrail
controller
NFV orchestrator
Template
based config
• vMX with vRouter integration
• VirtIO utilized for Para-virtualized drivers
• Contrail OpenStack for
• VM management
• Setting up overlay network
• NFV Orchestrator (OpenStack Heat templates) utilized to easily create and replicate VMX instances
vMX Licensing
vMX Pricing philosophy
Value based pricing
Elastic pricing model
• Price as a platform and not just on cost of bandwidth
• Each VMX instance is a router with its own control-plane,
data-plane and administrative domain
• The value lies in the ability to instantiate routers easily
• Bandwidth based pricing
• Pay as you grow model
Application package functionality mapping

Application package | Functionality                                                                                          | Use cases
BASE                | IP routing with 32K IP routes in FIB; basic L2 (bridging and switching); no VPN capabilities (no L2VPN, VPLS, EVPN or L3VPN) | Low-end CPE or Layer 3 gateway
ADVANCED (-IR)      | Full IP FIB; full L2 capabilities including L2VPN, VPLS, L2Circuit; VXLAN; EVPN; IP multicast          | L2 vPE; full IP vPE; virtual DC GW
PREMIUM (-R)        | BASE plus L3VPN for IP and multicast                                                                   | L3VPN vPE; virtual private cloud GW

Note: Application packages exclude IPSec, BNG and VRR functionality.
Bandwidth License SKUs
• Bandwidth-based licenses are offered for each application package at the following processing capacity limits: 100M, 250M, 500M, 1G, 5G, 10G, 40G.
• For 100M, 250M and 500M there is a combined SKU with all applications included; 1G, 5G, 10G and 40G are each offered in BASE, ADVANCE and PREMIUM tiers.
• Application tiers are additive, i.e. the ADV tier encompasses BASE functionality
VMX software License SKUs
SKU Description
VMX-100M 100M perpetual license. Includes all features in full scale
VMX-250M 250M perpetual license. Includes all features in full scale
VMX-500M 500M perpetual license. Includes all features in full scale
VMX-BASE-1G 1G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-5G 5G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-10G 10G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-BASE-40G 40G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features
VMX-ADV-1G 1G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-5G 5G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-10G 10G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-ADV-40G 40G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances
VMX-PRM-1G 1G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-5G 5G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-10G 10G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
VMX-PRM-40G 40G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
Juniper NorthStar Controller
CHALLENGES WITH CURRENT NETWORKS
How to Make the Best Use of the Installed Infrastructure?
1. How do I use my network resources efficiently?
2. How can I make my network application-aware?
3. How do I get complete & real-time visibility?
PCE ARCHITECTURE
A Standards-based Approach for Carrier SDN
 Path Computation Element (PCE): Computes
the path
 Path computation Client (PCC): Receives the
path and applies it in the network. Paths are
still signaled with RSVP-TE.
 PCE protocol (PCEP): Protocol for PCE/PCC
communication
What is it? A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination.
What are the components? A PCE and one or more PCCs, connected by PCEP sessions.
ACTIVE STATEFUL PCE
A centralized network controller
The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures:
 More recently, there is a need for a more ‘active’ and ‘stateful’ PCE
 NorthStar is an active stateful PCE
 This fits well to the SDN paradigm of a centralized network controller
What makes an active stateful PCE different:
 The PCE is synchronized, in real-time, with the network via standard networking protocols: IGP, PCEP
 The PCE has visibility into the network state: bandwidth availability, LSP attributes
 The PCE can take ‘control’ and create ‘state’ within the MPLS network
 The PCE dictates the order of operations network-wide
 Report LSP state
 Create LSP state
NorthStar
MPLS Network
SOFTWARE-DRIVEN POLICY
Topology Discovery Path Computation State Installation
NORTHSTAR COMPONENTS & WORKFLOW
Topology discovery:
 TE LSP discovery (PCEP)
 TED discovery (BGP-LS, IGP-TE)
 LSDB discovery (OSPF, ISIS)
State installation (PCEP):
 Create/modify TE LSPs
 One session per LER (PCC)
ANALYZE / OPTIMIZE / VIRTUALIZE
Routing, PCEP, application-specific algorithms, RSVP signaling, open APIs
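On the router side, feeding the TE database to NorthStar over BGP-LS uses the traffic-engineering address family; a hedged sketch (addresses and the policy name are examples, not from the deck):

    protocols {
        bgp {
            group northstar {
                type internal;
                local-address 192.0.2.1;
                family traffic-engineering {
                    unicast;           /* BGP-LS: carries TE topology (lsdist0) */
                }
                export TE-EXPORT;      /* policy exporting lsdist0 entries */
                neighbor 192.0.2.10;   /* NorthStar JUNOS VM */
            }
        }
    }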
NORTHSTAR MAJOR COMPONENTS
NorthStar consists of several major components:
 JUNOS Virtual Machine (VM)
 Path Computation Server (PCS)
 Topology Server
 REST Server
Component functional responsibilities:
 The JUNOS VM is used to collect the TE database & LSDB
 A new JUNOS daemon, NTAD, is used to remotely ‘flash’ the lsdist0 table to the PCS
 The PCS has multiple functions:
 Peers with each PCC using PCEP for LSP state collection & modification
 Runs application-specific algorithms for computing LSP paths
 The REST server is the interface into the APIs
PCE
JUNOS VM
NTAD
RPD
PCS
REST_Server
KVM Hypervisor
Centos 6.5
MPLS Network
PCC
BGP-LS/IGP PCEP
Topo_Server
Standard, custom, & 3rd-party applications consume three REST APIs:
 Topology API (topology discovery via IGP-TE / BGP-LS)
 Path computation API (application-specific algorithms)
 Path provisioning API (path installation via PCEP)
NorthStar pre-packaged applications: bandwidth calendaring, path diversity, premium path, auto-bandwidth / TE++, etc.
NORTHSTAR NORTHBOUND API
Integration with 3rd Party Tools and Custom Applications
NORTHSTAR 1.0 HIGH AVAILABILITY (HA)
Active / Standby for delegated LSPs
NorthStar 1.0 supports a high-availability model only for delegated LSPs:
 Controllers are not actively synced with each other
Active / standby PCE model with up to 16 backup controllers:
 PCE group: all PCEs belonging to the same group
LSPs are delegated to the primary PCE:
 The primary PCE is the controller with the highest delegation priority
 Other controllers cannot make changes to the LSPs
 If a PCC loses connection to its primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE
 All PCCs must use the same primary PCE
[configuration protocols pcep]
pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}
jnc1 jnc2
PCC
PCEPPCEP
JUNOS PCE CLIENT IMPLEMENTATION
New JUNOS daemon, pccd
Enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs
 PCCD is the relay/message translator between the PCE & RPD
 LSP parameters, such as the path & bandwidth, and LSP creation instructions received from the PCE are communicated to RPD via PCCD
 RPD then signals the LSP using RSVP-TE
PCE
PCEP
PCCD
PCEP
RPD
MPLS Network
PCEP
JUNOS
IPC
RSVP-TE
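Handing control of an LSP to the PCE via PCCD is enabled with the external-controller knob, globally or per LSP; a sketch (LSP name and address are examples, not from the deck):

    protocols {
        mpls {
            lsp-external-controller pccd;       /* allow PCE-initiated (ephemeral) LSPs */
            label-switched-path to-PE2 {
                to 192.0.2.2;
                lsp-external-controller pccd;   /* delegate this LSP to the PCE */
            }
        }
    }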
NorthStar: topology discovery, LSP control/modification
NorthStar Simulation: MPLS capacity planning, exhaustive failure analysis
IP/MPLSview: ‘full’ offline network planning, FCAPS (PM, CM, FM)
REAL-TIME NETWORK
FUNCTIONS
 Dynamic Topology updates via
BGP-LS / IGP-TE
 Dynamic LSP state updates via
PCEP
 Real-time modification of LSP
attributes via PCEP (ERO, B/W,
pre-emption, …)
MPLS LSP PLANNING &
DESIGN
 Topology acquisition via
NorthStar REST API (snapshot)
 LSP provisioning via REST API
 Exhaustive failure analysis &
capacity planning for MPLS LSPs
 MPLS LSP design (P2MP, FRR,
JUNOS config’let, …)
OFFLINE NETWORK PLANNING
& MANAGEMENT
 Topology acquisition &
equipment discovery via CLI,
SNMP, NorthStar REST API
 Exhaustive failure analysis &
capacity planning (IP & MPLS)
 Inventory, provisioning, &
performance management
NORTHSTAR SIMULATION MODE
NorthStar vs. IP/MPLSview
DIVERSE PATH COMPUTATION
Automated Computation of end-to-end diverse paths
Network-wide visibility allows NorthStar to support end-to-end LSP path diversity:
 Wholly disjoint path computations; options for link, node and SRLG diversity
 Pairs of diverse LSPs with the same end-points or with different end-points
 SRLG information learned dynamically from the IGP
 Supported for PCE-created LSPs (at time of provisioning) and delegated LSPs (through manual creation of a diversity group)
Warning!
Shared Risk Shared Risk
Eliminated
Primary Link
Secondary Link
CE
CE
CE
CE
NorthStar
PCE CREATED SYMMETRIC LSPS
Local association of LSP symmetry constraint
Symmetric
LSPs
NorthStar
NorthStar supports creating symmetric LSPs:
 Does not leverage GMPLS extensions for co-routed or associated bidirectional LSPs
 Unidirectional LSPs (identical names) are created from nodeA to nodeZ & from nodeZ to nodeA
 The symmetry constraint is maintained locally on NorthStar (attribute: pair=<value>)
Symmetric LSP
creation
MAINTENANCE-MODE RE-ROUTING
Automated Path Re-computation, Re-signaling and Restoration
Automate re-routing of traffic before a scheduled maintenance window:
 Simplifies planning and preparation before and during a maintenance window
 Eliminates the risk that traffic is mistakenly affected when a node / link goes into maintenance mode
 Reduces the need for spare capacity through optimum use of the resources available during the maintenance window
 After the maintenance window finishes, paths are automatically restored to the (new) optimum path
1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available
2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break)
3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path
NorthStar
GLOBAL CONCURRENT OPTIMIZATION
Optimized LSP placement
NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters:
 CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement
 Net Groom: triggered on demand; the user chooses which LSPs to optimize; LSP priority is not taken into account; no preemption
 Path Optimization: triggered on demand or at scheduled intervals (with the optimization timer); global re-optimization across all LSPs; LSP priority is taken into account; preemption may happen
High priority LSP / Low priority LSP
Global re-optimization (NorthStar)
Bandwidth bottleneck! / CSPF failure / New path request
INTER-DOMAIN TRAFFIC-ENGINEERING
Optimal Path Computation & LSP Placement
LSP delegation, creation and optimization of inter-domain LSPs:
 Single active PCE across domains; BGP-LS for topology acquisition
 JUNOS inter-AS requirements & constraints:
http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-for-lsps.html
Inter-AS Traffic-Engineering
NorthStar
NorthStar
Inter-Area Traffic-Engineering
AS 100
AS 200
Area 1
Area 2
Area 3Area 0
NORTHSTAR SIMULATION MODE
Offline Network Planning & Modeling
NorthStar builds a near real-time network model for visualization and offline planning through dynamic topology / LSP acquisition:
 Export of topology and LSP state to NorthStar simulation mode for ‘offline’ MPLS network modeling
 Add/delete links/nodes/LSPs for future network planning
 Exhaustive failure analysis; P2MP LSP design/planning; LSP design/planning; FRR design/planning
 JUNOS LSP config’let generation
NorthStar-Simulation
A REAL CUSTOMER EXAMPLE – PCE VALUE
Chart: centralized vs. distributed path computation — link utilization (%) per link across ~170 links, distributed CSPF vs. PCE centralized CSPF.
Distributed CSPF assumptions:
 TE-LSP operational routes are used for distributed CSPF
 RSVP-TE max reservable BW set to 92%
 Modeling was performed with the exact operational LSP paths
Centralized path calculation assumptions:
 All TE-LSPs converted to EROs via the PCE design action
 Objective function: minimize maximum link utilization
 Only primary EROs & online bypass LSPs
 Modeling was performed with 100% of TE-LSPs computed by the PCE
Result: up to 15% reduction in RSVP reserved B/W
NORTHSTAR 1.0
FRS delivery
NorthStar FRS is targeted for March-23rd:
 (Beta) trials / evaluations already ongoing
 First customer wins in place
Target JUNOS releases:
 14.2R3 Special *
 14.2R4* / 15.1R1* / 15.2R1*
Supported platforms at FRS:
 PTX (3K, 5K),
 MX (80, 104, 240/480/960, 2010/2020, vMX)
 Additional platform support in NorthStar 2.0
* Pending TRD Process
NorthStar packaging & platform:
 Bare metal application only
 No VM support at FRS
 Runs on any 64-bit x86 machine supported by Red Hat 6 or CentOS 6
 Single hybrid ISO for installation
 Based on Juniper SCL 6.5R3.0
Recommended minimum hardware
requirements:
 64-bit dual x86 processor or dual 1.8GHz Intel
Xeon E5 family equivalent
 32 GB RAM
 1TB storage
 2 x 1G/10G network interface
Questions?
How to get more?
• Join us on our Facebook page: Juniper.CIS.SE (Juniper techpubs ru)
Thank You!
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...
Erez Cohen & Aviram Bar Haim, Mellanox - Enhancing Your OpenStack Cloud With ...
 
Nexus 7000 Series Innovations: M3 Module, DCI, Scale
Nexus 7000 Series Innovations: M3 Module, DCI, ScaleNexus 7000 Series Innovations: M3 Module, DCI, Scale
Nexus 7000 Series Innovations: M3 Module, DCI, Scale
 
Cloudstack conference open_contrail v4
Cloudstack conference open_contrail v4Cloudstack conference open_contrail v4
Cloudstack conference open_contrail v4
 
Operationalizing EVPN in the Data Center: Part 2
Operationalizing EVPN in the Data Center: Part 2Operationalizing EVPN in the Data Center: Part 2
Operationalizing EVPN in the Data Center: Part 2
 
Install FD.IO VPP On Intel(r) Architecture & Test with Trex*
Install FD.IO VPP On Intel(r) Architecture & Test with Trex*Install FD.IO VPP On Intel(r) Architecture & Test with Trex*
Install FD.IO VPP On Intel(r) Architecture & Test with Trex*
 
Mellanox Approach to NFV & SDN
Mellanox Approach to NFV & SDNMellanox Approach to NFV & SDN
Mellanox Approach to NFV & SDN
 

Similar to Решения NFV в контексте операторов связи

Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualizationSDN Hub
 
VMworld 2013: Extreme Performance Series: Network Speed Ahead
VMworld 2013: Extreme Performance Series: Network Speed Ahead VMworld 2013: Extreme Performance Series: Network Speed Ahead
VMworld 2013: Extreme Performance Series: Network Speed Ahead VMworld
 
6WINDGate™ - Enabling Cloud RAN Virtualization
6WINDGate™ - Enabling Cloud RAN Virtualization6WINDGate™ - Enabling Cloud RAN Virtualization
6WINDGate™ - Enabling Cloud RAN Virtualization6WIND
 
Summit 16: How to Compose a New OPNFV Solution Stack?
Summit 16: How to Compose a New OPNFV Solution Stack?Summit 16: How to Compose a New OPNFV Solution Stack?
Summit 16: How to Compose a New OPNFV Solution Stack?OPNFV
 
Sharing High-Performance Interconnects Across Multiple Virtual Machines
Sharing High-Performance Interconnects Across Multiple Virtual MachinesSharing High-Performance Interconnects Across Multiple Virtual Machines
Sharing High-Performance Interconnects Across Multiple Virtual Machinesinside-BigData.com
 
DPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim MortsolfDPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim MortsolfJim St. Leger
 
VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld
 
#IBMEdge: "Not all Networks are Equal"
#IBMEdge: "Not all Networks are Equal" #IBMEdge: "Not all Networks are Equal"
#IBMEdge: "Not all Networks are Equal" Brocade
 
SD-WAN Catalyst a brief Presentation of solution
SD-WAN Catalyst a brief  Presentation of solutionSD-WAN Catalyst a brief  Presentation of solution
SD-WAN Catalyst a brief Presentation of solutionpepegaston2030
 
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX VMworld
 
LEGaTO Heterogeneous Hardware
LEGaTO Heterogeneous HardwareLEGaTO Heterogeneous Hardware
LEGaTO Heterogeneous HardwareLEGATO project
 
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...SkillFactory
 
Openstack v4 0
Openstack v4 0Openstack v4 0
Openstack v4 0sprdd
 
NFV Linaro Connect Keynote
NFV Linaro Connect KeynoteNFV Linaro Connect Keynote
NFV Linaro Connect KeynoteLinaro
 
VMware vSphere 4.1 deep dive - part 2
VMware vSphere 4.1 deep dive - part 2VMware vSphere 4.1 deep dive - part 2
VMware vSphere 4.1 deep dive - part 2Louis Göhl
 
Development, test, and characterization of MEC platforms with Teranium and Dr...
Development, test, and characterization of MEC platforms with Teranium and Dr...Development, test, and characterization of MEC platforms with Teranium and Dr...
Development, test, and characterization of MEC platforms with Teranium and Dr...Michelle Holley
 
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioFast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioOPNFV
 
 Network Innovations Driving Business Transformation
 Network Innovations Driving Business Transformation Network Innovations Driving Business Transformation
 Network Innovations Driving Business TransformationCisco Service Provider
 
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and moreAdvanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and moreinside-BigData.com
 
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'OpenStack Korea Community
 

Similar to Решения NFV в контексте операторов связи (20)

Understanding network and service virtualization
Understanding network and service virtualizationUnderstanding network and service virtualization
Understanding network and service virtualization
 
VMworld 2013: Extreme Performance Series: Network Speed Ahead
VMworld 2013: Extreme Performance Series: Network Speed Ahead VMworld 2013: Extreme Performance Series: Network Speed Ahead
VMworld 2013: Extreme Performance Series: Network Speed Ahead
 
6WINDGate™ - Enabling Cloud RAN Virtualization
6WINDGate™ - Enabling Cloud RAN Virtualization6WINDGate™ - Enabling Cloud RAN Virtualization
6WINDGate™ - Enabling Cloud RAN Virtualization
 
Summit 16: How to Compose a New OPNFV Solution Stack?
Summit 16: How to Compose a New OPNFV Solution Stack?Summit 16: How to Compose a New OPNFV Solution Stack?
Summit 16: How to Compose a New OPNFV Solution Stack?
 
Sharing High-Performance Interconnects Across Multiple Virtual Machines
Sharing High-Performance Interconnects Across Multiple Virtual MachinesSharing High-Performance Interconnects Across Multiple Virtual Machines
Sharing High-Performance Interconnects Across Multiple Virtual Machines
 
DPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim MortsolfDPDK Summit 2015 - RIFT.io - Tim Mortsolf
DPDK Summit 2015 - RIFT.io - Tim Mortsolf
 
VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture VMworld 2013: Advanced VMware NSX Architecture
VMworld 2013: Advanced VMware NSX Architecture
 
#IBMEdge: "Not all Networks are Equal"
#IBMEdge: "Not all Networks are Equal" #IBMEdge: "Not all Networks are Equal"
#IBMEdge: "Not all Networks are Equal"
 
SD-WAN Catalyst a brief Presentation of solution
SD-WAN Catalyst a brief  Presentation of solutionSD-WAN Catalyst a brief  Presentation of solution
SD-WAN Catalyst a brief Presentation of solution
 
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
VMworld 2013: Bringing Network Virtualization to VMware Environments with NSX
 
LEGaTO Heterogeneous Hardware
LEGaTO Heterogeneous HardwareLEGaTO Heterogeneous Hardware
LEGaTO Heterogeneous Hardware
 
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
Конференция Brocade. 4. Развитие технологии Brocade VCS, новое поколение комм...
 
Openstack v4 0
Openstack v4 0Openstack v4 0
Openstack v4 0
 
NFV Linaro Connect Keynote
NFV Linaro Connect KeynoteNFV Linaro Connect Keynote
NFV Linaro Connect Keynote
 
VMware vSphere 4.1 deep dive - part 2
VMware vSphere 4.1 deep dive - part 2VMware vSphere 4.1 deep dive - part 2
VMware vSphere 4.1 deep dive - part 2
 
Development, test, and characterization of MEC platforms with Teranium and Dr...
Development, test, and characterization of MEC platforms with Teranium and Dr...Development, test, and characterization of MEC platforms with Teranium and Dr...
Development, test, and characterization of MEC platforms with Teranium and Dr...
 
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.ioFast datastacks - fast and flexible nfv solution stacks leveraging fd.io
Fast datastacks - fast and flexible nfv solution stacks leveraging fd.io
 
 Network Innovations Driving Business Transformation
 Network Innovations Driving Business Transformation Network Innovations Driving Business Transformation
 Network Innovations Driving Business Transformation
 
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and moreAdvanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more
Advanced Networking: The Critical Path for HPC, Cloud, Machine Learning and more
 
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
[OpenStack Day in Korea 2015] Track 2-3 - 오픈스택 클라우드에 최적화된 네트워크 가상화 '누아지(Nuage)'
 

More from TERMILAB. Интернет - лаборатория

Профессиональные сервисы для Центров Обработки Данных
Профессиональные сервисы для Центров Обработки Данных Профессиональные сервисы для Центров Обработки Данных
Профессиональные сервисы для Центров Обработки Данных TERMILAB. Интернет - лаборатория
 
Жизненный цикл сети и сервисные предложения Juniper Networks
Жизненный цикл сети и сервисные предложения Juniper NetworksЖизненный цикл сети и сервисные предложения Juniper Networks
Жизненный цикл сети и сервисные предложения Juniper NetworksTERMILAB. Интернет - лаборатория
 
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...TERMILAB. Интернет - лаборатория
 

More from TERMILAB. Интернет - лаборатория (14)

УПРАВЛЕНИЕ ПРОЕКТАМИ – от задумки до внедрения
УПРАВЛЕНИЕ ПРОЕКТАМИ – от задумки до внедренияУПРАВЛЕНИЕ ПРОЕКТАМИ – от задумки до внедрения
УПРАВЛЕНИЕ ПРОЕКТАМИ – от задумки до внедрения
 
Новые коммутаторы QFX10000. Технология JunOS Fusion
Новые коммутаторы QFX10000. Технология JunOS FusionНовые коммутаторы QFX10000. Технология JunOS Fusion
Новые коммутаторы QFX10000. Технология JunOS Fusion
 
Стратегия Juniper в контексте Web 2.0
Стратегия Juniper в контексте Web 2.0Стратегия Juniper в контексте Web 2.0
Стратегия Juniper в контексте Web 2.0
 
Профессиональные сервисы для Центров Обработки Данных
Профессиональные сервисы для Центров Обработки Данных Профессиональные сервисы для Центров Обработки Данных
Профессиональные сервисы для Центров Обработки Данных
 
Professional Services в действии. Истории успеха
Professional Services в действии. Истории успеха Professional Services в действии. Истории успеха
Professional Services в действии. Истории успеха
 
Обзор продукта Juniper Secure Analytics
Обзор продукта Juniper Secure AnalyticsОбзор продукта Juniper Secure Analytics
Обзор продукта Juniper Secure Analytics
 
Управление сервисами дата-центра
Управление сервисами дата-центраУправление сервисами дата-центра
Управление сервисами дата-центра
 
VMware NSX и интеграция с продуктами Juniper
VMware NSX и интеграция с продуктами JuniperVMware NSX и интеграция с продуктами Juniper
VMware NSX и интеграция с продуктами Juniper
 
Решения Mobile Backhaul и Mobile Backhaul Security
Решения Mobile Backhaul и Mobile Backhaul SecurityРешения Mobile Backhaul и Mobile Backhaul Security
Решения Mobile Backhaul и Mobile Backhaul Security
 
VMware NSX и интеграция с продуктами Juniper
VMware NSX и интеграция с  продуктами JuniperVMware NSX и интеграция с  продуктами Juniper
VMware NSX и интеграция с продуктами Juniper
 
Архитектура Метафабрика. Универсальный шлюз SDN.
Архитектура Метафабрика. Универсальный шлюз SDN.Архитектура Метафабрика. Универсальный шлюз SDN.
Архитектура Метафабрика. Универсальный шлюз SDN.
 
Технологии ЦОД. Virtual Chassis Fabric
Технологии ЦОД. Virtual Chassis FabricТехнологии ЦОД. Virtual Chassis Fabric
Технологии ЦОД. Virtual Chassis Fabric
 
Жизненный цикл сети и сервисные предложения Juniper Networks
Жизненный цикл сети и сервисные предложения Juniper NetworksЖизненный цикл сети и сервисные предложения Juniper Networks
Жизненный цикл сети и сервисные предложения Juniper Networks
 
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...
Обновление продуктовой линейки Juniper Networks. Маршрутизация. Коммутация. Б...
 

Recently uploaded

Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfchloefrazer622
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104misteraugie
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room servicediscovermytutordmt
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Disha Kariya
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpinRaunakKeshri1
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsTechSoup
 

Recently uploaded (20)

Disha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdfDisha NEET Physics Guide for classes 11 and 12.pdf
Disha NEET Physics Guide for classes 11 and 12.pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..Sports & Fitness Value Added Course FY..
Sports & Fitness Value Added Course FY..
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Student login on Anyboli platform.helpin
Student login on Anyboli platform.helpinStudent login on Anyboli platform.helpin
Student login on Anyboli platform.helpin
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 

NFV Solutions in the Context of Telecom Operators (Решения NFV в контексте операторов связи)

  • 1. Juniper Networks SDN and NFV products for Service Providers Networks Evgeny Bugakov Senior Systems Engineer, JNCIE-SP 21 April 2015 Moscow, Russia
  • 2. AGENDA: 1. Virtualization strategy and goals; 2. vMX product overview and performance; 3. vMX use cases and deployment models; 4. vMX roadmap and licensing; 5. NorthStar WAN SDN Controller
  • 4. MX Virtualization Strategy (diagram). Placements: vCPE and enterprise router at the enterprise/mobile edge (branch office, HQ, carrier Ethernet switch, cell site router); virtual PE and hardware virtualization in aggregation/metro/metro core; virtual route reflector and MX SDN gateway at the service provider edge/core and EPC, hosted in the data center/central office. Control plane and OS: virtual JUNOS; forwarding plane: virtualized Trio. Key message: leverage R&D effort and JUNOS feature velocity across all physical and virtualization initiatives.
  • 5. Physical vs. Virtual. Each option has its own strengths and is created with a different focus:
     - Physical: high throughput, high density / Virtual: flexibility to reach higher scale in the control and service planes
     - Physical: guaranteed SLA / Virtual: agile, quick to start
     - Physical: low power consumption per unit of throughput / Virtual: low power consumption per control-plane or service instance
     - Physical: scale up / Virtual: scale out
     - Physical: higher entry cost and longer time to deploy / Virtual: lower entry cost and shorter time to deploy
     - Physical: distributed or centralized model / Virtual: optimal in centralized, cloud-centric deployments
     - Physical: well-developed network management systems, OSS/BSS / Virtual: same platform management as physical, plus the same VM management as any software on a server in the cloud
     - Physical: variety of network interfaces for flexibility / Virtual: cloud-centric, Ethernet-only
     - Physical: excellent price-per-throughput ratio / Virtual: ability to apply a "pay as you grow" model
  • 6. Types of deployment with a virtual platform: (1) traditional function, 1:1 form replacement; (2) new applications where physical is not feasible or ideal; (3) a whole new approach to a traditional concept. Examples: Cloud CPE, cloud-based VPN, service-chaining GW, virtual private cloud GW, multi-function/multi-layer integration with routing as a plug-in, SDN GW, route reflector, services appliances, lab & POC, branch router, DC GW, CPE, PE, wireless LAN GW, mobile security GW, mobile GW.
  • 8. vMX overview. Efficient separation of control plane and data plane: data packets are switched within vTrio; a multi-threaded SMP implementation allows core elasticity; only control packets are forwarded to JUNOS; feature parity with JUNOS (CLI, interface model, service configuration); NIC interfaces (eth0) are mapped to JUNOS interfaces (ge-0/0/0). Diagram: the VCP runs guest OS JUNOS (CHASSISD, RPD, DCD, SNMP, LC kernel) and the VFP runs guest OS Linux (virtual Trio, Intel DPDK), both on a hypervisor over x86 hardware.
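The slide notes that guest NIC interfaces (eth0) map onto JUNOS interface names (ge-0/0/0). A minimal sketch of that positional naming convention — the function name and the fpc/pic defaults are illustrative assumptions, not Juniper tooling:

```python
def junos_ifname(eth_index, fpc=0, pic=0):
    """Map a guest NIC index to the JUNOS ge-<fpc>/<pic>/<port> naming
    convention noted on the slide (eth0 -> ge-0/0/0). Illustrative only."""
    return f"ge-{fpc}/{pic}/{eth_index}"

# eth0 on the first (and only) virtual FPC/PIC:
print(junos_ifname(0))  # -> ge-0/0/0
```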
  • 9. Virtual and physical MX. The control plane is common to both; in the data plane, the same Trio microcode runs on ASIC hardware in the physical PFE and is cross-compiled to x86 instructions for the VFP. Cross-compilation creates high leverage of features between virtual and physical with minimal re-work.
  • 10. Virtualization techniques: deployment with hypervisors (KVM, Xen, VMware ESXi).
     - Para-virtualization (virtio, VMXNET3): guest and hypervisor work together to make device emulation efficient; offers flexibility for multi-tenancy but lower I/O performance; the NIC resource is not tied to any one application and can be shared across multiple applications; vMotion-like functionality is possible.
     - PCI pass-through with SR-IOV: device drivers exist in user space; best I/O performance but has a dependency on NIC type; a direct I/O path between the NIC and the user-space application bypasses the hypervisor; vMotion-like functionality is not possible.
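The trade-off on this slide is essentially two-dimensional: SR-IOV buys line-rate I/O at the cost of live migration and NIC sharing, virtio keeps the deployment flexible. A toy decision helper capturing that trade-off (the function and its inputs are illustrative, not part of any vMX tooling):

```python
def pick_io_model(need_live_migration, need_line_rate):
    """Choose a guest I/O model per the trade-offs above (sketch).
    SR-IOV bypasses the hypervisor for best I/O performance but precludes
    vMotion-style migration; virtio keeps the NIC shareable and mobile."""
    if need_line_rate and not need_live_migration:
        return "sr-iov"
    return "virtio"

# A high-throughput vPE that never migrates:
print(pick_io_model(need_live_migration=False, need_line_rate=True))  # -> sr-iov
```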
  • 11. Virtualization techniques: container deployment (Docker, LXC). No hypervisor layer, so much less memory and compute resource overhead; no need for PCI pass-through or special NIC emulation; offers high I/O performance and flexibility for multi-tenancy.
  • 12. Virtual Trio packet flow (diagram). The VFP (vpfe0/vpfe1, running the VMXT microkernel and vTrio) attaches to the physical NICs through DPDK-accelerated virtual NICs; the VCP (vre0/vre1, running rpd and chassisd) connects to the VFP over an internal bridge (br-int, 172.16.0.x, vpfe0 eth0 172.16.0.2, vre0 em1 172.16.0.1) and to management over an external bridge (br-ext, fxp0/eth1).
  • 14. vMX environment. Sample system configuration: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache; NIC: Intel 82599 (for SR-IOV only). Memory: minimum 8 GB (2 GB for the vRE, 4 GB for the vPFE, 2 GB for the host OS). Storage: local or NAS. Sample CPU configuration per use case:
     - vMX with up to 100 Mbps performance: min 4 vCPUs (1 for VCP, 3 for VFP); min 2 cores (1 for VFP, 1 for VCP); min 8 GB memory; virtio NIC only.
     - vMX with up to 3 Gbps performance @ 512-byte frames: min 4 vCPUs (1 for VCP, 3 for VFP); min 4 cores (2 for VFP, 1 for host, 1 for VCP); min 8 GB memory; virtio or SR-IOV NIC.
     - vMX with 10 Gbps and beyond (assuming min 2 ports of 10G): min 5 vCPUs (1 for VCP, 4 for VFP); min 5 cores (3 for VFP, 1 for host, 1 for VCP); min 8 GB memory; SR-IOV NIC only.
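The sizing guidance on this slide is a simple three-tier lookup on target throughput. A minimal sketch encoding it as a function — the name `vmx_sizing` and the dict keys are illustrative, and the thresholds are taken directly from the slide:

```python
def vmx_sizing(target_gbps):
    """Return minimum host resources for a vMX instance, per the
    per-use-case sizing table above (illustrative helper)."""
    if target_gbps <= 0.1:   # up to 100 Mbps
        return {"vcpus": 4, "cores": 2, "mem_gb": 8, "nic": "virtio"}
    if target_gbps <= 3:     # up to 3 Gbps @ 512-byte frames
        return {"vcpus": 4, "cores": 4, "mem_gb": 8, "nic": "virtio or SR-IOV"}
    # 10 Gbps and beyond (min 2 x 10G ports)
    return {"vcpus": 5, "cores": 5, "mem_gb": 8, "nic": "SR-IOV"}

print(vmx_sizing(10)["nic"])  # -> SR-IOV
```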
  • 15. vMX baseline performance (Gbps). The core count includes cores for packet processing and associated host functionality; each 10G port additionally has a dedicated core not included in this number.
     2 x 10G ports (cores: 3 / 4 / 6 / 8 / 10):
       256 B: 2 / 3.8 / 7.2 / 9.3 / 12.6; 512 B: 3.7 / 7.3 / 13.5 / 18.4 / 19.8; 1500 B: 10.7 / 20 / 20 / 20 / 20
     4 x 10G ports (cores: 3 / 4 / 6 / 8 / 10):
       256 B: 2.1 / 4.2 / 6.8 / 9.6 / 13.3; 512 B: 4.0 / 7.9 / 13.8 / 18.6 / 26; 1500 B: 11.3 / 22.5 / 39.1 / 40 / 40
     6 x 10G ports (cores: 3 / 4 / 6 / 8 / 10):
       256 B: 2.2 / 4.0 / 6.8 / 9.8 (fifth value missing in source); 512 B: 4.1 / 8.1 / 14 / 19.0 / 27.5; 1500 B: 11.5 / 22.9 / 40 / 53.2 / 60
     8 x 10G ports (cores: 3 / 4 / 6 / 8 / 12; one value reported per frame size):
       66 B: 4.8; 128 B: 8.3; 256 B: 14.4; 512 B: 31; 1500 B: 78.5; IMIX: 35.3
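A practical use of the table above is answering "how many packet-processing cores do I need for a given rate?". A sketch that transcribes the 2 x 10G rows into a dict and scans for the smallest sufficient core count (the structure and names are illustrative; the numbers are the measured values from the slide):

```python
# Measured vMX throughput (Gbps), 2 x 10G port configuration,
# transcribed from the table above: {frame_size_bytes: {cores: gbps}}.
PERF_2X10G = {
    256:  {3: 2.0,  4: 3.8,  6: 7.2,  8: 9.3,  10: 12.6},
    512:  {3: 3.7,  4: 7.3,  6: 13.5, 8: 18.4, 10: 19.8},
    1500: {3: 10.7, 4: 20.0, 6: 20.0, 8: 20.0, 10: 20.0},
}

def min_cores_for(frame_size, gbps_needed):
    """Smallest measured packet-processing core count that meets the
    target rate at the given frame size, or None if out of range."""
    for cores, gbps in sorted(PERF_2X10G[frame_size].items()):
        if gbps >= gbps_needed:
            return cores
    return None  # target exceeds the measured capacity of this config

print(min_cores_for(512, 10))  # -> 6
```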
  • 16. vMX use cases and deployment models
  • 17. Service provider vMX use case: virtual PE (vPE). Diagram: branch-office and SMB CPE, L2/L3 PE, and Internet peering connect over the provider MPLS cloud via pseudowire, L3VPN, and IPsec/overlay technologies to a vPE in the DC/CO fabric behind the DC/CO gateway. Market requirement: scale-out deployment scenarios; low-bandwidth, high control-plane-scale customers; a dedicated PE for new services and faster time to market. vMX value proposition: vMX is a virtual extension of a physical MX PE, and the orchestration and management capabilities inherent to any virtualized application apply.
  • 18. vMX as a DC gateway: virtual universal SDN gateway (USGW). Diagram: virtualized servers with VTEPs, ToRs (IP and L2), and a non-virtualized L2 environment connect through the vMX, which acts as VXLAN gateway (VTEP) and VPN gateway (L3VPN, per-customer VRFs A/B) toward the MPLS cloud, from the data center/central office. Market requirement: service providers need a gateway router to connect the virtual networks to the physical network; the gateway should support the different DC overlay, DC interconnect, and L2 technologies in the DC, such as GRE, VXLAN, VPLS, and EVPN. vMX value proposition: vMX supports all the overlay, DCI, and L2 technologies available on MX, with a scale-out control plane to scale up VRF instances and the number of VPN routes.
  • 19. Reflection from the physical to the virtual world: proof-of-concept lab validation or software certification. The perfect mirroring between the carrier-grade physical platform and the virtual router can reproduce an actual deployment in a virtual environment. Ideal for: proof-of-concept labs; preparing new service configuration and operations; software release validation for an actual deployment; training labs for operations teams; a troubleshooting environment for real network issues. Benefits: CAPEX and OPEX reduction for the lab, and quick turnaround when lab network scale is required.
  • 20. Virtual BNG cluster in a data center. BNG cluster of 10K-100K subscribers in a data center or CO, with vMX as vBNG. • The BNG function can potentially be virtualized, and vMX can help form a BNG cluster at the DC or CO (roadmap item, not at FRS) • Suitable for heavy BNG control-plane load where little bandwidth is needed • Pay-as-you-grow model • Rapid deployment of a new BNG router when needed • Scale-out works well with an S-MPLS architecture, leveraging inter-domain L2VPN, L3VPN and VPLS.
  • 21. vMX Route Reflector feature set. Route reflectors are characterized by RIB scale (available memory) and BGP performance (policy computation, route resolution, network I/O; determined by CPU speed). Memory drives route reflector scaling: • Larger memory means that RRs can hold more RIB routes • With more memory an RR can control larger network segments, so fewer RRs are required in the network. CPU speed drives faster BGP performance: • A faster CPU clock means faster convergence • Faster RR CPUs allow larger network segments to be controlled by one RR, so fewer RRs are required in the network. The vRR product addresses these pain points by running a Junos image as an RR application on faster CPUs with more memory on standard servers/appliances.
  • 22. VRR Scaling Results (tested with a 32G vRR instance; convergence numbers improve further with a higher-clock CPU)
    Address family | Advertising peers | Active routes | Total routes | Mem. util. (all routes received) | Time to receive all routes | Receiving peers | Time to advertise (mem. util.)
    IPv4  | 600 | 4.2M | 42M (10 paths)  | 60% | 11 min | 600 | 20 min (62%)
    IPv4  | 600 | 2M   | 20M (10 paths)  | 33% | 6 min  | 600 | 6 min (33%)
    IPv6  | 600 | 4M   | 40M (10 paths)  | 68% | 26 min | 600 | 26 min (68%)
    VPNv4 | 600 | 2M   | 4M (2 paths)    | 13% | 3 min  | 600 | 3 min (13%)
    VPNv4 | 600 | 4.2M | 8.4M (2 paths)  | 19% | 5 min  | 600 | 23 min (24%)
    VPNv4 | 600 | 6M   | 12M (2 paths)   | 24% | 8 min  | 600 | 36 min (32%)
    VPNv6 | 600 | 6M   | 12M (2 paths)   | 30% | 11 min | 600 | 11 min (30%)
    VPNv6 | 600 | 4.2M | 8.4M (2 paths)  | 22% | 8 min  | 600 | 8 min (22%)
  • 23. CLOUD-BASED VIRTUAL ROUTE REFLECTOR DESIGN: solving the best-path selection problem for a cloud virtual route reflector. • vRR as an "application" hosted in the DC • A GRE tunnel is originated from gre.X (control-plane interface) • The vRR behaves as if it is locally attached to R1 (requires resolution RIB configuration). (Diagram: VRR 1 for Region 1 and VRR 2 for Region 2 hosted in a data-center cloud over the backbone (GRE, IGP), with iBGP to clients 1-3 in the regional networks; cloud overlay with Contrail or VMware. The vRR selects paths based on each region's view, e.g. the R1 view or the R2 view.)
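The per-region "view" idea above can be sketched in a few lines. This is a minimal Python illustration of centralized best-path selection using each client region's IGP metrics (the concept behind BGP optimal route reflection), not Junos or vRR code; all router names, prefixes and metrics are invented.

```python
# Sketch: per-region best-path selection on a centralized route reflector.
# A plain RR picks one best path using its own IGP view; with per-region
# IGP views, a cloud-hosted vRR can pick the path each client region
# would have chosen itself. Illustrative only, not a vendor API.

def best_path(paths, igp_cost):
    """Pick the path whose BGP next hop is closest in the given IGP view."""
    return min(paths, key=lambda p: igp_cost[p["nexthop"]])

# Two candidate paths for the same prefix, via different exit routers.
paths = [
    {"nexthop": "R1", "prefix": "203.0.113.0/24"},
    {"nexthop": "R2", "prefix": "203.0.113.0/24"},
]

# IGP metric from each region's vantage point to the candidate next hops.
igp_view = {
    "region1": {"R1": 10, "R2": 100},   # region 1 is closer to R1
    "region2": {"R1": 100, "R2": 10},   # region 2 is closer to R2
}

for region, costs in sorted(igp_view.items()):
    print(region, "->", best_path(paths, costs)["nexthop"])
```

With a single shared view, both regions would receive the same exit; with per-region views, each region is steered to its nearest exit router.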
  • 24. VMX to offer managed CPE/centralized CPE. vMX as vCPE (IPSec, NAT), vSRX (firewall); branch-office switches connect over the provider MPLS cloud to the DC/CO GW and the DC/CO fabric + Contrail overlay, with vMX as vPE, L2 PEs, the Internet and the Contrail Controller. Market Requirement: • Service providers want to offer a managed CPE service and centralize the CPE functionality to avoid "truck rolls" • Large enterprises want a centralized CPE offering to manage all their branch sites • Both SPs and enterprises want the ability to offer new services without changing the CPE device. VMX Value Proposition: • VMX with service chaining can offer best-of-breed routing and L4-L7 functionality • Service chaining offers the flexibility to add new services in a scale-out manner.
  • 25. Cloud-Based CPE with vMX • A simplified CPE • Removes CPE barriers to service innovation • Lower complexity & cost. Typical CPE functions: DHCP, firewall, routing/IP forwarding, NAT, modem/ONT, switch, access point, voice, MoCA/HPAV/HPNA3. Simplified L2 CPE: keep only basic connectivity on the premises and move the remaining CPE functions into the network. In-network CPE functions: • Leverage & integrate with other network services • Centralize & consolidate • Integrate seamlessly with mobile & cloud-based services. Direct connect: • Extends reach & visibility into the home • Per-device awareness & state • Simplified user experience. • Simplify the device required on the customer premises • Centralize key CPE functions & integrate them into the network edge (BNG/PE in the SP network).
  • 26. More use cases? The limit is our imagination • Virtual platform is one more tool for network provider, and the use cases are up to users to define VPC GW for private, public and hybrid cloud Virtual Route Reflector NFV plug-in for multi- function consolidation SW certification, lab validation, network planning & troubleshooting, proof of concept Distributed NFV Service Complex Virtual BNG cluster Virtual Mobile service control GW And more… Cloud based VPN vGW for service chaining
  • 28. vMX product family. Trial: • Up to 90-day trial • No limit on capacity • Inclusive of all features • For potential customers who want to try out VMX in their lab or qualify VMX • Early availability by end of Feb 2015. Lab simulation/Education: • No time limit enforced • Forwarding plane limited to 50Mbps • Inclusive of all features • For customers who want to simulate a production network in the lab, or new customers gaining JUNOS and MX experience • Early availability by end of Feb 2015. GA product: • Bandwidth-driven licenses • Two modes for features: BASE or ADVANCE/PREMIUM • Production deployment for VMX • 14.1R6 (June 2015).
  • 29. VMX FRS product • VMX Phase 1 FRS is targeted for Q1 2015 with JUNOS release 14.1R6. • High-level overview of the FRS product: • DPDK integration; minimum 80G throughput per VMX instance • OpenStack integration • 1:1 mapping between VFP and VCP • Hypervisor support: KVM, VMware ESXi, Xen. • High-level feature support at FRS: • Full IP capabilities • MPLS: LDP, RSVP • MPLS applications: L3VPN, L2VPN, L2Circuit • IP and MPLS multicast • Tunneling: GRE, LT • OAM: BFD • QoS: Intel DPDK QoS feature set.
  • 31. vMX with vRouter and Orchestration • vMX with vRouter integration • VirtIO used for para-virtualized drivers • Contrail OpenStack for VM management and for setting up the overlay network • An NFV orchestrator (OpenStack Heat templates) is used to easily create and replicate VMX instances via template-based configuration.
  • 33. vMX Pricing philosophy Value based pricing Elastic pricing model • Price as a platform and not just on cost of bandwidth • Each VMX instance is a router with its own control-plane, data-plane and administrative domain • The value lies in the ability to instantiate routers easily • Bandwidth based pricing • Pay as you grow model
  • 34. Application package functionality mapping Application package Functionality Use cases BASE • IP routing with 32K IP routes in FIB • Basic L2 functionality: L2 Bridging and switching • No VPN capabilities: No L2VPN, VPLS, EVPN and L3VPN • Low end CPE or Layer3 Gateway ADVANCED (-IR) • Full IP FIB • Full L2 capabilities includes L2VPN, VPLS, L2Circuit • VXLAN • EVPN • IP Multicast • L2vPE • Full IP vPE • Virtual DC GW PREMIUM (-R) • BASE • L3VPN for IP and Multicast • L3VPN vPE • Virtual Private Cloud GW Note: Application packages exclude IPSec, BNG and VRR functionality.
  • 35. Bandwidth License SKUs • Bandwidth-based licenses for each application package at the following processing-capacity limits: 100M, 250M, 500M, 1G, 5G, 10G, 40G. Note: for 100M, 250M and 500M there is a combined SKU with all applications included; at 1G, 5G, 10G and 40G there are separate BASE, ADV and PRM SKUs. • Application tiers are additive, i.e. the ADV tier encompasses BASE functionality.
  • 36. VMX software License SKUs SKU Description VMX-100M 100M perpetual license. Includes all features in full scale VMX-250M 250M perpetual license. Includes all features in full scale VMX-500M 500M perpetual license. Includes all features in full scale VMX-BASE-1G 1G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features VMX-BASE-5G 5G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features VMX-BASE-10G 10G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features VMX-BASE-40G 40G perpetual license. Includes limited IP FIB and basic L2 functionality. No VPN features VMX-ADV-1G 1G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances VMX-ADV-5G 5G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances VMX-ADV-10G 10G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances VMX-ADV-40G 40G perpetual license. Includes full scale L2/L2.5, L3 features. Includes EVPN and VXLAN. Only 16 L3VPN instances VMX-PRM-1G 1G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features. VMX-PRM-5G 5G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features. VMX-PRM-10G 10G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features. VMX-PRM-40G 40G perpetual license. Includes all features in BASE (L2/L2.5, L3, EVPN, VXLAN) and full scale L3VPN features.
  • 38. CHALLENGES WITH CURRENT NETWORKS: How to Make the Best Use of the Installed Infrastructure? 1. How do I use my network resources efficiently? 2. How can I make my network application aware? 3. How do I get complete & real-time visibility?
  • 39. PCE ARCHITECTURE: A Standards-based Approach for Carrier SDN. What is it? A Path Computation Element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination. What are the components? • Path Computation Element (PCE): computes the path • Path Computation Client (PCC): receives the path and applies it in the network; paths are still signaled with RSVP-TE • PCE Protocol (PCEP): the protocol for PCE/PCC communication.
  • 40. ACTIVE STATEFUL PCE: A centralized network controller. The original PCE drafts (of the mid-2000s) were mainly focused on passive, stateless PCE architectures. More recently there is a need for a more active and stateful PCE; NorthStar is an active stateful PCE, which fits the SDN paradigm of a centralized network controller. What makes an active stateful PCE different: • The PCE is synchronized, in real time, with the network via standard networking protocols (IGP, PCEP) • The PCE has visibility into network state: bandwidth availability, LSP attributes • The PCE can take control and create state within the MPLS network • The PCE dictates the order of operations network-wide. (Diagram: NorthStar both reports and creates LSP state in the MPLS network.)
  • 41. NORTHSTAR COMPONENTS & WORKFLOW: software-driven policy. Topology Discovery (IGP-TE, BGP-LS): • TED discovery (BGP-LS, IGP) • LSDB discovery (OSPF, ISIS) • TE LSP discovery (PCEP). Path Computation: application-specific algorithms (analyze, optimize, virtualize), exposed via open APIs. State Installation (PCEP): • Create/Modify TE LSP • One session per LER (PCC) • RSVP signaling in the network.
  • 42. NORTHSTAR MAJOR COMPONENTS NorthStar consists of several major components: • JUNOS Virtual Machine (VM) • Path Computation Server (PCS) • Topology Server • REST Server. Component functional responsibilities: • The JUNOS VM is used to collect the TE database & LSDB • A new JUNOS daemon, NTAD, is used to remotely 'flash' the lsdist0 table to the PCS • The PCS has multiple functions: it peers with each PCC using PCEP for LSP state collection & modification, and runs application-specific algorithms for computing LSP paths • The REST server is the interface into the APIs. (Diagram: the JUNOS VM (NTAD, RPD), PCS, Topology Server and REST server run on a KVM hypervisor on CentOS 6.5, connected to PCCs in the MPLS network via BGP-LS/IGP and PCEP.)
  • 43. Standard, custom, & 3rd party Applications Topology Discovery Path Computation Path Installation Topology API Path computation API Path provisioning API PCEP PCEPApplication specific algorithmsIGP-TE / BGP-LS REST REST REST NorthStar pre-packaged applications Bandwidth Calendaring, Path Diversity, Premium path, auto-bandwidth / TE++, etc… NORTHSTAR NORTHBOUND API Integration with 3rd Party Tools and Custom Applications
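To make the northbound-API idea concrete, here is a hedged Python sketch of building an LSP-provisioning request body of the kind such a path-provisioning REST API would accept. The field names and the endpoint path mentioned in the comments are illustrative assumptions, not the documented NorthStar schema.

```python
import json

# Hypothetical LSP-provisioning payload for a northbound REST API.
# Field names below are illustrative assumptions, NOT the documented
# NorthStar API schema; consult the product API reference for the
# real endpoint and body format.

def build_lsp_request(name, src, dst, bandwidth_bps, setup_prio=7, hold_prio=7):
    """Assemble a JSON-serializable request body for creating one TE LSP."""
    return {
        "name": name,
        "from": {"address": src},
        "to": {"address": dst},
        "plannedProperties": {
            "bandwidth": bandwidth_bps,
            "setupPriority": setup_prio,
            "holdingPriority": hold_prio,
        },
    }

body = build_lsp_request("pe1-to-pe2", "10.0.0.1", "10.0.0.2", 100_000_000)
print(json.dumps(body, indent=2))
# A client would then POST this body to the controller's LSP-provisioning
# endpoint and poll for the resulting operational state.
```

The same pattern (build a declarative intent object, POST it, poll state) applies to the pre-packaged applications listed above, such as bandwidth calendaring or diverse-path requests.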
  • 44. NORTHSTAR 1.0 HIGH AVAILABILITY (HA): Active/standby for delegated LSPs. NorthStar 1.0 supports a high-availability model only for delegated LSPs: • Controllers are not actively synced with each other. Active/standby PCE model with up to 16 backup controllers: • PCE group: all PCEs belonging to the same group. LSPs are delegated to the primary PCE: • The primary PCE is the controller with the highest delegation priority • Other controllers cannot make changes to the LSPs • If a PCC loses the connection with its primary PCE, it immediately uses the PCE with the next-highest delegation priority as its new primary PCE • All PCCs must use the same primary PCE.
[configuration protocols pcep]
pce-group pce {
    pce-type active stateful;
    lsp-provisioning;
    delegation-cleanup-timeout 600;
}
pce jnc1 {
    pce-group pce;
    delegation-priority 100;
}
pce jnc2 {
    pce-group pce;
    delegation-priority 50;
}
(Diagram: PCC holding PCEP sessions to both jnc1 and jnc2.)
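The failover rule described above can be expressed in a few lines. This is a minimal Python sketch of the selection logic only (highest reachable delegation priority wins), not the actual Junos pccd implementation.

```python
# Sketch of the active/standby PCE selection rule: a PCC delegates to the
# reachable PCE with the highest delegation-priority; if that session is
# lost, it falls back to the next-highest. Illustrative only.

def primary_pce(pces, reachable):
    """pces: {name: delegation_priority}; reachable: set of PCE names."""
    candidates = {n: p for n, p in pces.items() if n in reachable}
    if not candidates:
        return None          # no PCE reachable: LSPs stay on last-known paths
    return max(candidates, key=candidates.get)

pces = {"jnc1": 100, "jnc2": 50}
print(primary_pce(pces, {"jnc1", "jnc2"}))  # jnc1 (priority 100 wins)
print(primary_pce(pces, {"jnc2"}))          # failover to jnc2
```

This also shows why all PCCs must be configured consistently: with identical priority tables and the same reachability, every PCC independently selects the same primary PCE.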
  • 45. JUNOS PCE CLIENT IMPLEMENTATION A new JUNOS daemon, pccd, enables a PCE application to set parameters for traditionally configured TE LSPs and to create ephemeral LSPs. • PCCD is the relay/message translator between the PCE & RPD • LSP parameters, such as the path & bandwidth, and LSP creation instructions are received from the PCE and communicated to RPD via PCCD • RPD then signals the LSP using RSVP-TE. (Diagram: PCE to PCCD via PCEP, PCCD to RPD via JUNOS IPC, RPD into the MPLS network via RSVP-TE.)
  • 46. NORTHSTAR SIMULATION MODE: NorthStar vs. IP/MPLSview. REAL-TIME NETWORK FUNCTIONS (NorthStar: topology discovery, LSP control/modification): • Dynamic topology updates via BGP-LS / IGP-TE • Dynamic LSP state updates via PCEP • Real-time modification of LSP attributes via PCEP (ERO, B/W, pre-emption, ...). MPLS LSP PLANNING & DESIGN (NorthStar Simulation: MPLS capacity planning, exhaustive failure analysis): • Topology acquisition via the NorthStar REST API (snapshot) • LSP provisioning via the REST API • Exhaustive failure analysis & capacity planning for MPLS LSPs • MPLS LSP design (P2MP, FRR, JUNOS config'let, ...). OFFLINE NETWORK PLANNING & MANAGEMENT (IP/MPLSview: 'full' offline network planning, FCAPS (PM, CM, FM)): • Topology acquisition & equipment discovery via CLI, SNMP, NorthStar REST API • Exhaustive failure analysis & capacity planning (IP & MPLS) • Inventory, provisioning, & performance management.
  • 47. DIVERSE PATH COMPUTATION: Automated computation of end-to-end diverse paths. Network-wide visibility allows NorthStar to support end-to-end LSP path diversity: • Wholly disjoint path computations, with options for link, node and SRLG diversity • Pairs of diverse LSPs with the same end-points or with different end-points • SRLG information learned dynamically from the IGP • Supported for PCE-created LSPs (at time of provisioning) and delegated LSPs (through manual creation of a diversity group). (Diagram: a primary/secondary link pair sharing risk is flagged, then the shared risk is eliminated.)
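To illustrate the "wholly disjoint" idea, here is a naive two-pass sketch in Python: find a shortest path, remove its links, and search again. This is deliberately simplistic; production diverse-path engines use stronger algorithms (e.g. Suurballe/Bhandari) that find disjoint pairs this greedy approach can miss, and they additionally handle node and SRLG diversity.

```python
from collections import deque

# Naive link-disjoint path sketch: BFS shortest path, prune its links,
# BFS again. Illustrative only; not NorthStar's algorithm.

def shortest_path(links, src, dst):
    """BFS over an undirected link list; returns a node list or None."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, seen, q = {}, {src}, deque([src])
    while q:
        n = q.popleft()
        if n == dst:
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                prev[m] = n
                q.append(m)
    return None

def link_disjoint_pair(links, src, dst):
    """Return (primary, secondary) sharing no links, or (path, None)."""
    p1 = shortest_path(links, src, dst)
    if p1 is None:
        return None, None
    used = {frozenset(e) for e in zip(p1, p1[1:])}
    rest = [l for l in links if frozenset(l) not in used]
    return p1, shortest_path(rest, src, dst)

# Square topology: two fully link-disjoint A->D paths exist.
links = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
p1, p2 = link_disjoint_pair(links, "A", "D")
print(p1, p2)
```

Extending the pruning step to remove all links sharing an SRLG with the primary path would give the SRLG-diverse variant described above.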
  • 48. PCE CREATED SYMMETRIC LSPS Local association of LSP symmetry constraint Symmetric LSPs NorthStar NorthStar supports creating symmetric LSPs:  Does not leverage GMPLS extensions for co-routed or associated bi-directional LSPs  Unidirectional LSPs (identical names) are created from nodeA to nodeZ & nodeZ to nodeA  Symmetry constraint is maintained locally on NorthStar (attribute: pair =<value>) Symmetric LSP creation
  • 49. MAINTENANCE-MODE RE-ROUTING: Automated path re-computation, re-signaling and restoration. Automates re-routing of traffic before a scheduled maintenance window: • Simplifies planning and preparation before and during a maintenance window • Eliminates the risk that traffic is mistakenly affected when a node/link goes into maintenance mode • Reduces the need for spare capacity through optimum use of the resources available during the maintenance window • After the maintenance window has finished, paths are automatically restored to the (new) optimum path. Workflow: 1. Maintenance mode tagged: LSP paths are re-computed assuming the affected resources are not available. 2. In maintenance mode: LSP paths are automatically re-signaled (make-before-break). 3. Maintenance mode removed: all LSP paths are restored to their (new) optimal path.
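The three-step workflow above can be sketched as "recompute with the tagged resources excluded". This is a toy Python illustration of the idea only; the tiny DFS path finder stands in for a real CSPF, and the topology is invented.

```python
# Sketch of maintenance-mode re-routing: compute paths on the full
# topology, recompute with the tagged node excluded for the window,
# then restore afterwards. Illustrative pseudologic, not NorthStar code.

def compute(links, src, dst, excluded=frozenset()):
    """Tiny DFS path finder that skips excluded nodes (stand-in for CSPF)."""
    def dfs(node, seen):
        if node == dst:
            return [node]
        for a, b in links:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in seen and nxt not in excluded:
                    tail = dfs(nxt, seen | {nxt})
                    if tail:
                        return [node] + tail
        return None
    return None if src in excluded else dfs(src, {src})

links = [("A", "B"), ("B", "Z"), ("A", "C"), ("C", "Z")]
normal = compute(links, "A", "Z")                   # steady-state path via B
during = compute(links, "A", "Z", excluded={"B"})   # B tagged: re-routed via C
print(normal, during)
```

After the window, re-running the unconstrained computation restores traffic to the (possibly new) optimal path, matching step 3 above.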
  • 50. GLOBAL CONCURRENT OPTIMIZATION: Optimized LSP placement. NorthStar enhances traffic engineering through LSP placement based on network-wide visibility of the topology and LSP parameters: • CSPF ordering can be user-defined, i.e. the operator can select which parameters, such as LSP priority and LSP bandwidth, influence the order of placement. • Net Groom: triggered on demand; the user chooses the LSPs to be optimized; LSP priority is not taken into account; no pre-emption. • Path Optimization: triggered on demand or at scheduled intervals (with an optimization timer); global re-optimization across all LSPs; LSP priority is taken into account; preemption may happen. (Diagram: a new path request hits a bandwidth bottleneck and CSPF failure; global re-optimization re-places the high- and low-priority LSPs.)
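Underlying both placement modes is constrained SPF: prune links that cannot carry the demand, then run shortest-path on what remains. Here is a minimal Python sketch of that pruning step (an assumption-level illustration, not NorthStar's algorithm); placing LSPs in different orders against such a function is what makes the user-defined CSPF ordering matter.

```python
import heapq

# Minimal CSPF sketch: drop links with insufficient available bandwidth,
# then run Dijkstra on the remainder. Illustrative only.

def cspf(links, src, dst, demand):
    """links: {(a, b): (metric, available_bw)}, treated as undirected.
    Returns the total metric of the best feasible path, or None."""
    adj = {}
    for (a, b), (metric, bw) in links.items():
        if bw >= demand:                     # the bandwidth-pruning step
            adj.setdefault(a, []).append((b, metric))
            adj.setdefault(b, []).append((a, metric))
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, n = heapq.heappop(heap)
        if n == dst:
            return d
        if d > dist.get(n, float("inf")):
            continue                         # stale heap entry
        for m, w in adj.get(n, ()):
            if d + w < dist.get(m, float("inf")):
                dist[m] = d + w
                heapq.heappush(heap, (d + w, m))
    return None                              # CSPF failure: no feasible path

links = {("A", "B"): (10, 1000), ("B", "Z"): (10, 400), ("A", "Z"): (50, 1000)}
print(cspf(links, "A", "Z", 100))   # 20: the cheap A-B-Z path has room
print(cspf(links, "A", "Z", 500))   # 50: B-Z pruned, falls back to A-Z
print(cspf(links, "A", "Z", 5000))  # None: no link can fit the demand
```

A global optimizer repeats this over all LSPs at once, subtracting each placed demand from link capacity, which is why the placement order (priority, bandwidth) changes which demands fit.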
  • 51. INTER-DOMAIN TRAFFIC-ENGINEERING: Optimal path computation & LSP placement. LSP delegation, creation and optimization of inter-domain LSPs: • A single active PCE across domains, with BGP-LS for topology acquisition • JUNOS inter-AS requirements & constraints: http://www.juniper.net/techpubs/en_US/junos13.3/topics/usage-guidelines/mpls-enabling-inter-as-traffic-engineering-for-lsps.html (Diagrams: inter-AS traffic engineering across AS 100 and AS 200; inter-area traffic engineering across Area 0, Area 1, Area 2 and Area 3.)
  • 52. NORTHSTAR SIMULATION MODE Offline Network Planning & Modeling NorthStar builds a near real-time network model for visualization and off-line planning through dynamic topology / LSP acquisition:  Export of topology and LSP state to NorthStar simulation mode for ‘off-line’ MPLS network modeling  Add/delete links/nodes/LSPs for future network planning  Exhaustive failure analysis, P2MP LSP design/planning, LSP design/planning, FRR design/planning  JUNOS LSP config’let generation NorthStar-Simulation Year 1 Year 3 Year 5 ExtensionYear 1
  • 53. A REAL CUSTOMER EXAMPLE – PCE VALUE: Centralized vs. distributed path computation. (Chart: per-link utilization (%) across roughly 172 links, comparing distributed CSPF against PCE centralized CSPF.) Distributed CSPF assumptions: • TE-LSP operational routes are used for distributed CSPF • RSVP-TE maximum reservable bandwidth set to 92% • Modeling was performed with the exact operational LSP paths. Centralized path calculation assumptions: • All TE-LSPs converted to EROs via a PCE design action • Objective function is to minimize the maximum link utilization • Only primary EROs & online bypass LSPs • Modeling was performed with 100% of TE LSPs computed by the PCE. Result: up to 15% reduction in RSVP reserved B/W.
  • 54. NORTHSTAR 1.0 FRS delivery NorthStar FRS is targeted for March 23rd: • (Beta) trials/evaluations already ongoing • First customer wins in place. Target JUNOS releases: • 14.2R3 Special* • 14.2R4* / 15.1R1* / 15.2R1* (*pending TRD process). Supported platforms at FRS: • PTX (3K, 5K) • MX (80, 104, 240/480/960, 2010/2020, vMX) • Additional platform support in NorthStar 2.0. NorthStar packaging & platform: • Bare-metal application only; no VM support at FRS • Runs on any x86 64-bit machine supported by Red Hat 6 or CentOS 6 • Single hybrid ISO for installation • Based on Juniper SCL 6.5R3.0. Recommended minimum hardware requirements: • 64-bit dual x86 processor or dual 1.8GHz Intel Xeon E5 family equivalent • 32 GB RAM • 1TB storage • 2 x 1G/10G network interfaces.
  • 56. How to get more? • Join us at Facebook page: Juniper.CIS.SE (Juniper techpubs ru)