Modified: 2019-09-05
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States
and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective
owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
https://support.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at https://www.juniper.net/books.
Documentation Conventions
Caution: Indicates a situation that might result in loss of data or hardware damage.
Laser warning: Alerts you to the risk of personal injury from a laser.
Table 2 on page x defines the text and syntax conventions used in this guide.
Bold text like this: Represents text that you type. Example: to enter configuration mode, type the configure command:
user@host> configure
Fixed-width text like this: Represents output that appears on the terminal screen. Example:
user@host> show chassis alarms
No alarms currently active
Italic text like this: Introduces or emphasizes important new terms; identifies guide names; identifies RFC and Internet draft titles. Examples:
• A policy term is a named structure that defines match conditions and actions.
• Junos OS CLI User Guide
• RFC 1997, BGP Communities Attribute
Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. Example: configure the machine’s domain name:
[edit]
root@# set system domain-name domain-name
Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. Examples:
• To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level.
• The console port is labeled CONSOLE.
< > (angle brackets): Encloses optional keywords or variables. Example: stub <default-metric metric>;
# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. Example: rsvp { # Required for dynamic MPLS only
[ ] (square brackets): Encloses a variable for which you can substitute one or more values. Example: community name members [ community-ids ]
GUI Conventions
Bold text like this: Represents graphical user interface (GUI) items you click or select. Examples:
• In the Logical Interfaces box, select All Interfaces.
• To cancel the configuration, click Cancel.
> (bold right angle bracket): Separates levels in a hierarchy of menu selections. Example: in the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback so that we can improve our documentation. You
can use either of the following methods:
• Online feedback system—Click TechLibrary Feedback, on the lower right of any page
on the Juniper Networks TechLibrary site, and do one of the following:
• Click the thumbs-up icon if the information on the page was helpful to you.
• Click the thumbs-down icon if the information on the page was not helpful to you
or if you have suggestions for improvement, and use the pop-up form to provide
feedback.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active Juniper Care or Partner Support
Services support contract, or are covered under warranty, and need post-sales technical
support, you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
• Visit https://myjuniper.juniper.net.
Overview
vSRX Overview
vSRX is a virtual security appliance that provides security and networking services at the
perimeter or edge in virtualized private or public cloud environments. vSRX runs as a
virtual machine (VM) on a standard x86 server. vSRX is built on the Junos operating
system (Junos OS) and delivers networking and security features similar to those available
on the software releases for the SRX Series Services Gateways.
The vSRX provides you with a complete Next-Generation Firewall (NGFW) solution,
including core firewall, VPN, NAT, advanced Layer 4 through Layer 7 security services
such as Application Security, intrusion detection and prevention (IPS), and UTM features
including Enhanced Web Filtering and Anti-Virus. Combined with Sky ATP, the vSRX
offers a cloud-based advanced anti-malware service with dynamic analysis to protect
against sophisticated malware, and provides built-in machine learning to improve verdict
efficacy and decrease time to remediation.
[Figure 1: High-level vSRX software architecture. The vSRX VM contains the management daemon (MGD), routing protocol daemon (RPD), and advanced services above the flow-processing and packet-forwarding data plane on the Junos kernel, with DPDK (Data Plane Development Kit) for packet I/O; the VM runs on QEMU/KVM hypervisors or cloud environments on physical x86 hardware with its memory and storage.]
vSRX includes the Junos control plane (JCP) and the packet forwarding engine (PFE)
components that make up the data plane. vSRX uses one virtual CPU (vCPU) for the
JCP and at least one vCPU for the PFE. Starting in Junos OS Release 15.1X49-D70 and
Junos OS Release 17.3R1, multi-core vSRX supports scaling vCPUs and GB virtual RAM
(vRAM). Additional vCPUs are applied to the data plane to increase performance.
Junos OS Release 18.4R1 supports vSRX 3.0, a new software architecture that removes the dual-OS and nested virtualization requirements of the existing vSRX architecture.
In the vSRX 3.0 architecture, FreeBSD 11.x is used as the guest OS, and the Routing Engine and Packet Forwarding Engine run on FreeBSD 11.x as a single virtual machine for improved performance and scalability. vSRX 3.0 uses DPDK to process the data packets in the data plane. A direct Junos OS upgrade from vSRX to vSRX 3.0 software is not supported.
Benefits of the vSRX 3.0 architecture include:
• Improved boot time and enhanced responsiveness of the control plane during management operations.
Figure 2 on page 17 shows the high-level software architecture for vSRX 3.0.
[Figure 2: The vSRX VM runs Junos OS (64-bit SMP, FreeBSD 11.x) with DPDK (Data Plane Development Kit) as a single virtual machine on physical x86 hardware with its memory and storage.]
Some of the key benefits of vSRX in a virtualized private or public cloud multitenant
environment include:
• Content security features (including Anti Virus, Web Filtering, Anti Spam, and Content
Filtering)
• Centralized management with Junos Space Security Director and local management
with J-Web Interface
The VMware vSphere Web Client is used to deploy the vSRX VM.
Figure 3 on page 18 shows an example of how vSRX can be deployed to provide security
for applications running on one or more virtual machines. The vSRX virtual switch has a
connection to a physical adapter (the uplink) so that all application traffic flows through
the vSRX VM to the external network.
You can scale the performance and capacity of a vSRX instance by increasing the number
of vCPUs and the amount of vRAM allocated to the vSRX. The multi-core vSRX
automatically selects the appropriate vCPUs and vRAM values at boot time, as well as
the number of Receive Side Scaling (RSS) queues in the NIC. If the vCPU and vRAM
settings allocated to a vSRX VM do not match what is currently available, the vSRX
scales down to the closest supported value for the instance. For example, if a vSRX VM
has 3 vCPUs and 8 GB of vRAM, vSRX boots with the next smaller supported vCPU size,
which in this case is 2 vCPUs. You can scale up a vSRX instance to a higher number of vCPUs
and amount of vRAM, but you cannot scale down an existing vSRX instance to a smaller
setting.
NOTE: The number of RSS queues typically matches with the number of
data plane vCPUs of a vSRX instance. For example, a vSRX with 4 data plane
vCPUs should have 4 RSS queues.
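The scale-down behavior can be sketched as follows, using the supported data-plane sizes of 2, 5, 9, and 17 vCPUs taken from the flow-session table in this section (an illustrative sketch, not a Juniper tool):

```python
# Supported total vCPU allocations for a vSRX VM, per the vCPU column
# of the flow-session table in this section; illustrative sketch only.
SUPPORTED_VCPUS = [2, 5, 9, 17]

def effective_vcpus(allocated: int) -> int:
    """Return the vCPU count vSRX actually boots with: the largest
    supported value that does not exceed the allocation."""
    usable = [v for v in SUPPORTED_VCPUS if v <= allocated]
    if not usable:
        raise ValueError("vSRX requires at least 2 vCPUs")
    return max(usable)

print(effective_vcpus(3))   # a 3-vCPU allocation boots as 2 vCPUs
```

This mirrors the rule stated above: an unsupported allocation is rounded down to the nearest supported size, never up.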
With the ability to increase the session numbers by increasing the memory, you can enable
vSRX to:
• Deliver the performance that service providers require to scale and protect their
networks.
Run the show security flow session summary | grep maximum command to view the
maximum number of sessions.
Starting in Junos OS Release 18.4R1, the number of flow sessions supported on a vSRX
instance is increased based on the vRAM size used.
Starting in Junos OS Release 19.2R1, the number of flow sessions supported on a vSRX
3.0 instance is increased based on the vRAM size used.
vCPUs     vRAM     Maximum Flow Sessions
2         4 GB     0.5 M
2         6 GB     1 M
2/5       8 GB     2 M
2/5       10 GB    2 M
2/5       12 GB    2.5 M
2/5       14 GB    3 M
2/5/9     16 GB    4 M
2/5/9     20 GB    6 M
2/5/9     24 GB    8 M
2/5/9     28 GB    10 M
2/5/9/17  32 GB    12 M
2/5/9/17  40 GB    16 M
2/5/9/17  48 GB    20 M
2/5/9/17  56 GB    24 M
2/5/9/17  64 GB    28 M
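As a capacity-planning aid, the table above can be transcribed into a simple lookup (a sketch only; values are in millions of sessions, as listed above):

```python
# Maximum flow sessions by vRAM size (GB), transcribed from the table
# above; values are in millions ("M"). Capacity-planning sketch only.
MAX_SESSIONS_M = {
    4: 0.5, 6: 1, 8: 2, 10: 2, 12: 2.5, 14: 3,
    16: 4, 20: 6, 24: 8, 28: 10, 32: 12,
    40: 16, 48: 20, 56: 24, 64: 28,
}

def max_sessions(vram_gb: int) -> float:
    """Maximum flow sessions (in millions) for a supported vRAM size."""
    return MAX_SESSIONS_M[vram_gb]

print(max_sessions(8))   # 2 (million)
```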
19.2R1 Starting in Junos OS Release 19.2R1, the number of flow sessions supported
on a vSRX 3.0 instance is increased based on the vRAM size used.
18.4R1 Starting in Junos OS Release 18.4R1, the number of flow sessions supported
on a vSRX instance is increased based on the vRAM size used.
Software Specifications
Table 5 on page 21 lists the system software requirement specifications when deploying
vSRX on VMware. The table outlines the Junos OS release in which a particular software
specification for deploying vSRX on VMware was introduced. You must download
a specific Junos OS release to take advantage of certain features.
Hypervisor support: VMware ESXi 5.1, 5.5, or 6.0 (Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1); VMware ESXi 5.5, 6.0, or 6.5 (Junos OS Release 17.4R1, 18.1R1, 18.2R1, 18.3R1)
Disk space: 16 GB (IDE or SCSI drives) (Junos OS Release 15.1X49-D15 and Junos OS Release 17.3R1)
• VMXNET 3
• SR-IOV (Mellanox
ConnectX-3/ConnectX-3 Pro and
Mellanox ConnectX-4
EN/ConnectX-4 Lx EN) is required if
you intend to scale the performance
and capacity of a vSRX VM to 9 or 17
vCPUs and 16 or 32 GB vRAM.
• The DPDK version has been upgraded
from 17.02 to 17.11.2 to support the
Mellanox Family Adapters.
Table 6 on page 23 lists the specifications on the vSRX virtual machine (VM).
Junos OS Release
vCPU vRAM DPDK Hugepage vNICs vDisk Introduced
Hardware Specifications
Table 7 on page 24 lists the hardware specifications for the host machine that runs the
vSRX VM.
Component Specification
NUMA Nodes
The x86 server architecture consists of multiple sockets and multiple cores within a
socket. Each socket also has memory that is used to store packets during I/O transfers
from the NIC to the host. To efficiently read packets from memory, guest applications
and associated peripherals (such as the NIC) should reside within a single socket. A
penalty is associated with spanning CPU sockets for memory accesses, which might
result in nondeterministic performance. For vSRX, we recommend that all vCPUs for the
vSRX VM are in the same physical non-uniform memory access (NUMA) node for optimal
performance.
CAUTION: The Packet Forwarding Engine (PFE) on the vSRX will become
unresponsive if the NUMA nodes topology is configured in the hypervisor to
spread the instance’s vCPUs across multiple host NUMA nodes. vSRX requires
that you ensure that all vCPUs reside on the same NUMA node.
We recommend that you bind the vSRX instance with a specific NUMA node
by setting NUMA node affinity. NUMA node affinity constrains the vSRX VM
resource scheduling to only the specified NUMA node.
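On VMware ESXi, one way to express this affinity is through the VM's advanced configuration parameters. A sketch of the relevant .vmx entry, assuming the vSRX VM is to be constrained to NUMA node 0:

```
numa.nodeAffinity = "0"
```

With this set, the ESXi scheduler places the VM's vCPUs and memory on the specified node only; pick the node to which the physical NIC is attached.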
If the node on which vSRX is running is different from the node to which the Intel PCI NIC
is connected, then packets will have to traverse an additional hop in the QPI link, and this
will reduce overall throughput. Use the esxtop command to view information about
relative physical NIC locations. On some servers where this information is not available,
refer to the hardware documentation for the slot-to-NUMA node topology.
• In standalone mode:
• In cluster mode:
• Any of the traffic interfaces can be specified as the fabric links, such as ge-0/0/0
for fab0 on node 0 and ge-7/0/0 for fab1 on node 1.
Table 8 on page 26 shows the interface names and mappings for a standalone vSRX
VM.
Network Adapter    Interface Name in Junos OS
1 fxp0
2 ge-0/0/0
3 ge-0/0/1
4 ge-0/0/2
5 ge-0/0/3
6 ge-0/0/4
7 ge-0/0/5
8 ge-0/0/6
Table 9 on page 26 shows the interface names and mappings for a pair of vSRX VMs in
a cluster (node 0 and node 1).
Network Adapter    Interface Name in Junos OS
3 ge-0/0/0 (node 0)
ge-7/0/0 (node 1)
4 ge-0/0/1 (node 0)
ge-7/0/1 (node 1)
5 ge-0/0/2 (node 0)
ge-7/0/2 (node 1)
6 ge-0/0/3 (node 0)
ge-7/0/3 (node 1)
7 ge-0/0/4 (node 0)
ge-7/0/4 (node 1)
8 ge-0/0/5 (node 0)
ge-7/0/5 (node 1)
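The sequential mappings in Tables 8 and 9 can be sketched as follows (illustrative helper functions, not part of any Juniper tool):

```python
# Sequential mapping of VMware network adapters to vSRX interfaces,
# transcribed from Tables 8 and 9; illustrative sketch only.
def standalone_iface(adapter: int) -> str:
    """Adapter 1 -> fxp0; adapters 2..8 -> ge-0/0/0 .. ge-0/0/6."""
    if adapter == 1:
        return "fxp0"
    return f"ge-0/0/{adapter - 2}"

def cluster_iface(adapter: int, node: int) -> str:
    """Adapters 3..8 map to ge-0/0/n on node 0 and ge-7/0/n on node 1."""
    slot = 0 if node == 0 else 7
    return f"ge-{slot}/0/{adapter - 3}"

print(standalone_iface(2))   # ge-0/0/0
print(cluster_iface(5, 1))   # ge-7/0/2
```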
NOTE: For the management interface, fxp0, VMware uses the VMXNET 3
vNIC and requires promiscuous mode on the vSwitch.
Table 10 on page 27 lists the factory default settings for the vSRX security policies.
To determine the Junos OS features supported on vSRX, use the Juniper Networks Feature
Explorer, a Web-based application that helps you to explore and compare Junos OS
feature information to find the right software release and hardware platform for your
network. Find Feature Explorer at: Feature Explorer: vSRX.
Feature Description
Chassis cluster Generally, on SRX Series devices, the cluster ID and node ID are
written into EEPROM. For the vSRX VM, the IDs are saved in
boot/loader.conf and read during initialization.
1. Click Security>IDP>Policy>Add.
2. On the Add IPS Rule page, select All instead of Any for the
Direction field to list all the FTP attacks.
Transparent mode The known behaviors for transparent mode support on vSRX are:
Some Junos OS software features require a license to activate. To learn more about vSRX
licenses, see Licenses for vSRX. Refer to the Licensing Guide for general information about
license management, and to the product data sheets for further details, or contact your
Juniper account team or Juniper partner.
NOTE: Support for chassis clustering to provide network node redundancy is only available on a
vSRX deployment in Contrail, VMware, KVM, and Windows Hyper-V Server 2016.
Class of service
High-priority queue on SPC: Not supported
Diagnostic tools
Flow monitoring cflowd version 9: Not supported
DNS proxy
Dynamic DNS: Not supported
Interface family
Interfaces
Aggregated Ethernet interface: Not supported
IPv6 support
DS-Lite concentrator (also called Address Family Transition Router [AFTR]): Not supported
J-Web
Enhanced routing configuration: Not supported
Miscellaneous
GPRS: Not supported
MPLS
Circuit cross-connect (CCC) and translational cross-connect (TCC): Not supported
Packet capture
Packet capture: Only supported on physical interfaces and tunnel interfaces, such as gr, ip, and st0. Packet capture is not supported on redundant Ethernet interfaces (reth).
Routing
BGP Flowspec: Not supported
Switching
Layer 3 Q-in-Q VLAN tagging: Not supported
Transparent mode
UTM: Not supported
User interfaces
NSM: Not supported
The following procedure describes how to install vSRX and connect vSRX interfaces to
the virtual switches for the appropriate applications. Only the vSRX virtual switch has a
connection to a physical adapter (the uplink) so that all application traffic flows through
the vSRX VM to the external network.
1. Download the vSRX software package for VMware from the Juniper Networks website.
2. Validate the vSRX .ova file if required. For more information, see “Validating the vSRX
.ova File for VMware” on page 43.
4. Select a host or other valid parent for a virtual machine and click Actions > All vCenter
Actions > Deploy OVF Template.
NOTE: The Client Integration Plug-in must be installed before you can
deploy OVF templates (see your VMware documentation).
5. Click Browse to locate the vSRX software package, and then click Next.
7. Click Accept in the End User License Agreement window, and then click Next.
8. Change the default vSRX VM name in the Name box and click Next. It is advisable to
keep this name the same as the hostname you intend to give to the VM.
• Datastore
• Available Space
Table 13 on page 36 lists the disk formats available to store the virtual disk. You can
choose one of the three options listed.
NOTE: For detailed information on the disk formats, see Virtual Disk
Provisioning.
Thick Provision Lazy Zeroed: Allocates disk space to the virtual disk without erasing the previously stored data. The previous data is erased when the VM is used for the first time.
Thick Provision Eager Zeroed: Erases the previously stored data completely and then allocates the disk space to the virtual disk. Creation of disks in this format is time consuming.
Thin Provision: Allocates only as much datastore space as the disk needs for its initial operations. Use this format to save storage space.
10. Select a datastore to store the configuration file and virtual disk files in OVF template,
and then click Next.
11. Select your management network from the list, and then click Next. The management
network is assigned to the first network adapter, which is reserved for the management
interface (fxp0).
13. Open the Edit Settings page of the vSRX VM and select a virtual switch for each
network adapter. Three network adapters are created by default. Network adapter 1
is for the management network (fxp0). To add a fourth adapter, select Network from
New device list at the bottom of the page. To add more adapters, see “Adding vSRX
Interfaces” on page 47.
In Figure 4 on page 37, network adapter 2 uses the management network for the uplink
to the external network.
1. Select the host where the vSRX VM is installed, and select Manage > Networking
> Virtual switches.
2. In the list of virtual switches, select vSwitch0 to view the topology diagram for the
management network connected to network adapter 1.
3. Click the Edit icon at the top of the list, select Security, and select Accept next to
Promiscuous mode. Click OK.
On the Manage tab, select Settings > VM Hardware and expand CPU to verify that the
Hardware virtualization option is shown as Enabled.
Starting in Junos OS Release 15.1X49-D40 and Junos OS Release 17.3R1, you can use a
mounted ISO image to pass the initial startup Junos OS configuration to a vSRX VM. This
ISO image contains a file in the root directory called juniper.conf. The configuration file
uses curly brackets ({) and indentation to display the hierarchical structure of the
configuration. Terminating or leaf statements in the configuration hierarchy are displayed
with a trailing semicolon (;) to define configuration details, such as root password,
management IP address, default gateway, and other configuration statements.
NOTE: The juniper.conf file must be in the same format as displayed by the
show configuration command; it cannot be in set command format.
system {
host-name iso-mount-test;
root-authentication {
encrypted-password
"$5$wCXP/Ma4$aqMJBhy82.wI643ijb73yHKKl9TXApPycGKKn.PjpA8"; ## SECRET-DATA
}
login {
user regress {
uid 2001;
class super-user;
authentication {
encrypted-password
"$6$FGJM2YEb$KTGIehvNt9Mf.u3ESWGB1aSQeXrSeg6zoCWZw0D6M6vnmWb8DAWsprNXyKZeW6M3kErFFTFtAuNpGjDjfwX4t.";
## SECRET-DATA
}
}
}
services {
ssh {
root-login allow;
}
telnet;
web-management {
http {
interface fxp0.0;
}
}
}
syslog {
user * {
any emergency;
}
file messages {
any any;
authorization info;
}
file interactive-commands {
interactive-commands any;
}
}
license {
autoupdate {
url https://ae1.juniper.net/junos/key_retrieval;
}
}
}
security {
forwarding-options {
family {
inet6 {
mode flow-based;
}
}
}
policies {
default-policy {
permit-all;
}
}
zones {
security-zone AAA {
interfaces {
all;
}
}
}
}
interfaces {
ge-0/0/0 {
vlan-tagging;
unit 0 {
vlan-id 77;
family inet {
address 10.1.1.0/24 {
arp 10.1.1.10 mac 00:10:12:34:12:34;
}
}
}
}
ge-0/0/1 {
vlan-tagging;
unit 0 {
vlan-id 1177;
family inet {
address 10.1.1.1/24 {
arp 10.1.1.10 mac 00:10:22:34:22:34;
}
}
}
}
fxp0 {
unit 0 {
family inet {
address 192.168.70.9/19;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 next-hop 192.168.64.1;
}
}
4. Boot or reboot the vSRX VM. vSRX will boot using the juniper.conf file included in the
mounted ISO image.
NOTE: If you do not unmount the ISO image after the initial boot or reboot,
all subsequent configuration changes to the vSRX are overwritten by the ISO
image on the next reboot.
1. Create a configuration file in plaintext with the Junos OS command syntax and save
it in a file called juniper.conf.
NOTE: The juniper.conf file must contain the full vSRX configuration. The
ISO bootstrap process overwrites any existing vSRX configuration.
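Because the ISO bootstrap replaces the running configuration wholesale, a truncated or malformed juniper.conf is costly. A toy structural check (a hypothetical helper, not a Juniper tool) that catches unbalanced braces before you build the ISO:

```python
def braces_balanced(conf_text: str) -> bool:
    """Crude structural check for a curly-brace-format juniper.conf:
    every '{' must have a matching '}'. This sketch does not handle
    braces inside quoted strings; a real validator would parse fully."""
    depth = 0
    for ch in conf_text:
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth < 0:          # a '}' appeared before its '{'
                return False
    return depth == 0              # every opened block was closed

sample = "system {\n    host-name iso-mount-test;\n}\n"
print(braces_balanced(sample))   # True
```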
1. On the VMware vSphere Web Client, select the datastore you want to upload the file
to.
2. Select the folder where you want to store the file and click Upload a File from the task
bar.
1. From VMware vSphere client, select the host server where the vSRX VM resides.
2. Right-click the vSRX VM and select Edit Settings. The Edit Setting dialogue box appears.
3. Select the Hardware tab and click Add. The Add Hardware dialog box opens.
6. Click Datastore ISO File, browse to your bootstrap ISO image, and click Next.
9. Right-click the vSRX VM and select Power>Power On to boot the vSRX VM.
10. After the vSRX boots, verify the configuration and then select Power> Power down to
shut down the vSRX so you can remove the ISO image.
11. Select the CD/DVD drive from the Hardware tab in the VMWare vSphere client.
12. Select the CD drive for the ISO file and click Remove to remove your bootstrap ISO
image.
14. Right-click the vSRX VM and select Power>Power On to boot the vSRX VM.
15.1X49-D80 Starting in Junos OS Release 15.1X49-D40 and Junos OS Release 17.3R1, you can
use a mounted ISO image to pass the initial startup Junos OS configuration to a
vSRX VM. This ISO image contains a file in the root directory called juniper.conf.
The configuration file uses curly brackets ({) and indentation to display the
hierarchical structure of the configuration. Terminating or leaf statements in the
configuration hierarchy are displayed with a trailing semicolon (;) to define
configuration details, such as root password, management IP address, default
gateway, and other configuration statements.
The vSRX open virtual application (OVA) image is securely signed. You can validate the
OVA image, if necessary, but you can install or upgrade vSRX without validating the OVA
image.
Before you validate the OVA image, ensure that the Linux/UNIX PC or Windows PC on
which you are performing the validation has the following utilities available: tar, openssl,
and ovftool. See the OVF Tool Documentation for details about the VMware Open
Virtualization Format (OVF) tool, including a Software Download link.
1. Download the vSRX OVA image and the Juniper Networks Root certificate file
(JuniperRootRSACA.pem) from the vSRX Juniper Networks Software Download page.
NOTE: You need to download the Juniper Networks Root certificate file
only once; you can use the same file to validate OVA images for future
releases of vSRX.
2. (Optional) If you downloaded the OVA image and the certificate file to a PC running
Windows, copy the two files to a temporary directory on a PC running Linux or UNIX.
You can also copy the OVA image and the certificate file to a temporary directory
(/var/tmp or /tmp) on a vSRX node.
Ensure that the OVA image file and the Juniper Networks Root certificate file are not
modified during the validation procedure. You can do this by providing write access
to these files only to the user performing the validation procedure. This is especially
important if you use an accessible temporary directory, such as /tmp or /var/tmp,
because such directories can be accessed by several users. Take precautions to ensure
that the files are not modified by other users during the validation procedure.
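One simple way to follow this precaution, sketched with hypothetical filenames (substitute the actual OVA image and certificate names from the download page), is to strip the write bits for the duration of the validation:

```shell
# Stand-ins for the downloaded OVA image and root certificate; the real
# filenames from the download page would be used instead.
touch demo.ova demo-root-ca.pem
# Strip all write bits for the duration of the validation so that no
# ordinary write can modify the files mid-procedure.
chmod a-w demo.ova demo-root-ca.pem
ls -l demo.ova demo-root-ca.pem
```

Restore write access afterward with chmod u+w if you need to move or delete the files.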
-bash-4.1$ ls
JuniperRootCA.pem junos-vsrx-15.1X49-DXX.4-domestic.ova
4. Unpack the OVA image by running the following command: tar xf ova-filename
-bash-4.1$ cd tmp
5. Verify that the unpacked OVA image contains a certificate chain file (certchain.pem)
and a signature file (vsrx.cert).
-bash-4.1$ ls
certchain.pem junos-vsrx-15.1X49-DXX.4-domestic.cert
junos-vsrx-15.1X49-DXX.4-domestic-disk1.vmdk
junos-vsrx-15.1X49-DXX.4-domestic.mf junos-vsrx-15.1X49-DXX.4-domestic.ovf
6. Validate the unpacked OVF file (extension .ovf) by running the following command:
ovftool ovf-filename
where ovf-filename is the filename of the unpacked OVF file contained within the
previously downloaded OVA image.
https://www.juniper.net/us/en/products-services/software/security/vsrxseries/
Vendor URL: https://www.juniper.net/
Download Size: 227.29 MB
Deployment Sizes:
Flat disks: 2.00 GB
Sparse disks: 265.25 MB
Networks:
Name: VM Network
Description: The VM Network network
Virtual Machines:
Name: Juniper Virtual SRX
Disks:
Index: 0
Instance ID: 5
Capacity: 2.00 GB
Disk Types: IDE
NICs:
Adapter Type: E1000
Connection: VM Network
Deployment Options:
Id: 2GvRAM
Label: 2G vRAM
Description:
2G Memory
7. Validate the signing certificate with the Juniper Networks Root CA file by running the
following command:
junos-vsrx-15.1X49-DXX.4-domestic.cert: OK
a. Determine if the contents of the OVA image have been modified. If the contents
have been modified, download the OVA image from the vSRX downloads page.
c. Retry the preceding validation steps using one or both new files.
vSRX VM Management
The network adapter for each interface uses SR-IOV or VMXNET 3 as the adapter type.
The first network adapter is for the management interface (fxp0) and must use VMXNET
3. All additional network adapters should have the same adapter type. The three network
adapters created by default use VMXNET 3.
• The DPDK version has been upgraded from 17.02 to 17.11.2 to support the
Mellanox Family Adapters.
The network adapters are mapped sequentially to the vSRX interfaces, as shown in
“Requirements for vSRX on VMware” on page 21.
NOTE: If you have used the interface mapping workaround required for prior
Junos releases, you do not need to make any changes when you upgrade to
Junos Release 15.1X49-D70 for vSRX.
Use the following procedure to locate available VFs and add PCI devices:
a. Use SSH to log in to the ESXi server and enter the following command to view the
VFs for vmnic6 (or another vNIC):
Choose one or more VF IDs that are not active, such as 3 through 6. Note that a VF
assigned to a VM that is powered off is shown as inactive.
b. Enter the lspci command to view the VF number of the chosen VF IDs. In the
following example, find the entry that ends with [vmnic6], scroll down to the next
entry ending in VF_3, and note the associated VF number 05:10.6. Note that the
next VF_3 entry is for vmnic7.
# lspci
05:10.6.
0000:05:10.7 Network controller: Intel Corporation 82599 Ethernet Controller
Virtual Function [PF_0.5.1_VF_3] ----- VF ID 3 on vmnic7.
a. Power off the vSRX VM and open the Edit Settings page. By default there are three
network adapters using VMXNET 3.
b. Add one or more PCI devices on the Virtual Hardware page. For each device, you
must select an entry with an available VF number from Step 1. For example:
c. Click OK and open the Edit Settings page to verify that the new network adaptors
are shown on the Virtual Hardware page (one VMXNET 3 network adapter and up
to nine SR-IOV interfaces as PCI devices).
To view the SR-IOV interface MAC addresses, select the VM Options tab, click
Advanced in the left frame, and then click Edit Configuration. In the parameters
pciPassthruN.generatedMACAddress, N indicates the PCI device number (0 through
9).
d. Power on the vSRX VM and log in to the VM to verify that VMXNET 3 network
adapter 1 is mapped to fxp0, PCI device 0 is mapped to ge-0/0/0, PCI device 1 is
mapped to ge-0/0/1, and so on.
NOTE: A vSRX VM with SR-IOV interfaces cannot be cloned. You must deploy
a new vSRX VM and add the SR-IOV interfaces as described here.
1. Power off the vSRX VM and open the Edit Settings page on vSphere Web Client.
2. Add network adapters on the Virtual Hardware page. For each network adapter, select
Network from New device list at the bottom of the page, expand New Network, and
select VMXNET 3 as the adapter type.
3. Click OK and open the Edit Settings page to verify that the new network adaptors are
shown on the Virtual Hardware page.
4. Power on the vSRX VM and log in to the VM to verify that network adapter 1 is mapped
to fxp0, network adapter 2 is mapped to ge-0/0/0, and so on. Use the show interfaces
terse CLI command to verify that the fxp0 and ge-0/0/n interfaces are up.
Starting in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, you can scale the
performance and capacity of a vSRX instance by increasing the number of vCPUs and
the amount of vRAM allocated to the vSRX. See “Requirements for vSRX on VMware”
on page 21 for the software requirement specifications of a vSRX VM.
NOTE: You cannot scale down the number of vCPUs or decrease the amount
of vRAM for an existing vSRX VM.
To apply new vCPU or vRAM settings to a vSRX instance with the VMware vSphere Web Client:
1. In the VMware vSphere Web Client, select the powered-down vSRX VM and click
Edit Settings to open the virtual machine details window.
4. Click Power On. The VM manager launches the vSRX VM with the new vCPU and vRAM
settings.
NOTE: vSRX scales down to the closest supported value if the vCPU or vRAM
settings do not match what is currently available.
1. For memory, select the NUMA node that the NICs connect to.
a. Disable hyper-threading.
4. For vNICs, use either 2 vNICs or 4 vNICs if you want to scale the performance of the
vSRX VM.
15.1X49-D70 Starting in Junos OS Release 15.1X49-D70 and Junos OS Release 17.3R1, you
can scale the performance and capacity of a vSRX instance by increasing
the number of vCPUs and the amount of vRAM allocated to the vSRX.
This section provides an overview of the various tools available to configure and manage
a vSRX VM once it has been successfully deployed.
Built into Junos OS, Junos script automation is an onboard toolset available on all Junos
OS platforms, including routers, switches, and security devices running Junos OS (such
as a vSRX instance).
You can use the Junos OS CLI and the Junos OS scripts to configure, manage, administer,
and troubleshoot vSRX.
• Security Director
root# cli
root@> configure
[edit]
root@#
[edit]
root@# set system root-authentication plain-text-password
New password: password
Retype new password: password
[edit]
root@# set system host-name host-name
[edit]
root@# set interfaces fxp0 unit 0 family inet dhcp-client
[edit]
root@# set interfaces ge-0/0/0 unit 0 family inet dhcp-client
[edit]
root@# set security zones security-zone trust interfaces ge-0/0/0.0
[edit]
root@# commit check
configuration check succeeds
[edit]
root@# commit
commit complete
12. Optionally, use the show command to display the configuration to verify that it is
correct.
• Configure a default route if the fxp0 IP address is on a different subnet than the host
server.
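A minimal sketch of such a default route follows; the gateway address 192.168.42.1 is an assumption, so substitute the gateway reachable from your host server's management subnet:

```
[edit]
root@# set routing-options static route 0.0.0.0/0 next-hop 192.168.42.1
root@# commit
```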
system {
services {
web-management {
http {
interface fxp0.0;
}
}
}
}
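The stanza above corresponds to the following set command, which you can paste directly into configuration mode:

```
[edit]
root@# set system services web-management http interface fxp0.0
```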
4. Click Log In, and select the Configuration Wizards tab from the left navigation panel.
The J-Web Setup wizard page opens.
5. Click Setup.
You can use the Setup wizard to configure the vSRX VM or edit an existing
configuration.
• Select Edit Existing Configuration if you have already configured the wizard using
the factory mode.
• Select Create New Configuration to configure the vSRX VM using the wizard.
• Basic
Select Basic to configure the vSRX VM name and user account information as
shown in Table 14 on page 57.
Field Description
Instance name Type the name of the instance. For example: vSRX.
• Super User: This user has full system administration rights and can add,
modify, and delete settings and users.
• Operator: This user can perform system operations such as a system
reset but cannot change the configuration or add or modify users.
• Read only: This user can only access the system and view the
configuration.
• Disabled: This user cannot access the system.
• Select either Time Server or Manual. Table 15 on page 57 lists the system time
options.
Field Description
Time Server
Host Name Type the hostname of the time server. For example:
ntp.example.com.
Manual
Date Click the current date in the calendar.
• Expert
Select Expert to configure the basic options as well as the following advanced
options:
1. Review and ensure that the configuration settings are correct, and click Next. The
Commit Configuration page appears.
3. Check the connectivity to vSRX; you might lose connectivity if you changed the
management zone IP address. Click the URL for instructions on how to reconnect
to the instance.
After successful completion of the setup, you are redirected to the J-Web interface.
CAUTION: After you complete the initial setup, you can relaunch the J-Web
Setup wizard by clicking Configuration>Setup. You can either edit an
existing configuration or create a new configuration. If you create a new
configuration, the current configuration in vSRX will be deleted.
Managing Security Policies for Virtual Machines Using Junos Space Security Director
When you finish creating and verifying your security configurations from Security Director,
you can publish these configurations and keep them ready to be pushed to all security
devices, including vSRX instances, from a single interface.
The Configure tab is the workspace where all of the security configuration happens. You
can configure firewall, IPS, NAT, and UTM policies; assign policies to devices; create and
apply policy schedules; create and manage VPNs; and create and manage all the shared
objects needed for managing your network security.
NOTE: If you configure a chassis cluster on vSRX nodes across two physical
hosts, disable igmp-snooping on each host bridge to which the physical
interfaces used by the control vNICs belong. This ensures that both nodes
in the chassis cluster receive the control link heartbeats.
The chassis cluster data plane operates in active/active mode. In a chassis cluster, the
data plane updates session information as traffic traverses either device, and it transmits
information between the nodes over the fabric link to guarantee that established sessions
are not dropped when a failover occurs. In active/active mode, traffic can enter the cluster
on one node and exit from the other node.
• Resilient system architecture, with a single active control plane for the entire cluster
and multiple Packet Forwarding Engines. This architecture presents a single device
view of the cluster.
• Monitoring of physical interfaces, and failover if the failure parameters cross a configured
threshold.
• Support for generic routing encapsulation (GRE) and IP-over-IP (IP-IP) tunnels used
to route encapsulated IPv4 or IPv6 traffic by means of two internal interfaces, gr-0/0/0
and ip-0/0/0, respectively. Junos OS creates these interfaces at system startup and
uses these interfaces only for processing GRE and IP-IP tunnels.
At any given instant, a cluster node can be in one of the following states: hold, primary,
secondary-hold, secondary, ineligible, or disabled. Multiple event types, such as interface
monitoring, Services Processing Unit (SPU) monitoring, failures, and manual failovers,
can trigger a state transition.
Prerequisites
Ensure that your vSRX instances comply with the following prerequisites before you
enable chassis clustering:
• Use show version in Junos OS to ensure that both vSRX instances have the same
software version.
• Use show system license in Junos OS to ensure that both vSRX instances have the
same licenses installed.
You can deploy up to 255 chassis clusters in a Layer 2 domain. Clusters and nodes are
identified in the following ways:
On SRX Series devices, the cluster ID and node ID are written into EEPROM. On the vSRX
VM, vSRX stores and reads the IDs from boot/loader.conf and uses the IDs to initialize
the chassis cluster during startup.
The chassis cluster formation commands for node 0 and node 1 are as follows:
• On vSRX node 0:
• On vSRX node 1:
NOTE: Use the same cluster ID number for each node in the cluster.
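The formation commands referenced above are the standard Junos OS chassis cluster commands, run from operational mode on each node; cluster ID 1 here is an example value:

```
user@vsrx0> set chassis cluster cluster-id 1 node 0 reboot
user@vsrx1> set chassis cluster cluster-id 1 node 1 reboot
```

Each node reboots after the command is entered and comes back up as the named cluster member.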
NOTE: The vSRX interface naming and mapping to vNICs changes when you
enable chassis clustering.
After reboot, on node 0, configure the fabric (data) ports of the cluster that are used to
pass real-time objects (RTOs):
user@vsrx0# set interfaces fab0 fabric-options member-interfaces ge-0/0/0
user@vsrx0# set interfaces fab1 fabric-options member-interfaces ge-7/0/0
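After committing the fabric configuration, you can confirm that the fabric child links are present; a sketch (output omitted here):

```
[edit]
user@vsrx0# commit
user@vsrx0# run show chassis cluster interfaces
```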
2. Enter the vSRX username and password, and click Log In. The J-Web dashboard
appears.
3. Click Configuration Wizards>Chassis Cluster from the left panel. The Chassis Cluster
Setup wizard appears. Follow the steps in the setup wizard to configure the cluster
ID and the two nodes in the cluster, and to verify connectivity.
NOTE: Use the built-in Help icon in J-Web for further details on the Chassis
Cluster Setup wizard.
Table 18 on page 67 explains how to add or edit the HA Cluster Interfaces table.
Table 19 on page 68 explains how to add or edit the HA Cluster Redundancy Groups table.
Field Function
Node Settings
Backup Router Displays the router used as a gateway while the Routing Engine is
in secondary state for redundancy-group 0 in a chassis cluster.
Member Interfaces/IP Address Displays the member interface name or IP address configured for
an interface.
Gratuitous ARP Count Displays the number of gratuitous Address Resolution Protocol
(ARP) requests that a newly elected primary device in a chassis
cluster sends out to announce its presence to the other network
devices.
Node Priority Displays the assigned priority for the redundancy group on that
node. The eligible node with the highest priority is elected as
primary for the redundant group.
Node Settings
Host Name Specifies the name of the host. Enter the name of the host.
Backup Router Specifies the device used as a gateway while the Routing Engine is in the
secondary state for redundancy-group 0 in a chassis cluster. Enter the IP address of the
backup router.
Destination
Interface
Interface Specifies the interfaces available for the router. Select an option.
Redundant Ethernet
Redundancy Group Specifies the redundancy group name. Enter the redundancy group name.
Allow preemption of primaryship Allows a node with a better priority to initiate a failover
for a redundancy group. –
Gratuitous ARP Count Specifies the number of gratuitous Address Resolution Protocol
requests that a newly elected primary sends out on the active redundant Ethernet interface
child links to notify network devices of a change in mastership on the redundant Ethernet
interface links. Enter a value from 1 to 16. The default is 4.
node0 priority Specifies the priority value of node0 for a redundancy group. Enter the
node priority number as 0.
node1 priority Specifies the priority value of node1 for a redundancy group. Select the
node priority number as 1.
Interface Monitor
Interface Specifies the interface to be monitored by the redundancy group. Select an
interface from the list.
Weight Specifies the weight for the interface to be monitored. Enter a value from 1 to 125.
Add Adds interfaces to be monitored by the redundancy group along with their respective
weights. Click Add.
Delete Deletes interfaces to be monitored by the redundancy group along with their
respective weights. Select the interface from the configured list and click Delete.
IP Monitoring
Weight Specifies the global weight for IP monitoring. Enter a value from 0 to 255.
Threshold Specifies the global threshold for IP monitoring. Enter a value from 0 to 255.
Retry Count Specifies the number of retries needed to declare reachability failure.
Enter a value from 5 to 15.
Retry Interval Specifies the time interval in seconds between retries. Enter a value
from 1 to 30.
IP Specifies the IPv4 addresses to be monitored for reachability. Enter the IPv4 addresses.
Weight Specifies the weight for the redundancy group interface to be monitored. Enter
the weight.
Interface Specifies the logical interface through which to monitor this IP address.
Enter the logical interface address.
Secondary IP address Specifies the source address for monitoring packets on a secondary
link. Enter the secondary IP address.
Delete Deletes the IPv4 address to be monitored. Select the IPv4 address from the list
and click Delete.
• Cluster fabric links (fab0 and fab1). For example, you can specify ge-0/0/0 as fab0
on node0 and ge-7/0/0 as fab1 on node1.
Initially, the VM has only two interfaces. A cluster requires three interfaces (two for the
cluster and one for management) and additional interfaces to forward data. You can
add interfaces through the VMware vSphere Web Client.
1. On the VMware vSphere Web Client, click Edit Virtual Machine Settings for each VM
to create additional interfaces.
2. Click Add Hardware and specify the attributes in Table 20 on page 70.
Attribute Description
Connect at power on Ensure that there is a check mark next to this option.
• Connection Type
• Virtual Machines
• Network Access
• No physical adapters
NOTE:
Port groups are not VLANs. The port group does not segment the vSwitch
into separate broadcast domains unless the domains have different
VLAN tags.
• To use a VLAN as a dedicated vSwitch, you can use the default VLAN
tag (0) or specify a VLAN tag.
3. Right-click on the control network, click Edit Settings, and select Security.
4. Set the promiscuous mode to Accept, and click OK, as shown in Figure 5 on page 71.
NOTE: You must enable promiscuous mode on the control vSwitch for
chassis cluster.
You can use the vSwitch default settings for the remaining parameters.
5. Click Edit Settings for both vSRX VMs to add the control interface (Network adapter
2) into the control vSwitch.
See Figure 6 on page 72 for vSwitch properties and Figure 7 on page 72 for VM properties
for the control vSwitch.
The control interface will be connected through the control vSwitch. See
Figure 8 on page 73.
• Connection Type
• Virtual Machines
• Network Access
• No physical adapters
NOTE:
Port groups are not VLANs. The port group does not segment the vSwitch
into separate broadcast domains unless the domains have different
VLAN tags.
• To use a VLAN as a dedicated vSwitch, you can use the default VLAN
tag (0) or specify a VLAN tag.
• To use VLAN as a shared vSwitch and use a port group, assign a VLAN
tag on the port group for each chassis cluster link.
• MTU: 9000
3. Click Edit Settings for both vSRX VMs to add the fabric interface into the fabric vSwitch.
See Figure 9 on page 74 for vSwitch properties and Figure 10 on page 75 for VM properties
for the fabric vSwitch.
The fabric interface will be connected through the fabric vSwitch. See Figure 11 on page 75.
• Connection Type
• Virtual Machines
• Network Access
• No physical adapters
The data interface will be connected through the data vSwitch using the above procedure.
root# cli
root@> configure
[edit]
root@#
4. Copy the following commands and paste them into the CLI:
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.42.81/24
set groups node0 system host-name vsrx-node0
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.42.82/24
set groups node1 system host-name vsrx-node1
set apply-groups "${node}"
7. To enable IPv6:
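The command for step 7 is not shown in this excerpt; on vSRX, IPv6 flow-based forwarding is typically enabled with the following statement (a reboot is required for the mode change to take effect):

```
[edit]
user@host# set security forwarding-options family inet6 mode flow-based
```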
user@host# commit
commit complete
9. When you have finished configuring the device, exit configuration mode.
user@host# exit
After reboot, the two nodes are reachable on interface fxp0 with SSH. If the configuration
is operational, the show chassis cluster status command displays output similar to that
shown in the following sample output.
Cluster ID: 1
Node Priority Status Preempt Manual failover
A cluster is healthy when the primary and secondary nodes are present and both have a
priority greater than 0.
Deploying vSRX Chassis Cluster Nodes Across Different ESXi Hosts Using dvSwitch
Before you deploy the vSRX chassis cluster nodes for ESXi 6.0 (or greater) hosts using
distributed virtual switch (dvSwitch), ensure that you make the following configuration
settings from the vSphere Web Client to ensure that the high-availability cluster control
link works properly between the two nodes:
• In the dvSwitch switch settings of the vSphere Web Client, disable IGMP snooping for
Multicast filtering mode (see Multicast Snooping on a vSphere Distributed Switch).
• In the dvSwitch port group configuration of the vSphere Web Client, enable promiscuous
mode (see Configure the Security Policy for a Distributed Port Group or Distributed Port).
This chassis cluster method uses the private virtual LAN (PVLAN) feature of dvSwitch
to deploy the vSRX chassis cluster nodes at different ESXi hosts. There is no need to
change the external switch configurations.
On the VMware vSphere Web Client, for dvSwitch, there are two PVLAN IDs for the
primary and secondary VLANs. Select Community in the menu for the secondary VLAN
ID type.
Use the two secondary PVLAN IDs for the vSRX control and fabric links. See
Figure 12 on page 78 and Figure 13 on page 79.
You can also use regular VLAN on a distributed switch to deploy vSRX chassis cluster
nodes at different ESXi hosts using dvSwitch. Regular VLAN works similarly to a physical
switch. If you want to use regular VLAN instead of PVLAN, disable IGMP snooping for
chassis cluster links.
NOTE: When the vSRX cluster nodes on multiple ESXi hosts communicate
through physical switches, you also need to consider the other Layer 2
parameters described at:
https://kb.juniper.net/library/CUSTOMERSERVICE/GLOBAL_JTAC/NT21/
LAHAAppNotev4.pdf.
Troubleshooting
You need the software serial number to open a support case or to renew a vSRX license.
1. Use the show system license command to find the vSRX software serial number.
License usage:
Licenses Licenses Licenses Expiry
Feature name used installed needed
Virtual Appliance 1 1 0 58 days
Licenses installed:
License identifier: E420588955
License version: 4
Software Serial Number: 20150625
Customer ID: vSRX-JuniperEval
Features:
Virtual Appliance - Virtual Appliance
count-down, Original validity: 60 days